Mini: Voter ID

This is the first general election with a requirement to show ID documentation in order to vote. I’m opposed to elections, so this is not a concern of mine either way. However, I used to argue against voter ID, and there are other arguments besides; it’s quite an interesting subject.

My old argument was that the government does not and should not run elections, and relying on government issued documents gives them too much control. Of course, postal voting makes this argument irrelevant, but I opposed that too for the same reasons.

The safeguard on elections was always transparency. Polling stations are public, you identify yourself in public and are given a paper in public, ballot boxes are visible and collected and counted locally in public. To commit electoral fraud you have to do it in public.

I recently discussed some of this in Holland with an assortment of Europeans. Their attitude was different — they have always had official government ID used routinely for many purposes. I always sneered a bit at that; our government does not need to track us all individually.

Today, though, I recognise that that model is just not appropriate in an era of mass income taxation, a welfare state, and mass immigration. The British government has never tracked and identified every person. But it is, unfortunately, past time it did.

So on voter ID, I still don’t care, but the government’s half-arsed attempts to cobble together national identity management out of birth registration, national insurance registration, driver licensing and passports do actually need to be rationalised into a true national ID. I hate it.

(Originally a tweet thread, 4 July 2024)

The Senate and People of Ukia

After the 2019 British General Election produced a large Conservative majority for Prime Minister Boris Johnson, I wrote a “projection” / fantasy of how Britain could progress to a one-party state.

A one-party state on the Chinese model isn’t my ideal form of government. I would prefer an absolute hereditary monarchy such as the one I described in 2012. (Next year we will pass the half-way point of the 25 years between when I wrote it and when I set it, so I will review that then). But I never put forward a mechanism for getting to the absolute monarchy, only vaguely having in mind some serious political collapse and recovery. One-party states do exist today and some of them are governed much better than multi-party democracies. They are equally oligarchic, but the oligarchies are more rational, effective, and marginally less embroiled in infighting.

The central point of neoreactionary theory is that the root problem of our society is its structure of government. The most obvious problem is the people in charge, and if you look a bit deeper you see bad and harmful ideologies, but the theory is that the ideologies are the expected product of internal competition within an oligarchy, and that the people are the product of the structure and the ideologies.

If that is accepted, then the critical step is to change the system. Changing the system will in time change the ideologies and the people. So movement away from a system of oligarchic competition is a benefit, whether the one party is Labour or Conservative. It doesn’t matter whether a cat is black or white, if it catches mice it is a good cat.

Admittedly, when I imagined Borisland, it was very much as a monarchical form with a Supreme Leader. I have heard suggestions that Xi is effectively sovereign over the PRC, but I don’t know and if I were to guess I would think it unlikely. Is Starmer a man who can dissolve ministerial responsibility? Or maybe there is a more ambitious successor waiting in the wings? Either could work. Every Prime Minister who is not universally pilloried as baffled and ineffectual (and some who are) is accused of introducing presidential government; it does not appear to be an impossibility.

Again, I would prefer not to be dragging even the pretence of democratic legitimacy behind the monarch, but, after all, the Roman Empire managed it.

What does the incoming Starmer administration have going for it? Quite a bit:

  1. Weak parliamentary opposition
  2. A prominent internal opposition
  3. A large majority to enable it to combat the internal opposition
  4. A leader who intimately understands the permanent government
  5. A leader young enough to last a couple of decades
  6. The support of the permanent government and the press (at least to start with)

The weak Conservative opposition means that the government will not initially be too pressed to compete with it for popularity. My expectation is that the government’s biggest fights for the first year will be against the left of the Labour party, and particularly the Islamic / pro-Palestine elements, plus the independent MPs who were elected specifically on that platform. Starmer’s pragmatic programme, coupled with his Jewish family, means he will never be able to satisfy that wing, and he would be unwise to try. Losing the Labour party’s traditional support from that population will initially be affordable given the huge parliamentary majority, and in the medium term will gain him much more support from the wider population.

In the modern democratic and media environment, the best way to advance a programme is to have unpopular people oppose it, and the worst way is to have unpopular people support it. If Reform are wise, they will keep a low profile for the next few years, take the money and quietly build an organisation. The government is much more likely to take action on immigration because George Galloway is against it than because Nigel Farage is in favour of it.

The knowledge of the permanent government is very important. In my lifetime, only two Prime Ministers have shown any real evidence of being in charge. Margaret Thatcher and Tony Blair were both lawyers. They both had allies in the civil service (which was much more conservative 40 years ago than it is now). Keir Starmer and Harriet Harman are coming into government with an agenda that we can assume is very much in line with that of the permanent government. But they now have their own role and their own personal goals, and if, over time, they find they need to act against the wishes of that permanent government — they know where the bodies are buried. They know how the system functions, where its strong points and weak points are.

Again, the neoreactionary theory is that if they want to exercise power they will inevitably come into conflict with the permanent government. They want results that look good in the press. The most obvious reason that the Conservatives were useless is that they were just incompetent. The next most obvious reason is that they were traitors to conservatism. The deeper reason is that actually achieving any conservative goals was impossible, so many of them adopted more liberal positions because only by doing so could they avoid being ridiculous failures.

(For people my age, the most vivid examples are Michael Portillo and John Redwood: the two Conservatives seen as the ideological heirs of Thatcher, and the thorns in the right side of the moderate John Major, both of whom moved steadily further left decade by decade, finishing well to the left of Blair.)

Achieving conservative goals was impossible for the Conservatives because the permanent government was united against them, and could obstruct them with legal and administrative bullshit to the point that anything they did achieve would cost them politically far more than it was worth (the two years of failure of the Rwanda scheme is of course the prime example, but the pattern was everywhere). If I am right about the advantages that Starmer’s past experience gives him, he might not find things so impossible.

I do expect these conflicts to happen. Starmer will not want to deport illegal immigrants in order to get Sun front pages that will impress Essex Man — but he may find he wants to deport illegal immigrants in order to get the crime rate down and the welfare bill down, and to prevent his own children being blown up in their synagogue. He will want it to just happen, quietly. Can he do that? That’s the question.

If in five years’ time the economy is a bit better (and there is a ton of scope to achieve that by removing obstacles), the immigration situation is no worse, and the Conservatives are still in disarray (the huge error I made five years ago was in thinking that Labour would today still be largely engaged in fighting off Corbynist holdouts, so that’s a big open question), then he could carry as big a majority into the next decade. Technology today is very favourable to absolutism. A leader who is seen as legitimate will have many mechanisms available to him to cement his position.

I’m not going to try to imagine details. Armies under the absolute control of an Emperor carried the standard of the Senate and People of Rome; a Britain that has become “UK” (the latest constitutional proposals apparently include a Senate), perhaps without even being an official kingdom any longer, could likewise be directed by a single hand.

31st January 2020

I went down to London last night to mark Britain’s exit from the European Union. I no longer see it as a vitally important thing — I neither voted in nor blogged about the referendum — but for many years, through the nineties and the oughts, leaving the EU was at the centre of my political position. That includes the early stages of blogging, with posts like this and this and this, and by joining in the celebrations I was, in a way, acknowledging my younger self.

I also wanted to be able to say I was there, to stand on Parliament Square and cheer and sing songs and generally larp at being part of a movement for a couple of hours. It was cheaper than going to a Luton Town game.

Also, while my opinion is that the exit doesn’t change anything fundamentally, it’s worth noting that I claimed first that the referendum was a bad thing, because the establishment media would so dominate that Remain was bound to win and they would just use it to shut down debate for another generation; and then that even though Leave had won, they wouldn’t actually leave, they’d just hold it up and eventually drop it. With this track record of being consistently wrong, I have a slight lack of confidence in my current pessimistic projections.

My explanation for being wrong is that I have been overestimating the competence and power of the establishment. The atomisation of society is now degrading the strength of the political parties themselves, being media-driven and bioleninist is reducing the competence of establishment leaders, and new media is making democracy more real and less fake, in a very damaging way.

The central event of last night’s celebration, after the terrible singing and before the countdown to 23:00 GMT, was of course the appearance of Nigel Farage. People were calling his name from the time the lights went on, and every warm-up speaker remarked that none of this could have happened without him.

That is surely true. And that says something very interesting about the way democratic politics works. Because Farage does not really seem to be a “Great Man” of the kind who are supposed by some theories to be able to shift history by themselves. He can speak on television OK, but he is no great orator or demagogue, or even an entertainer like Boris or Trump. He is intelligent and competent but he is no master strategist, or prophet, or technical genius. Anyone who could successfully run a corporate department with thirty employees could have done what he did. But without him there could have been no sustained UKIP. UKIP caused the referendum by costing the Conservative Party seats. The referendum led to Brexit.

Why say UKIP could not have sustained itself without Farage? Because every time it tried, it failed. Other than him, all the leadership of the party after the Alan Sked pressure-group era were insane, stupid, or lazy. Farage was competent enough to run the party, worked very hard on it, and caused it to continue existing.

It is truly remarkable that there were over four million people willing to vote for UKIP, but there was only one capable person willing to run it.

Farage devoted most of his adult life to the cause, out of idealism. Many of the other four million would have been as capable as he was, but they had better things to do with their lives. None of the other few dozen people who were in the leadership of the party were of the two or three percent of people who have the abilities needed to do it successfully.

Many politicians are idealistic, but it is easier to be idealistic where there is a career path. There is no career path to being a fringe anti-establishment politician. Farage got an MEP’s salary for twenty years, but that was by no means guaranteed. Victorious, he will pick up some media bucks, but he will never be treated as an elder statesman. Nobody else with the “corporate department head” level of ability showed up to discard their career and do the work.

There are strong echoes here of the situation with academia. For every competent right-wing intellectual working full time with donor funding or their own money, there are hundreds of left-wing intellectuals with a stable academic career. Tens of thousands of people shouting Nigel Farage’s name on Parliament Square give a hint of how important that fact is.

Party Leadership Elections are Undemocratic

Originally posted on Medium as Jago Couch on Aug 22, 2015. It’s potentially confusing for me to criticize something as “undemocratic”, which is why I didn’t post it here, but the argument of the post is relevant to my recent posts so I now prefer to have it here to refer to.

We’ve all had our laugh at the Labour party’s leadership election, but it’s time to get serious.

“Internal party democracy” is deeply stupid. You could even say it is undemocratic.

The purpose of a party is to provide a choice — one among several — to voters in public elections.

If every party stands for “whatever its members say”, and each party’s membership is open, then there is no reason to expect the parties to differ from each other. No choice would be provided at the public elections.

Not only at the level of voting, but at the level of support (funding, campaigning), each individual can choose which party, if any, is theirs. But that choice can only be made sensibly if the citizen can tell what a party stands for, and what it is likely to stand for in future. To have value, a party has to stand for something specific and reasonably constant. This goal is not consistent with internal democracy.

The ideal organisational form for a party is for it to be run by a small self-selecting clique. That provides both consistency and the possibility of gradual adaptation to changing circumstances. A fixed constitution is not likely to work, and if it did work would completely freeze the party, making it unable to adapt. Any other arrangement (including single-person control) will produce unpredictable changes in position, reducing the value of supporting the party.

Note I’m not arguing against parties having large membership, or against the membership having influence. I am arguing that ordinary potential party members have *greater* influence by being able to join a party with a consistent predictable position, than by having a vote that can be overwhelmed by random motivated entryists. Because membership in a party is and should be voluntary, it is a case where influence should be entirely exerted through the force of “exit”, rather than “voice”. It is better to be a member of a party that is controlled by a small self-selecting clique whose opinions you know and agree with, than to be a member of one which is controlled by a vote of thousands of members, including yourself.

The Labour Party organisation is attempting to be reasonable about choosing which new members should be able to vote, but it is impossible because there is no rationale for allowing any of them to vote at all. If it’s legitimate for a member to change the direction of a party, then it’s legitimate to join the party in order to change its direction.

This contradiction has been brought to a head by Labour’s introduction of very low subscription fees to join as a voting “supporter”, but charging more is not an absolute defence against hostile entryism. It just postpones things until there is an election which is close enough, and for high enough stakes, to make an attack viable. Of course, the internet makes organising such an attack as easy as creating a hashtag.

Bitcoin as a Model Voting System

[note: this is the thread that I was in the process of tweeting when my account got locked]

Bitcoin is fundamentally a voting system. That is the clever bit. Answering “Did A transfer his funds to B?” is a trivial piece of public-key cryptography. “Did A transfer his funds to B before attempting to transfer them to C?” is decided by vote. How is that not terrible???

Reason 1 is that it’s costly to vote. That has to be better than letting any scumbag vote for free. But not that much better.

Reason 2 is that you are then rewarded for voting, if and only if you voted for the winning side.

What that means is that there is a huge incentive to vote for what you believe to be true. Trying to vote against it is bound to fail, because everyone else has a huge incentive to vote for it, and because it will cost you big money.
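To put rough numbers on that incentive, here is a toy sketch of the cost-and-reward structure described above (not Bitcoin’s actual mining mechanics; the figures are invented):

```python
# Toy model of "costly vote, reward only for the winning side".
# Illustrative only: the numbers and the two-sided setup are assumptions,
# not Bitcoin's real mining economics.

COST = 1.0     # what it costs to cast a vote (hypothetical units)
REWARD = 3.0   # paid only to voters who end up on the winning side

def expected_payoff(p_my_side_wins: float) -> float:
    """Expected profit of voting, given the probability your side wins."""
    return p_my_side_wins * REWARD - COST

# If everyone else votes for what they observe to be true, voting with
# the truth is close to a sure win...
print(expected_payoff(0.99))   # ~ +1.97
# ...while voting for a falsehood almost certainly just burns the cost.
print(expected_payoff(0.01))   # ~ -0.97
```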

Forget the fashionable silliness of applying “blockchain” to every problem. Think about applying “give people incentives to get it right” to every problem. Think about making “non-human value maximisers” out of people.

Related: are secret ballots the worst thing you can do to a democracy?

Modelling Failures

Nothing really new here, but pulling a few things together.

Start with Joseph K’s observation:

This is a good point, and I added that the failure of financial risk models in 2008 was essentially the same thing.

The base problem is overconfidence. “People do not have enough epistemic humility”, as Ben Dixon put it.

The idea in all these fields is that you want to make some estimate about the future of some system. You make a mathematical model of the system, relating the visible outputs to internal variables. You also include a random variable in the model.

You then compare the outputs of your model to the visible outputs of the system being modelled, and modify the parameters until they match as closely as possible. They don’t match exactly, but you make the effects of your random variable just big enough that your model could plausibly produce the outputs you have seen.

If that means your random variable basically dominates, then your model is no good and you need a better one. But if the random element is fairly small, you’re good to go.
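As a deliberately simplified sketch of those two steps (fit the parameters, then size the random variable to cover the leftover mismatch), assuming a straight-line model, Gaussian noise and invented data:

```python
import numpy as np

# Hypothetical observed outputs of the system being modelled.
x = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])

# Model: y = a*x + b + noise.  Fit a and b so the model's outputs match
# the observed outputs as closely as possible...
a, b = np.polyfit(x, y, 1)

# ...then make the random variable (here, the noise standard deviation)
# just big enough to account for the leftover mismatch.
residuals = y - (a * x + b)
sigma = residuals.std(ddof=2)

print(a, b, sigma)  # if sigma dominates the signal, the model is no good
```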

In polling, your visible effects are how people answer polling questions and how they vote. In social science, it’s how subjects behave in experiments, or how they answer questions, or how they do things that come out in published statistics. In finance, it’s the prices at which people trade various instruments.

The next step is where it all goes wrong. In the next step, you assume that your model—including its random variable to account for the unmeasured or unpredictable—is exactly correct, and make predictions about what the future outputs of the system will be. Because of the random variable, your predictions aren’t certain; they have a range and a probability. You say, “Hillary Clinton has an 87% chance of winning the election”. You say, “Reading these passages changes a person’s attitude to something-or-other in this direction 62% of the time, with a probability of 4.6% that the effect could have been caused randomly”. You say, “The total value of the assets held by the firm will not decrease by more than 27.6 million dollars in a day, with a probability of 99%”.

The use of probabilities suggests to an outsider that you have epistemic humility: you are aware of your own fallibility and are taking account of the possibility of having gone wrong. But that is not the case. The probabilities you quote are calculated on the basis that you have done everything perfectly, that your model is completely right, and that nothing has changed between the production of the data you used to build the model and the events that you are attempting to predict. The unpredictability that you account for is that caused by the incompleteness of your model—which is necessarily a simplification of the real system—not the possibility that what your model is doing is actually wrong.

In the case of polling, what that means is that the margin of error quoted with the poll is based on the assumptions that the people polled answered honestly, that they belong to the demographic groups the pollsters thought they belonged to, and that the proportions of demographic groups in the electorate are what the pollsters thought they were. The margin of error reflects only the random variables in the model: the fact that the random selection of people polled might be atypical of the list they were taken from, and possibly, if the model is sophisticated enough, that the turnout of different demographics might vary from what is predicted (but where does the data come from to model that?)
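For instance, the textbook sampling margin of error is just this calculation (a sketch with invented numbers; nothing in it speaks to dishonest answers or misjudged demographics):

```python
import math

# Standard sampling margin of error for a poll of n respondents, assuming
# (and this is the point) that the sample really is a random draw from the
# electorate and that everyone answered honestly.
n = 1000          # hypothetical sample size
p = 0.5           # worst-case reported proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"{moe:.1%}")  # about 3.1%: pure sampling noise, nothing else
```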

In the social sciences, the assumptions are that the subjects are responding to the stimuli you are describing, and not to something else, and that people will behave the same outside the laboratory as they do inside. The stated probabilities and uncertainties, again, do not reflect any doubt about those assumptions: only the modelled randomness of sampling and measurement.

On the risk modelling used by banks, I can be more detailed, because I actually did it. It is assumed that the future price changes of an instrument follow the same probability distributions as in the past. Very often, because the instrument does not have a sufficient historical record, a proxy is used: one which is assumed to be similar. Sometimes, instead of a historical record or a proxy, there is just a model: a normal distribution plus a correlation with the overall market, or with a sector of it. Again, there is lots of uncertainty in the predictions, but none of it is due to the possibility of having the wrong proxy, or of there being something new about the future which didn’t apply to the past.
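A minimal sketch of the historical-simulation version of that calculation (invented data; a real desk would use a large portfolio and a carefully chosen window): the 99% VaR is simply a percentile of past daily profit-and-loss, so the whole number rests on the assumption that the future is distributed like the recorded past, or like the chosen proxy.

```python
import numpy as np

# Hypothetical history of daily P&L for a position (or for its proxy).
daily_pnl = np.random.default_rng(0).normal(loc=0.0, scale=1.0e6, size=500)

# Historical-simulation 99% VaR: the loss exceeded on only 1% of past days.
var_99 = -np.percentile(daily_pnl, 1)

print(f"1-day 99% VaR: {var_99:,.0f}")
# Nothing in this number reflects the chance that the proxy is wrong, or
# that tomorrow's market behaves unlike anything in the 500-day window.
```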

Science didn’t always work this way. The way you do science is that you propose the theory, then it is tested against observations over a period of time. That’s absolutely necessary: the model, even with the uncertainty embedded within it, is a simplification of reality, and the only justification for assuming that the net effects of the omitted complexities are within error bounds is that that is seen to happen.

If the theory is about the emission spectra of stars, or the rate of a chemical reaction, then once the theory is done it can be continually tested for a long period. In social sciences or banking, nobody is paying attention for long enough, and the relevant environment is changing too much over a timescale of years, for evidence that a theory is sound to build up. It’s fair enough: the social scientists, pollsters and risk managers are doing the best they can. The problem is not what they are doing, it is the excessive confidence given to their results. I was going to write “their excessive confidence”, but that probably isn’t right: they know all this. Many of them (there are exceptions) know perfectly well that a polling error margin, or a p-value, or a VaR is not truly what the definitions say, but only the closest that they can get. It is everyone who takes the numbers at face value that is making the mistake. However, none of these analysts, of whichever flavour, are in a position to emphasise the discrepancy. They always have a target to aim for.

A scientist has to get a result with a p-value to publish a paper. That is their job: if they do it, they have succeeded; otherwise, they have not. A risk manager, similarly, has a straightforward day-to-day job of persuading the regulator that the bank is not taking too much risk. I don’t know the ins and outs of polling, but there is always pressure. In fact Nate Silver seems to have done exactly what I suggest: his pre-election announcement seems to have been along the lines of “Model says Clinton 85%, but the model isn’t reliable, I’m going to call it 65%”. And he got a lot of shit for it.

Things go really bad when there is a feedback loop from the result of the modelling to the system itself. If you give a trader a VaR budget, he’ll look to take risks that don’t show in the VaR. If you campaign so as to maximise your polling position, you’ll win the support of the people who don’t bother to vote, or you’ll put people off saying they’ll vote for the other guy without actually stopping them voting for the other guy. Nasty.

Going into the election, I’m not going to say I predicted the result. But I didn’t fall for the polls. Either there was going to be a big differential turnout between Trump supporters and Clinton supporters, or there wasn’t. Either there were a lot of shy Trump supporters, or there weren’t. I thought there was a pretty good chance of both, but no amount of data was going to tell me. Sometimes you just don’t know.

That’s actually an argument for not “correcting” the polls. At least if there is a model—polling model, VaR model, whatever—you can take the output and then think about it. If the thinking has already been done, and corrections already applied, that takes the option away from you. I didn’t know to what extent the polls had already been corrected for the unquantifiables that could make them wrong. The question wasn’t so much “are there shy Trump voters?” as “are there more shy Trump voters than some polling organisation guessed there are?”

Of course, every word of all this applies just the same to that old obsession of this blog, climate. The models have not been proved; they’ve mostly been produced honestly, but there’s a target, and there are way bigger uncertainties than those which are included in the models. But the reason I don’t blog about climate any more is that it’s over. The Global Warming Scare was fundamentally a social phenomenon, and it has gone. Nobody other than a few activists and scientists takes it seriously any more, and mass concern was an essential part of the cycle. There isn’t going to be a backlash or a correction; there won’t be papers demolishing the old theories and getting vast publicity. Rather, the whole subject will just continue to fade away. If Trump cuts the funding, as seems likely, it will fade away a bit quicker. Lip service will occasionally be paid, and summits will continue to be held, but less action will result from them. The actual exposure of the failure of science won’t happen until the people who would have been most embarrassed by it are dead. That’s how these things go.

How to Kill Democracy

I’ve heard quite a few times that we can’t get rid of democracy, because we can’t get the votes.

Now, I’m not in any great hurry to get rid of democracy. It’s not ideal, but it sort of works, and when it goes things could get messy.

However, if you wanted to do away with democracy, it wouldn’t be all that difficult. I identified the method back before it was my aim.

The introduction of postal and electronic voting makes elections enormously easy to sabotage. I ranted about the danger back in 2005, and then gradually lost interest in the subject once I ceased to care who actually won any given election. And the main safety margin is that nobody cares enough about who wins to cheat.

But cheating and getting away with it is hard — messing things up enough that nobody knows for sure who ought to have won is much easier, just as it is easier to take down a website than to take control of it.

And what would happen, if you did successfully DoS an election? It would be pretty spectacular. The nearest we’ve seen was the 2000 US presidential election. That stirred up a lot of trouble, but it eventually more or less settled down. That is not what I have in mind, though: there was no obvious large-scale fraud then; rather, the problem was that the election was so close that the ordinary minor deceptions and inaccuracies made the difference.

In a near dead-heat like that, it will be accepted that you just can’t be sure. But experts have identified a number of local elections in various counties in the US where it can’t be determined who should have won, because of problems with the voting machines. Still, those involve cases where there is no evidence of determined large-scale deliberate fraud.

If we had a general election in Britain, and it emerged that, because of fraud, it wasn’t clear who won, or that it was very close, I don’t know what would happen. We don’t have the same extreme respect for the judiciary, or even the clear formal rules, that allowed the US Supreme Court to settle Florida 2000.

I suspect that in the event, the parties and the civil service would sew it up as best they could, and the business of government would go on. But in the process it would have lost the legitimacy of democracy.

That is why it was a mistake for me to stop paying attention to voting when I stopped caring who won. Because, the way I look at democracy now, it is the impression that the government represents a popular choice that is important: the actual influence of popular will on government is both minor and mostly harmful. But it is that impression that is endangered by unreliable voting systems, so they constitute a bigger risk to the system as I see it than they would to a democrat.

Britain is still on paper votes, so it is only through postal votes that the system is vulnerable at the moment, and that only to a quite large-scale attack. But if the system is changed in the direction of networked or electronic voting, then we know what we have to do if we decide to get rid of democracy.

What a Shame

Well, this is embarrassing.

Only weeks after explaining that I didn’t care about the AV referendum, I now find that I’m really pissed off with the result.

I haven’t actually changed my position, that “I think AV would give voters slightly more influence than they have now. I am quite unsure as to whether that’s a good thing or a bad thing”. I think what really has me upset is that it would have been so interesting to see how party politics would have developed under AV.

Would any of the major parties have split? Would we have got a lot of independents running, and some of them winning? Would the total vote of the three main parties have dropped to about 50%, with several outsiders each picking up 10-20% of 1st preference votes in most constituencies? Now we’ll never know. It’s like having a favourite TV programme cancelled half way through.

In case that sounds shallow, I should point to a few old posts, where I developed the case that the entertainment value of voting actually outweighs any political value. Because this was back in 2007-8, it applies even if, unlike me today, you do believe that voting has some political value.

One Man One Vote

Sometimes the way to get to a good explanation is to start with a bad one.

The opponents of AV claim that it means that voters for fringe parties get their votes counted more times than voters for major parties. This seemed a stupid objection, but I couldn’t quite explain why, clearly and simply.

Yesterday I read John Humphrys’ complete failure to explain why (via Matt Ridley), and it became obvious:

Yes, in AV, your vote can be counted more than once — whether you vote for a fringe party or a winner or runner-up. If there are only two rounds of counting in a particular example, then the person A who votes for the eliminated candidate gets their vote counted twice: for their first choice in the first round, and for their second choice in the second round.

Voter B, who votes for any other candidate, also gets their vote counted twice: for their first choice both times.

So in the last round, the one that actually decides the winner, voter A gets counted for their second choice and voter B for their first.
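To make the counting rule concrete, here is a toy instant-runoff count (a sketch, not the official counting procedure): in every round, every live ballot is counted exactly once, for its highest-ranked candidate still in the race.

```python
from collections import Counter

def av_count(ballots):
    """Toy AV/instant-runoff count: each ballot is a list of preferences."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Every ballot is counted once per round, for its top surviving choice.
        round_votes = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        leader, votes = round_votes.most_common(1)[0]
        if votes * 2 > sum(round_votes.values()):
            return leader
        # No majority yet: eliminate the last-placed candidate and recount.
        remaining.discard(min(round_votes, key=round_votes.get))

# Hypothetical three-candidate example.
ballots = [["Fringe", "Red"]] * 3 + [["Red"]] * 4 + [["Blue"]] * 5
print(av_count(ballots))  # "Red": fringe voters' second choices decide round 2
```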

That doesn’t settle the larger argument of course: you can still argue whether AV has a tendency to produce centrist coalitions and whether that is a bad thing. But there should be no argument claiming that AV is less fair than FPTP, for what that’s worth.

(Disclaimer: I argue about this out of habit, not because I think it matters)

The AV vote

I’ve discussed some of the arguments about the AV referendum, but not really drawn a conclusion (beyond “whatever”).

The main valid argument for AV is that it isn’t as sensitive as FPTP to which candidate people think is going to win. It may get rid of the truly inane feature that I reported on at the last general election, where the parties argued more about who was likely to win than about who ought to win.

A second valid argument for AV is that it encourages the expression of non-mainstream views, by not penalising voters for unpopular parties. It doesn’t actually give unpopular parties any more representation, as PR does, but it gives them more visibility.

The main valid argument against AV is that it is likely to produce centrist coalitions, whatever the changes in views of the voters.

Putting the three points together, I have to be in favour. In my theory, the value of democracy is that it has perceived legitimacy, reducing the amount that the ruling establishment has to do to protect itself. The one anti argument actually helps in this regard, as it makes the establishment even more secure.

However, the pro arguments are still applicable: it is valuable to make the unconventional more visible, since that will aid thinking about what we should do if and when the current establishment does fail.