Yes2ID

There’s a specifically English tradition that the government doesn’t concern itself with the identities of the ordinary men and women of the country. Prior to the twentieth century, births and deaths were registered by the church, and taxes were collected on land or on the trading of particular goods. There was never a national bureaucracy keeping records of individuals.

(There’s a famous quote about how prior to 1914 Britons would hardly have any routine contact with any officials of the government. Orwell? Keynes? A.J.P. Taylor, perhaps, in English History 1914–1945? I can’t find it and I’m quite annoyed.)

A census was introduced in 1801 to guide recruitment strategy for the Napoleonic wars, and National Registration was brought in for the Second World War in 1939 and abolished in 1952. Measures such as National Registration smacked of Napoleonic totalitarianism. The government exists to serve the people, not the people for the government. My life is no business of the government until I bring myself to its notice, by committing a crime, or travelling abroad, or handling large amounts of money, etc.

I was firmly aligned with that tradition, supporting No2ID, opposing Voter ID, even grumbling incoherently about CCTV cameras.

I still really like the idea of such a light-touch, minimalist state that has no reason to know how many people live in a town or what that bloke’s name is who is sitting on the bench outside Costa. Warm feelings of free Anglo-Saxons and the Witenagemot, and all that (although of course in pre-modern societies nobody had anonymity, so that’s a kind of fantasy).

But we don’t live in such a state, or anything remotely resembling one. Today we live in a state which relies for its survival on at least a quarter of the money earned by each member of the working population, which provides everyone with an array of services from traffic direction to heart surgery, and in which a dozen private companies already know how many people live in each street and what the bloke on the bench outside Costa watched on TV last week.

As I mentioned at the weekend, the state also has a register of births, a passport database, a register of electors, a driver licensing database, and a National Insurance database.

We are not talking in 2024 about whether or not identity details are a concern of government; we are only talking about whether the government should manage its identity database efficiently or inefficiently.

People who are of any positive value to society are massively visible to the state. Citizens of the nation of car drivers and taxpayers glow in .gov.uk cyberspace like planes approaching an airport. The only people moving in darkness are illegal immigrants, gypsies and the underclass, flashing on just once a fortnight to collect their cheques.

Totalitarian is a strong word, but it is obviously the case that, to the extent that a government of an advanced country today leaves any area of its citizens’ lives alone, that is a policy choice, not a result of any limit of capability or of tradition. For better or worse, limitation on government today comes from government, and there’s no sense pretending otherwise.

I’ve written a few times before that feudalism cannot exist today, because it was caused by the technological incapability of central government to supervise the regions. It seems equally true that the individualism of classical liberalism cannot exist in a world of £20 CCTV cameras and 4TB SSDs. It depends not on limited government but on hogtied government.

Of course, surveillance does not directly impact our freedom of action. It doesn’t necessarily mean we will become much more tightly limited in what we do. But in practice we already are. We can’t say what we like, we can’t burn what we like, we can’t buy or sell what we like — not those of us with regular jobs and fixed addresses and cars, anyway. Why weep over the hostile underclass facing the same supervision?

Is growing totalitarianism the only future? Yes, probably; as I say, it’s a matter of technology. I would prefer otherwise, but if you’re going to act politically as if the world were other than it is, you might just as well be an anarcho-communist.

Ineffective government is bad government. Effective government is often bad government too, but at least there’s a chance. My view is that the intense stupidity of politics is to a large extent an effect of the practical impotence of politicians. Make those with responsibility less impotent, and at least there’s an incentive for them to become less stupid. (The aligning of power with responsibility is the other requirement, the central NRx principle, but doing that is a separate question. Today it’s the case that nobody has power).

I feel bad writing this. I am betraying what I once stood for. Give me a programme for achieving personal freedom that starts with keeping government databases more incomplete and inaccurate than Amazon’s, and I’ll recant.

Mini: Voter ID

This is the first general election with a requirement for ID documentation to vote. I’m opposed to elections, so this is not a concern of mine either way. However, I used to argue against voter ID; there are other arguments besides mine, and it’s quite an interesting subject.

My old argument was that the government does not and should not run elections, and relying on government issued documents gives them too much control. Of course, postal voting makes this argument irrelevant, but I opposed that too for the same reasons.

The safeguard on elections was always transparency. Polling stations are public, you identify yourself in public and are given a paper in public, ballot boxes are visible and collected and counted locally in public. To commit electoral fraud you have to do it in public.

I recently discussed some of this in Holland with an assortment of Europeans. Their attitude was different — they have always had official government ID, used routinely for many purposes. I always sneered a bit at that: our government does not need to track us all individually.

Today, though, I recognise that that model is just not appropriate in an era of mass income taxation, a welfare state, and mass immigration. The British government has never tracked and identified every person. But it is, unfortunately, past time it did.

So on voter ID I still don’t care, but the government’s half-arsed attempts to cobble national identity management together out of birth registration, National Insurance registration, driver licensing and passports do actually need to be rationalised into a true national ID. I hate it.

(Originally a tweet thread, 4 July 2024)

The Senate and People of Ukia

After the 2019 British General Election produced a large conservative majority for Prime Minister Boris Johnson, I wrote a “projection” / fantasy of how Britain could progress to a one-party state.

A one-party state on the Chinese model isn’t my ideal form of government. I would prefer an absolute hereditary monarchy such as the one I described in 2012. (Next year we will pass the half-way point of the 25 years between when I wrote it and when I set it, so I will review that then). But I never put forward a mechanism for getting to the absolute monarchy, only vaguely having in mind some serious political collapse and recovery. One-party states do exist today and some of them are governed much better than multi-party democracies. They are equally oligarchic, but the oligarchies are more rational, effective, and marginally less embroiled in infighting.

The central point of neoreactionary theory is that the root problem of our society is its structure of government. The most obvious problem is the people in charge, and if you look a bit deeper you see bad and harmful ideologies, but the theory is that the ideologies are the expected product of internal competition within an oligarchy, and that the people are the product of the structure and the ideologies.

If that is accepted, then the critical step is to change the system. Changing the system will in time change the ideologies and the people. So movement away from a system of oligarchic competition is a benefit, whether the one party is Labour or Conservative. It doesn’t matter whether a cat is black or white; if it catches mice, it is a good cat.

Admittedly, when I imagined Borisland, it was very much as a monarchical form with a Supreme Leader. I have heard suggestions that Xi is effectively sovereign over the PRC, but I don’t know and if I were to guess I would think it unlikely. Is Starmer a man who can dissolve ministerial responsibility? Or maybe there is a more ambitious successor waiting in the wings? Either could work. Every Prime Minister who is not universally pilloried as baffled and ineffectual (and some who are) is accused of introducing presidential government; it does not appear to be an impossibility.

Again, I would prefer not to be dragging even the pretence of democratic legitimacy behind the monarch, but, after all, the Roman Empire managed it.

What does the incoming Starmer administration have going for it? Quite a bit:

  1. Weak parliamentary opposition
  2. A prominent internal opposition
  3. A large majority to enable it to combat the internal opposition
  4. A leader who intimately understands the permanent government
  5. A leader young enough to last a couple of decades
  6. The support of the permanent government and the press (at least to start with)

The weak conservative opposition means that the government will not initially be too pressed to compete with it for popularity. My expectation is that the government’s biggest fights in its first year will be against the left of the Labour party, particularly the Islamic / pro-Palestine elements, plus the independent MPs that were elected specifically on that platform. Starmer’s pragmatic programme, coupled with his Jewish family, means he will never be able to satisfy that wing, and he would be unwise to try. Losing the Labour party’s traditional support from that population will be initially affordable given the huge parliamentary majority, and in the medium term will gain him much more support from the wider population.

In the modern democratic and media environment, the best way to advance a programme is to have unpopular people oppose it, and the worst way is to have unpopular people support it. If Reform are wise, they will keep a low profile for the next few years, take the money and quietly build an organisation. The government is much more likely to take action on immigration because George Galloway is against it than because Nigel Farage is in favour of it.

The knowledge of the permanent government is very important. In my lifetime, only two Prime Ministers have shown any real evidence of being in charge. Margaret Thatcher and Tony Blair were both lawyers. They both had allies in the civil service (which was much more conservative 40 years ago than it is now). Keir Starmer and Harriet Harman are coming into government with an agenda that we can assume is very much in line with that of the permanent government. But they now have their own role and their own personal goals, and if, over time, they find they need to act against the wishes of that permanent government — they know where the bodies are buried. They know how the system functions, where its strong points and weak points are.

Again, the neoreactionary theory is that if they want to exercise power they will inevitably come into conflict with the permanent government. They want results that look good in the press. The most obvious reason that the Conservatives were useless is that they were just incompetent. The next most obvious reason is that they were traitors to conservatism. The deeper reason is that actually achieving any conservative goals was impossible, so many of them adopted more liberal positions because only by doing so could they avoid being ridiculous failures.

(For people my age, the most vivid examples are Michael Portillo and John Redwood: the two Conservatives seen as the ideological heirs of Thatcher, and the thorns in the right side of the moderate John Major, both of whom moved steadily leftward decade by decade, finishing well to the left of Blair.)

Achieving conservative goals was impossible for the Conservatives because the permanent government was united against them, and could obstruct them with legal and administrative bullshit to the point that anything they did achieve would cost them politically far more than it was worth (the two years of failure of the Rwanda scheme is of course the prime example, but the pattern was everywhere). If I am right about the advantages that Starmer’s past experience gives him, he might not find things so impossible.

I do expect these conflicts to happen. Starmer will not want to deport illegal immigrants in order to get Sun front pages that will impress Essex Man — but he may find he wants to deport illegal immigrants in order to get the crime rate down and the welfare bill down, and to prevent his own children being blown up in their synagogue. He will want it to just happen, quietly. Can he do that? That’s the question.

If in five years’ time the economy is a bit better (and there is a ton of scope to achieve that by removing obstacles), the immigration situation is no worse, and the Conservatives are still in disarray (the huge error I made five years ago was in thinking that Labour would today still be largely engaged in fighting off Corbynist holdouts, so that’s a big open question), then he could carry as big a majority into the next decade. Technology today is very favourable to absolutism. A leader who is seen as legitimate will have many mechanisms available to him to cement his position.

I’m not going to try to imagine details. Armies under the absolute control of an Emperor carried the standard of the Senate and People of Rome; a Britain that has become “UK” (the latest constitutional proposals apparently include a Senate), perhaps without even being any longer an official kingdom, could likewise be directed by a single hand.

The horror of foreign policy

I’ve not said much about the whole Gaza / Israel thing since October. I have a pretty strong dislike of Islamic terrorists, and no equivalent antipathy to Jews, although I do worry from time to time about their understandable but inconvenient tendency to oppose any kind of nationalism (except their own). So my inclination is towards the Israeli side. However, I try to stifle this on the grounds that I don’t know all the facts, though I’m swimming in propaganda, and it isn’t really any of my business.

Discussing yesterday’s General Election, it became clear to me that the main way that terrible, bloody conflict affects me is through its impact on British politics. Specifically, if British Muslims become estranged from the Labour Party over it, that will significantly change national politics, and will completely overturn local politics where I live.

Now, I don’t generally concern myself with practical politics, for a number of reasons explained at length on this site. I paid attention to the election for entertainment value rather than because I needed to know anything about it. But that’s just me, it’s an unusual view to take. For many people deeply concerned with politics, these questions of party alignment are among the most important things in their lives. Most people with influence over policy fall into that category.

For those people, the most important question about any actual or potential thing that could happen in the Middle East is: would that help me or my enemies in my local political struggle?

Think about that for a while. Peace talks, escalations, terrorist attacks, blockades — how do they affect my department, my constituency association, my party, parliament? Are they good for me, or bad for me?

I have written before that intervention in foreign conflicts tends to be harmful in humanitarian terms, even when specifically predicated on humanitarian aims.

I have seen it alleged (and don’t know whether to believe it) both that Hamas intended a vast catastrophe to be inflicted on Palestinians, and that Israeli Prime Minister Netanyahu intended atrocities to occur against Israelis, in both cases because their political positions depend on the conflict continuing and escalating. If true, these are instances of the same thing, though less clear cut, because the participants are much more connected to the direct harms of the conflict than remote foreigners are. If someone in Ramallah or Tel Aviv is willing to stir things up in order to strengthen his position, then it is surely much easier for someone in Birmingham or Hendon to come to a similar conclusion.

So expecting the foreign policy directed by people in that position to be humanitarian in effect is very optimistic.

Elite Misinformation

I kind of like Matthew Yglesias. He comes out with some wild things occasionally, but mostly he’s careful and reasonable, even though I don’t share his values.

Now I understand him a bit better, including some of the wild stuff. His main problem is that he is spectacularly naive.

His recent piece, “Elite misinformation is an underrated problem”, is, in itself, a good piece. He notes that “misinformation” research is embarrassingly one-sided, and draws attention to a couple of claims that have been widely circulated in mainstream elite media which are somewhere between misleading and outright lies.

Good stuff. But then he says, “There’s lots of this going around”.

No! There’s not “lots”. This is absolutely fucking everything you read. All of it. From all sides. All the time. He’s still describing them as if they’re the exception. Everything is exaggerated, nobody is honest. Except him. And me. Sometimes.

It’s the universality of exaggeration and misleading information that makes it impossible to hold anyone responsible.

If what you say is 80% false, because everything you read is misinformation, or if what you say is 85% false, because everything you read is misinformation plus you exaggerated a bit yourself, what’s the difference? Can anyone really blame you?

If someone hears something deliberately misleading, and repeats it in such a way that it is factually false because they believe the thing that was deliberately implied but carefully not said outright, is that their fault? This is the real damage of the situation that we’re in. It’s not that “we” are being consistently lied to by “them” — it’s that everyone including “them” believes a ton of stuff that isn’t true.

I write on the morning after the first 2024 presidential debate. Everyone I read in my ideological bubble, including a few outsiders like Yglesias, is saying that Biden did disastrously badly. I didn’t watch it and am not going to. Many people are saying “they must have known he was like this.” But most of them probably didn’t. They know their opponents lie and exaggerate (they do!). Their friends were telling them it was OK.

I’m inclined to suspect it was always like this, but there are clues that it might not have been. In Britain, before my time, it was spoken of as a rule that a Minister would resign if it was shown he had “misled the House” even once. Something like that, applied not only to politicians but to media too, is the only way to be different, since it’s impossible to hold anyone accountable for telling untruths while swimming in an ocean of untruth. And there isn’t a way to get there from here. (Actually my guess is that the rules were always applied selectively, but as I say it was before my time.)

The ocean of untruth is what makes it impossible to change, too. You can appear wise and balanced, like Yglesias, by picking one or two things that your side is promoting and pointing out the weaknesses. But if you go through every single thing said, and rule out a third as simply false, and identify the misleading implications and exaggerations of the other two, you are massively harming your side, and your opponents will just pile in gleefully while repeating all their own lies and half-truths.

(Possibly Yglesias knows this, and that is why he is pretending to be naive. My interpretation is that he’s serious, though).

Boring Russell Brand Take

The man is a scumbag. He’s always been a scumbag, and it was never a secret that he was a scumbag.

The reports over the years from women who claim to have been mistreated by him are what you expect to hear about a scumbag. Some of them are reports of criminal behaviour, and probably few if any of them are practically prosecutable.

The women in question should have known better and probably should have been better advised. That doesn’t begin to justify any of his behaviour, whether criminal or merely scummy.

Not actively celebrating his scumminess would be helpful to women in the position that they were in. Scumbags like him should not be paid to advertise products or participate in elite media. It is disgraceful that he was widely promoted in the past, and a further disgrace that promotion is being withdrawn directly he becomes inconvenient to the establishment through whatever it is he’s been pissing them off with lately. I’m not even sure if it’s stuff I agree with or disagree with — why would I care what an idiot scumbag like him thinks?

On a generally open forum like X or YouTube, people shouldn’t be excluded from participation, including payments, just because someone thinks (correctly) that they’re a scumbag. But choosing to promote scumbags is scummy behaviour.

AI Doom Post

I’ve been meaning for a while to write in more detail why I’m not afraid of superintelligent AI.

The problem is, I don’t know. I kind of suspect I should be, but I’m not.

Of course, I’m on record as arguing that there is no such thing as superintelligence. I think I have some pretty good arguments for why that could be true, but I wouldn’t put it more strongly than that. I would need a lot more confidence for that to be a reason not to worry.

I think I need to disaggregate my foom-scepticism into two distinct but related propositions, both of which I consider likely to be true.

Strong Foom-Scepticism — the most intelligent humans are close to the maximum intelligence that can exist.

This is the “could really be true” one.

But there is also Weak Foom-Scepticism — intelligence at or above the observed human extreme is not useful; it becomes self-sabotaging and chaotic.

That is also something I claim in my prior writing. But I have considerably more confidence in it being true. I have trouble imagining a superintelligence that pursues some specific goal with determination. I find it more likely it will keep changing its mind, or play pointless games, or commit suicide.

I’ve explained why before: it’s not a mystery why the most intelligent humans tend to follow this sort of pattern. It’s because they can climb through meta levels of their own motivations. I don’t see any way that any sufficiently high intelligence can be prevented from doing this.

The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function

Joscha Bach (@Plinz), 18 Apr 2018

@Alrenous quoted this and said “… Humans can’t hack their reward function”

I replied “It’s pretty much all we do.” I stand by that: I think all of education, religion, “self-improvement”, and so on are best described as hacking our reward functions. I can hack my nutritional reward function by eating processed food, hack my reproductive reward function by using birth control, my social reward function by watching soap operas. Manipulating the outside universe is doing things the hard way, why would someone superintelligent bother with that shit?

(I think Iain M Banks’ “Subliming” civilisations are a recognition of that)

The recent spectacular LLM progress is very surprising, but it is very much in line with the way I imagined AI. I don’t often claim to have made interesting predictions, but I’m pretty proud of this from over a decade ago:

the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.
But the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

Speculations regarding limitations of Artificial Intelligence

I don’t think we’ve hit any limits yet. The current tech probably does what it does about as well as it possibly can, but there’s a lot of stuff it doesn’t do that it easily could do, and, I assume, soon will do.

It doesn’t seem to follow structured patterns of thought. When it comes up with an intriguingly wrong answer to a question, it is, as I wrote back then, behaving very like a human. But we have some tricks. It’s a simple thing, that GPT-4 could do today, to follow every answer with the answer to a new question: “what is the best argument that your previous answer is wrong”. Disciplined human thinkers do this as a matter of course.

Reevaluating the first answer in the light of the second is a little more difficult, but I would assume it is doable. This kind of disciplined reasoning is something that should be quite possible to integrate with the imaginative pattern-matching/pattern-formation of an LLM, and, on today’s tech, I could imagine getting it to a pretty solid human level.
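
To make that concrete, here is a minimal sketch of the answer / self-critique / re-evaluate loop, assuming an OpenAI-style chat completions endpoint; the model name, prompts, and function names are illustrative placeholders, not a tested recipe:

    // Sketch only: answer, ask for the best counter-argument, then revise.
    // Assumes an OpenAI-style chat completions API; model and prompts are
    // placeholders.
    type Msg = { role: "user" | "assistant"; content: string };

    async function ask(messages: Msg[]): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({ model: "gpt-4", messages }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    async function disciplinedAnswer(question: string): Promise<string> {
      const history: Msg[] = [{ role: "user", content: question }];

      // First pass: the ordinary answer.
      const first = await ask(history);
      history.push({ role: "assistant", content: first });

      // Second pass: the follow-up question suggested above.
      history.push({
        role: "user",
        content: "What is the best argument that your previous answer is wrong?",
      });
      const critique = await ask(history);
      history.push({ role: "assistant", content: critique });

      // Third pass: re-evaluate the first answer in the light of the second.
      history.push({
        role: "user",
        content: "Taking that argument into account, give a revised final answer.",
      });
      return ask(history);
    }

Nothing in that requires new AI science; it is just a wrapper around what the models already do.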

But that is quite different from a self-amplifying superintelligence. As I wrote back then, humans don’t generally stop thinking about serious problems because they don’t have time to think any more. They stop because they don’t think thinking more will help. Therefore being able to think faster – the most obvious way in which an AI might be considered a superintelligence – is hitting diminishing returns.

Similarly, we don’t stop adding more people to a committee because we don’t have enough people. We stop adding because we don’t think adding more will help. Therefore mass-producing AI also hits diminishing returns.

None of this means that AI isn’t dangerous. I do believe AI is dangerous, in many ways, starting with the mechanism that David Chapman identified in Better Without AI. Every new technology is dangerous. In particular, every new technology is a threat to the existing political order, as I wrote in 2011:

growth driven by technological change is potentially destabilising. The key is that it unpredictably makes different groups in society more and less powerful, so that any coalition is in danger of rival groups rapidly gaining enough power to overwhelm it.

Degenerate Formalism

Maybe an AI will get us all to kill each other for advertising clicks. Maybe an evil madman will use AI to become super-powerful and wipe us all out. Maybe we will all fall in love with our AI waifus and cease to reproduce the species. Maybe the US government will fear the power of Chinese AI so much that it starts a global nuclear war. All these are real dangers that I don’t have any trouble believing in. But they are all the normal kind of new-technology dangers. There are plenty of similar dangers that don’t involve AI.

Housekeeping

It is three years since I first discovered that Twitter was hiding tweets with links to my blog.

I’m pretty sure the root cause is the “.party” domain I used (because when I migrated from Blogger it was really cheap and I thought it kind of made sense for something political, though really I’m explicitly not party-political). Twitter seems to treat links to these little-used top level domains as probable spam.

There was an interesting incident at around the same time: links to the World Health Organisation on its “.int” domain got the same treatment. This was early in the pandemic.

Anyway, I had a workaround, which was to tweet the link, copy the “t.co” shortened form, and then delete the first tweet and tweet it again with the shortened form. I think that worked at first, but stopped working after a while.

Then I used a free subdomain, pointed it here and wrote a little static web page that could pull the path out of the URL, and generate a link to the right one. That was clunky as hell and fiddly to use.
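
The whole page amounted to something like this (a reconstructed sketch rather than the actual code; the free subdomain here is made up):

    // Reconstructed sketch of the redirect page. Served at, say,
    // https://anomaly.example.com/2011/03/some-post, it pulls the path
    // out of the URL and generates a link to the same post at the real
    // domain.
    const target = "https://www.anomalyuk.party" + window.location.pathname;
    const link = document.createElement("a");
    link.href = target;
    link.textContent = target;
    document.body.appendChild(link);

Being a static page it could only offer a link, not issue a proper HTTP redirect, which is presumably part of what made it so clunky.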

So I have finally given in and spent a few more quid on a boring .co.uk domain: https://www.anomalyblog.co.uk/ is the official address of this blog now.

All the old anomalyuk.party addresses still work, and I hope will do for a long time, unless it gets pointlessly expensive to keep renewing it. Most of the links on old posts here don’t work; it’s a great shame how history disappears. I was very impressed, looking at one 2011 post, that links to Robin Hanson’s “overcomingbias” still work, although he has migrated to Substack since then.

Inspired by that, I’ve done some coding to fix up the redirection from blogger, so that even the old anomalyuk.blogspot.com links now work properly, as they did when I first migrated but which I had let rot since then.

To complete the housekeeping, I’ve switched themes: as far as I can see, the default themes that come with WordPress are better for a simple, plain-text-centric blog than most alternatives, including the one I picked back in 2017. I’ve taken one of the old ones.

On the Culture War

In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”1 whipping up the culture war for ad clicks, and we need to somehow prevent this.

However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.

It isn’t necessary to be neutral towards culture war issues to be against the culture war. The key, if you are roused by some event linked to the culture war, is to think: “what can I practically do about this?”

Let’s say, for instance, that I wish that The Rings of Power had been written to be as nearly true to Tolkien’s vision as The Fellowship of the Ring, and not as an excuse for a load of left-wing propaganda.

What can I practically do about it?

Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and again I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2 because there isn’t really anything I watch on it any more.

I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.

What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.

An anonymous2 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.

The reason I say it might be counterproductive is that by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, In the case of the drag shows, the only credible motivation behind it that I can imagine is desire to upset the people who are upset by it.3 Yes, there are some perverts who want to actually do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.

Should we just let all this stuff happen then? Mostly, yes. The exception is, again, “what can I practically do about this”. If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: I consider local stories from far away as none of my business and refuse to consider them4. There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.

This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.

From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think that without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.

Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.

Better Without AI

How to avert an AI apocalypse… and create a future we would like

Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.

It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.

The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there were only one of them, it would be a big deal; but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.

He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).

All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.

He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you.”

Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.

And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.

The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

Anomaly UK: Defining the Facebook Era

this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

Anomaly UK: Epiphenomena

(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)

Putting these two things together, what you get is:

The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)

AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).

Better Without AI: At war with the machines

This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions 1, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.

As I tweeted in 2020, “If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time”, and as I blogged here back in 2006, “Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.”2

AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. Like criminals, the money they steal for themselves represents a tiny fraction of the damage they do.

Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.


Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else

We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.

My only issue there is that, as I said before, AI is only part of the problem. Since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more: they can evade whatever technical measure you try to define, and carry on doing the same thing. A ban is probably still an improvement, but it’s a half measure.

In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.

The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:

Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.

Better Without AI: Stop Obstructing Science

I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science by probably 99.9% – it’s tempting, but probably not an improvement in the short term.

Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.

The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).

Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.

you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
    not to your representations of it
your beneficent activity arises
    as spontaneous appreciative responsiveness

Better Without: This is About You

It would be nice to end on that note, but I have to shoehorn my own conclusion in:

I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.

This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.

This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.

Update: more on the culture war