I’ve been meaning for a while to write in more detail why I’m not afraid of superintelligent AI.
The problem is, I don’t know. I kind of suspect I should be, but I’m not.
Of course, I’m on record as arguing that there is no such thing as superintelligence. I think I have some pretty good arguments for why that could be true, but I wouldn’t put it more strongly than that. I would need a lot more confidence for that to be a reason not to worry.
I think I need to disaggregate my foom-scepticism into two distinct but related propositions, both of which I consider likely to be true.
Strong Foom-Scepticism — the most intelligent humans are close to the maximum intelligence that can exist.
This is the “could really be true” one.
But there is also Weak Foom-Scepticism — intelligence at or above the observed human extreme is not useful: it becomes self-sabotaging and chaotic.
That is also something I claim in my prior writing. But I have considerably more confidence in it being true. I have trouble imagining a superintelligence that pursues some specific goal with determination. I find it more likely it will keep changing its mind, or play pointless games, or commit suicide.
I’ve explained why before: it’s not a mystery why the most intelligent humans tend to follow this sort of pattern. It’s because they can climb through meta levels of their own motivations. I don’t see any way that any sufficiently high intelligence can be prevented from doing this.
The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function
Joscha Bach (@Plinz), 18 Apr 2018
@Alrenous quoted this and said “… Humans can’t hack their reward function”
I replied “It’s pretty much all we do.” I stand by that: I think all of education, religion, “self-improvement”, and so on are best described as hacking our reward functions. I can hack my nutritional reward function by eating processed food, hack my reproductive reward function by using birth control, my social reward function by watching soap operas. Manipulating the outside universe is doing things the hard way, why would someone superintelligent bother with that shit?
(I think Iain M Banks’ “Subliming” civilisations are a recognition of that)
The recent spectacular LLM progress is very surprising, but it is very much in line with the way I imagined AI. I don’t often claim to have made interesting predictions, but I’m pretty proud of this from over a decade ago:
the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability. But the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.
Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.
I don’t think we’ve hit any limits yet. The current tech probably does what it does about as well as it possibly can, but there’s a lot of stuff it doesn’t do that it easily could do, and, I assume, soon will do.
It doesn’t seem to follow structured patterns of thought. When it comes up with an intriguingly wrong answer to a question, it is, as I wrote back then, behaving very like a human. But we have some tricks. It would be a simple thing, which GPT-4 could do today, to follow every answer with the answer to a new question: “what is the best argument that your previous answer is wrong?” Disciplined human thinkers do this as a matter of course.
Reevaluating the first answer in the light of the second is a little more difficult, but I would assume it is doable. This kind of disciplined reasoning is something that should be quite possible to integrate with the imaginative pattern-matching/pattern-formation of an LLM, and, on today’s tech, I could imagine getting it to a pretty solid human level.
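The answer-critique-revise discipline described above can be sketched as a thin wrapper around any chat model. This is only an illustration of the control flow, not any real product's API: the `ask` callable here is a hypothetical stand-in for whatever LLM call you would actually make.

```python
def self_critique(question, ask):
    """Disciplined reasoning loop: answer, then critique, then revise.

    `ask` is any callable mapping a prompt string to a response string.
    It is a placeholder for a real LLM call, not a real API.
    """
    # Step 1: the model's first, imaginative answer.
    answer = ask(question)

    # Step 2: ask for the best argument that the answer is wrong.
    critique = ask(
        f"Question: {question}\nAnswer: {answer}\n"
        "What is the best argument that this answer is wrong?"
    )

    # Step 3: re-evaluate the first answer in the light of the second.
    revised = ask(
        f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
        "Re-evaluate the original answer in light of the critique "
        "and give a final answer."
    )
    return revised


# A trivial stub model, just to demonstrate the three-call structure:
def stub_model(prompt):
    return f"[response to {len(prompt)} chars of prompt]"

print(self_critique("Is this claim well supported?", stub_model))
```

The point is that the loop itself is mechanical; all the intelligence stays inside `ask`, which is why this seems like a cheap thing to bolt onto existing models.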
But that is quite different from a self-amplifying superintelligence. As I wrote back then, humans don’t generally stop thinking about serious problems because they don’t have time to think any more. They stop because they don’t think thinking more will help. Therefore being able to think faster – the most obvious way in which an AI might be considered a superintelligence – hits diminishing returns.
Similarly, we don’t stop adding more people to a committee because we don’t have enough people. We stop adding because we don’t think adding more will help. Therefore mass-producing AI also hits diminishing returns.
None of this means that AI isn’t dangerous. I do believe AI is dangerous, in many ways, starting with the mechanism that David Chapman identified in Better Without AI. Every new technology is dangerous. In particular, every new technology is a threat to the existing political order, as I wrote in 2011:
growth driven by technological change is potentially destabilising. The key is that it unpredictably makes different groups in society more and less powerful, so that any coalition is in danger of rival groups rapidly gaining enough power to overwhelm it.
Maybe an AI will get us all to kill each other for advertising clicks. Maybe an evil madman will use AI to become super-powerful and wipe us all out. Maybe we will all fall in love with our AI waifus and cease to reproduce the species. Maybe the US government will fear the power of Chinese AI so much that it starts a global nuclear war. All these are real dangers that I don’t have any trouble believing in. But they are all the normal kind of new-technology dangers. There are plenty of similar dangers that don’t involve AI.
In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”1 whipping up the culture war for ad clicks, and we need to somehow prevent this.
However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.
It isn’t necessary to be neutral towards culture war issues to be against the culture war. The key, if you are roused by some event linked to the culture war, is to think, “what can I practically do about this?”
Let’s say, for instance, that I wish that The Rings of Power had been written to be as nearly true to Tolkien’s vision as The Fellowship of the Ring, and not an excuse for a load of left-wing propaganda.
What can I practically do about it?
Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and again I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2 because there isn’t really anything I watch on it any more.
I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.
What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.
An anonymous2 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.
The reason I say it might be counterproductive is that, judging by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, in the case of the drag shows, the only credible motivation behind it that I can imagine is a desire to upset the people who are upset by it.3 Yes, there are some perverts who want to actually do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.
Should we just let all this stuff happen then? Mostly, yes. The exception is, again, “what can I practically do about this”. If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: I consider local stories from far away as none of my business and refuse to consider them4. There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.
This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.
From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.
Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.
Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.
It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.
The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there were only one of them, it would be a big deal, but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.
He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).
All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.
He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you”
Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.
And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.
The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.
Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.
this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.
(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)
Putting these two things together, what you get is:
The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)
AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).
This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions 1, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.
As I tweeted in 2020, If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time. And as I blogged here back in 2006, Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.2
AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. Like criminals, the money they steal for themselves represents a tiny fraction of the damage they do.
Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.
We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.
My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can circumvent whatever technical measure you try to define, and carry on doing the same thing. To be clear, a ban is probably still an improvement, but it’s a half measure.
In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.
The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:
Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.
I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting but probably not an improvement in the short term.
Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.
The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).
Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.
you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
not to your representations of it
your beneficent activity arises as spontaneous appreciative responsiveness
It would be nice to end on that note, but I have to shoehorn my own conclusion in:
I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.
This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.
This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.
I was seriously wrong. I had underestimated the decline of conspiracy, the degree to which it is impossible in the modern age to sustain insincerity1.
I also ignored everything I knew about the Cathedral. The media is part of the ruling structure; if the media believes something, then by definition the ruling structure believes it.
My mental model, at the time, was that the media promoted the climate scare because it was good TV. The politicians went along with it because it was good politics. But at the end of the day, real action on the climate would be superficial, fake, or indefinitely postponed to the future, because the sensible people behind the scenes would never actually cripple our entire civilisation over something so silly.
What an idiot.
In reality the climate scare was and is primarily a political phenomenon — one of the non-partisan runaway manias I discussed recently, under the title Loyalists without a cause. As I tweeted, “Since the end of the cold war, the most damaging movements have been non-partisan: environmentalism, social justice, global democracy.”
In the modern system, where nobody is responsible for results, and everyone is responsible for tomorrow’s papers, it is just very much easier to support something that makes you seem selfless or kind than to oppose it. If it is actually a live partisan issue, then you can and should take your side, in order to appeal to your party, but only a few things can be live partisan issues at once. Those are the important issues, and if you weaken your position by taking an unattractive position on an unimportant non-partisan issue, you risk concrete losses on the important partisan issues. (You also risk your own personal advancement.)
I did touch on this, back in 2010 — the left-wing commentator Johann Hari claimed that 91% of Conservative MPs “don’t believe man-made global warming exists.” And yet, I emphasised, they ran on a manifesto commitment to reduce greenhouse gas emissions.
In late 2018, I pointed out that “It is a feature of any large movement that pretending to believe something is effectively the same as believing it.” If Tory MPs in 2010 did not believe that man-made global warming existed, that made no difference. They effectively did believe it. There were no sensible people behind the scenes, keeping the power stations open.
There’s also a generational effect. The 2010 parliamentary conservative party might have been pretending, but newcomers coming in weren’t in on the joke.
There’s also no absolute limit on how far things can go, as Sri Lanka is in the process of demonstrating. There is no fuel on the island, no money to buy any because the export industries have been crippled, and the mob yesterday stormed the presidential palace. Because of environmentalism.
At the same time, it isn’t actually inevitable. To take one of my favourite themes, the unthinkable can become thinkable very fast. This could happen tomorrow.
The easy way to save civilisation, without looking an idiot on climate change, is just to not talk about it. It all got going because the media would happily report the conflict between “nice” pro-environment politicians and “nasty” anti-environment politicians, and nobody wanted to appear nasty. If the left-wing media see that banging on about climate change is bad for their politicians, they will keep their mouths shut. The population will forget all about it in a matter of weeks. If it stays a non-partisan issue, then politicians will as always take whatever side of the story gives them better press.
Over a longer timescale, when the fanatics counterattack, then an actual counter-narrative will gradually be built. The dangers were over-hyped. Adaptation is feasible. Warm weather is actually good. Those of us who have been saying all of this for decades will be completely ignored, but our talking points, suitably laundered, will be everywhere. As I said before, decades from now the question will be recorded in history as a media fad that got out of hand.
A bunch of scientists will have funding dry up. But this was never really about science. The whole climate scare is fundamentally political, not scientific. Because of that, if the politics change everything else will just topple. In the early years of this blog, I wrote very frequently about the science, or lack thereof, of global warming. There is a small amount of very bad science making the case for a catastrophe. There is a truly vast amount of science explicitly taking that as a given, and wrapped in verbiage that seems to support it, but not itself adding any evidence. There are a lot of papers whose conclusions are phrased to give support to the dominant political narrative, but whose concrete findings are wholly compatible with “negligible effect”. Change the political incentives, and all these papers can be repeated, with identical results and “nothing need be done” abstracts. Again, history will not describe this as a scientific story.
The active propagandists of global warming always knew that this could happen. You can see that very clearly in the climategate emails that leaked in 2009 — they were desperate to keep control of the media narrative, even though to casual observers it looked like their opponents were very few and weak.
I’m not actually particularly confident that it is going to break like that now. Sri Lanka shows that it is not inevitable. But it could happen.
For most of the history of this blog, I didn’t drive a car. I studied or worked in London for twenty-five years, and London has very comprehensive public transport and not much parking space. I also love walking.
So, when it comes to transport, I am by no means a car fanatic. It’s true that I wrote in 2013 that the advantages of rail travel would come to an end, but that is based on future technological changes that have yet to occur. For the time being, there is still much to be said for rail, and other alternatives to cars have a longer future. So to those who see car usage as a problem to be reduced, I am not really hostile.
The thing is that arguments about cars as transport are only addressing part of the story. There is another significant aspect to cars in our society today — a practical aspect, not any psychological mumbo-jumbo about fetishism.
Particularly in libertarian circles, there is an idea that there could be “Virtual Nations”. Instead of belonging to a country filled with the horrible people who just happen to live near you, you can form a virtual nation along with people like you. You spend all day on the internet anyway, so these people are your real neighbours. You can pay taxes to your virtual nation, vote for its government, invest in online common infrastructure, and make up a really cool flag. It’s been a while since I came across any of these manifestos, but these days blockchains would definitely be involved.
Obviously this is really stupid1 even without blockchains. As Russia has just reminded us, nation states are fundamentally about force. If you don’t have a border you can defend, you ain’t a country. Your relationship with your horrible neighbours is the problem, and a nation-state is the solution. Additional features of nation-states, such as flags, football teams and welfare states, are secondary.
Your country is tied to your geography. It is, however, possible to make a mini-country within a country. Devolution, federalism, and subsidiarity are formal mechanisms, but there is an informal kind of partial secession that goes down to the level of gated communities, office parks, and so on. These are not quite virtual nations, but, being based on physical separation, they are something real.
In the last few decades in our societies it has become something highly prized by the rich. It is a definite social shift, triggered by the rhetoric of equality and enabled by technology, that the rich have much less contact with the rest of the population than ever before. The rich no longer have servants in significant numbers, whereas as I’ve mentioned before, it used to be that 25% of the population worked as servants. Where the rich still rely on service work by humans, huge effort has gone into depersonalising the relationship. This allows us to pretend that we are all equal, that we all do our different jobs. I might be working for you right now, but then you might be working for me later on – there is no relationship of superior to inferior. There is some truth to this, but only some. There are plenty of people who have the practical status of masters over people with the practical status of servants, but they are all theoretically equal and we maintain that illusion by minimising any personal contact that would either dispel it or break the economic relationship.
More importantly, we now live in a society of pervasive violent crime. I have written much about this over the years, because it is controversial, but I think it is possibly the most important single fact about the modern world. My summary is here, and this whole piece is a restatement and elaboration of that one. There are vastly more people in our societies today whose behaviour is dangerously criminal than there were when our civilisation was at its peak, which I would put very vaguely as 1800-1939. To the extent that this isn’t overwhelmingly obvious through crime statistics, it is because of the phenomenon I describe here — people are protecting themselves from crime by physically separating themselves from the criminals.
The polity of drivers
And this is why discussing car usage solely in terms of transport is so pointless. Virtual Nations are in general stupid, but “people with cars” actually do effectively make a virtual nation. To be a citizen of Great Britain you don’t need much paperwork, but to be a citizen of the nation of car drivers you have to register yourself with the bureaucracy and keep your information with them up to date. Because you own an expensive piece of equipment that the state knows all about, you have something that they can easily take from you as a punishment. In fact, they can take it even without going through the endless palaver of a court case. For the last few years, you have even been required to constantly display your identification, which can be recognised and logged by cameras and computers, so that for much of the time the state knows exactly where you are.
I used to find this outrageous, and it is still not my preferred way for a government to govern a country effectively. But it is a way to govern a country, and, unlike Great Britain, the country of British car-drivers is actually governed.
But what about the objection to virtual nations? The virtual nation of car-drivers is not a true province, like Wales or Texas, but it is physically separated from the rest of the nation. That is the point of suburbia, of the windy housing estates full of dead ends, with no amenities and no through roads. If you drive a car, you can quite easily have a home that is not accessible to anyone without a car. When you do have to venture among the savages, you do so in a metal box with a lockable door.
The above image is taken from a 2020 twitter thread by @JonnyAnstead. It is an excellently written thread, and makes perfect sense if you ignore the question of crime. In the absence of that key item, he is left to think that all these car-centric features are either a mistake, or some weird conspiracy of car manufacturers or road builders. In reality there is massive demand for housing in this form, because it permits the buyers to immigrate into the virtual nation of car drivers. As I tweeted at the time, “The cars vs people question is just another aspect of the central issue: the biggest value of a car is that it enables you to stay away from the people who don’t have cars.”
The alternative to cars
There are reasonable alternatives to cars for transport (in a lot of cases, anyway), but we need an alternative to cars as a safe virtual nation to live in.
If you want a society that is not centred on the car, for everyone who can afford one, then put the criminals in prison. That’s it, end of tweet.
OK, this isn’t a tweet I suppose. How exactly to put the criminals in prison is a somewhat bigger question, but it has to be done. I have written about it many times, but, aside from the post linked already, there is this one, where I mention how it should be, and this one, where I describe how it is today. The police and court system is just too inefficient to function. Issues like antiracism, sentimentality, and checklist culture have all had their impact, but I don’t think there is any one cause. It has just got steadily less efficient because it was allowed to, and it probably has to be scrapped and rebuilt from scratch. “Tough on crime” politics is totally useless, because no politician inside the system can actually admit how bad things are, so they always rely on showy but incremental items that have negligible practical effect.
Update: Did a little editing that I should have done before posting. Also, the discussions of town planning that this post arose from were referring to Britain; I didn’t generalise to the US. But Candide tweeted: Uhm. What do you think white flight was if not mass emigration to the nation of America-with-cars? — which seems pretty persuasive to me.
This is another of these posts written to be a reference point for something that’s been talked about quite a bit.
There was once this political philosophy called Liberalism. It was based on the idea that a person shouldn’t be under the authority of another more than was absolutely necessary.
(For the purposes of this post, I am referring to advocates of this philosophy as liberals — do not confuse that with later users of the same label.)
Codified by twentieth-century autists, this became the Non-Aggression Principle — that the only justifiable reason to interfere with anyone else’s actions is because those actions harm someone else.
In its less rigid form, from the 18th century on, the idea that individuals should have wide latitude over their own behaviour, subject to protection of other people, and also subject to various unprincipled exceptions that I’ll get to in a moment, was the foundation of the modern world. Industry and science flourished in conditions of freedom.
Successful and beneficial as liberalism was, it was never entirely logically coherent. First, there were many restrictions on freedom that didn’t have to be justified because they were too obvious to question. Most early liberals were Christian. Even those that weren’t had all been raised in Christian society, and absorbed some degree of Christian morality, often weakened but still present. The few who managed to overcome any trace were far from the mainstream. (Thomas Paine comes to mind).
Second, not every form of liberalism respected private property, but all the ones that worked did. There are theoretical arguments for why liberalism necessarily implies private property, but as I wrote once before, they aren’t very convincing.
Third, and most crucially, the limits of what constitutes harm from one person’s actions on another person are entirely arbitrary. Every action has an expanding and diminishing wave of effects. Every fire has smoke, every building has a shadow, every animal produces waste. Harms such as slander or distress can be caused simply by speaking, even by speaking the truth.
(In the twitter thread that this started from, I linked this excellent piece by Ed West, on just how much the outcomes of people’s lives depend on the behaviour of their neighbours.)
Liberalism worked because there was a fairly common understanding, inherited from former, much less liberal societies, of which harms were “de minimis” and which were not. This common understanding wasn’t rational, it was only traditional. Now that those traditions have been lost, there is no way to get them back.
The chief harm that is recognised today, that makes liberalism a dead letter, is not a new one. It is the one that opponents of liberalism always advanced as its chief cost, and which has a history going all the way back to the trial of Socrates.
In the twenty-first century, any public action at all can be seen by one group or another as corrupting the minds of the youth.
So be it. I quite like the results of old-school liberalism, but as a philosophy it is bunk. Since everyone now acts in accordance with the idea that the minds of the youth should be protected from corruption, it is defeatist to be half-hearted about it. Twitter today is full of two controversies: a mild joke about women, and drag shows for children. If the war is to be fought over what corrupts the minds of the youth more, let battle begin.
Some new theoretical justification for freedom would be nice, but it can wait until the cult of universal queerdom has been, if not defeated, then at least fought to a truce to the extent of being one religion among many, not the compulsory True Faith.
First, what is considered normal comes from subcultures. People get their ideas of what is normal from the people they interact with regularly. Different subcultures can exist in close physical proximity – for example, different social classes traditionally had very different views of what was normal behaviour.
Speculation: are people today more ignorant or dismissive of other subcultures? I observed previously, for example, that the rich used to have more personal contact with the poor – they had servants, tenants, etc. that they knew as actual people, though not the same sort of person. Today technology makes it easier for the rich to avoid dealing with people from other classes, and an ideology of equality makes it embarrassing to do so, since you are supposed to believe that they are of the same culture as you, even though they blatantly aren’t.
Social class is just one example; another is the obvious difference in way of life between urban, suburban and rural environments. Young people in cities can meet each other in the evenings easily – young people in suburbia are more isolated from each other.
Really important point: people’s behaviour is much more constrained by what they consider normal, from their subculture, than by what they believe to be true intellectually.
Next consequence of this: crime and order. If, by effective enforcement, you make law-abiding behaviour normal among most subcultures, you will not have much crime. This is really the only way to not have much crime.
A society where it is not normal to commit crimes can do all sorts of things that are otherwise impossible. This goes back to a post I made way back in 2005. The biggest cost of crime is the forgone opportunity – all the things we could do, but don’t because we would run too much risk of crime. As I mentioned on twitter this week, the concept of a supermarket — goods displayed in the open for customers to pick for themselves and bring to a checkout — depends on an assumption that people just walking out with the stuff will be rare enough that you can handle it. (That assumption is apparently starting to fail in some areas now, such as parts of San Francisco). In Britain in the 19th and 20th centuries, rarity of crime was one of the basic presumptions that people didn’t have to think about.
Aside: Not only that, but, in accordance with my original point, what crime there was was largely in certain subcultures — the immigrant “rookeries” of London’s East End, for example. Away from those subcultures, it was rarer than average statistics suggest. Even today, much of the civilised world still lives in an environment of very low crime. (That’s a point Steve Sailer makes from time to time).
This basic presumption obviously gets taken for granted. That’s the root of my divergence from libertarianism — given the presumption of an ordered society, it is fine. However, that ordered society needs to be actively preserved.
When I made the point about supermarkets on Twitter, obviously there was a lot of feedback to the effect that, as in the Dickensian rookeries, it is in minority subcultures that the law-abiding norms are not present. Even accepting that, though, it is possible for effective law enforcement to change what is normal in those subcultures. Obviously the story in San Francisco is that the abdication of law enforcement is the immediate trigger. (I say “story” deliberately — I’m always cautious about pretending to understand what is going on so far away, and the reality may be a lot more complex than what I can see. However, I will stand by the logic of what I am saying here while being open to more information on the detail).
I want to think a little about the psychological happenings in the West in response to the war in Ukraine. I’m not really concerned with the war itself. Doing a few searches here, I seem to have done a mostly OK-ish job over the years of avoiding falling into Putin fanboyism — better than I managed with Trump, for instance. I do not accept that the invasion of Ukraine is unprovoked, but I am not going to bother trying to claim that it is justified. Actual geopolitical conflict is above my pay grade. I also note that if I were in Russia I would be (about equally) ill-advised to argue for the foreign side. I think it would be traitorous of me to take up the Russian side of the argument here, and insincere to resort to mostly-forgotten slogans like “liberalism” or “freedom” that I don’t believe in. I don’t think I can quite bring myself to wave a Ukrainian flag, but maybe for the purposes of what I actually want to discuss, we can assume that I am doing so.
What is bothering me is that when Russia controlled not only Ukraine, but Romania, Czechoslovakia, Bulgaria, Hungary, Poland, and half of Germany, our orchestras still played Tchaikovsky. We still played chess tournaments against Russians.
I always thought of Jingoism (a term, of course, originally referring to the propaganda of a war fought against Russia in Ukraine) as being an extremist thing. There is a pro-war party, quite likely nationalist in outlook, which seeks to hype up hostility against a foreign nation and to encourage war against it. The strongest supporters of this position produce the most hate-filled and inflammatory propaganda.
Over this last week I have found myself frequently doing a double-take on Twitter, as I see the most outrageously jingoistic statements (“Putin is insane!”, “Firing artillery at cities is a war crime!”) coming from what I think of as moderate, centrist accounts.
Maybe this is something new in the world. But I suspect that it is not, and that my former assumption was just wrong. Historical episodes of Jingoism, especially WWI, look very much like this.1
The real horror of the current frenzy is its non-partisan-ness. There’s this idea that partisan politics is harmful because it makes people approach questions with the attitude of “which answer helps my side” rather than treating them all on their merits.
Like so much social pseudoscience, that is built on the hidden assumption that people are perfectly rational unless affected by Phenomenon X, and we can assess the effects of Phenomenon X by examining how the behaviour of those affected by it differs from perfect rationality. (see also: religion).
I’m starting to think that political partisanship is a protection. A political partisan will approach a political problem with the attitude, “does this help my side or the other side”. An enlightened person free of this handicap, it seems, will approach a political problem with the attitude “OH MY GOD THIS IS THE MOST IMPORTANT THING EVER WE MUST ALL DO ABSOLUTELY EVERYTHING WE CAN ABOUT IT IMMEDIATELY NOTHING ELSE COMPARES TO THIS”. Say what you will about partisanship, at least it’s a context.
How much of the insanity of the 21st Century is due to this? I wrote before that it is the decline of conspiracy which allows sentiment to overwhelm strategy. This is another angle on that. It seems a bit questionable to describe the fanaticism of today’s social revolution as “non-partisan”, but I think at least in the psychological sense it is. To the extent that there is political opposition to it, it is a largely fictional caricature.
I’m getting deep into the weeds here, but it’s relevant and important: the modern right, it is clear, is not in any sense a conservative tradition. It is a combination of LARPers trying to recreate one, and left radicals with cold feet. It is not independent of the non-partisan mainstream, it is that mainstream’s own fantasy of its enemy, made more or less real, as Satanists are to Christianity. (Moldbug has said things like this many times). Also, from the point of view of the non-partisan mainstream, the opposition is remote. Day to day, members do not encounter opponents and have to think about how to defeat them, but every day they encounter rivals and have to compete with them. This is the whole “virtue-signalling” scenario – Mao’s mangoes and all that.
I don’t know much about Ukraine. One thing I do know is that there was a revolution there in 2014 that was supported by the US, which overthrew a government that was friendly with Russia and replaced it with one that was unfriendly to Russia.
(There’s a whole lot I suspect or think I know about that revolution, and the relationship of the new government to the USA and particular figures within the USA, that isn’t necessarily reliable.)
It could be worse, however. One of the points that Kedourie cleared up for me was the origin of the Armenian genocide. Did you ever wonder why, exactly, the Young Turks decided to murder their Armenians? Did you think it was just because they were evil, or because they were Turkish, or because they didn’t have an electoral college and a bicameral legislature?
Well, all three of these things may be true. But until I read Kedourie, I had only heard two sides of the story—the Turkish side, which is that it didn’t happen, and the Armenian side, which is that it did. History, unfortunately, often comes with far more than two sides:
No means but insurrection: this was clear and it was meant seriously. The leaders of the Armenian nationalist movement had already decided that autonomy was their goal and they thought they had a strategy to achieve it. And these leaders took care that Armenians would not be found to help with the reforms. For it was not in vain that they surveyed the history of Europe from the French Revolution, and not in vain that they meditated on the liberation of Greece, Serbia, Rumania and Bulgaria from the Ottoman yoke. They would make insurrection and they would bring the Armenian Question ‘to the front’. Then the Powers would have to deal with it, and if they failed to deal with it according to the desires of the nationalists, why, there were always other means of keeping the Armenian Question ‘to the front’.
The aim of nationalists is clear. It was to create ‘incidents’, provoke the Turks to excesses, and thereby bring about the intervention of the Powers. The British Blue Books of the period before the massacres are full of reports of attacks by Armenian agents or bands on Turks and Kurds, of the distribution of seditious prints, of the discoveries by Ottoman authorities of caches of bombs and arms, of demonstrations organized by Armenians in Constantinople and the provinces. In most cases, the incidents would have no immediate far-reaching consequences, but some of them, either owing to circumstances or to the ill-will of Ottoman officials, led to serious results. In Sasun in 1894, in Zeitun in 1895, the incidents led to armed risings by the Armenians of these localities which were, of course, bloodily suppressed. An outcry was the result, consular commissions were appointed to investigate, and the Armenian leaders had the consolation of knowing that another blow had been struck in the cause of Armenian independence.
The Blue Books also record another class of incident, quite as large as the first, created by the nationalists, but this much more sinister. It seems that the nationalists had to convince not only the Ottoman government and the Powers of the wisdom of satisfying their desires, they had to convince the generality of the Armenian people as well. This must be the explanation of the attack organized by them on the patriarch as he was officiating in the cathedral of Koum Kapou at Constantinople in July 1890, as a result of which he had to resign his office; of a subsequent attempt to assassinate another patriarch in 1894; of the recurrent reports of Armenians executed for being ‘informers’, for refusing to contribute to nationalist funds, for ‘collaborating’ with the Ottoman government. Nor did the nationalists try to hide or excuse these activities. Here is a passage from a revolutionary placard posted in Sivas in December 1893:
Osmanlis!… The examples are before your eyes. How many hundreds of rascals in Constantinople, Van, Erzerum, Alashkert, Harpout, Cesarea, Marsovan, Amassia and other towns have been killed by the Armenian revolutionaries? What were these rascals? Armenians! And again Armenians! If our aim was against the Mohamedans or Mohamedanism, as the government tries to make you think, why should we kill the Armenians?
The Armenians were forced to be free.
What did the Ottoman government have to say to all this? Its attitude was as clear as that of the nationalists: this agitation would have only one result, to invite Europe to meddle again in the affairs of the Ottoman empire. This was not to be tolerated; the Armenians had to desist or they would take the consequences.
And the incidents continued to be organized. In 1897, just after the massacres of 1895–6, and in 1905, there are records of minor insurrections also leading to massacres. And on the eve of the Young Turk coup d’etat of 1908, there was still the same tension in Ottoman Armenia fed and tended by the revolutionaries. Thus the American ambassador in a dispatch of 5 August 1907 speaks of ‘a considerable degree of disaffection and revolutionary movement on the part of a portion of the Armenian population in the district of Van. Several cold-blooded murders have been committed even in the streets of that city and a certain feeling of apprehension and unrest appears generally to prevail’; and in another dispatch he reports several more disturbances in Van, revolutionaries killing and wounding seventeen Ottoman soldiers, executing a ‘traitor’, and a considerable store of rifles, cartridges and dynamite seized. Later, when the catastrophe was final, complete, irredeemable, the nationalists were still indignant that their methods had had such untoward consequences. They could not understand why salvation was so recalcitrant in coming, why the easy path which the examples of so many European revolutions had promised should have proved so full of vipers and of nettles. The desolate wind of futility blows through the report the Dashnaks presented to the International Socialist Congress in Hamburg in 1923.
Every time that, through the irresistible force of things, the movement of Armenian emancipation expressed itself in revolutionary action, every time that the party of the Armenian Risorgimento tried, at the head of the conscious elements of the country, to draw the attention of the world, by armed insurrections or peaceful demonstrations, to the intolerable fate of the Armenian people, the Turkish government threw the Armenian masses, peaceful and disarmed, to the mercy of its troops, its bachi-bazouks and of the Turkish and Kurdish mob.
I’ve been thinking a lot about a conversation I had on twitter with a private mutual.
We were looking at the question of reliable sources of information, and I brought up Patrick McKenzie’s tweets emphasising that there are independent commenters (Andy Ngo, Scott Alexander etc.) who are by now just predictably much more reliable than credentialed experts.
“It is February 2020. You can choose one and only one of a) the top-voted lesswrong coronavirus explainer and b) the entirety of the public health field to bet on. Bet will be called in December 2021.”
The response was that many previously good sources have been spectacularly full of shit over the pandemic.
“I’ve watched smart people whose opinions I’ve found worthwhile on a broad array of political topics become completely consumed by disinfo on this pandemic. They’re the new Russia hoaxers. Intelligence is not immunity.”
My response: “People who specifically do politics are rarely reliable on reality. I’ve seen a lot of cool fun people get seriously deranged (in different directions). People who seemed grounded and reliable before, still seem grounded and reliable. More so than MSM or officially credentialed.”
And that’s basically my position. I follow a bunch of cool fun political commentators, who write well and have insights, but I am not shocked that a number of them have been completely wrong about a factual issue. The people who are reliable are generally very careful to avoid being explicitly political.
That’s not the interesting bit here. What I have been dwelling on is that I am one of the political ones, not one of the reliable ones.
As an example, take this tweet of mine.
This, by my standards, is a pretty good tweet. It is a fact that Britain has had to go to unusual lengths to keep the electricity running lately, and has spent billions bringing on extra power at unprecedented market prices. It is also not something that has much mainstream attention.
When I say we are not going to have reliable electricity for the next twenty years — well, that might be true. But I don’t really know. I’ve jumped a bunch of steps of reasoning for the sake of a “Take”. I’m against shifting away from gas and coal and building tons of wind power because I think they’re wasteful, and I’ve exaggerated my confidence to defend my position.
Maybe paying these very expensive spot prices and balancing mechanism charges now and again is actually completely manageable, and we can go on like this another twenty years. I haven’t even really dug into it to that level of detail, and my sources of news on the issue are people who are even more partisan on the question than me.
I still think I’m right. But that’s partly guesswork, and stating a bald prediction as I did means I can’t be one of the “reliable sources” I was discussing above.
I have to stop doing this shit. Why do I even do it? For a “good take” that gets 2000 impressions and 200 engagements on Twitter? That’s not my job. To influence the government, move them democratically to better policies? I don’t believe in that stuff at all. I’m just playing at being a pundit.