In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”1 whipping up the culture war for ad clicks, and we need to somehow prevent this.
However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.
It isn’t necessary to be neutral on culture-war issues to be against the culture war. The key, if you are roused by some event linked to the culture war, is to ask, “what can I practically do about this?”
Let’s say, for instance, that I wish that The Rings of Power had been written to be as nearly true to Tolkien’s vision as The Fellowship of the Ring, and not as an excuse for a load of left-wing propaganda.
What can I practically do about it?
Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and again I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2 because there isn’t really anything I watch on it any more.
I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.
What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.
An anonymous2 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.
The reason I say it might be counterproductive is that, judging by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, In the case of the drag shows, the only credible motivation behind it that I can imagine is desire to upset the people who are upset by it.3 Yes, there are some perverts who want to actually do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.
Should we just let all this stuff happen then? Mostly, yes. The exception is, again, “what can I practically do about this”. If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: I consider local stories from far away as none of my business and refuse to consider them4. There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.
This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.
From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.
Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.
Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.
It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.
The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there was only one of them, it would be a big deal, but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.
He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).
All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.
He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you”.
Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.
And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.
The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.
Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.
this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.
(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)
Putting these two things together, what you get is:
The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)
AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).
This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions1, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.
As I tweeted in 2020, If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time. And as I blogged here back in 2006, Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.2
AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. Like criminals, the money they steal for themselves represents a tiny fraction of the damage they do.
Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.
We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.
My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can sidestep whatever technical measure you try to define and carry on doing the same thing. To be clear, that is probably still an improvement, but it’s a half measure.
In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.
The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:
Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.
I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting but probably not an improvement in the short term.
Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.
The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).
Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.
you do not have an “objective function” you do not have any “terminal goal” your activity is not the result of “planning” or “deciding” you do not have any “ethics” these are all malign rationalist myths they make you miserable when you take them seriously you are reflexively accountable to reality not to your representations of it your beneficent activity arises as spontaneous appreciative responsiveness
It would be nice to end on that note, but I have to shoehorn my own conclusion in:
I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.
This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.
This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well-known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.
The tone of the articles is that journalism is moving to paywalls so the poor underprivileged folks will be denied all this valuable journalism, and suffer as a result.
If the mass population were to be denied access to journalism, that would be about the best thing that could possibly happen, but of course it is not conceivable. They will continue to get what they want to consume; the stuff that is moving behind paywalls is the niche stuff that the profitable mass media no longer sees a reason to subsidise.
Nevertheless, that is significant and could have large effects in the long run. I wrote about some of the issues a decade ago, when I reviewed “Flat Earth News”.
Mass-market news is primarily entertainment. Most people watch news to engage their minds and have something to talk about, not because they actually benefit from the information. (see also: Politics as Entertainment).
There is a long tradition, though never dominant and much reduced in recent decades, of including true information in news media. This was a product of paternalism, idealism, and the fact that actual news was kicking around anyway and was easy to throw in.
There has always been a minority of news consumers who actually need true information from the news for practical reasons. They used to be served by the same media industry as the mass market. (Not necessarily the same publications, but the same organisations and meta-organisations of media).
When the same industry produced facts for the minority and entertainment for the majority, that made it cheap to include facts in entertainment. If it bifurcates, the infotainment side will no longer have access to or focus on true information.
It is not clear that “premium news” of the type described in the Axios piece is the factual news I am discussing, as opposed to just being a market segment of infotainment. It might be, but “business intelligence” services are more obvious candidates.
The “factual news consumers” I am thinking of are primarily business and government. If you want to know what is really going on in the world today, in order to make business decisions, do you read a daily newspaper or watch TV news? I don’t think so — you read specialised industry analyses.
The story is interesting in its own right. YouTube observes responses from users, both to videos being listed on their screens and to actually watching the videos, runs some Machine Learning models1 over that feedback information, and selects what to list to them next to keep them watching and engaging. (This is widely understood.)
(In a tiny, tiny fraction of high-profile cases, it then applies human moderation to advance the company’s interests, its political and social biases, and so on. That’s not what I’m writing about today.)
As is known, this feedback loop can lead people in some highly unexpected directions. Recreational lock-picking, really? There are also some less mysterious tendencies — any activity is more watchable if it’s being done by attractive young women. But the particular instance Pargin finds — of an innocuous third-world fishing video getting ten times the views if it mildly hints at a tiny bit of indecency that isn’t even really there — would have been very difficult to predict. Note that it’s not as simple as “ten times as many people want to see the videos with the not-quite-upskirt thumbnail”. Because of the feedback, more people get the suggestion to watch that video, and many of them might have equally watched the other ones too, but didn’t get the opportunity. The behaviour of a smaller number of unambitious creeps is driving the behaviour of a (probably) larger number of ordinary viewers.
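The amplification dynamic is easy to sketch. Here is a toy simulation (in Python, with entirely invented numbers, nothing to do with YouTube’s actual system): two identical videos, one with a suggestive thumbnail; a small minority clicks only that one, everyone else clicks whatever they are shown at the same rate, and the “recommender” shows each video in proportion to its accumulated clicks.

```python
# Toy model of a click-feedback recommender. All parameters are invented
# for illustration; this is not a model of any real recommendation system.
creep_share = 0.05         # small minority who seek out the suggestive thumbnail
ordinary_click_rate = 0.5  # ordinary viewers, indifferent between the two videos

clicks = {"plain": 1.0, "suggestive": 1.0}           # seed counts for the recommender
ordinary_views = {"plain": 0.0, "suggestive": 0.0}   # views by indifferent viewers only

for _ in range(100_000):
    total = clicks["plain"] + clicks["suggestive"]
    # The "recommender": each video is shown in proportion to its past clicks.
    shown = {vid: clicks[vid] / total for vid in clicks}
    for vid in clicks:
        ordinary = shown[vid] * (1 - creep_share) * ordinary_click_rate
        creepy = shown[vid] * creep_share if vid == "suggestive" else 0.0
        ordinary_views[vid] += ordinary
        clicks[vid] += ordinary + creepy             # feedback into the recommender

print(ordinary_views)
```

Even though the indifferent majority has no preference at all, it ends up watching the suggestive video more, simply because the minority’s clicks tilt what the recommender shows everyone. The creeps’ behaviour drives the ordinary viewers’ behaviour, exactly the effect Pargin describes.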
Pargin makes the wider point that this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.
I want to make a wider point still. We can understand, roughly, how this particular mode of media comes to produce some kinds of content and not others. That does not mean that without this particular mode of media, you get “normal, natural” kinds of content. You just get different incentives on producers, and consequently different content.
It’s not just media, either. Different structures of organisation and information flow produce different incentives for participants, and consequently different behaviour. Financing a business by selling equity into a highly liquid public market produces certain specific behaviours in management. Running a sport where teams can prevent players moving between them produces certain behaviours in the players. Organisations may be designed to incentivise certain desired behaviours, but many others will arise spontaneously because the system as a whole unexpectedly rewards them.
This is what Moldbug means when he says “The ideology of the oligarchy is an epiphenomenon of its organic structure.” We do not have woke ideology because a deep centuries-long woke conspiracy has taken over. We do not have it because someone sat down and worked out that a particular structural relationship between civil service, universities, and television would tend to promote ideological shifts of particular kinds. We have it because a structural relationship was created between civil service, universities, and newspapers and it turns out that that structural relationship just happens to result in this kind of insanity. You can trace through all the details — the career path of academics, the social environment of civil servants. You can spot historical parallels — this bit Chris Arnade found on pre-revolutionary French intellectuals. Moldbug attributes this epiphenomenon primarily to the separation of power from responsibility. I’m sure he’s right, but it’s a bit like Jason Pargin saying “yes, the internet really is that horny”. The particular ways in which irresponsibility or horniness express themselves in systems are still somewhat unexpected.
Early 20th century politics was organised around printing presses. To be a party, you needed printing equipment. Today’s establishment is the group of people who got control of television. There’s no other worthwhile definition.
Whatever the ‘mainstream’ is, it’ll never again have a monopoly on the ability to raise large amounts of money quickly, reach millions of people, or coordinate logistics on the ground. The money, experience and machinery of the political mainstream matter a lot less now.
Half my timeline is now trying to fight to keep that true. I think they’re going to lose.
Mass participation in the internet was reached a little before 2000.
With that, anyone could communicate with anyone else, or with any group, easily and free of charge.
That did not mean that anyone could whip up ordinary people with ordinary interests into political hysteria like Black Lives Matter or QAnon. Ordinary people with ordinary interests would not pay attention to that stuff.
Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.
The Trump presidency was a glorious carnival, but a carnival is all that it was. When the Saturnalia ends the slaves go back to work. I said when he was elected that it was a setback for neoreaction, and it probably was.
I got a lot wrong though. I did not expect the anti-Trump hysteria to endure. Facebook-era politics was too new, then, for me to have understood how it works.
The Facebook era of politics ends today. As with the Trump presidency, I will miss the fun and excitement. I miss eating a packet of biscuits a day too. But man was not meant to eat that much sugar, and democracy was not meant to exist with uncontrolled access to mass media. From the invention of journalism until the twenty-first century, ability to reach the public with your propaganda was power, and power had its say on who could do it. A decade of unconstrained mass media gave us Trump and Brexit and the Gilets Jaunes1, and it also gave us Open Borders, Trans Rights, Russiagate2, BLM, PornHub, and QAnon. It was destroying our society, and it was going to be stopped sooner or later.
We only really had one thing to say to the normies – that democracy was an illusion, and they were not in charge. I don’t think we need Twitter to tell them that any more. The events of the last week have exposed the relationship between government and media much more obviously than weird technical blog posts.
I spent the night bitching about the hypocrisy and dishonesty of the censors. I suppose I had to get it out of my system.
The pogrom will go a bit wider at first, but in the end I don’t think it will do more than roll back to 2005 or so. I do not expect to be censored, because I do not speak to voters. It was the frictionlessness of the Facebook news feed that pulled normies into these games — if you have to go out of your way to find me, then I am doing the regime no harm, and I expect to be ignored, at least if I get through the next few months.
This, of course, is also the system in China. And I admire the Chinese system. When I tried to imagine neoreactionary victory, I struggled a bit with how a monarchical regime could exist in a world of uncensored internet. I don’t have to worry now.
Some practical resilience steps are sensible. Back up everything. Try not to depend on the Silicon Valley giants (GMail is nice, but you’re not the customer, you’re the product). It’s possible that something like RSS could make a comeback if it’s awkward enough to use that the normies aren’t included, but don’t chase after the holy grail of a censorship-resistant mass media because that’s a coup-complete problem. Keep your head down, keep the channels open. I had this blog working as a Tor hidden service once; I’ll revisit that, but I don’t expect to need it.
Back in 2014 I wrote a short piece on the somewhat forgotten fact that when sexual liberation was being pushed in a big way in the 60s and 70s, sex with children was part of the movement, and was supported by mainstream liberal voices — the National Council for Civil Liberties, and so forth.
The question for historians to ask about the 1970s is not, “how could respectable people have supported paedophilia back then?”, rather, it is “how did they not succeed?” My original answer was that as the rebels became the establishment, they were forced to take some small measure of responsibility for keeping society together, and withdrew from a few of their most dangerous demands. That’s no more than a hypothesis really, since I have no particular evidence for it. The truth could possibly be even more interesting.
The question has recently come up again, with this NY Times article, tweeted by Sam Bowman, who thinks, “It’s really fucked up how mainstream paedophilia was during the 1960s and 1970s”
PARIS — The French writer Gabriel Matzneff never hid the fact that he engaged in sex with girls and boys in their early teens or even younger. He wrote countless books detailing his insatiable pursuits and appeared on television boasting about them. “Under 16 Years Old,” was the title of an early book that left no ambiguity.
Still, he never spent a day in jail for his actions or suffered any repercussion. Instead, he won acclaim again and again. Much of France’s literary and journalism elite celebrated him and his work for decades. Now 83, Mr. Matzneff was awarded a major literary prize in 2013 and, just two months ago, one of France’s most prestigious publishing houses published his latest work.
As I said in 2014, the question is not how the cultural revolutionaries who overthrew much of what society had previously thought right or moral could possibly have supported this, it’s how they failed, when they succeeded in so much else. Not only did they fail, but paedophilia inspires a level of opposition and revulsion today that to me always feels a little bit deranged. I’m perfectly happy to say that it’s harmful to young people to have sexual relations with adults and should be illegal. I’m also OK with saying that at least sex with younger children — say 13-year-olds and younger — is not just harmful but perverse (though I’m not clear why that counts for anything in 2020). But I struggle with the aura of evil — and that’s most often the word that’s used — when pretty much nothing else you can think of is today considered evil.
That attitude clearly wasn’t around in the 70s. I think it really dates from the late 80s onwards.
In discussion, though, I came up with a much more boring answer. I think the explanation is that a series of very heavily reported child murders created a strong association in the popular consciousness between paedophiles and murderers, and that’s what caused attitudes to harden so dramatically.
This theory is disproved if there was repeated heavy coverage of child sex murders before the 1970s. The biggest story, in the UK, is the Moors Murders, for which Ian Brady and Myra Hindley were arrested in 1965. If that was the beginning, and I vaguely remember it being a repeating theme through the 70s and 80s, it works as an explanation. (It doesn’t matter if there actually were murders before Brady, only if they got the same kind of media treatment).
It can also be looked at internationally. The USA seems to have followed a similar pattern, of it being naughty stuff done by wild rock stars in the 60s and early 70s, and being the definition of evil from the 90s on. I don’t know the specific cases, but they have the “missing children on milk cartons” thing going, at least from the 80s.
Maybe France hasn’t had that kind of crime, or not the same kind of media treatment, and that explains the softer attitude there.
It also gives clues to the future. Over the years I’ve often seen suggestions that “they” are going to be making paedophilia mainstream next, and I’ve tended to pooh-pooh them on the grounds that “they tried that before and failed”. But if there aren’t murdered kids in the papers, maybe they have a chance. In the UK, the last big media circus was Soham, almost 20 years ago now. Maddie McCann, who disappeared in 2007, is probably still higher in the public consciousness, because nobody knows what happened to her. A few more years might be enough.
I don’t understand Syria, and I’m not going to, and I’m OK with that. Trump’s pullout may be bad for America for all I know.
The concrete harmful impact of Russia having a lot of influence in Syria (as it did in the 1980s) isn’t spelled out, instead we just get innuendo.
I tweeted that Kurds will always be allies in destabilising, and always be enemies of peace, because of their situation as a stateless cross-border group. That’s simplistic, but if it’s not true, someone needs to explain why not. Peace in any of the countries in which they have large populations has to include either (a) they give up their claim to statehood, or (b) they achieve their own state, and I have never heard anyone suggest that (b) is a realistic possibility. There is a chance in any one country that you could get an autonomy-based settlement short of statehood which is beneficial for them, but while the other countries in which they have large populations are unstable, that can’t be a peaceful settlement, because they will still be fighting in the others. As I tweeted, none of this is their fault — it seems they were completely screwed in the 20th Century, but this is the position today.
If there’s any coherent view coming from the US establishment, it’s anti-Iran. They may have a good reason for that, but I don’t know what it is. The reason probably has a lot to do with either Israel or Saudi or both, but I don’t expect to ever find an answer I can be sure is true.
Syria has been a bloodbath since the beginning of the Arab Spring attempt to depose Assad. Anyone suddenly upset about the humanitarian impact this week can be dismissed out of hand.
“Kurds were our allies”. How is that, exactly? I asked on twitter, sarcastically, for links to the announcements of, and debates on, this policy. It was made ad hoc by the military and civil service. The president never talked to the electorate about it. Quite possibly the president (Obama) never even knew about it. Which is perfectly OK. But there is sleight of hand here. The line we are getting is: “We allied with the Kurds and relied on them, now we need to stand up for them”. The two “we”s there refer to two different groups. The opaque Washington foreign-policy establishment allied with the Kurds, without input from or notification of the general public. Now the voters are being asked by the media to stand by some implied commitment they played no part in making.
1) So much context has been lost and recent history revised in the coverage of this growing crisis between Turkey and Syria. US always assured Ankara that their support for the YPG was ‘temporary, tactical and transactional’ – a US diplomat quoted here in my new book on Erdogan
From around 14th October, the Kurds have made some kind of arrangement with the Syrian Government, and the narrative has switched from “it’s terrible to abandon the Kurds” to “Now the Russians are winning”. This is utterly disgraceful. It entirely proves that the complaints of the previous days about the fate of the Kurds were insincere. Had the concern really been for the Kurds, then Monday would have been a day of rejoicing at their safety. Instead, the opposition to the withdrawal policy stays the same but the reasons change.
It is because of this sort of thing that I automatically disregard all foreign policy arguments that are made on humanitarian grounds. I don’t even consider the possibility that they might be well-founded. The concept of intervening internationally to protect civilians is 100% discredited in my eyes.
Around 500,000 human beings were killed in Syria while Barack Obama was president and leading for a “political settlement” to that civil war. Media has been more outraged in the last 72 hours over our Syria policy than they were at any point during 7 years of slaughter. Ask why
Kinda telling that the intensity of Online Outrage expressed by Smart People today over the Kingsman-meme isn’t any perceptibly different than the Online Outrage they were emoting yesterday or the day before over, like, The Kurds being slaughtered it’s all a video game
Though I find Trump amusing, I consider him no more trustworthy than the rest of them. I am not able to judge whether his policies are good or bad, but he is the only person who makes arguments for his Syria policy which make sense. The arguments against are always obviously dishonest (like the ABC gun show footage), insincere, or resting on vague unstated assumptions (such as that nothing Russia wants can be allowed).
The FSA leader who John McCain took a picture with is now part of the invasion of Northern Syria, which the hawks are insisting we must oppose.
There’s another related point, more subtle but much more general. Modern thought does not admit of a distinction between crimes of commission and crimes of omission. To a naive rationalist, causing harm and allowing harm to happen are equivalent. But like so many arguments you hear today, the equivalence rests on an entirely unrealistic level of certainty towards the assumptions that are being made about the results of action or inaction. The potential for very large unexpected harmful effects is very much greater in military action than it is in inaction, and the expected benefits of action have to be large enough to outweigh that category of risk. That is equally true whether the harms and benefits in question are political, financial or humanitarian.
I happen to have picked @LionBlogosphere — I follow a whole lot of weirdos on twitter, but he is much closer to a mainstream American Conservative.
Anyway, not that it’s really any of my business, I agree with him entirely; I think that’s a totally reasonable position for him to take. And, while the whole issue has dragged on without being 100% resolved, his side has at any rate not definitively lost. I think it has come out slightly ahead, and the leagues and teams have mostly taken the view that their players should not insult their spectators.
But now, of course, we have the China thing. NBA basketball is huge in China, and there’s some kind of protest movement in Hong Kong that I don’t know much about, and some basketball people made sympathetic noises about the HK protesters, and the Chinese government was very upset.
I’m trying hard to remain ignorant of the Hong Kong thing. A new extradition law was brought in, or something, and that’s maybe against what the Chinese government had previously promised in terms of HK’s autonomy, and there have been protests going on for a month or three, which have been getting the whole unquestioning popular support in the West that I hate so much. So my reflex view is on the side of the CPC, but of course they may really be doing something bad for all I know.
Either way, right or wrong, the Chinese government are not going to be friendly to foreigners who take sides against them. They are not going to allow them a public platform in their country, any more than they allow their enemies within the country a public platform. And if you are a basketball team that’s going to do tours and broadcast games in China, the Chinese government is your customer. And, as @LionBlogosphere said, “If I inflicted my political opinions on my employer’s customers I’d be fired”.
So I think we’re all agreed.
Except, of course, that @LionBlogosphere today retweeted Ted Cruz saying,
It is outrageous that the Chinese Communist Party is using its economic power to suppress the speech of Americans inside the United States:
Again, I’m not especially objecting to Lion, I just follow him and not, say, Ted Cruz, who in September 2016 said
Here’s a peaceful protest: never buy another shoe, shirt, or jersey of rich spoiled athletes who dishonor our flag. https://t.co/GrGPYX8HCh
If you are putting economic pressure on sports teams, and you expect them to respond, well, so can their other customers, and maybe it’s reasonable for the teams and leagues to respond to that too.
Surely, you can draw a distinction between #BLM and “Free Hong Kong”. The teams in question are all American, and you can demand of them a loyalty to America while they have no equivalent duty of loyalty to China. But they would no doubt claim they were being loyal to America by seeking to change it in the way they sought — the real objection is they were offending their customers. Another distinction is that it wasn’t the US government putting pressure on the teams to censor themselves, but the Chinese government is doing so. Well, the relationship between citizens and government is different in China than it is in the US. International sport has long depended on not bringing one country’s politics into another. Most Americans probably think that China should be a democracy with free speech. But it isn’t. If Ted Cruz thinks that means the NBA shouldn’t do business with China, that’s a coherent position. But if he doesn’t think that, then obviously the NBA will take steps to make their product marketable there, if there is commercial reason to. And if he thinks that this is a matter of the principle of free speech, which should outweigh that commercial reason — then why did he think the opposite in 2016?
Now, you can point to a contradiction between the relative willingness of the entertainment industry to allow opposition to the US government, and their very rapid arse-covering with respect to the Chinese government. That reflects both their own political biases, and the relative power of the US and Chinese governments over their ability to do business. But if the other side is contradicting itself, it doesn’t help to contradict yourself, even worse, in the opposite direction. Ted Cruz was right in 2016, he (and @LionBlogosphere) could very justifiably spend these weeks banging on about the inconsistency in sportsmen demanding the right to insult symbols of the American nation while being careful to avoid insulting the Chinese nation. Instead they just destroy their own previous arguments.
Finally, the 2016 argument was the more important of the two. By flipping now, they are putting themselves in the wrong for next time. You said that political protests at sporting events were a matter of free speech.
[context: I never actually made a decision to step away from blogging and twitter, I just had more interesting things to do]
Ten to twenty years ago, one of the big buzzwords was decentralisation. New communication technology means that things that used to have to be organised by a central body can now be done spontaneously between users.
By decentralising an activity, you remove the bottleneck of the central coordinator. You reduce the status war of having a “leader” (who leads Extinction Rebellion?) You can evade countermeasures.
Also, it was generally assumed, you create choice. If you can have one decentralised network, you can have two, or ten, or a million.
Some of that was hype, some of it is true. But the issue of choice has turned out to be the most interesting. Replacing a hierarchical organisation (of authority, or of communication framework) with a decentralised network does do away with the nominal leader or controller. But it doesn’t do away with network effects. Indeed, by removing some of the barriers to scale it can greatly increase them.
The result of decentralisation plus network effects is the decentralised monopoly.
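The snowball dynamic behind a decentralised monopoly can be sketched in a toy model (my illustration, not anything from the post): if the value of joining a network grows faster than linearly with its size, newcomers pile onto whichever network is already ahead, and one network ends up with essentially all the users even though nobody is in charge of it.

```python
import random

# Toy model (illustrative assumption, not a claim about any real platform):
# two networks compete for users. Each newcomer joins network i with
# probability proportional to size_i ** 2 -- a superlinear network effect,
# since the value of joining grows with the people already there.

def compete(steps=10_000, seed=0):
    rng = random.Random(seed)
    sizes = [1, 1]  # each network starts with one seed user
    for _ in range(steps):
        w0, w1 = sizes[0] ** 2, sizes[1] ** 2
        joins = 0 if rng.random() < w0 / (w0 + w1) else 1
        sizes[joins] += 1
    return sizes

sizes = compete()
share = max(sizes) / sum(sizes)
print(f"largest network's share of users: {share:.3f}")
```

With the superlinear effect the outcome is winner-take-all regardless of which network wins the early coin flips; make the weights merely linear and the two networks can coexist at an arbitrary split indefinitely.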
Outside of one special case I’ll come back to, I think the best examples of decentralised monopolies are open-source software projects. The essence of these projects is that there is no control; anyone can take the source code, change it, build it, and distribute it. That goes even for large widely-used things like the Linux kernel or the Apache webserver. But, most of the time, there’s no good reason to do so. The result is a voluntary centralisation.
The recent trend has been to recentralisation: the old centralised television and newspapers give way to decentralised blogs and podcasts, which in turn give way to recentralised Facebook and YouTube. But that is still voluntary. The newspapers and television stations had control because they owned the actual infrastructure. Facebook and Google own some infrastructure, but in comparison to their actual business it is negligible: Alphabet’s market cap is apparently USD 825bn, while its balance sheet lists plant and equipment at USD 60bn.
That’s not really counterintuitive. It’s just network effects, and/or Schelling points. I’d love to see that recentralisation reversed, but I don’t think it’s possible — if there’s going to be a monopoly because of network effects, then a business that can pay to market its network is bound to outcompete a network that doesn’t have a central owner.
Where you have something that is made of decentralised contributions, the network effects are much stronger than they are for pure consumers. You have to put real work into contributing, and the return on doing that work depends on the contributions of others.
Wikipedia is a prime example of this. Now it isn’t 100% decentralised, as somebody owns the servers and the domain name, so while anyone can contribute, there is an actual hierarchy with a root of sorts. But if you could design out that root authority, I don’t think it would change much. There is still just one Wikipedia, and all the work that is contributed to it can’t be contributed anywhere else instead without severely reducing its value. The decentralised selection of content (selection is of course the main work of an encyclopedia) still has to be done by those people who show up to do it, coordinated in some way that enough of them can put up with. If you fork it to produce some rival, as has been done a few times, your rival has none of the value.
That’s not to deny Wikipedia’s many flaws. There are many areas where it is systematically bad. But I think Wikipedia is what it necessarily must be. That’s the real point of decentralised monopoly — if it is not under some central control, then there is nobody who can make it other than what it is. This echoes, somewhat, the repost about political parties: how can one political party, open to anyone and run by its members, be different from any other political party open to anyone and run by its members? In a sense, complete openness is the most unyielding authority of all.
The one special case of decentralised monopoly is, of course, the bitcoin blockchain. It’s special because being decentralised and a monopoly are not incidental attributes, but the central aims of its design. As such, it bears the same relationship to a study of decentralised monopoly as dog breeding does to natural selection.
I explained the essence of what makes bitcoin before: it’s a voting system where you are fined for voting on the losing side. That rule is guaranteed to produce a consensus, and the consensus is likely to be “correct” from the point of view of the contributors. That’s the design aim, but see how similar it is to making a contribution to any other open collaborative project. Work that is put into maintaining a Myspace page, or keeping a presence on Gab, is nearly as wasted as the work spent mining an orphan bitcoin block. Either I’m contributing to the project that everyone else is using, or I’m shouting in the wilderness.
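That incentive structure can be illustrated with a toy simulation (my sketch, a drastic simplification and nothing like the real Bitcoin protocol): agents repeatedly vote for one of two competing chains, and anyone who votes with the minority in a round has wasted their work, so the rational move is to vote with the expected majority, which drives the system to consensus.

```python
import random

# Toy sketch of "fined for voting on the losing side" (my simplification,
# not actual Bitcoin mining): each round, every agent votes for chain "A"
# or chain "B". A minority vote is wasted work (the fine), so almost all
# agents switch to the current majority each round.

def vote_until_consensus(n_agents=100, rounds=30, seed=1):
    rng = random.Random(seed)
    votes = [rng.choice("AB") for _ in range(n_agents)]  # near-even start
    for _ in range(rounds):
        majority = "A" if votes.count("A") >= votes.count("B") else "B"
        # 95% of agents avoid the fine by following the majority;
        # a little noise stands in for lag and disagreement.
        votes = [majority if rng.random() < 0.95 else rng.choice("AB")
                 for _ in range(n_agents)]
    return votes

final = vote_until_consensus()
share = max(final.count("A"), final.count("B")) / len(final)
print(f"consensus share after voting: {share:.2f}")
```

The near-even starting split collapses to near-unanimity within a round or two, because the fine makes agreeing with everyone else individually profitable, not just socially convenient.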
The most important aspect that drives this authority is probably not decentralisation as such, or even openness to contributions, but being public. Ed West tweeted yesterday that he wished he could maintain different “flavours” of his twitter stream — a toned-down one for the normies, and a more hard-hitting one for the fans. I’ve tried to do that sort of thing numerous times, but it always failed on the same point: if I wasn’t blogging or tweeting as AnomalyUK, I was losing most of my audience. Moldbug outed himself because he couldn’t resist discussing his technical work on UR. He didn’t have to do that — it is possible to be two people online, but it’s not possible to be one person with two public faces. They automatically become one under the pressure of being public.
That basically is the same point as I made about the decline of conspiracy: the political mode we are now in is the one that you get when nobody conspires to prevent it. As such, you would expect it to be the historical norm, unless it destroys itself. And it isn’t the historical norm. (This is just a restatement of Jim on left singularities).