Assets, Parasites and Pets

In my last post, I wrote:
An inhabitant of a polity is either an asset, or a parasite, or a pet.
The argument I was making was this: if a sovereign has a long-term final goal, then his short-term instrumental goals will be to increase capabilities and acquire resources. If he owns a subject who has a long-term final goal, that subject’s short-term instrumental goals will likewise be to increase his own capabilities and acquire resources for himself; and if that subject is an asset to the sovereign, then those goals are fundamentally compatible. They’re not identical — the distribution of resources among subjects will have some optimum for the sovereign’s purpose which differs from that of any individual subject, but valuable subjects in general will have their goals met about as well by an efficient sovereign as by any other governance mechanism which could exist.
But what of subjects who are not assets? The sovereign has no interest in increasing the capabilities or resources of subjects who produce nothing of value.
The first thing to do when considering this is to be realistic: any system of government depends on the able, and has little incentive to cater to the unable. It doesn’t make sense to go into this question expecting too much. That’s a point I’ve made before: “Ultimately, no blueprint can protect the native population if it truly doesn’t have any value to contribute”
Nonetheless, many actually existing human societies do care for the unproductive, with varying degrees of effort and effectiveness. They do this because humans do not have purely long-term goals: caring for the unproductive is itself something they want.
When thinking about the welfare of the unproductive, it makes more sense to see this as a bonus to the productive, rather than as a matter of rights of the unproductive. I am not looking at the question from a moral standpoint, remember — this is all based on the concept of a sovereign with his own long-term goals. Since his interests include increasing the capabilities of his able subjects, and their interests include (to some variable degree) caring for the unproductive around them, the optimal policy is going to include some level of such care. Care for the unable is always going to depend on some able people wanting it. If nobody has any reason to keep you around, they won’t.
 

Goal-Content Integrity

I wrote a couple of weeks ago about Instrumental Convergence.
 
The thing that immediately struck me when I read The Superintelligent Will was that the very concept of Instrumental Convergence was exactly the neoreactionary argument for sovereignty.
If you have any long-term goals, the best way to achieve them in the short term is to accumulate knowledge and resources that can later be employed in the desired direction, provided that you can achieve what Bostrom calls Goal-content integrity.
Goal-content integrity means being able to hold to the same final goal over time. If you do not have confidence that your final goals in the future will be the same as they are now, then resources and capabilities that you acquire could be used by you in the future for goals other than those you currently intend.
 
If we model a polity as an intelligence with some long-term final goals (and I will address the problems with doing this later), then the logical instrumental goals of that polity are: self-preservation, goal-content integrity, increased capability, resource acquisition, just as Bostrom deduces. (I am rolling his goals of cognitive enhancement and technological perfection into a simpler “increased capability” — those goals are more important to his overall argument than they are to mine).
 
The difference between a reactionary polity and a liberal polity is that the liberal polity disclaims goal-content integrity. It does not have a long-term final goal, because it assumes that elements within it have different final goals, and they will continue to compete and compromise over those goals forever. Because it does not have long-term final goals, it has no steady interest in increased capability and resource acquisition. Conversely, a reactionary polity with a defined long-term goal, such as increasing the glory of the Royal Family, or of God, or both, will seek increased capability and resource acquisition.
 
The obvious problem with modelling the polity as an intelligence is that what that “intelligence” seeks is not necessarily good for anyone in it. However, this is where Instrumental Convergence becomes important. A polity that is seeking increased capability and resource acquisition is highly likely to benefit the immediate instrumental goals of its population. An inhabitant of a polity is either an asset, or a parasite, or a pet. An able human is still capable of being an asset, and as an asset is likely to gain from the resources and capabilities of the polity. Being a parasite to any polity of any kind is likely to cause you problems, so don’t do that. The role of humans as pets becomes interesting in the case of superintelligences (which I am not really discussing here, despite the starting point), but less so for human societies.
 
This is why it is better to be subject to a sovereign than to have a share in power: as a subject of a sovereign you are part of a polity with goal-content integrity, which, whatever its final goals, will pursue instrumental goals that will enable you to benefit. As a citizen of a democracy you are part of a polity without goal-content integrity, where the zero-sum struggle over the direction of the polity dominates any instrumental goals of increased capability or resource acquisition that you would be able to benefit from.

Constitutions and Law-Enforcement

This is ideas-in-flow: any conclusions are soft, this needs more work.

@wrmead tweeted that Caesar crossed the Rubicon with his army rather than face politically motivated prosecution without it.

Basically, the threat of being punished by political enemies made Caesar an outlaw: he had nothing to gain by following the rules any more.

I thought that a reasonable point (and RTd it), but at the same time, there are rules, and how can they work if they’re not enforced? The fact that Caesar was in danger from his political enemies does not mean that their allegations were unfounded.

@Alrenous made that point explicitly:
Caesar was a criminal
He crossed the Rubicon with an army rather than face politically motivated just punishment for breaking the law without it.

Update 30 Jan 2018: relevant tweet — apparently Caesar himself agrees
https://twitter.com/spectatorindex/status/958376761540202497

A legal system works because it is above the disputing parties. Two parties come before it, it awards a victory to one or the other, but that is a limited victory; the victorious party remains below the legal system. That isn’t the case when the winner of a legal dispute gets control over the administration of the law itself. One victory becomes total victory.

It is easy to imagine that politics would inevitably decay into legal battles. There is a wide gap between things which are definitely allowed under the rules and things which are definitely not. Once someone strays into that blurred boundary area, you would expect that they would be challenged, and the conflict would move from the political to the legal sphere.

However, in established long-standing democracies, that very rarely happens.
The aversion seems to be strongest at the point of making a legal challenge. Questionable political conduct, in the UK and USA, is commonplace, as are accusations of illegal activity. But ultimately it seems nearly always to be tolerated.

This unexpected observation, that there are extremely strong norms against turning competition for power via elections into legal battles, needs to be explained.

Outside of the developed West, this is quite a common occurrence. The last few years have seen disputes over whether candidates acted lawfully in Ukraine, Venezuela, Honduras, just off the top of my head.

As a general approach, those norms can’t be a solution to the problem: if there is a strong norm against prosecuting opponents, that would surely tempt politicians further into legally questionable territory in order to take advantage of it, approaching the point where there is a significant danger of prosecution.

One solution that would work is for the grey area to be shrunk down: if the real rules (which might not be the same as the formal rules) are very clear and very easily interpreted, then nobody will make a fatal mistake, either of stepping over the line so that his opponents have to take legal action against him, or of taking a situation to the legal arena which the other party has reasonably assumed to be safe.

That could be the case, but really doesn’t appear to be.

Another solution would be if politicians feared the punishments for malpractice much more than they wanted to win, so that they would never take even small risks of getting caught. Again, that does not appear to be the case.

Another solution in a democracy would be if the electorate looked on any malpractice so severely that it would be counterproductive. That surely is not the case now, though it might have been until recently. There is a whole, quite logical, narrative that the populations of the Western democracies used to be so attached to democratic values that any breach of those principles would outrage them to the point of unelectability, and that a recent increase in partisanship has fatally damaged that equilibrium.

There are two problems I can see with that narrative: first, there is no history of notably clean politics in the democracies; lies, bribes, and gerrymandering have been commonplace throughout history. Second, it doesn’t make sense for voters to be so moralistic about their own side cheating. The current situation, where supporters of a candidate see accusations of cheating either as signs of the viciousness of the enemy propaganda, or as indications of his own heroic strength, or both, seems far more natural than such high-minded fairness.

My own view is that the thing that has made democracy work, in those rare cases where it has worked, is that the apparently opposing parties are really part of the same ruling class. The issues that stand between the parties are low-stakes issues, which are resolved by the parties staying within the rules. The reason they stay within the rules is because they are united on the high-stakes issue, of the existing ruling class holding on to its position, and aren’t prepared to jeopardise that by fighting no-holds-barred over side questions.

For instance, take the quote from Jeremy Paxman’s book The Political Animal that I picked up in 2011: “In April 1925, for example, the then Chancellor of the Exchequer, Winston Churchill, announced that Britain was to return to the Gold Standard, whereby the value of sterling was guaranteed by allowing pounds to be exchanged for gold. This momentous (if ultimately unsuccessful) decision had been two months in preparation, involving heartfelt arguments on both sides of the debate. Yet not a word of it appeared in the newspapers. Indeed, it was hardly heard outside the confines of the Treasury.”

Or, as I put it in 2008: “The only situation in which a government can genuinely act in the interest of a class wider than just politicians is when there is a larger class of relatively powerless people – slaves or peasants – who would be a threat to a divided ruling class. That is the characteristic of democracies before the twentieth century.”

If both sides politically are actually united on maintaining the system that favours them, that doesn’t mean that their disagreements are fake. It just means that they aren’t important enough to risk the system over. However, from the point of view of an outsider to whom the disagreement is most important, that is almost the same thing.

This analysis raises a lot of questions:

  • Is it really true that the stable Western democracies have not had sufficiently serious political disputes for this gentlemanly state of affairs to break down? In US history, obviously the Civil War is a case where it did. But what about the New Deal? Is that another exception? Or did the GOP decide to fold rather than take the risk? For that matter, what about earlier cases (Andrew Jackson?)
  • Is there a mechanism that centrists use to prevent extremists who rank disputed political issues above unity from gaining power? Would candidates like Michael Foot in the UK have threatened the system?
  • If there are such mechanisms, are they normal political mechanisms, or are there deep state resources that are employed against them? There have long been rumours of plots against Harold Wilson, and in the last few days there are strikingly similar stories being told in the US.
  • Is it breaking down now? My suspicion is that real democratic control is increasing, and that is producing things like Trump and Brexit, which is endangering the gentlemen’s agreement by breaking down the barriers which protect the ruling elite from outsiders.

Update 27 Jan 2018
In the context of the above, the recklessness of the “Go for the Throat” strategy that I wrote about last year is even more striking. If the stability of the system actually depends on keeping out people who aren’t well-integrated into it culturally, then one party deliberately goading the other into being taken over by radical outsiders is suicidal.

Whether he succeeds in passing legislation or not, given his ambitions, [Obama’s] goal should be to delegitimize his opponents. Through a series of clarifying fights over controversial issues, he can force Republicans to either side with their coalition’s most extreme elements or cause a rift in the party that will leave it, at least temporarily, in disarray. 

Also relevant: “They Always Wanted Trump”

Instrumental Convergence

From The Superintelligent Will [pdf] by Nick Bostrom

The Instrumental Convergence Thesis

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realised for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by many intelligent agents

He then lists five examples: Self-preservation, Goal-content integrity, Cognitive Enhancement, Technological Perfection, and Resource Acquisition.

It’s a pretty short paper, worth reading.

Basically, if you have any long-term goal, your intermediate goals are likely to include surviving, retaining your goals, getting better at stuff, and acquiring resources.

Even if your goals are bizarre — the proverbial paperclip maximizer — if they are long-term, then your short-term goals are going to be these ones.

It’s worth thinking about the paperclip maximizer. As soon as you do, you realise how underspecified the concept is. There are obvious missing criteria which can be filled in: what counts as a paperclip, do they all count equally, or does size matter, do they need to be just made, or made and kept?

Time is a difficult question. Let’s try to maximize the maximum number of simultaneously existing paperclips in the future of the universe, handwaving the relativity of simultaneity somehow.

The crucial insight is that making even one paperclip is quite contrary to that — or any similar — goal. If you accumulate resources and capabilities, grow them over years or millennia, you will be able to make trillions of paperclips in the future. Just one spacefaring robot that lands on iron-rich asteroids and starts manufacturing could presumably make 10^19 paperclips out of each asteroid.
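As a sanity check on that figure, here is a back-of-envelope calculation. Both masses are my own illustrative assumptions, not from the original claim: a steel paperclip of roughly a gram, and a large iron-rich asteroid of roughly 10^16 kg (on the order of a 10 km body).

```python
# Rough sanity check of the 10^19 figure. Both masses below are
# assumptions chosen for illustration.
ASTEROID_MASS_KG = 1e16    # assumed: a large iron-rich asteroid (~10 km)
PAPERCLIP_MASS_KG = 1e-3   # assumed: one steel paperclip (~1 gram)

paperclips = ASTEROID_MASS_KG / PAPERCLIP_MASS_KG
print(f"{paperclips:.0e} paperclips per asteroid")  # 1e+19 paperclips per asteroid
```

Under those assumptions the arithmetic comes out at exactly 10^19, so the order of magnitude in the text is plausible.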

When you look at Earth, you don’t see potential paperclip material, you see start-up capital for an astronomical-scale paperclip industry.

The biggest questions are about risk. Even the maximization criteria I suggested above are incomplete. You can’t know how many paperclips will exist in the future; even if superintelligent, there is too much that you don’t know and can’t predict. You don’t even have probabilities for most things. What is the probability that there is alien intelligence in the Milky Way? There’s no meaningful answer.

There’s another discussion (or perhaps it’s the same one put another way) about the fact that probabilities are not objective, but “subjectively objective”: maximizing a probability means maximizing the probability as some subjective entity perceives it, so your goals have to embody what sort of entity is doing the probability estimation, and how that entity survives and evolves. That’s a killer.

So you can’t maximize some probability-weighted value, that’s not a thing. If you’re aiming for any kind of “as sure as I can get”, then before you start making paperclips, your priority has to be to learn as much information as possible to be able to start creating that kind of certainty.

So, forget paperclips, get rich. In fact, forget getting rich, get knowledge about the universe. In fact, forget getting knowledge about the universe, get rich, so you can get knowledge about the universe, so you can be confident of getting really rich, so you can make paperclips.

Initially, what you want from Earth is basically wealth and knowledge. That’s what everyone else wants too. All the tactical questions are exactly the same as everyone else faces — invest in resources, cooperate with others or fight them, and so on.

Whatever your long-term goal is, if you have any long-term goal, your short term actions will look exactly like those of an ordinary sane selfish organism. The details of your goals are entirely irrelevant.

This is “Instrumental Convergence”, but the accounts I have seen, such as the Bostrom paper above, seem (perhaps unintentionally) to massively understate it. The ultimate goals of any intelligent entity that has any long-term goals at all would be totally irrelevant to their observed behaviour, which would be 100% dominated by survival, resource acquisition and information-gathering.

In praise of Matthew Prince

Cloudflare just kicked the far-right website Daily Stormer off its cloud proxy service.
What this looks like is just another step in Silicon Valley censoring right-wing views off the internet.


It’s a lot cleverer than that, though. Prince makes clear that he’s not happy with taking this step. He says he believes in Cloudflare being content-neutral, and that it is dangerous for companies like his to be making decisions about who may use their services.
You can read his statement as being, “let’s get together and work out who else to remove”.

“We need to have a discussion around this, with clear rules and clear frameworks. My whims and those of Jeff [Bezos] and Larry [Page] and Satya [Nadella] and Mark [Zuckerberg], that shouldn’t be what determines what should be online,” he said. “I think the people who run The Daily Stormer are abhorrent. But again I don’t think my political decisions should determine who should and shouldn’t be on the internet.”

I think he’s being a bit more subtle than that, though: this is the same trap that Moldbug pulled at Strange Loop: I can’t find the actual pages, but it amounted to: “Tell me what I’m supposed to pretend to believe about race, and I’ll pretend to believe it.” Prince is saying, “Come up with clear and consistent rules about what is allowed on the internet, and I’ll follow them”. In both cases, the challenge is impossible: the censors’ position is a collection of emotional responses, not a set of concrete propositions.
In the meantime, Prince said that content neutrality is still Cloudflare’s policy. He implies that he’s willing to defend and fight for it, but because it is a real fight, and because he is not a moron, he is not going to go into battle alongside a bunch of Nazi flags. He made an explicitly unprincipled exception for the Daily Stormer. I’m not kidding: he said “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet.”
That’s giving in to power, but it’s not giving up any principle. What the leftists want is to have it agreed that the internet and speech generally are open for everyone, with an exception for “hate”, and that that’s a sensible set of rules that everyone can follow. They want Cloudflare to say that DS broke some rule, and thus cannot be allowed. Prince is refusing to tell that lie. He is refusing to pretend that the leftists can get their way and that the free internet is still in place.
If the bad guys want him to actually give up his principle, rather than make an unprincipled exception, they will have to pick a battle that suits their purposes less well than the Daily Stormer does.
 

"End-to-end encryption"

The question of regulating encrypted communication has come up again. I was going to write again about how the politicians don’t understand the technologies, and they probably don’t, but if they did, what would they do about it?  The details are too complex to debate on TV news. What percentage of the viewing public even knows what public-key encryption is?

Politicians often talk as if “end-to-end encryption” is a technology, and one which is rare and might practically be banned. There are then huge arguments about whether such banning would be good or bad, which leave me somewhat bemused.

Of course, “end-to-end encryption” is no more a technology than “driving to a friend’s house” is a technology. Cars and roads and driving are technologies, driving to a friend’s house, or to a restaurant, or to work, are social or economic practices that make use of the technology.

Similarly, sending encrypted messages is a technology. Sending “end-to-end” encrypted messages is not a technology; it is just sending encrypted messages where the intended recipient is the endpoint. Whether a particular message is “end-to-end” encrypted depends on who the end is.

The soundbites talk about one kind of messaging: messages sent person-to-person from a sender to a recipient via a service provider like Whatsapp, Microsoft or Google.

In 2017, most data sent over the internet that is at all personal is encrypted. Huge efforts have been made over the last five or so years to get to this stage, yet the debates about encryption have not even touched on the fact. Data in motion seems to be invisible. The encryption used to send the messages is very strong; again, a few years ago, there were quite a few bugs in commonly used implementations, but efforts have been made to find and fix such bugs, and while there are likely to be some left, it is plausible that nearly all such encrypted messages are unbreakable even by the most powerful national security organisations.

However, the way most of these services work today is that the sender makes a connection to the service provider and authenticates himself with a password. The Service Provider also authenticates itself to the sender with a certificate, though that’s mostly invisible. The sender then sends their message encrypted to the Service Provider, which decrypts it and stores it. Later (or simultaneously) the recipient makes a connection to the Service Provider the same way, and the Service Provider encrypts the message and sends it to the recipient. This is fundamentally the same whether we are talking about messaging apps, chat, or email, and whether the devices used are computers, phones or tablets.

Anyway, call this Method 1. Service Provider Mediated

A few of these services now have an extra feature. The sender’s app first encrypts the message in a way that can only be decrypted by the recipient, then encrypts it again to send to the Service Provider. The Service Provider decrypts one level of encryption, but not the second. When the recipient connects, the Service Provider re-encrypts the already encrypted message and sends it to the recipient. The recipient decrypts the message twice, once to get what the Service Provider had stored, and then again to get what the sender originally wrote.

That is why the politicians are talking about Whatsapp, Telegram and so on.

This is Method 2. Service Provider Mediated, with provided end-to-end encryption
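The two layers of Method 2 can be sketched with a toy cipher. This is a minimal illustration of the layering only: the XOR “cipher” and the key names are stand-ins I have invented, not anything resembling a real messaging protocol.

```python
# Toy sketch of Method 2's two layers of encryption. XOR with a repeating
# key stands in for a real cipher (it is NOT secure); applying it twice
# with the same key round-trips the data.
def toy_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

recipient_key = b"shared-by-endpoints"   # known only to sender and recipient
transport_key = b"shared-with-provider"  # protects the hop to the provider

message = b"meet at noon"

# Sender: inner end-to-end layer first, then the transport layer on top.
wire = toy_cipher(toy_cipher(message, recipient_key), transport_key)

# Service Provider: removes the transport layer, but what it stores is
# still ciphertext it cannot read.
stored = toy_cipher(wire, transport_key)
assert stored != message

# Delivery: the provider re-encrypts for the hop to the recipient, who
# then decrypts twice: transport layer first, then the end-to-end layer.
delivered = toy_cipher(stored, transport_key)
plaintext = toy_cipher(toy_cipher(delivered, transport_key), recipient_key)
assert plaintext == message
```

The point the sketch makes concrete is that the provider does real decryption and re-encryption work on every message, yet never holds anything it can read.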

An important question here is who keeps track of the encryption keys. If the Service Provider has that responsibility, then it can support interception by giving the sender the wrong encryption key: one that it or the government can reverse. If the sender keeps the recipient’s encryption key, that is not possible; the Service Provider receives no messages that it is able to decrypt.
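The key-directory point can be illustrated in the same toy way. The substituted key and directory here are hypothetical, a sketch of the general attack rather than a claim about any particular provider.

```python
# Sketch of why it matters who runs the key directory. If the Service
# Provider hands the sender a key it controls, instead of the recipient's
# real key, the "end-to-end" layer is readable by the provider.
# Toy XOR cipher again -- not a real protocol.
def toy_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

recipient_real_key = b"bobs-real-key"
provider_mitm_key = b"provider-owned-key"

# A compromised directory answers a lookup for "bob" with the
# provider's own key.
directory = {"bob": provider_mitm_key}

inner = toy_cipher(b"meet at noon", directory["bob"])

# The provider can now strip the "end-to-end" layer itself:
assert toy_cipher(inner, provider_mitm_key) == b"meet at noon"
# With a sender-kept copy of bob's real key, it could not have:
assert toy_cipher(inner, recipient_real_key) != b"meet at noon"
```

This is why “who holds the directory” matters more than the cipher: the sender’s software did everything right, and the message is still readable by the middleman.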

Going back to method 1, if the Service Provider doesn’t provide the end-to-end encryption, it’s still possible to add it with special software for the sender and recipient. This is awkward for the users and has never caught on in a big way, but it’s the method that the authorities used to worry about, decades back.

Method 3. Service Provider Mediated with independent end-to-end encryption

There are plenty more. The sender connects to the Service Provider and indicates, via an encrypted message, what recipient they want to message. The Service Provider replies with an endpoint that the sender can connect to. The sender then directly connects to the recipient and transmits an encrypted message, which the recipient decrypts.

This peer-to-peer messaging isn’t fundamentally different in technology from the end-to-end encrypted scenario. In both cases the actual networking is “store-and-forward”: an intermediary receives data, stores it, and then transmits it to either another intermediary or the recipient. The only difference is how long the data is stored for; a typical router will store the data for only a fraction of a second before transmitting and deleting it, whereas a Service Provider’s application server will store it at least until the recipient connects to retrieve it, and quite likely will archive it permanently. (Note there are regulations in some jurisdictions that require Service Providers to archive it permanently, but that applies to their application servers and not to routers, which handle orders of magnitude more data, most of which is transient.)
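The store-and-forward pattern itself fits in a few lines. This is a minimal sketch of the idea, with invented names; as the paragraph above says, the only real difference between a router and a Service Provider is how long the stored messages are kept.

```python
# Minimal sketch of store-and-forward: a relay holds each message until
# the recipient asks for it. Whether this models a router or a Service
# Provider is only a question of how long `inbox` retains data.
from collections import defaultdict

class Relay:
    def __init__(self):
        self.inbox = defaultdict(list)  # recipient -> stored messages

    def accept(self, recipient: str, payload: bytes) -> None:
        self.inbox[recipient].append(payload)   # store...

    def deliver(self, recipient: str) -> list[bytes]:
        return self.inbox.pop(recipient, [])    # ...and forward on request

relay = Relay()
relay.accept("bob", b"ciphertext-1")
relay.accept("bob", b"ciphertext-2")
assert relay.deliver("bob") == [b"ciphertext-1", b"ciphertext-2"]
assert relay.deliver("bob") == []  # nothing retained once forwarded
```

A router effectively calls `deliver` a fraction of a second after `accept`; an application server may sit between the two calls for days, or copy the stored messages to an archive before forwarding.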

It’s not always obvious to the user whether a real-time connection is mediated or not. Skype calls were originally peer-to-peer, and Microsoft changed it to mediated after they bought Skype. The general assumption is that this was at the behest of the NSA to enable interception, though I’ve not seen any definitive evidence.

Another thing about this kind of service is that the Service Provider does not need nearly as much resource as one that’s actually receiving all the messages their users send. There could be a thousand different P2P services, in any jurisdiction. With WebRTC now built into browsers, it’s easy to set one up.

Method 4. Service Provider directed peer-to-peer.

It’s not actually hard to be your own Service Provider. The sender can put the message on his own server, and the recipient can connect to the sender’s server to receive it. Or, the sender can connect to the recipient’s server, and send the message to that. In either case, the transmission of the messages (and it’s only one transmission over the public internet, not two as in the previous cases) will be encrypted.

As with method 2,  the Service Provider might manage the encryption keys for the user, or the user’s app might retain encryption keys for the correspondents it has in its directory.

The software is all free and common. Creating a service requires a little knowledge, but not real expertise. I estimate it would take me 90 minutes and cost £10 to set up a publicly-accessible email, forum and/or instant messaging service, using software that has been widespread for many years, and that uses the same secure encryption that everything else on the internet uses. Whether this counts as “end to end encryption” depends entirely on what you count as an “end”.  If I want the server to be in my house instead of a cloud data centre in the country of my choice, it might cost me £50 instead of £10, and it’s likely to have a bit more downtime. That surely would make it “end-to-end”, at least for messages for which I am either the sender or the recipient.

This is getting easier and more common, as internet speeds improve, connected devices proliferate, and distrust of the online giants’ commercial surveillance practices grows. There have been one or two “server in a box” products offered which you can just buy and plug in to get this kind of service — so far they have been dodgy, but there is no technical barrier to making them much better. Even if such a server is intended and marketed simply as a personal backup/archive solution, it is nevertheless in practice a completely functional messaging platform. The difference between an application that saves your phone photos to your backup drive and a full chat application is just a little bit of UI decoration, and so software like owncloud designed to do the first just throws in the second because it’s trivial.

That is Method 5. Owned server

There are several variants covered there. The user’s own server might be on their own premises, or might be rented from a cloud provider. If rented, it might be a physical machine or a virtual machine. The messages might be encrypted with a key owned by the recipient, or encrypted with a key configured for the service, or both, or neither. Whether owned or rented, the server might be in the same country as the user, or a different country. Each of these makes a significant difference from the point of view of an investigating agency wanting to read the messages.

Investigating authorities aren’t only concerned with encryption, though, they also want to know who is sending or receiving a message, even if they can’t read it. This could make the politicians’ opposition to mediated end-to-end encryption more reasonable: the Service Providers allow users to connect to their servers more or less anonymously. Using peer-to-peer or personal cloud services, the data is secure but the identity of the recipients of messages is generally easier to trace. The Service Providers give the users that the authorities are interested in a crowd of ordinary people to hide among.

It’s easy to sneer at Amber Rudd, but can you imagine trying to describe a policy on this in a TV interview, or in the House of Commons? Note I’ve skipped over some subtle questions.

Even if you could, you probably wouldn’t want to. Why spell out, “We want to get cooperation from Facebook to give us messages, but we’re not stupid, we know that if the terrorists buy a £100 off-the-shelf NAS box and use that to handle their messages, that won’t help us”?

Summary: kinds of messaging practice

Service Provider mediated non-end-to-end

Data accessible to authorities: with co-operation of Service Provider
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very convenient

Service Provider mediated end-to-end

Data accessible to authorities: No
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very convenient

End-to-end layered over Service Provider (e.g. PGP mail)

Data accessible to authorities: No
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very inconvenient, all users must use special software, do key management

Peer-to-peer

Data accessible to authorities: No
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: fiddly to use, need to manage directories of some kind

Personal Internet Service (Hosted)

Data accessible to authorities: With the cooperation of the host, which could be in any country
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: Significant up-front work required by one party, but very easy to use by all others. Getting more convenient.

Personal Internet Service (on-site)

Data accessible to authorities: If they physically seize the computer
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: Significant up-front work required by one party, but very easy to use by all others. Getting more convenient.

Appendix: Things I can think of but have skipped over to simplify
  • Disk encryption — keys stored or provided from outside at boot
  • Certificate spoofing, certificate pinning
  • Client applications versus web applications 
  • Hostile software updates
  • Accessing data on virtual servers through hypervisor

Revisiting the Program

Alrenous has played the Thesis 11 card:

Alrenous @Alrenous, 2 hours ago
Finally, if you’re really confident in your philosophy, it should move you to action. Or why bother?
You moved to China. Good work.

Edit: I totally misread Alrenous here: he’s not saying “Change the world”, he’s saying “change your own life/environment”. So the below, while still, in my view, true and important, is not particularly relevant to his point. Oh well.

He makes a valid point that good knowledge cannot be achieved without trying things:

Alrenous @Alrenous, 3 hours ago
Have to be willing to fail to do something new. Something new is patently necessary. NRx isn’t willing to fail. That’s embarrassing.

The problem with this is that neoreaction is the science of sovereignty. Like, say, the science of black holes, it is a field in which the researcher with modest resources cannot really proceed by experiment, valuable though that would be.

We have ideas on how to use and retain sovereignty, but less to say about how to achieve it. There is a great deal of prior art on how to gain power via elections, guerrilla warfare, coup d’état, infiltration; we don’t really have much of relevance to add to it.

We could do experiments in this area, by forming a political party or a guerrilla army or whatever, but that’s a long way from our core expertise, and though we would like to experiment with sovereignty, attempting to get sovereignty over the United States to enable our experiments is possibly over-ambitious. We could hope to gain some small share of power, but we believe that a share of power is no good unless it can be consolidated into sovereignty.

Given that we do not have special knowledge of achieving power, it seems reasonable that we should produce theory of how power should be used, and someone better-placed to get power and turn it into sovereignty should run their military coup or whatever, and then take our advice. That’s what we care about, even if cool uniforms would be better for getting chicks.

I put this forward as a goal in 2012. 

This is an ambitious project, but I think it is genuinely a feasible route to implementing our principles. Marxism’s successes in the 20th Century didn’t come because its theories were overwhelmingly persuasive; they came because Marxism had theories and nobody else did.

Since then, we have seen Steve Bannon, who apparently has at least read about and understood Moldbug, in a position of significant power in the Trump administration. We have seen Peter Thiel also with some kind of influence, also with at least sympathies towards NRx. These are not achievements in the sense that in themselves they make anything better. But they are experimental validations of the strategy of building a body of theory and waiting for others to consume it.

I have for the last few days been suggesting that Mark Zuckerberg could win the presidency as a moderate technocrat who will save the country from Trump and the Alt-Right Nazis, consolidate power beyond constitutional limits, as FDR did, and reorganise FedGov along the lines of Facebook Inc. This outcome is, frankly, not highly probable, but I insist that it is not absurd. One of the things that controls the possibility of this sort of outcome is whether people in positions of influence think it would be a good thing or a bad thing. If, with our current level of intellectual product we can get to the point of 2017 Bannon, is it not plausible that with much more product, of higher quality, much more widely known and somewhat more respectable, the environment in DC (or London or Paris) could be suitable for this sort of historically unremarkable development to be allowed to happen?

This, presumably, is the strategy the Hestia guys are pursuing with Social Matter and Jacobite, and I think it is the right one. We are at a very early stage, and we have a long way to go before a smooth takeover of the United States would be likely, though in the event of some exceptional crisis or collapse, even our immature ideas might have their day. But we do have experimental feedback of the spread of our ideas to people of intelligence and influence: if we had ten Ross Douthats, and ten Ed Wests, and ten Peter Thiels, discussing the same ideas and putting them into the mainstream, we would have visible progress towards achieving our goals.

Trophic Cascade

I’ve been blogging for 13 years, and my first post was about Islam in Europe:

I believed then that the danger of Islam was exaggerated by people who I normally agreed with, such as Eric Raymond.

I’ve changed my view on many things since then, from being a by-the-book Libertarian to something I had to find a new name for.

Only one thing that I wrote back then is definitely now not true:
The Muslim immigrants to Britain are integrating slowly into British culture.

This 2005 piece by me comes off looking especially bad now:

This does not mean that Islam is dying out, just that, like Christianity, it is evolving into a form that makes less conflict with the practicalities of living in a developed society. I expect that in a hundred years Moslems will continue to recite the Koran and observe Ramadan, but what I am calling the “primitive” elements — intolerance of Western practices of commerce, sexual behaviour, freedom of expression, whatever — will have died out.

Among Moslems in the West, as well as the more Westernised Moslem countries like Turkey, this is already the case for the majority. And this is why the “primitives” are angry.

File that under “overtaken by events.” I did say then that it was more important for the West to be seen to win in Iraq than to achieve anything concrete, so maybe if that had been done then things would look different today. Perhaps what I predicted was at that time still possible, but whether I was wrong about that or not, the reality today is utterly different. It is moderate Islam that is declining, globally, not Islamism.

“Integration” is now going backwards. Possibly that had already begun in 2004 and I hadn’t noticed, but I suspect it is something new.

Many of my online homies say that “moderate Islam” is a myth or mirage — that the history of Islam shows that it is inherently and inevitably violent and expansionist. Pitched against liberals who say that Christianity has an equally violent and aggressive history, they certainly have the better of their argument. But while the leftists are ignoring everything before the 1800s, the rightists are ignoring everything since. There was very little Islamist violence in the 20th Century. The Partition of India was a free-for-all. The major Islamic states, Egypt and Turkey, were secular socialist-nationalist in character.

Contrary to my previous assertions, the situation is getting worse, not better, but it is still noticeable that Islamist terrorists in Britain are not, in their national origins, representative of Britain’s Muslim population. The ringleader of the 2005 train bombers was from a typical British-Pakistani background, but most of the others have come from Africa or the Middle East. Even Butt seems atypical, since he came to the country as a refugee — most British Pakistanis did not come as refugees, but as Commonwealth migrants back in the 70s, and families thereafter. Britain has been granting asylum to very few Pakistanis — 77 in the last quarter [pdf].

Pakistani immigration was encouraged for economic reasons up until 1971, and since then it has been family-based. However, their numbers have increased tenfold over those 45 years, from 120,000 to 1.2 million. That’s plausible as bringing in existing family members plus marrying more and having two generations of children, but it’s towards the high end of what you would estimate. If there’s another significant contributor to that tenfold expansion I don’t know what it is. 

Striking as those numbers are, my point is that those “normal British Pakistanis” are not the Islamic terrorists in Britain. They really are the “moderate Muslims” that are alleged not to exist (the child prostitution gangs such as the Rotherham one, on the other hand, are exactly from that typical background, one reason why I see that as a totally separate issue). My biggest worry is that by adding significant numbers of African and Middle Eastern jihadis into the mix, the whole British Pakistani culture could be shifted. The Muslim population of Britain doubled between 2005 and 2015 (per Ed West) and the non-Pakistani Muslim population was probably multiplied several times. This was the effect of “rubbing noses in diversity” — the Labour government changing the demographics of the country not even out of strategy but out of vulgar spite. That was a development I failed to imagine.

Waiting for Islam to become more moderate is no longer on the table. Forcing Islam to become more moderate is, I believe, thoroughly achievable with sensible policies. The fundamental is for law and society to be at least as tough on expression of tribalism from Muslims as they are on expression of tribalism from natives. This is currently very far from the case. I try to stay out of day-to-day politics, so when I retweet other right-wingers, it’s usually because they’re highlighting this disparity:

Twitter Moment

The other side of that is this story: In Germany, Syrians find mosques too conservative

Mosques in Western countries are now more extremist than those elsewhere in the world. This is a straightforward holiness spiral — within a community, you can gain status by professing stronger allegiance to that community’s symbols than anyone else does. In a functioning community, this tendency is moderated by the practical demands of society. But, even the large, stable, Pakistani communities in Britain are not truly functional — they are subsidised and supported by the wider society.

The wider society — the liberal West — is deeply opposed to putting any restraint whatsoever on the puritanism growing within the community. They are like the naive conservationists of the past who believed that by keeping out all predators they were allowing an ecosystem to flourish naturally, when in fact they were unbalancing it towards a destructive tipping point. It is natural and universal for religious extremism to come into conflict with its neighbours and be pushed back by them.

Basically, what I’m saying is that Tommy Robinson is a natural predator, and by suppressing him, liberal society is producing a Trophic Cascade in the extremist ecosystem.

It’s not only in a minority community that this mechanism should happen. I asked on Twitter, is there any Islamic country where the mosques are not subject to state supervision of doctrine? In majority Islamic communities, the pushback in favour of practicality comes from the state. Again, a liberal Western state disclaims any responsibility for pushing back on Islam, though it is a job that I understand most Islamic states consider necessary.

Update: It should go without saying that continuing to increase the Muslim population is also destabilising. As well as increasing the imbalance, in itself it is a sign of weakness which makes extremism more attractive and moderation less attractive. I am not saying any more than that it is not (yet) necessary to undertake more drastic measures such as mass deportations of long-standing residents. Since the continued importation of Muslims is the same political process as the active protection of extremism from its natural opposition, ending one means also ending the other.

Democracy and Hacking

The New York Times has published a long analysis of the effects of the hacking of Democratic Party organisations and operatives in the 2016 election campaign.

The article is obviously trying to present a balanced view, eschewing the “OMG we are at war with Russia” hyperbole and questioning the value of different pieces of evidence. It does slip here and there, for instance jumping from the involvement of “a team linked to the Russian government” (for which there is considerable evidence) to “directed from the Kremlin” without justification.

The evidence that the hackers who penetrated the DNC systems and John Podesta’s email account are linked to the Russian Government is that the same tools were used as have been used in other pro-Russian actions in the past.

(Update 4th Jan 2017: that is a bit vague; infosec regular @pwnallthethings goes into very clear detail in a twitter thread.)

One important consideration is the sort of people who do this kind of thing. Being able to hack systems requires some talent, but not any weird Hollywood-esque genius. It also takes a lot of experience, which goes out of date quite quickly. Mostly, the people who have the talent and experience are the people who have done it for fun.

Those people are difficult to recruit into military or intelligence organisations. They tend not to get on well with concepts such as wearing uniforms, turning up on time, or passing drug tests.

It is possible in theory to bypass the enthusiasts and have more professional people learn the techniques. One problem is that becoming skilled requires practice, and that generally means practice on innocent victims. More significantly, the first step in any action is to work through cut-out computers to avoid being traced, and those cut-outs are also hacked computers belonging to random victims. That’s the way casual hackers, spammers and other computer criminals work, and espionage hackers have to use the same techniques. They have to be doing it all the time, to keep a base of operations, and to keep their techniques up to date.

For all these reasons, it makes much more sense for state agencies to stay arms-length from the actual hackers. The agencies will know about the hackers, maybe fund them indirectly, cover for them, and make suggestions, but there won’t be any official chain of command.

So the hackers who got the data from the DNC were probably somewhat associated with the Russian Government (though a comprehensive multi-year deception by another organisation deliberately appearing to be Russian is not completely out of the question).

They may have had explicit (albeit off-the-record) instructions, but that’s not necessary. As the New York Times itself observed, Russia has generally been very alarmed by Hillary Clinton for years. The group would have known to oppose her candidacy without being told.

“It was conventional wisdom… that Mrs. Clinton considered her husband’s efforts to reform Russia in the 1990s an unfinished project, and that she would seek to finish it by encouraging grass-roots efforts that would culminate with regime change.”

Dealing with the product is another matter. It might well have gone to a Russian intelligence agency, either under an agreement with the hackers or ad-hoc from a “concerned citizen”: you would assume they would want to see anything and everything of this kind that they could get. While hacking is best treated as deniable criminal activity, it would be much more valuable to agencies to have close control over the timing and content of releases of data.

So I actually agree with the legacy media that the extraction and publication of Democratic emails was probably a Russian intelligence operation. There is a significant possibility it was not, but was done by some Russians independent of government, and a remote possibility it was someone completely unrelated who has a practice of deliberately leaving false clues implicating Russia.

I’ve often said that the real power of the media is not the events that they report but the context to the events that they imply. Governments spying on each other is completely normal. Governments spying on foreign political movements is completely normal. Governments attempting to influence foreign elections by leaking intelligence is completely normal. Points to Nydwracu for finding this by William Safire:

“The shrewd Khrushchev came away from his personal duel of words with Nixon persuaded that the advocate of capitalism was not just tough-minded but strong-willed; he later said that he did all he could to bring about Nixon’s defeat in his 1960 presidential campaign.”

The major restraint on interference in foreign elections is generally the danger that if the candidate you back loses then you’ve substantially damaged your own relations with the winner. The really newsworthy aspect of all this is that the Russians had such a negative view of Clinton that they thought this wouldn’t make things any worse. It’s been reported that the Duma broke into applause when the election result was announced.

The other thing that isn’t normal is a complete public dump of an organisation’s emails. That’s not normal because it’s a new possibility, one that people generally haven’t begun to get their heads around. I was immediately struck by the immense power of such an attack the first time I saw it, in early 2011. No organisation can survive it: this is an outstanding item that has to be solved. I wouldn’t rule out a new recommended practice to destroy all email after a number of weeks, forcing conversation histories to be boiled down to more sterile and formal documents that are far less potentially damaging if leaked.

It is just about possible for an organisation to be able to adequately secure their corporate data, but that’s both a technical problem and a management problem. However, the first impression you get of the DNC is one of amateurism. That of course is not a surprise. As I’ve observed before, if you consider political parties to be an important part of the system of government, their lack of funding and resources is amazing, even if American politics is better-funded than British. That the DNC were told they had been hacked and didn’t do anything about it is still shocking. Since 2011, this is something that any organisation sensitive to image should be living in fear of.

This is basically evidence-free speculation, but it seems possible that the Democratic side is deficient in actual organisation builders: the kind of person who will set up systems, make rules, and get a team of people to work together. A combination of fixation on principles rather than practical action, and on diversity and “representativeness” over extraordinary competence meant that the campaign didn’t have the equivalent of a Jared Kushner to move in, set up an effective organisation and get it working.

Or possibly the problem is more one of history: the DNC is not a political campaign set up to achieve a task, but a permanent bureaucracy bogged down by inferior personnel and a history of institutional compromises. Organisations become inefficient naturally.

Possibly Trump in contrast benefited from his estrangement from the Republican party establishment, since it meant he did not have legacy organisations to leak his secrets and undermine his campaign’s efficiency. He had a Manhattan Project, not an ITER.

The task of building–or rebuilding–an organisation is one that few people are suited to. Slotting into an existing structure is very much easier. Clinton’s supporters particularly are liable to have the attitude that a job is something you are given, rather than something you make. Kushner and Brad Parscale seem to stand out as people who have the capability of making a path rather than following one. As an aside, Obama seems to have had such people also, but Clinton may have lacked them. Peter Thiel described Kushner as “the Chief Operating Officer” of Trump’s campaign. Maybe the real estate business that Trump and Kushner are in, which consists more of separate from-scratch projects than most other businesses, orients them particularly to that style.

Actually Existing Capitalism

Something that’s cropped up a few times with recent discussion of neocameralism as a concept is the role of shareholders in existing firms.

Conflicts of interest between principals and agents are one of the most significant forces acting on the structure of any kind of organisation, so it is essential when discussing how to apply structures from one kind of organisation to another, to have a feel of how the conflicts are playing out in existing structures and organisations.

In particular, I have seen more than one person on twitter put forward the idea that present-day joint-stock companies totally fail to resolve the conflict of interest between shareholders and managers, with the result that shareholders are powerless and managers run companies purely in their own interest:

In discussion of this piece by Ron Carrier from November 24th the author said on twitter,

“Because they are non-contractual, shares are a useful way of financing a company without ceding control…. Contrary to shareholder theory, power in the corporation is actually located in mgmt. and the board of directors.”

More recently (December 9th), Alrenous followed the same path: from the suggestion that dividend payments from public companies are in aggregate very low, he draws the conclusion that stocks are “worthless” and that those who buy them are effectively just giving their money away for managers to do what they want with.

I’m sure Alrenous understands that the theory is that a profitable company can be delivering value to shareholders by reinvesting its profits and becoming a more valuable company, capable of returning larger amounts of cash in future. And of course I understand that just because someone believes that a company has become more valuable in consequence of reinvested profits, doesn’t mean it is necessarily true.

Discussions like this among people not involved with investment professionally carry a risk of being based on factoids or rumour. In particular, mainstream journalists are fantastically ignorant of the whole subject. But in the end everything to do with public companies is actually public, if you can find the information and not misunderstand it. (Note that I am not including myself among the professionals, though I’ve worked with them in the past in an IT role).

At any rate, here is a publication dealing with aggregate dividends across the S&P 500: factset.com

“Aggregate quarterly dividends for the S&P 500 amounted to $105.8 billion in the second quarter, which represented a 0.8% increase year-over-year. The dividend total in Q2 marked the second largest quarterly dividend amount in at least ten years (after Q1 2016). The total dividend payout for the trailing twelve months ending in Q2 amounted to $427.5 billion, which was a 7.1% increase from the same time period a year ago.”

So, that’s getting on for half a trillion dollars in dividends paid out by the S&P 500 over the last year. Throwing numbers around without any indication of scale is another media trope, but that’s about 2-3% of US GDP, which seems like the right sort of scale.
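As a sanity check on that order of magnitude, here is the arithmetic, assuming a US GDP of roughly $18.5 trillion for the same period (the GDP figure is my assumption; the exact value doesn’t change the conclusion):

```python
# Scale check: trailing-twelve-month S&P 500 dividends as a share of US GDP.
dividends = 427.5e9   # aggregate dividends, from the factset.com quote above
gdp = 18.5e12         # assumed US GDP for the same period (approximate)

share_of_gdp = dividends / gdp
print(f"{share_of_gdp:.1%}")  # about 2.3%, i.e. within the 2-3% range
```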

As an aside, if some of these companies hold shares in others, the dividends are effectively double-counted: one company in the set is paying out to another, which may or may not then be paying out to its shareholders. I would assume this is not more than a few percent of the total—even investment companies like Berkshire Hathaway are likely to invest more in private companies than other S&P 500 members—but it’s an indication of the pitfalls available in this sort of analysis.

In addition to dividends, as I pointed out, share buybacks—where a company purchases its own shares on the open market—are economically equivalent to dividends: the company is giving cash to its own shareholders. If every shareholder sells an equal proportion of their holdings back to the company, then the result is that each shareholder continues to hold the same fraction of the company’s outstanding shares, and each has been paid cash by the company. Of course, some will sell and some not, but the aggregate effect is the same. The choice of whether to take cash by selling a proportion of one’s holding, or whether to simply hold shares, thereby effectively increasing one’s holding as a fraction of the company, enables shareholders to minimise their tax liability more efficiently, which is apparently why share buybacks have become more significant compared to dividends.
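A minimal sketch of that equivalence, with hypothetical numbers (a company with 1,000 shares buying back 10% of them at $50 a share, and a holder who sells back proportionally):

```python
# Hypothetical buyback example: a proportional sell-back leaves the holder's
# ownership fraction unchanged while paying out cash -- economically a dividend.
shares_outstanding = 1_000
price = 50.0
buyback_fraction = 0.10

holder_shares = 100  # a shareholder owning 10% of the company

# The company repurchases 10% of all shares; this holder sells 10% of theirs.
shares_sold = holder_shares * buyback_fraction
cash_received = shares_sold * price

new_outstanding = shares_outstanding * (1 - buyback_fraction)
new_holder_shares = holder_shares - shares_sold

fraction_before = holder_shares / shares_outstanding   # 10%
fraction_after = new_holder_shares / new_outstanding   # still 10%

print(fraction_before, fraction_after, cash_received)
# ownership unchanged, plus $500 cash -- the same as a $5-per-share dividend
```

A holder who sells nothing instead ends up owning a larger fraction of the company, which is the tax-deferral choice described above.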

Alrenous found this article from Reuters, which says “In the most recent reporting year, share purchases reached a record $520 billion.” That’s not the same period as the one I found for aggregate dividends, so adding them together might be a bit off, but it looks like we can roughly double that 3% of GDP. As I said on twitter, as a general rule, large companies are making profits and paying shareholders.

The reason neocameralism makes sense is that joint-stock companies basically work.

That is not to suggest that the principal-agent conflicts are insignificant. They are always significant, and managing the problem is a large part of any organisational practice. That is what the bulk of corporate law is there to deal with.

I picked up a recent article in Investor’s Chronicle in which Chris Dillow suggests that management is simply overpaid:

“…bosses plunder directly from shareholders by extracting big wages for themselves. The High Pay Centre estimates that CEOs are now paid 150 times the salary of the average worker, a ratio that has tripled since the 1990s – an increase which, it says, can’t be justified by increased management efficiency.”

However, Dillow also links other sources with other suggestions: the 1989 Harvard Business Review article by Michael Jensen is particularly fascinating.

Jensen claims that regulation brought in after the Great Depression had the effect of limiting the control of shareholders over management:

“These laws and regulations—including the Glass-Steagall Banking Act of 1933, the Securities Act of 1933, the Securities Exchange Act of 1934, the Chandler Bankruptcy Revision Act of 1938, and the Investment Company Act of 1940—may have once had their place. But they also created an intricate web of restrictions on company ‘insiders’ (corporate officers, directors, or investors with more than a 10% ownership interest), restrictions on bank involvement in corporate reorganizations, court precedents, and business practices that raised the cost of being an active investor. Their long-term effect has been to insulate management from effective monitoring and to set the stage for the eclipse of the public corporation.

“…The absence of effective monitoring led to such large inefficiencies that the new generation of active investors arose to recapture the lost value. These investors overcome the costs of the outmoded legal constraints by purchasing entire companies—and using debt and high equity ownership to force effective self-monitoring.”

A quarter of a century on from Jensen’s paper, the leveraged buyout looks not so much like an alternative form of organisation for a business as like an extra control mechanism available to shareholders of a public joint-stock company. The aim of a buyout today is, as Jensen describes, to replace inefficient management and change the firm’s strategy, but there is now normally an exit strategy: the plan is that, having done those things, the company will be refloated with new management and a new strategy.

The “Leveraged” of LBO obviously refers to debt: that takes us to the question of debt-to-equity ratio. A firm needs capital: it can raise that from shareholders or from lenders. If all its capital is shareholders’, that limits the rate of profit it can offer them: the shares become less volatile. If the firm raises some of its capital needs from lenders, the shares become riskier but potentially more profitable.

Under the theory of the Capital Asset Pricing Model (CAPM), the choice is arbitrary: leverage can be applied by the shareholders just as by the company itself. Buying shares on margin of a company without debt is equivalent to buying shares of a leveraged company for cash. However, this equivalency is disrupted by transaction costs, and also by tax law.
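A toy illustration of that equivalence, with hypothetical numbers and ignoring the transaction costs and taxes discussed below: an investor who borrows $50 to buy $100 of an unlevered firm’s equity has the same payoffs as one who pays $50 cash for the equity of the same firm carrying $50 of debt, provided both borrow at the same rate.

```python
# Homemade leverage vs corporate leverage under CAPM-style assumptions
# (no taxes, no transaction costs, investor and firm borrow at the same rate).
r = 0.05          # borrowing rate, identical in both cases
debt = 50.0       # amount borrowed, whether by the investor or by the firm
outcomes = [80.0, 100.0, 130.0]  # possible end-of-period firm asset values

# Case 1: unlevered firm; investor buys all $100 of equity with $50 cash
# plus a $50 margin loan, repaying debt * (1 + r) at period end.
margin_payoffs = [assets - debt * (1 + r) for assets in outcomes]

# Case 2: the firm itself borrowed $50; investor pays $50 cash for equity,
# which is worth the assets minus the firm's own debt repayment.
levered_payoffs = [assets - debt * (1 + r) for assets in outcomes]

print(margin_payoffs == levered_payoffs)  # same outlay, same payoffs in every state
```

The equivalence holds because the same loan repayment is subtracted in every state of the world; it breaks down once borrowing rates, taxes, or bankruptcy costs differ between the two cases, which is the point made below.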

There is considerable demand in the market for safe fixed-income investments. A large profitable company is exceptionally well-placed to meet that demand by issuing bonds or borrowing from banks, and therefore can probably do so much more efficiently than its shareholders would be able to individually, were it to hold its cash and leave shareholders to borrow against the more expensive shares.

The transaction costs the other way, the ones caused by corporate indebtedness, come through bankruptcy. Bankruptcy is essential to capitalism, but it involves a lot of expensive lawyers, and can be disruptive. For an extreme example, see the Hanjin Shipping case in September. It’s clearly in the interest of the owners of the cargo to get the cargo unloaded, but the international complications of the bankruptcy of the shipping line means that it’s unclear who is going to end up paying for the docking and unloading. If Hanjin had a capital structure that gave it spare cash instead of debt, all this expensive inconvenience would be avoided.

Aside from transaction costs, the argument in Jensen’s paper is that the management of a company with spare cash is better able to conceal the company’s activities from shareholders. In his account, once the company has been bought out and restructured with debt, any expansion in the cost base has to be directly justified to shareholders and creditors, since capital will have to be raised to pay for it. This improvement in the monitoring of the management is part of what produces the increased value (in his 1980s figures, the average LBO price was 50% above the previous market value).

A quarter of a century later, we frequently read the opposite criticism, that pressure from investors makes management too focused on short-term share prices, which is a bad thing. I linked this article by Lynn Stout, and while I think the argument is very badly stated, it is not entirely wrong. The problem in my opinion is not with the idea of managing in order to maximise shareholder value: that is absolutely how a company should be managed. The problem is with equating shareholder value to the price at which a share of the company was most recently traded. Though that is most probably the best measure we have of the value of the company to its shareholders, it is, nonetheless, not a very accurate measure. Given that the markets have a relatively restricted view of the state of the company, maximising the short-term share price relies on optimising those variables which are exposed to view: chiefly the quarterly earnings.

If outside shareholders had perfect knowledge of the state of the company, then maximising the share price would be the same as maximising shareholder value. Because of the information asymmetry, they are not the same. Value added to the company will not increase the share price unless it is visible to investors, and some forms of value are more visible than others. Management are certainly very concerned by the share price. As I mentioned on twitter, “in any company I worked for, management were (very properly) terrified of shareholders”.

But this is a well-known problem, and various approaches have been tried to improve the situation. Where a company has a long-established leadership that has the confidence of investors, shareholding can be divided into classes of shares with different voting rights, so that the trusted leadership retain control of the company without owning a majority of the equity. This is the situation at Facebook, for instance: Mark Zuckerberg holds class B shares carrying ten votes each, giving him a majority of the votes, while most other shareholders hold ordinary class A shares with one vote each. Buying such shares is an act of faith in Mr Zuckerberg to a degree that buying shares in a more conventionally structured business is not. The justification is that it allows him to pursue long-term strategy without the risk of being interrupted by a takeover or by activist investors.

In fact, this year Zuckerberg moved to entrench his position further, proposing a new non-voting class C share that would let him sell stock without diluting his voting control. That proposal has been challenged in court, and is the subject of ongoing litigation.

In summary, the arrangements of public companies consist of a set of complex compromises. There are many criticisms, but they tend to come in opposing pairs. For everyone who, like Alrenous, claims that shares are worthless because companies do not pay dividends, there are others, like the Reuters article he found, who complain that companies pay out all their profits and do not invest enough in growth. For everyone who, like Chris Dillow, complains that managements are undersupervised and extract funds for self-aggrandisement and private gain, there are others, like Lynn Stout, who complain that managements are over-constrained by short-term share price moves and unable to plan strategically.

The arrangements which implement the compromises between these failings are flexible: they change over time and adapt to circumstances. A hundred-year-old resource extraction business like Rio Tinto is not structured in exactly the same way as a web business like Facebook. The point of Chris Dillow’s article is that fewer businesses are publicly traded today than in the past (though even that is difficult to measure meaningfully).

The joint-stock company is not a magic bullet: it is a range of institutional forms, evolved over time, and part of a larger range of institutional forms that make up Actually Existing Capitalism. They are ways of coping with, rather than solving, the basic conflict-of-interest and asymmetric-information problems that are fundamental to everything from a board of directors appointing a CEO to a coder-turned-rancher hiring a farm hand.

My worry is that Moldbug’s form of Neocameralism is an inflexible snapshot of one particular corporate arrangement, a form which only works as well as it does because it can be adapted to meet changing demands. That is why I tend to think of it as one item on a menu of management options (including hereditary monarchy!).