Better Without AI

How to avert an AI apocalypse… and create a future we would like

Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.

It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.

The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there were only one of them, it would be a big deal; but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.

He makes the correct point that intelligence is not the danger: power is. As I said in 2018, “If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe.” This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).

All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.

He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you.”

Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.

And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.

The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

Anomaly UK: Defining the Facebook Era

this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

Anomaly UK: Epiphenomena

(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)

Putting these two things together, what you get is:

The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)

AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).

Better Without AI: At war with the machines

This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.

As I tweeted in 2020, “If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time”, and as I blogged here back in 2006, “Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.”

AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. Like criminals, the money they steal for themselves represents a tiny fraction of the damage they do.

Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.


Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else

We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.

My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can sidestep whatever technical measure you try to define and carry on doing the same thing. To be clear, that is probably still an improvement, but it’s a half measure.

In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.

The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:

Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.

Better Without AI: Stop Obstructing Science

I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting, but probably not an improvement in the short term.

Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.

The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).

Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.

you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
    not to your representations of it
your beneficent activity arises
    as spontaneous appreciative responsiveness

Better Without: This is About You

It would be nice to end on that note, but I have to shoehorn my own conclusion in:

I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.

This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.

This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well-known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.

Update: more on the culture war

The End of an Era


The Internet began somewhere around 1970

The World Wide Web began somewhere around 1990

Mass participation in the internet was reached a little before 2000

With that, anyone could communicate with anyone else, or with any group, easily and free of charge.

That did not mean that anyone could whip up ordinary people with ordinary interests into political hysteria like Black Lives Matter or QAnon. Ordinary people with ordinary interests would not pay attention to that stuff.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

The Trump presidency was a glorious carnival, but a carnival is all that it was. When the Saturnalia ends the slaves go back to work. I said when he was elected that it was a setback for neoreaction, and it probably was.

I got a lot wrong though. I did not expect the anti-Trump hysteria to endure. Facebook-era politics was too new, then, for me to have understood how it works.

The Facebook era of politics ends today. As with the Trump presidency, I will miss the fun and excitement. I miss eating a packet of biscuits a day too. But man was not meant to eat that much sugar, and democracy was not meant to exist with uncontrolled access to mass media. From the invention of journalism until the twenty-first century, ability to reach the public with your propaganda was power, and power had its say on who could do it. A decade of unconstrained mass media gave us Trump and Brexit and the Gilets Jaunes, and it also gave us Open Borders, Trans Rights, Russiagate, BLM, PornHub, and QAnon. It was destroying our society, and it was going to be stopped sooner or later.

We only really had one thing to say to the normies – that democracy was an illusion, and they were not in charge. I don’t think we need Twitter to tell them that any more. The events of the last week have exposed the relationship between government and media much more obviously than weird technical blog posts.

I spent the night bitching about the hypocrisy and dishonesty of the censors. I suppose I had to get it out of my system.

The pogrom will go a bit wider at first, but in the end I don’t think it will do more than roll back to 2005 or so. I do not expect to be censored, because I do not speak to voters. It was the frictionlessness of the Facebook news feed that pulled normies into these games — if you have to go out of your way to find me, then I am doing the regime no harm, and I expect to be ignored, at least if I get through the next few months.

This, of course, is also the system in China. And I admire the Chinese system. When I tried to imagine neoreactionary victory, I struggled a bit with how a monarchical regime could exist in a world of uncensored internet. I don’t have to worry now.

Some practical resilience steps are sensible. Back up everything. Try not to depend on the Silicon Valley giants (GMail is nice, but you’re not the customer, you’re the product). It’s possible that something like RSS could make a comeback if it’s awkward enough to use that the normies aren’t included, but don’t chase after the holy grail of a censorship-resistant mass media, because that’s a coup-complete problem. Keep your head down, keep the channels open. I had this blog working as a Tor hidden service once; I’ll revisit that, but I don’t expect to need it.

Aside on Software Bloat

I fell yesterday into the sad bitching about how big and slow software has become. This is a very old complaint — the EMACS editor used to be mocked as “eight megabytes and constantly swapping” back when eight megabytes was a huge amount of memory, but that rounds down to zero pixels on a graph of memory utilisation on a modern laptop.

I retailed the usual whines about electron and so on, but really any disagreements are at the margin: the real underlying reasons for software bloat are, unfortunately, good reasons.

Here’s a more interesting illustration: I recently watched this video. It’s a one-hour presentation by a Microsoft developer explaining MS’s implementation of the new C++ charconv header.

This is a library for converting numbers to and from decimal format. Computers internally work with fractions or large numbers in a binary floating-point format, so you have to be able to convert that format to and from a string of decimal digits.

All computers have to do that. My ZX81 did it nearly 40 years ago (though its predecessor the ZX80 couldn’t — it worked only with whole numbers). It was part of the 8K of software built into the machine, along with the full floating-point mathematics support in software.

The new charconv library the Microsoft guy was presenting contains 5300 lines of C++, taking 221K of code and another 400K of data tables.

And — to make it clear — it’s awesome. I was glued to the one-hour video on what they’ve done. The clever bit is getting the right number of decimal digits.

The technical problem is that a fractional decimal number usually doesn’t convert exactly to a binary number. Therefore when you convert from decimal to binary — to do any calculations with the number — you’re getting a slightly different number. That’s OK. But then when you convert back from binary to decimal, you can get an exact decimal representation of the binary approximation of the original decimal number, so it’s a bit different to what you started with. That’s quite annoying. It can even cause program bugs.

The current C++ language standard says the new functions to convert binary to decimal should be able to round to the shortest decimal representation that will exactly convert back to the same binary value. That’s difficult to work out, and really really difficult to work out quickly. In fact a new method of doing it was produced by a guy called Ulf Adams at Google just in 2018, and the Microsoft team have implemented that algorithm for their standard library.

This is all very cool. But the relevance to my point is that when I, in a C++ program, decide to output a floating point number in a decimal form, maybe to save into a database or communicate to another program, and I use this standard to_chars function, I’m invoking all this mass of ingenious code to do the conversion. I may or may not notice that the rounding is now perfect in a way it never was before from 1982 to 2018. I probably won’t notice the 600K of library code that’s being used by my program. If I hadn’t happened to see this video, I would never have had any idea about any of this.

That’s for printing a number! It seems close to the simplest thing a computer program can do. Everything else in my program, dealing with text, or graphics, or networking, or anything has gone through this kind of improvement, often many times. Sometimes your program is getting real benefit from the improvements. Sometimes it’s getting the effect of the improvement, but they don’t make any useful difference for you. Sometimes you aren’t using the new functionality at all, but it still gets included when your program runs. That’s slightly unfortunate, but simplicity is valuable, and grabbing big chunks of functionality is simpler than finely selecting them.

The bottom line is that everything has a cost, even slimming down software, and if you insist on using a low-end 6-year-old computer like I do then it’s not worth most developers’ time to cater to you. I do think there is too much bloat, but it’s about tradeoffs at the margin; there will always be bloat, and that’s OK.

"End-to-end encryption"

The question of regulating encrypted communication has come up again. I was going to write again about how the politicians don’t understand the technologies, and they probably don’t, but if they did, what would they do about it?  The details are too complex to debate on TV news. What percentage of the viewing public even knows what public-key encryption is?

Politicians often talk as if “end-to-end encryption” is a technology, and one which is rare and might practically be banned. There are then huge arguments about whether such banning would be good or bad, which leave me somewhat bemused.

Of course, “end-to-end encryption” is no more a technology than “driving to a friend’s house” is a technology. Cars and roads and driving are technologies, driving to a friend’s house, or to a restaurant, or to work, are social or economic practices that make use of the technology.

Similarly, sending encrypted messages is a technology. Sending “end-to-end” encrypted messages is not a technology, it’s just sending encrypted messages to an intended end recipient. Whether a particular message is “end-to-end” encrypted depends on who the end is.

The soundbites talk about one kind of messaging: messages sent person-to-person from a sender to a recipient via a service provider like Whatsapp, Microsoft or Google.

As of 2017, most data sent over the internet that is at all personal is encrypted. Huge efforts have been made over the last five or so years to get to this stage, yet the debates about encryption have not even touched on the fact. Data in motion seems to be invisible. The encryption used to send the messages is very strong; a few years ago there were quite a few bugs in commonly used implementations, but efforts have been made to find and fix them, and while some are likely to remain, it is plausible that nearly all such encrypted messages are unbreakable even by the most powerful national security organisations.

However, the way most of these services work today is that the sender makes a connection to the service provider and authenticates himself with a password. The Service Provider also authenticates itself to the sender with a certificate, though that’s mostly invisible. The sender then sends their message encrypted to the Service Provider, which decrypts it and stores it. Later (or simultaneously) the recipient makes a connection to the Service Provider the same way, and the Service Provider encrypts the message and sends it to the recipient. This is fundamentally the same whether we are talking about messaging apps, chat, or email, and whether the devices used are computers, phones or tablets.

Anyway, call this Method 1: Service Provider Mediated.

A few of these services now have an extra feature. The sender’s app first encrypts the message in a way that can only be decrypted by the recipient, then encrypts it again to send to the Service Provider. The Service Provider decrypts one level of encryption, but not the second. When the recipient connects, the Service Provider re-encrypts the already encrypted message and sends it to the recipient. The recipient decrypts the message twice, once to get what the Service Provider had stored, and then again to get what the sender originally wrote.

That is why the politicians are talking about Whatsapp, Telegram and so on.

This is Method 2: Service Provider Mediated, with provided end-to-end encryption.

An important question here is who keeps track of the encryption keys. If the Service Provider has that responsibility, then it can support interception by giving the sender the wrong encryption key: one that it or the government can reverse. If the sender keeps the recipient’s encryption key, that is not possible: the Service Provider receives no messages that it is able to decrypt.

Going back to method 1, if the Service Provider doesn’t provide end-to-end encryption, it’s still possible to add it with special software for the sender and recipient. This is awkward for the users and has never caught on in a big way, but it’s the method that the authorities used to worry about, decades back.

Method 3: Service Provider Mediated with independent end-to-end encryption.

There are plenty more. The sender connects to the Service Provider and indicates, via an encrypted message, what recipient they want to message. The Service Provider replies with an endpoint that the sender can connect to. The sender then directly connects to the recipient and transmits an encrypted message, which the recipient decrypts.

This peer-to-peer messaging isn’t fundamentally different in technology from the end-to-end encrypted scenario. In both cases the actual networking is “store-and-forward”: an intermediary receives data, stores it, and then transmits it to either another intermediary or the recipient. The only difference is how long the data is stored for; a typical router will store the data for only a fraction of a second before transmitting and deleting it, whereas a Service Provider’s application server will store it at least until the recipient connects to retrieve it, and quite likely will archive it permanently. (Note there are regulations in some jurisdictions that require Service Providers to archive it permanently, but that applies to their application servers and not to routers, which handle orders of magnitude more data, most of which is transient.)

It’s not always obvious to the user whether a real-time connection is mediated or not. Skype calls were originally peer-to-peer, and Microsoft changed it to mediated after they bought Skype. The general assumption is that this was at the behest of the NSA to enable interception, though I’ve not seen any definitive evidence.

Another thing about this kind of service is that the Service Provider does not need nearly as much resource as one that’s actually receiving all the messages their users send. There could be a thousand different P2P services, in any jurisdiction. With WebRTC now built into browsers, it’s easy to set one up.

Method 4: Service Provider directed peer-to-peer.

It’s not actually hard to be your own Service Provider. The sender can put the message on his own server, and the recipient can connect to the sender’s server to receive it. Or, the sender can connect to the recipient’s server, and send the message to that. In either case, the transmission of the messages (and it’s only one transmission over the public internet, not two as in the previous cases) will be encrypted.

As with method 2,  the Service Provider might manage the encryption keys for the user, or the user’s app might retain encryption keys for the correspondents it has in its directory.

The software is all free and common. Creating a service requires a little knowledge, but not real expertise. I estimate it would take me 90 minutes and cost £10 to set up a publicly-accessible email, forum and/or instant messaging service, using software that has been widespread for many years, and that uses the same secure encryption that everything else on the internet uses. Whether this counts as “end to end encryption” depends entirely on what you count as an “end”.  If I want the server to be in my house instead of a cloud data centre in the country of my choice, it might cost me £50 instead of £10, and it’s likely to have a bit more downtime. That surely would make it “end-to-end”, at least for messages for which I am either the sender or the recipient.

This is getting easier and more common, as internet speeds improve, connected devices proliferate, and distrust of the online giants’ commercial surveillance practices grows. There have been one or two “server in a box” products offered which you can just buy and plug in to get this kind of service — so far they have been dodgy, but there is no technical barrier to making them much better. Even if such a server is intended and marketed simply as a personal backup/archive solution, it is nevertheless in practice a completely functional messaging platform. The difference between an application that saves your phone photos to your backup drive and a full chat application is just a little bit of UI decoration, and so software like owncloud designed to do the first just throws in the second because it’s trivial.

That is Method 5: Owned server.

There are several variants covered there. The user’s own server might be on their own premises, or might be rented from a cloud provider. If rented, it might be a physical machine or a virtual machine. The messages might be encrypted with a key owned by the recipient, or encrypted with a key configured for the service, or both, or neither. Whether owned or rented, the server might be in the same country as the user, or a different country. Each of these makes a significant difference from the point of view of an investigating agency wanting to read the messages.

Investigating authorities aren’t only concerned with encryption, though, they also want to know who is sending or receiving a message, even if they can’t read it. This could make the politicians’ opposition to mediated end-to-end encryption more reasonable: the Service Providers allow users to connect to their servers more or less anonymously. Using peer-to-peer or personal cloud services, the data is secure but the identity of the recipients of messages is generally easier to trace. The Service Providers give the users that the authorities are interested in a crowd of ordinary people to hide among.

It’s easy to sneer at Amber Rudd, but can you imagine trying to describe a policy on this in a TV interview, or in the House of Commons? Note I’ve skipped over some subtle questions.

Even if you could, you probably wouldn’t want to. Why spell out, “We want to get cooperation from Facebook to give us messages, but we’re not stupid, we know that if the terrorists buy a £100 off-the-shelf NAS box and use that to handle their messages, that won’t help us”?

Summary: kinds of messaging practice

Service Provider mediated non-end-to-end

Data accessible to authorities: with co-operation of Service Provider
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very convenient

Service Provider mediated end-to-end

Data accessible to authorities: No
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very convenient

End-to-end layered over Service Provider (e.g. PGP mail)

Data accessible to authorities: No
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very inconvenient, all users must use special software, do key management

Peer-to-peer

Data accessible to authorities: No
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: fiddly to use, need to manage directories of some kind

Personal Internet Service (Hosted)

Data accessible to authorities: With the cooperation of the host, which could be in any country
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: Significant up-front work required by one party, but very easy to use by all others. Getting more convenient.

Personal Internet Service (on-site)

Data accessible to authorities: If they physically seize the computer
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: Significant up-front work required by one party, but very easy to use by all others. Getting more convenient.

Appendix: Things I can think of but have skipped over to simplify
  • Disk encryption — keys stored or provided from outside at boot
  • Certificate spoofing, certificate pinning
  • Client applications versus web applications 
  • Hostile software updates
  • Accessing data on virtual servers through hypervisor

Democracy and Hacking

The New York Times has published a long analysis of the effects of the hacking of Democratic Party organisations and operatives in the 2016 election campaign.

The article is obviously trying to present a balanced view, eschewing the “OMG we are at war with Russia” hyperbole and questioning the value of different pieces of evidence. It does slip here and there, for instance jumping from the involvement of “a team linked to the Russian government” (for which there is considerable evidence) to “directed from the Kremlin” without justification.

The evidence that the hackers who penetrated the DNC systems and John Podesta’s email account are linked to the Russian Government is that the same tools were used as have been used in other pro-Russian actions in the past.

(Update 4th Jan 2017: that is a bit vague; infosec regular @pwnallthethings goes into very clear detail in a twitter thread.)

One important consideration is the sort of people who do this kind of thing. Being able to hack systems requires some talent, but not any weird Hollywood-esque genius. It also takes a lot of experience, which goes out of date quite quickly. Mostly, the people who have the talent and experience are the people who have done it for fun.

Those people are difficult to recruit into military or intelligence organisations. They tend not to get on well with concepts such as wearing uniforms, turning up on time, or passing drug tests.

It is possible in theory to bypass the enthusiasts and have more professional people learn the techniques. One problem is that becoming skilled requires practice, and that generally means practice on innocent victims. More significantly, the first step in any action is to work through cut-out computers to avoid being traced, and those cut-outs are also hacked computers belonging to random victims. That’s the way casual hackers, spammers and other computer criminals work, and espionage hackers have to use the same techniques. They have to be doing it all the time, to keep a base of operations, and to keep their techniques up to date.

For all these reasons, it makes much more sense for state agencies to stay at arm’s length from the actual hackers. The agencies will know about the hackers, maybe fund them indirectly, cover for them, and make suggestions, but there won’t be any official chain of command.

So the hackers who got the data from the DNC were probably somewhat associated with the Russian Government (though a comprehensive multi-year deception by another organisation deliberately appearing to be Russian is not completely out of the question).

They may have had explicit (albeit off-the-record) instructions, but that’s not necessary. As the New York Times itself observed, Russia has generally been very alarmed by Hillary Clinton for years. The group would have known to oppose her candidacy without being told.

“It was conventional wisdom… that Mrs. Clinton considered her husband’s efforts to reform Russia in the 1990s an unfinished project, and that she would seek to finish it by encouraging grass-roots efforts that would culminate with regime change.”

Dealing with the product is another matter. It might well have gone to a Russian intelligence agency, either under an agreement with the hackers or ad-hoc from a “concerned citizen”: you would assume they would want to see anything and everything of this kind that they could get. While hacking is best treated as deniable criminal activity, it would be much more valuable to agencies to have close control over the timing and content of releases of data.

So I actually agree with the legacy media that the extraction and publication of Democratic emails was probably a Russian intelligence operation. There is a significant possibility it was not, but was done by some Russians independent of government, and a remote possibility it was someone completely unrelated who has a practice of deliberately leaving false clues implicating Russia.

I’ve often said that the real power of the media is not the events that they report but the context to the events that they imply. Governments spying on each other is completely normal. Governments spying on foreign political movements is completely normal. Governments attempting to influence foreign elections by leaking intelligence is completely normal. Points to Nydwracu for finding this by William Safire:

“The shrewd Khrushchev came away from his personal duel of words with Nixon persuaded that the advocate of capitalism was not just tough-minded but strong-willed; he later said that he did all he could to bring about Nixon’s defeat in his 1960 presidential campaign.”

The major restraint on interference in foreign elections is generally the danger that if the candidate you back loses then you’ve substantially damaged your own relations with the winner. The really newsworthy aspect of all this is that the Russians had such a negative view of Clinton that they thought this wouldn’t make things any worse. It’s been reported that the Duma broke into applause when the election result was announced.

The other thing that isn’t normal is a complete public dump of an organisation’s emails. That’s not normal because it’s a new possibility, one that people generally haven’t begun to get their heads around. I was immediately struck by the immense power of such an attack the first time I saw it, in early 2011. No organisation can survive it: this is an open problem that has to be solved. I wouldn’t rule out a new recommended practice of destroying all email after a number of weeks, forcing conversation histories to be boiled down to more sterile and formal documents that are far less potentially damaging if leaked.

It is just about possible for an organisation to adequately secure its corporate data, but that is both a technical problem and a management problem. However, the first impression you get of the DNC is one of amateurism. That of course is not a surprise. As I’ve observed before, if you consider political parties to be an important part of the system of government, their lack of funding and resources is amazing, even if American politics is better-funded than British. That the DNC were told they had been hacked and didn’t do anything about it is still shocking. Since 2011, this is something that any organisation sensitive to its image should have been living in fear of.

This is basically evidence-free speculation, but it seems possible that the Democratic side is deficient in actual organisation-builders: the kind of person who will set up systems, make rules, and get a team of people to work together. A combination of a fixation on principles rather than practical action, and of valuing diversity and “representativeness” over extraordinary competence, meant that the campaign didn’t have the equivalent of a Jared Kushner to move in, set up an effective organisation and get it working.

Or possibly the problem is more one of history: the DNC is not a political campaign set up to achieve a task, but a permanent bureaucracy bogged down by inferior personnel and a history of institutional compromises.  Organisations become inefficient naturally.

Possibly Trump in contrast benefited from his estrangement from the Republican party establishment, since it meant he did not have legacy organisations to leak his secrets and undermine his campaign’s efficiency. He had a Manhattan Project, not an ITER.

The task of building–or rebuilding–an organisation is one that few people are suited to. Slotting into an existing structure is very much easier. Clinton’s supporters particularly are liable to have the attitude that a job is something you are given, rather than something you make. Kushner and Brad Parscale seem to stand out as people who have the capability of making a path rather than following one. As an aside, Obama seems to have had such people also, but Clinton may have lacked them. Peter Thiel described Kushner as “the Chief Operating Officer” of Trump’s campaign. Maybe the real estate business that Trump and Kushner are in, which consists more of separate from-scratch projects than most other businesses, orients them particularly to that style.

Archiving

A couple of casual online conversations:

First, journalist Jamie Bartlett banging on on Twitter about blockchain.

It became fashionable in 2015 to dismiss bitcoin but get excited about blockchain. I never really got it, because what makes the blockchain work is the fact that there are rewards for building it. I can download the blockchain without even knowing who I am downloading it from, but, because (a) it takes enormous resources to create that data, and (b) that enormous effort is only rewarded if the recent blocks were added to the longest chain that other bitcoin users were seeing at the time, I can be very confident that the whole chain, at least up to the last few blocks, is the same one anyone else is seeing, even though I don’t know who I got mine from, or who they got theirs from.

A blockchain without a cryptocurrency to reward the miners who create the blockchain is just a collection of documents chained by each containing the hash of its parent. In other words, it is just git.

What I hadn’t realised is that the people so excited about blockchains actually didn’t know about git, even though this aspect of bitcoin’s design was explicitly based on git, and even though git is about 100-1000X more widely used than bitcoin. They maybe knew that git was a source control system, and that you could store and share stuff on github.com, but they didn’t know that it is impossible to publish a version of a git project with a modified history without it being obvious to anyone who already had the true version of that history. If you publish something via git, anyone can get a copy from you or from each other, and anyone can add material, but if anyone tampers with history, it will immediately show.
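The hash-chaining that gives both git and a blockchain their tamper-evidence can be sketched in a few lines. This illustrates the principle only, not git’s actual object format: each entry records the hash of its parent, so editing any historical entry invalidates everything after it.

```python
import hashlib

def entry(parent_hash, content):
    """An entry is its content plus the hash of its parent, like a git commit."""
    h = hashlib.sha256((parent_hash + content).encode()).hexdigest()
    return {"parent": parent_hash, "content": content, "hash": h}

def verify(chain):
    """Recompute every hash; any historical edit invalidates all later entries."""
    parent = ""
    for e in chain:
        expected = hashlib.sha256((parent + e["content"]).encode()).hexdigest()
        if e["parent"] != parent or e["hash"] != expected:
            return False
        parent = e["hash"]
    return True

chain = []
parent = ""
for text in ["post 1", "post 2", "post 3"]:
    e = entry(parent, text)
    chain.append(e)
    parent = e["hash"]

assert verify(chain)
chain[0]["content"] = "rewritten history"   # tamper with the first entry
assert not verify(chain)                    # immediately detectable
```

Anyone holding a copy of the true chain can run the equivalent of verify() and spot a rewritten history at once, which is exactly the property described above.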

So, when Bartlett said “Parliament should put its records on a blockchain”, what I deduced he really meant was “Parliament should check its records into git”. Which, if you happen to care for some reason about the wafflings of that bunch of traitors and retards, is a fairly sensible point.

So much for that. On to incidental conversation the second.

P D Sutherland has been in the news, speaking in his role as Special Representative of the Secretary-General of the United Nations. @Outsideness highlighted a tweet of his as “possibly the most idiotic remark I’ve ever seen”

The interesting thing is I distinctly remember a post on Sutherland, probably 2-3 years ago, on one of the then-young NRx blogs, and a bit of discussion in the comments. It’s interesting because Sutherland is such a stereotypical Euro-politician ( Irish bar -> Fine Gael -> Trilateral Commission -> European Commissioner -> United Nations ) as to be worth attention. Further, it would be interesting to see what we saw then, and to what extent we might have anticipated the present.

However, I couldn’t find the post or discussion. Blogs come and go, writers change personas, and either it’s gone or the search engines couldn’t find it.

Putting these two together, we need to archive our valuable materials, and the proper tool for a distributed archive is git. Spidering a blog might work for a dead one like Moldbug’s, but is a poor way of maintaining a reserve archive of numerous live ones.

I’ve written some ruby scripts to convert blog export files and feed files into one file per post or comment, so they can be archived permanently.  All a bit scrappy at the moment, but it seems to work.

The idea (when it’s a bit more developed) would be that a blog owner could offer the blog as a git archive alongside the actual web interface. Anyone could clone that, and keep it updated using the feed. If the blog ever vanishes, the git clones still exist and can be easily shared.

(I wouldn’t advise posting the git archive to a public site like github. The issue is not privacy–the data is all public in the first place–but deniability.  If you decide to delete your blog, then a recognised public archive is something people can point to to use the content against you, whereas a personal copy is less attributable. Of course, you can’t prevent it, but you can’t prevent archive.org or the like either)

Twister

Back in 2012, I looked at the concept of peer-to-peer blogging. It is definitely time to revisit the environment.

Back then, the main threat I was concerned with was state action directed against service providers being used for copyright infringement. Since then, my political views have become more extreme, while the intolerance of the mainstream left has escalated alarmingly, and so the main threat today is censorship by service providers, based on their own politics or pressure from users and/or advertisers.

Actually publishing content has become easier: with cheap virtualised hosting and fast residential broadband, making a few megabytes of data available is not likely to be a problem. The difficult bit is reaching an audience. The demise of Bloglines and then Google Reader has been either a cause or a symptom of the decline of RSS, and the main channels for reaching an audience today are facebook and twitter. I don’t actually use facebook, so for me twitter is the vital battleground. If you can build up a following linked to a twitter ID, you can move your content hosting around and followers will barely be aware it’s moved. Last week’s Chuck Johnson affair defines the situation we face. We require a robust alternative to twitter: not urgently, but ideally within a 12-24 month timeframe.

I’ve been running the Twister peer-to-peer twitter clone for a couple of weeks, and I think it is OK.

Primarily, it is built on top of the bittorrent protocol. Messages are passed from node to node, and nodes collect messages that are relevant to them.

In addition, it uses the bitcoin blockchain protocol. This is not for content, but for the ID database. Content published by an ID must be signed by the key associated with that ID, and the association of keys with IDs is made via writing entries into the blockchain. Ownership of IDs is therefore “first come, first served”, with the ordering of claims determined by the blockchain (just as the order of transaction attempts is determined for bitcoin, preventing double spends).

As an incentive to build the blockchain, each block can include a “spam message” which will be presented to users.

What that means is that there is no authority who can disable a user ID or take it over. If the ID is registered on the twister blockchain with your public key, it is yours forever.
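The first-come-first-served rule can be illustrated with a toy model. In the real system the ordering of claims comes from their position in the blockchain; here a simple list in block order stands in for that, and the names and keys are made up.

```python
def build_registry(claims):
    """claims: (user_id, public_key) pairs in blockchain order.
    The first claim on an ID wins; later claims on the same ID are ignored,
    and no authority exists that can reassign an ID afterwards."""
    registry = {}
    for user_id, pubkey in claims:
        if user_id not in registry:      # first come, first served
            registry[user_id] = pubkey
    return registry

claims = [
    ("alice", "pubkey-A1"),
    ("bob",   "pubkey-B1"),
    ("alice", "pubkey-A2"),   # second claim on "alice": rejected
]
registry = build_registry(claims)
assert registry["alice"] == "pubkey-A1"
```

Because every client derives the same registry from the same chain, “ownership” of an ID is just a fact anyone can recompute, not a record any provider holds.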

The application runs, like the bitcoin reference client it is based on, as a daemon offering a JSON-RPC socket interface. It also serves some static web pages over HTTP on the same port, providing a working twitter-lookalike web client.

As far as I can see, it works properly and reliably. I am running it over Tor, and that works fine.

Current Shortcomings

It’s still treated as experimental by the authors, so it’s not surprising if it’s not complete.

The biggest shortcoming is that it’s inconvenient to run. Like bittorrent, it needs to find peers and build a network to exchange data with, and, like bitcoin, it needs to keep up with a blockchain. (It is not necessary to “mine” or build the blockchain to use the service). You really need to start it up and leave it running, if not 24/7, at least for hours at a time.

For the same reason, it doesn’t run on mobile devices. It could be ported, but staying on the peer-to-peer networks would be an inconveniently heavy use of data, battery and processor resources.

Fundamentally, you don’t see all the traffic (that wouldn’t scale), so you can’t conveniently search it. You need to advertise that you are interested in something (by following a user, for instance), and gradually it will start to flow your way.

Future Shortcomings

The network is currently very small-scale, so it remains to be seen how well it would scale up to a useful size. I don’t understand the torrent / DHT side of things all that well, but as far as I can see it should hold up.

The ID blockchain functionality seems easier to assess. If each new user requires of the order of 64 bytes of blockchain space, then ten million users would need about a gigabyte of disk space to archive. A lot, but not prohibitive. As with bitcoin, the hope would be that users would be able to use lightweight clients, with the heavy network functions semi-centralised.
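The arithmetic behind that estimate, taking the 64 bytes per registration as the assumed figure:

```python
bytes_per_registration = 64      # assumed size of one ID transaction
users = 10_000_000

raw_data = bytes_per_registration * users
print(raw_data / 1e6, "MB")      # 640 MB of raw transaction data; block
                                 # headers and other overhead push the
                                 # archive towards the gigabyte mark
```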

[The useful feature of a peer-to-peer protocol for us in this scenario is not that there is no trust in the system at all, or that there is no centralisation at all; it is that there is no single thing that must be trusted or relied on. The user has the option of doing everything themselves, and, more useful to the ordinary user, they have the option of temporarily and conditionally trusting a provider of their choice]

Also as with bitcoin, the most difficult obstacle is key management. When you want to start using twister, you generate a key pair, and post a transaction associating your public key with your chosen twister ID. You need the private key to post twists, or to see private messages. If you lose the key, you’ve lost your ID. If someone gets your key, they can post as you and read your private messages. Handling keys securely is difficult. For a casual user who isn’t too concerned about surveillance or censorship, it’s prohibitive.

Like bitcoin, the network node, blockchain archive and wallet (user ID) are all managed by a single process. Logically, the private operations of creating authenticated transactions/messages ought to be separate from the maintenance of the network node.

Twister is designed for those who are concerned about surveillance or censorship, but we need to be able to talk to those who aren’t. It needs to provide security for those who need it, while being as easy as possible for those who don’t.

The system seems fairly robust to attacks, including denial-of-service attacks. Media companies have attempted to interfere with bittorrent, but have not as far as I know blocked an actual running torrent, rather concentrating on the chokepoints of communicating knowledge of specific torrents.

The ID subsystem could be flooded with new id requests. There is a proof-of-work requirement on individual “transactions” (new id assignments), separate from the actual block proof-of-work, but that cannot be too onerous, so a determined adversary could probably produce tens of thousands. However, miners could respond by being fussier about what they accept, without breaking the protocol.

The blockchain itself is vulnerable. The hashrate at present is about one quarter-millionth of Litecoin’s (which uses the same hash method), so one block of the twister blockchain currently costs about the same in compute resources as a thirtieth of a cent’s worth of Litecoin. (I have mined dozens of blocks myself over the past week). Anyone with a serious GPU-based mining rig could mine hundreds of blocks in minutes. The incentive for legitimate miners is always going to be weak, since a customised client can trivially ignore the “spam” messages. However, it is not obvious that this is a real problem. The value of the blockchain is that it establishes ownership of IDs, but an ID is not really valuable until it has been used for a considerable period, so to take over a valuable ID you have to fork the blockchain from a long period in the past. Even if you have the hashpower to do that, your blocks are likely to be ignored simply by virtue of being so old.

Suggested Enhancements

The main author has suggested taking the cryptography out of the daemon and into the web client (in javascript). That would be an improvement and a step towards usable lightweight clients.

However, there is another requirement to do that, which is more sophisticated key management. Mobile devices and third-party service providers would hugely improve the convenience and usability of the service, but at a cost of crippling the security, since neither one is sufficiently trustworthy to hold the private key.

What I have suggested is a system of subkeys, with restricted delegated authority.  I create my key pair and post it to the network with my chosen ID, as per the current protocol. Then, I can create a new key pair, and create a transaction signed by my original key (which I call the “master” key), delegating the authority to make posts for a limited time (a week, say) to this new key (which I call a “subkey”). I transfer the private key of the subkey to my phone app, or to a service-provider I trust, and can then make posts using the subkey.

After the week, that subkey is expired and posts made with it will no longer be accepted as valid by other clients or network nodes. If the key is compromised, the damage is limited. I could even post a “revoke” transaction signed by my master key.
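A toy sketch of the proposed scheme. Here sign() and verify() are stand-ins for real public-key signatures (twister would inherit ECDSA keys from the bitcoin code it is based on): a “signature” just records which key signed what, which is enough to show the delegation and expiry logic; all names and key values are made up.

```python
def sign(keypair, message):
    """Stand-in for a real signature: records the signer's public key."""
    return {"signer": keypair["pub"], "message": message}

def verify(public_key, message, signature):
    return signature["signer"] == public_key and signature["message"] == message

def delegate(master, sub_pub, valid_until):
    """Master key signs a certificate authorising sub_pub to post until valid_until."""
    cert = {"subkey": sub_pub, "valid_until": valid_until}
    return {"cert": cert, "sig": sign(master, repr(cert))}

def check_post(master_pub, delegation, post, post_sig, now):
    cert = delegation["cert"]
    return (verify(master_pub, repr(cert), delegation["sig"])  # cert signed by master?
            and now <= cert["valid_until"]                     # not expired?
            and verify(cert["subkey"], post, post_sig))        # post signed by subkey?

master = {"pub": "master-pub", "priv": "master-priv"}
subkey = {"pub": "sub-pub",    "priv": "sub-priv"}
week = 7 * 24 * 3600
d = delegate(master, subkey["pub"], valid_until=1000 + week)

post = "hello from my phone"
sig = sign(subkey, post)
assert check_post(master["pub"], d, post, sig, now=1000)               # accepted
assert not check_post(master["pub"], d, post, sig, now=1000 + 2*week)  # expired
```

A revoke transaction would work the same way: a statement signed by the master key that clients check before the expiry test.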

Alternatives

@jokeocracy has pointed at Trsst. Also, GnuSocial is quite well established. Both of these are federated client-server architectures. See quitter.se for an example of a GnuSocial-based service provider. (It would be funny if we were all to move en bloc onto some lefty-oriented “free from capitalism” platform, and perhaps instructive, but it would not necessarily be a long-term solution.)

There is some resistance to censorship there, in that if one service provider blocks you, you can switch to another. However, your persistent ID is tied to the service provider you choose, which could take a dislike to you or (equally likely in the early stages) just go away, which makes it harder to maintain continuity. Also, the federation model does not necessarily prevent the consumer’s service provider from censoring your messages to its customers. The customers can switch if they want to, but not trivially.

In the case of Trsst, it strikes me that this is a mistake: users have private keys, but the association of keys with IDs, unlike in twister, is made by the service provider. If mentions, replies, and subscriptions were by public key instead of by “nickname”, users could migrate more painlessly. However, that registry would have to be distributed, adding complexity.

In the long run, what I would hope to see is a service that looks like quitter.se or Trsst, but acting as a proxy onto the Twister network, ideally with short-lived subkeys as I describe above.

Other relevant projects, not ready yet, are Urbit (of course) and chatless (by @_raptros).

Thinking about Urbit

OK, I’ve been driving myself nuts trying to work out how Urbit does I/O when it’s implemented using Nock and Nock doesn’t do I/O.

It’s now the middle of the night and I think I’ve got it.

Since it’s not in the Nock spec, and the Nock spec is defined in terms of nouns, it can only be hidden in the implementation of a noun.

A naive reading of the spec suggests there are two kinds of noun:

  1. a literal value (arbitrary-size integer)
  2. a pair of nouns

The only way it can work is if there are at least four kinds of noun:

  1. a literal value
  2. a pair of nouns L and R
  3. the stream of input events
  4. a nock invocation on a pair of nouns A and F

Further, the “opcode 2” reduction in the Nock evaluator is not implemented by recursing the Nock evaluator, but by returning a type 4 noun.

A type 3 noun “counts” as a pair, where L is the next event in the input stream and R is another type 3 noun.

The runtime creates a type 4 noun where A is a type 3 noun and F is the system-implemented-in-nock.

It then calls a native function output(n) on the noun it created.

output(n) looks at the type of n. If it’s type 1, it treats it as an output event and “performs” it.

If it’s type 2, it calls output on L, then on R.

If it’s type 4, it runs the Nock evaluator on it and calls output() on the result.

Can anyone who’s looked into the vere source tell me if that is about right?
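As a sketch of that reading (not actual vere code, and the names are mine): the noun kinds can be modelled as tagged values, with output() dispatching on the tag. A CALL here just carries a thunk standing in for a deferred Nock invocation; the type-3 input stream, which would behave as a lazily-extended pair of events, is omitted for brevity.

```python
# Tagged representations of the noun kinds described above.
def literal(n):   return ("LITERAL", n)
def pair(l, r):   return ("PAIR", l, r)
def call(thunk):  return ("CALL", thunk)   # deferred Nock invocation

performed = []  # output events, in the order they are "performed"

def output(noun):
    kind = noun[0]
    if kind == "LITERAL":
        performed.append(noun[1])   # treat as an output event and "perform" it
    elif kind == "PAIR":
        output(noun[1])             # L first...
        output(noun[2])             # ...then R
    elif kind == "CALL":
        output(noun[1]())           # evaluate the deferred call, output the result

# A pair whose right side is a deferred computation yielding more output:
tree = pair(literal(1), call(lambda: pair(literal(2), literal(3))))
output(tree)
assert performed == [1, 2, 3]
```

The point of the CALL kind is exactly the one made above: evaluation is driven lazily from output(), rather than by recursing the evaluator eagerly.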

Social-network threat models

There have been a couple of comments on my peer-to-peer blogging post, both addressing different threat models than I was looking at.

My posts were looking at countermeasures to continue blogging in the event that public web hosting service providers are taken out by IP enforcement action. The aim of such enforcement action is to prevent distribution of copyrighted content: since I don’t actually want to do that I am not trying to evade the enforcement as such, just trying to avoid being collateral damage.  The major challenges are to avoid conventional abuse, and to maintain sufficient availability, capacity and reliability without the resources of a centralised service with a proper data centre.

Sconzey mentioned DIASPORA*. That is an interesting project, but it is motivated by a different threat model: the threat from the service providers themselves. Social-networking providers like facebook or google have, from their position, privileged access to the data people share, and are explicitly founded on the possibilities of profiting from that access. Diaspora aims to free social-networking data from those service providers, whose leverage is based on their ownership of the sophisticated server software and on lock-in and network effects. To use Diaspora effectively, you need a good-quality host. Blogging software is already widespread: if you have the infrastructure you need to run Diaspora, you can already run wordpress. The “community pods” that exist for Diaspora could be used for copyright infringement and would be vulnerable to SOPA-like attacks.

James A. Donald says “we are going to need a fully militarized protocol, since it is going to come under state sponsored attack.” That’s another threat model again. Fundamentally, open publication cannot be defended against this attacker: if you publish something, the attacker can receive it. Having received it, he can trace back one step to where it came from, and demand to know where they got it from. If refused, or if the intermediate node is deliberately engineered so that messages cannot be traced back further, then the attacker can threaten to shut down or isolate the node provider.

In practice it can be possible to evade that kind of attacker by piggy-backing on something the attacker cannot shut down, because he relies on it himself.  That is a moving target, because what is essential changes over time.

(One could avoid using fixed identifiable locations altogether – e.g. wimax repeaters in vehicles. That’s not going to be cheap or easy).

James seems to be thinking more about private circles, where end-to-end encryption can be used. That’s more tractable technically, but it’s not useful to me. I don’t have a circle of trusted friends to talk about this stuff with: I’m throwing ideas into the ether to see what happens. Any of you guys could be government agents for all I know, so carefully encrypting my communications with you doesn’t achieve anything.

More on peer-to-peer blogging

I was musing a few days ago on how to do blogging if SOPA-like measures take out hosting providers for user content.

Aaron Davies in a comment suggests freenet. I’m not sure about that: because you don’t choose at all what other content you’re hosting, I would expect the whole system to drown in movie rips and porn. The bittorrent idea, where the stuff you help distribute is the stuff you want to consume, seems less vulnerable. alt.binaries didn’t die because of copyright enforcement; it died because the copyright infringement made such large demands on capacity that it was no longer worth distributing.

Bear in mind that I’m not going “full paranoid” here: my threat scenario is not “the feds want to ban my blog”, it’s “Blogger and the like have so much difficulty complying with IP law that they’re becoming painful and/or expensive to use”.

In that circumstance, simply running wordpress or geeklog on my own machine is an option, but rather a crappy one in capacity and reliability terms. I’ve already looked into using a general web hosting provider, and I could move onto that for probably five quid a month, but I’ve again been put off by reliability issues. Also, in the threat scenario under consideration, third-party web hosting might be affected also.

But Davies in passing mentioned email. When I saw that I went “D’oh”. I hadn’t thought of using SMTP. I’d thought of NNTP, which I have a soft spot for¹, but rejected it. SMTP could well be the answer: like NNTP, it was designed for intermittent connections. Running mailman or something similar on your home PC is a lot simpler and safer than running wordpress. The beauty of it is that not even Hollywood can get email banned. And if they tried, all you would need to keep dodging is a non-government-controlled DNS, which is something people are already working on.

You still need a published archive though; one that people can link to. But that can work over SMTP too, as a request-response daemon. Those were actually quite common before the web: you could get all sorts of information by sending the right one-line email to the right address.

There were actually applications that ran over SMTP. One which lasted well into web days, and may even still exist here and there, was the diplomacy judge, for playing the board game Diplomacy over email.

Unmoderated comments would have to go under this scenario, whatever the technology, but moderated comments would be easy enough; the moderator would just forward acceptable comments onto the publication queue. Email clients in the days when mailing lists were very common were designed specifically to make following lists in this way easy (I remember mutt was much favoured for the purpose). Each list became a folder (by using procmail or the like), each post a thread, and each comment a reply. My own email is still set up that way, though I pretty much never look at the list folders any more, I think a couple of them are still being populated for things like development of the linux b43 wireless chipset driver.

The problem with using mail is spam. Everyone who wants to subscribe has to give me their email address — that’s probably the biggest reason why the use of mailing lists declined; that and the impact of false positives from spam filtering.

 If generic publishing networks drown in media, and mail drowns in spam, then some more private network is needed.

Requirements:

  •  Anyone can access posts, as easily as possible
  •  I only have to process posts from sources I’ve chosen

Our big advantage is that the actual storage and bandwidth needed for blogging are rounding error in a world of digital video.

Reliable access requires that there are multiple sources for posts, to compensate for the fact we’re not running industrial data centres.

The obvious approach is that if I follow a blog, I mirror it. Someone wanting to read one of my posts can get it from my server, or from any of my regular readers' servers. That just leaves the normal P2P problems:

  • locating mirrors, in spite of dynamic IP assignment
  • traversing NAT gateways that don't allow incoming connections
  • authenticating content (which might have been spoofed by a mirror)

Authentication is trivial — there's no complex web of trust: each blog has an id, and that id essentially is the public key that verifies its signatures. The first two are difficult, but have been solved by all the P2P networks. Unlike some of them, we do want to persistently identify sources of data, so presumably each node regularly announces its location to the other nodes it knows about. Possibly other already-existing P2P networks could be used for this advertisement function. There's a DoS vulnerability there, with attackers spoofing location notifications, so probably the notifications have to be signed. I guess the node id is distinct from the blog id (blogs could move, and nodes could originate more than one blog), so it's also a distinct key. Like a blog id, a node id essentially is the public key. NAT traversal I'm not sure about — there's stuff like STUN and ICE which I haven't really dealt with.
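The signed-notification idea can be sketched in a few lines. This is a toy: HMAC with a shared per-node secret stands in for a real public-key signature scheme (Ed25519 or similar, which Python's standard library doesn't provide), and the sequence-number check is one simple way to reject replayed notifications — all names and the message format are invented:

```python
import hashlib
import hmac
import json

# Toy stand-in for real signatures. With public-key crypto, the node id
# would be derived from the public key, so anyone could verify.
def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def make_notification(node_id: str, key: bytes, address: str, seq: int) -> dict:
    body = {"node": node_id, "address": address, "seq": seq}
    payload = json.dumps(body, sort_keys=True).encode()
    return {**body, "sig": sign(key, payload)}

class LocationTable:
    """Tracks the latest verified address for each known node."""
    def __init__(self, known_keys: dict):
        self.known_keys = known_keys   # node id -> verification key
        self.locations = {}            # node id -> (seq, address)

    def receive(self, note: dict) -> bool:
        key = self.known_keys.get(note["node"])
        if key is None:
            return False  # unknown node
        body = {k: note[k] for k in ("node", "address", "seq")}
        payload = json.dumps(body, sort_keys=True).encode()
        if not hmac.compare_digest(note["sig"], sign(key, payload)):
            return False  # bad signature: spoofed notification
        seq, _ = self.locations.get(note["node"], (-1, None))
        if note["seq"] <= seq:
            return False  # stale or replayed notification
        self.locations[note["node"]] = (note["seq"], note["address"])
        return True
```

A forged address changes the payload, so the signature check fails; a replayed old notification fails the sequence check.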

Assuming we can map a persistent node id to an actual service interface of some kind, this is what it would have to provide:

  • List blogs that this is the authoritative source for
  • List blogs that this mirrors (also returning authoritative source)
  • List other known mirrors for a blog id
  • List posts by blog id (optional date ranges etc)
  • Retrieve posts by blog id and post id
  • Retrieve moderated comments by blog id and post id (optional comment serial range)
  • Retrieve posts and moderated comments modified since (seq num)

The service is not authenticated, but posts and moderated blog comments are signed with the blog key. (Comments optionally signed by the commenter’s key too, but a comment author signature is distinguishable from a comment moderator signature).
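The read side of that interface is small enough to sketch as an in-memory data structure — everything here (class names, fields, the use of a per-node sequence counter) is invented for illustration, and signature verification is left out:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    blog_id: str
    post_id: str
    body: str
    seq: int          # per-node modification counter, for "modified since"
    signature: str    # made with the blog key; verification omitted here

@dataclass
class Node:
    node_id: str
    authoritative: set = field(default_factory=set)    # blog ids we originate
    mirrors: dict = field(default_factory=dict)        # blog id -> authoritative node id
    known_mirrors: dict = field(default_factory=dict)  # blog id -> set of node ids
    posts: list = field(default_factory=list)

    def list_authoritative(self):
        return sorted(self.authoritative)

    def list_mirrored(self):
        return dict(self.mirrors)

    def list_mirrors(self, blog_id):
        return sorted(self.known_mirrors.get(blog_id, set()))

    def list_posts(self, blog_id):
        return [p.post_id for p in self.posts if p.blog_id == blog_id]

    def get_post(self, blog_id, post_id):
        return next(p for p in self.posts
                    if p.blog_id == blog_id and p.post_id == post_id)

    def modified_since(self, seq):
        return [p for p in self.posts if p.seq > seq]
```

The `modified_since` query is what makes mirroring cheap: a mirror only has to remember the last sequence number it saw, not diff whole archives.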

The service owner can also

  • Create post
  • Add post to a blog
  • Edit post
  • Add a moderated comment to a blog
  • Check mirrored blogs for new posts & comments & mirror list updates

There’s a case for mirroring linked posts on non-followed blogs: if I link to a post, I include it on my server so that whoever reads it can read the link too.  Ideally, there should be an http side to the service as well, so people outside the network can link to posts and see them if they have the good luck to catch the right server being available at the time.  That all needs more thought.

¹When RSS was coming in, I argued that it was just reinventing NNTP and we ought to use that instead.