What Should Social Look Like?
Ephemeral By Default, P2P, and Algorithmically-Mediated
Motivation
Recently, I wrote a post proposing that Twitter shouldn’t ban Donald Trump because I thought the possibility of durable social fission existed. Hours later, Twitter banned Donald Trump. Soon after, a broader group of major internet companies did the same. Then, Amazon brought the Ban Hammer down on Parler, the hate reactor built by hapless fall guy John Matze and heiress Rebekah Mercer.
There are excellent justifications for why this emergent coordination occurred and why it was necessary. I don’t want to talk about them. What matters more for me right now? I don’t think it will ever work again. A trans-national economy of people building censorship-resistant technology seems to have a new occasion for war. The fire next time won’t be containable in similar ways.
This leads me to a simple conclusion: if you care about making social mediums work better in the future, your solutions must assume censorship resistance as a given. Otherwise, you’re building on an already-pruned branch.
Engineering Our Augmented Space
Centralized social media – as it exists today – is Procrustean. On Twitter, super-nodes excessively synchronize attention in maladaptive ways. The weight of identity-to-context skews far enough towards the former that we’re prone to chronically ignoring or failing to consider the latter. Social beefs proliferate in wild abundance. Crowds grow, swarm, and disassociate from reality. All of this is great for farming freely-offered attention. But, a good cybernetic ecosystem it does not make.
Given its de-facto-albeit-not-really role as the world’s hyperspace public sphere, a great deal of (recent) attention seems to be on making a censorship-resistant Twitter. This is a mistake borne of the DWeb/Blockchain communities’ ideological defaults.
Censorship resistance may be necessary for solving some of these problems. But, it’s desperately far from sufficient. Missing is the attention dedicated to creating a social environment that fits us, not one that demands we fit it. It requires considering that – while there are a variety of technological architectures for creating a distributed and decentralized social medium – not all of them are sociological solutions.
I am interested in finding a solution at the intersection of both problems.
What follows is a small projection of my view from 10,000 feet.
1. Ephemeral By Default
Blockchains ensure immutability. Robust consensus mechanisms afford canonical state. Both can be useful. But the latter is computationally costly. More than that, there is a cost in the forms of expression it selects for.
In the context of ephemeral mediums, it’s excessively restrictive. Not every statement requires non-repudiation; in social contexts, very few do. (I’m bending what non-repudiation means a bit here. The right to be forgotten is almost closer. Except, you can’t forget on a global immutable ledger…by definition.) Ephemeral by default – albeit with cryptographic signing for message authentication – means what you socially reveal is more in your control, and fosters better norms.
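To make that distinction concrete, here is a minimal sketch – my own illustration, not a spec – of a message that is authenticated but committed to nothing: peers can verify who wrote it while they hold it, and are expected to drop it once it expires. It assumes PyNaCl for Ed25519 signatures; the field names are invented.

```python
# A minimal sketch, assuming PyNaCl for Ed25519 signatures; field names are
# invented for illustration. The message is authenticated (peers can check who
# wrote it while they hold it) but carries an expiry instead of a ledger entry.
import json
import time

from nacl.signing import SigningKey, VerifyKey

def make_message(signing_key: SigningKey, body: str, ttl_seconds: int = 86400) -> dict:
    """Sign a message that peers are expected to drop after `expires_at`."""
    payload = {
        "body": body,
        "created_at": int(time.time()),
        "expires_at": int(time.time()) + ttl_seconds,
    }
    raw = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "sig": signing_key.sign(raw).signature.hex(),
        "author": signing_key.verify_key.encode().hex(),
    }

def accept(msg: dict) -> bool:
    """Verify authorship and freshness; expired messages are simply discarded."""
    raw = json.dumps(msg["payload"], sort_keys=True).encode()
    VerifyKey(bytes.fromhex(msg["author"])).verify(raw, bytes.fromhex(msg["sig"]))
    return msg["payload"]["expires_at"] > time.time()
```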
If every utterance persists forever, people either self-censor or excessively fracture their identity into disposable fragments, as a defense. (@balajis gives a talk worth listening to about the pseudonymous economy, which proposes using cryptographic attestations to imbue alts with some of main’s properties, in a precise information-theoretic way. He sees it as a way to manage social capital risks as with any other capital asset. The talk is fascinating, but I think it’s solving the wrong problem.)
Both behaviors frustrate social processes – including those of consensus formation.
The need to optionally commit a statement to a ledger, in contexts where non-repudiation is desirable, is real. (For example, if you are following someone whose reputation derives from making accurate predictions about the future, you want their full history, not a selectively trimmed version. Global, immutable state makes such scams more difficult. But, my contention is that making them impossible is too socially costly. Expressive behavior exists, and pure instrumentality is a dead end.) Still, ledger commitment is not a good default. The temporal context of most expressions is now. Carrying prior utterances forward – generally without their context – creates the conditions for maladaptive hysteresis. The past binds longer than it needs to, in ways that don’t reflect the present – and, in doing so, it limits the future.
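A rough sketch of what opt-in non-repudiation could look like, purely illustrative: the message itself stays ephemeral, but its hash can be anchored somewhere durable when a record is actually wanted. The `anchor_on_ledger` call is hypothetical; any append-only, timestamped store would do.

```python
# Opt-in non-repudiation sketch: keep the message ephemeral, anchor only a hash.
# `anchor_on_ledger` is hypothetical; any append-only, timestamped store would do.
import hashlib
import json

def commitment(msg: dict) -> str:
    """A content hash that can later prove a specific message existed,
    without the message itself living on a global, immutable ledger."""
    raw = json.dumps(msg, sort_keys=True).encode()
    return hashlib.sha256(raw).hexdigest()

# anchor_on_ledger(commitment(msg))  # opt-in, per message, never the default
```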
2. Peer-to-peer by Necessity
On the technological side, federation offers a means of decentralization that fits in well with existing skills and practices. For example, if you can deploy a website, you can probably set up a Mastodon instance. This matters for fostering adoption. Participation at an infrastructure/administration level doesn’t have a steep learning curve or cost of entry.
On the sociological side, it also seems to solve a critical problem: users have a stake in the communities they join. Or, more accurately, users are part of communities – it doesn’t make much sense to speak of communities absent individual stakes. This can help create incentives for the community to define and enforce its boundaries.
However, boundary enforcement and community cultivation seem inextricably tied up in communal ownership of identity. For example, on Mastodon, your identity is subject to community/node intervention. Expulsion doesn’t render you a digital nomad; it erases the identity. For what will end up being the most passionate and proselytizing early adopters, this is too repulsive. And while you can propose networks that afford a right of intact exit, these have the effect of neutering the enforcement mechanisms. What you have then is less a community and more a federated network transport.
More to the point, I think the model reflects an implement-what-we-know mentality. Early websites were nothing more than digital brochures. Federation models try to replicate…schools? Co-op boards? Cities? City states? Nation states? They tend to assume a one-to-one/belongs-to relationship, often bound by geography. But the beauty of the internet is that meatspace doesn’t bind. And, the brilliance of Twitter showed us that association is so free, it’s almost a liquid. (I saw this tweet from @aaronzlewis yesterday and wanted to plug it somehow. It vaguely fits, but in any case, Aaron is worth a follow for thinking about this stuff.)
Recapitulating, Zuckerborg was almost preposterously wrong when he said (this is unfair: if you use a loss function that penalizes illegibility for the purpose of algorithmic attention resale, he was precisely correct),
“You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly […] Having two identities for yourself is an example of a lack of integrity.”
Identity is contextual. We all wear different masks. And, we act in ways conditional upon our expectations, capital, and social environment. This doesn’t betray a “lack of integrity.” It is a fundamental and adaptive part of being a human being.
With this rough sketch, I reach the following conclusions:
- Individuals must own their identities. No one except the person associated with a particular identity can revoke it. The most motivated adopters won’t allow otherwise.
- The medium itself must not mediate voluntary associations. That is, for any possible relationship pair, no entity can inhibit direct communication except for either of the participants. This follows from the same adopter assumption. But, it’s also a consequence of the ethereal nature of association absent geographic confinement. While this environment has perils, it is also the source of its continuing promise.
This largely precludes federation. Peer-to-peer is the way forward. (This doesn’t mean there is no role for service providers that act as proxies, caches, aggregators, etc. There surely is and will be. But, I don’t believe they should be part of the p2p core.)
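As a toy illustration of the first bullet above – not a protocol proposal, and assuming PyNaCl – identity can be nothing more than a keypair the user generates and holds. Peers accept a revocation only if it is signed by that same key, so no node, admin, or federation can erase the identity on the holder’s behalf.

```python
# A toy illustration, assuming PyNaCl; not a protocol proposal. Identity is
# just a keypair the user generates and holds. Peers accept a revocation only
# if it is signed by that key, so no node or federation can erase it for you.
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

class Identity:
    def __init__(self):
        self._key = SigningKey.generate()                  # never leaves the user's device
        self.public_id = self._key.verify_key.encode().hex()

    def sign_revocation(self) -> bytes:
        return self._key.sign(b"REVOKE:" + bytes.fromhex(self.public_id))

def revocation_is_valid(public_id: str, signed: bytes) -> bool:
    """Only the holder of the matching private key can revoke the identity."""
    try:
        message = VerifyKey(bytes.fromhex(public_id)).verify(signed)
    except BadSignatureError:
        return False
    return message == b"REVOKE:" + bytes.fromhex(public_id)
```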
3. Computation is the Massage
I argued that federation won’t work because it relies upon the wrong analogies. But, if any user can reach any user, the possible interaction space is quadratic. We do need something to structure a space so incomprehensible.
The social graph is the scaffolding of that structure. It collapses \(n^2\) to something we can cope with. It affords repeated interactions and social recall. It defines an architecture for message passing where we act as the processors.
However, we are social creatures and social media is omnipresent. If we view ourselves as the processors, Parkinson’s Law becomes oddly relevant: work expands so as to fill the time available for its completion. Who we follow increases until we reach the point of desired infinity-scroll saturation. At this point (or well before it), processing demands on a one-to-one basis exceed our human capabilities. We fall back upon higher-order generalizations. Fuckery ensues.
Here, social media critics generally prescribe something like “people-centric technology” as the antidote. Increasingly, I think it’s snake oil. More often than not, you can replace it with RETVRN without loss of meaning. Technology changes the way we relate to each other. With it, so do our aspirations, perceptions, mediators, and institutions. Some expand, and some contract. The way through isn’t to retreat to a now inaccessible past. (Note to self: reread Industrial Society and Its Future as an exercise in Take Cartography.) The way out is to marshal our modern resources into new capabilities.
In the abstract everywhere of computer-mediated spaces, algorithms operating on messages are the massage. In particular – and assuming a primarily peer-to-peer architecture – there are two opportunities for computational augmentation:
Propagation: On current mediums, user actions such as retweets, likes (stochastic retweets), and replies are explicit propagation instructions. They essentially say, “I would like to allocate follower attention here.” But, this is a crude mechanism. On a P2P network, algorithms can shape content percolation across the graph in heterogeneous ways. (Secure Scuttlebutt uses the concept of near moderation that is relevant here. But, I think algorithmic mediation plays an important role in augmenting it.) Moreover, the propagated messages need not be only excitatory. There is a stupefyingly obvious need for inhibitory ones, too.
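A hedged sketch of what per-node propagation could look like – every name and threshold below is invented, and this is not an existing protocol: each node scores a message per neighbor with a user-installed policy and amplifies, attenuates, or simply declines to forward it.

```python
# Illustrative only: per-node, per-edge propagation with excitatory and
# inhibitory scores supplied by a user-installed policy function.
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Node:
    node_id: str
    neighbors: List["Node"] = field(default_factory=list)
    # user-installed policy: (message, neighbor_id) -> score in [-1, 1]
    policy: Callable[[dict, str], float] = lambda msg, neighbor_id: 0.0
    inbox: List[dict] = field(default_factory=list)
    seen: Set[str] = field(default_factory=set)

    def receive(self, msg: dict) -> None:
        if msg["id"] in self.seen:           # avoid loops on cyclic graphs
            return
        self.seen.add(msg["id"])
        self.inbox.append(msg)
        for neighbor in self.neighbors:
            score = self.policy(msg, neighbor.node_id)
            if score > 0.5:                  # excitatory: push along this edge
                neighbor.receive(msg)
            elif score < -0.5:               # inhibitory: actively dampen, e.g.
                pass                         # signal "send me less like this"
            # otherwise the message simply doesn't percolate along this edge
```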
Home timelines: What the user sees is a function of the messages their node receives. But, given Parkinson’s Law above, there will be more of them than there is time. The people-centric approach demands something like chronological timelines or less content. (In an upcoming post, I argue that advocates of things like chronological timelines and no algorithmic mediation miss an obvious point: content producers have the same incentives and are capable of recreating the same induced harms, generatively.) I think the algorithmic approach is better. The core problem now is that it’s not user-centric. Agency is absent. You don’t install what you want – what suits you best. The central provider does, and its objective function and yours diverge.
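To make “you install what you want” tangible, a minimal sketch under my own assumptions – the policy names are invented, not a proposed API: the timeline is just a user-chosen ranking function applied to whatever the node has received.

```python
# A sketch of "you install the ranking": the timeline is a user-chosen function
# applied to whatever messages the node has received. `recency` and
# `mutuals_first` are invented example policies, not a proposed API.
from typing import Callable, List, Set

Ranker = Callable[[dict], float]

def recency(msg: dict) -> float:
    return msg["created_at"]

def mutuals_first(mutuals: Set[str]) -> Ranker:
    def rank(msg: dict) -> float:
        boost = 1e9 if msg["author"] in mutuals else 0.0
        return boost + msg["created_at"]
    return rank

def timeline(inbox: List[dict], ranker: Ranker, limit: int = 50) -> List[dict]:
    """The node owner picks the objective function, not a central provider."""
    return sorted(inbox, key=ranker, reverse=True)[:limit]
```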
The combination?
Computation becomes the medium. By user input or algorithm, each node decides what to pass on to each connection. We shape the fabric of attention together. We decide what to amplify and attenuate, graph-locally. (This isn’t a community, per se. But I think it’s the social space we already inhabit.) We decide what we want our experience to look like. We can create a space of varied social games to replace the blight-prone contemporary mono-crop.
I am aware that prescribing algorithms reeks of techbrosis at the current moment. But, I contend that we’ve reached the point where we are reactively imputing harms to the wrong component. Algorithms have tremendous ethical considerations. They can be harmful. They often are. But, they are also damn near magical Promethean fire. They allow us to navigate higher-dimensional spaces otherwise beyond our reach. They can augment our capabilities. They already do. And, they can do more. The way out is through.
Next Steps
I started building what I think is the core of this last year, so I’m admittedly biased. (Not the censorship-resistant part; a component of the p2p algorithmic ecosystem.) But, it was too large a project for me to tackle alone, so I moved on to a project that could help me “work my way up to it.” But recent events have changed the urgency. (The same sense of urgency compels me to share this post more quickly than I usually would, and with less revision. Early takes matter, and people are taking them with vigor. See also: COVID/mask discourse circa Jan–March.)
Reiterating my human decency bona fides, as I’m sure it will come up:
- I believe Parler is a hate reactor;
- I believe John Matze was a group’s useful idiot;
- I believe the storming of the Capitol was one of the most dangerous moments for democracy I’ve lived through;
- I believe the interventions taken by a variety of corporate actors do help get us through the moment, where groups trapped in conspiratorial feedback loops and coordinating to do violence pose a clear and present danger; and,
- I believe every one of these services was well within their legal right to act as they did.
But, I also believe that those actors were bound more by fiduciary duties than social ones – they were discharging toxic liabilities. And, the immediate needs may obscure the future harms. The “freedom of reach is not freedom of speech” rhetoric breaks down as more and more of our social and intellectual lives become computer-mediated. And, the further down the technological stack we go, the more entangled the two become.
For this particular episode, the benefits greatly exceeded the costs. In the long run? I don’t know. But, I do know that the event mobilized something like an army to make the same intervention impossible in the future. And, I am confident they will succeed in doing so. I want to join them, because I am a true believer in social mediums. There is still great and untapped power. But, I care about more than censorship resistance alone.