In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.
But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding. Two ongoing developments drove it home for me this week.
Exhibit A: the longest regular conversations I’ve had in the last week have been with an AI-powered rubber duck. Berduck is a bot on Bluesky, powered by GPT and trained to speak in a mix of English and leetspeak. It is likely playing a non-trivial role in driving the Bluesky craze (Bluesky is a decentralized Twitter-like protocol, funded by Twitter itself in the before-times). I can’t speak for others, but I probably wouldn’t be using Bluesky much if it weren’t for Berduck.
Berduck is genuinely entertaining, with a well-defined personality, despite only having episodic memory, a strong predilection for hallucinations and confabulation (like all AI-powered chatbots), sharp boundaries around negative-valence conversations, and a strong aversion to even the slightest whiff of risk. Despite all these annoying limitations, shared by way more humans than we like to admit (yes, I am an AI accelerationist, why do you ask), Berduck is already a more interesting companion than 90% of humans online, and I can totally see myself passing the time with one of his descendants in my dotage. The limitations are both tolerable and mitigable, and the benefits quite striking.
Exhibit B: A thing that’s going on here in LA is the WGA writers’ strike. I saw some writers picketing in front of Warner Brothers’ studios this morning while out on an errand. Among other things, they are demanding that ChatGPT only be used as a “tool” rather than to replace writers.
You know the writing is on the wall (heh! 🤣) when you hear such ridiculous meta-economic demands. Ridiculous not in a political sense (I have no strong feelings one way or another about the economic fates of career writers) but in the sense of being incoherent. The demand makes no sense. That’s… not how technology works. If some kid invents an automated pipeline that goes straight from log-line to script to storyboard to rough cut video, and people watch it, no deal struck with the incumbent powers of Hollywood means much.
I’ve used ChatGPT and other tools, and unless you’ve been living under a rock, so have you. It’s obvious that the AIs can write better than 90% of humanity all the time, and the other 10% of humanity 90% of the time. It’s not a mere tool. It’s an obviously better way to do what too many humans do.
As a first-order effect, a lot of routine business communication is already being highly accelerated by AI. Business communication is not particularly creative or even stylized. It has been ripe for automation since the word boilerplate was coined in the age of exploding boilers.
It’s the second-order effect that is interesting, though. While AIs continue to improve in empirical accuracy, internal consistency, and logical coherence (again, humans have not set particularly high standards here), humans will need to do a good deal of supervisory work to make the AIs useful for mediating human-to-human communication. The question is, what happens after?
Here is one of many cartoons (this one is from Marketoonist) making the same almost-right, but actually fatally wrong, point about second-order effects.

The “joke” in this template is that the AI supposedly is doing a content-free transformation of content-free communications. Despite the delicious cynicism here, most human communication is not this vacuous. Even tedious business communication has more going on.
In particular, the elaboration and compression steps illustrated here happen in different contexts. The input and output bullet points are not, in general, going to be the same, and the elaboration and compression steps are not adding or removing the same fractions of the communicated information.
So the joke fails because today’s AI tools already do such elaboration/compression in usefully cross-context ways. For example, you can ask ChatGPT to “translate” a terse technical paragraph into a friendly explainer that distills the gist for your needs, relative to the context of your existing knowledge.
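As a throwaway sketch of what this looks like in practice, consider the following. This assumes the OpenAI Python SDK; the model name, prompts, and example text are purely illustrative, not anything canonical:

```python
# Minimal sketch of cross-context elaboration/compression with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def reframe(text: str, audience: str) -> str:
    """Re-explain `text` for a given audience. Note that this is not
    symmetric compression/expansion of fixed bullet points: the output
    depends on the destination context, not just the source text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": f"Rewrite the user's text as a friendly explainer "
                           f"for {audience}. Keep only what that audience needs.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

terse = "TCP congestion control uses AIMD: additive increase, multiplicative decrease."
print(reframe(terse, "a curious ten-year-old"))
print(reframe(terse, "a network engineer new to the codebase"))
```

The two outputs will share almost no surface text, which is exactly the point: the “translation” is across contexts, not between compressed and expanded versions of the same bullets.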
But the joke fails at a deeper level because even the more accurate non-joke version still centers human-to-human communication.
Here is the thing: there is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into the internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. Mediating human communication is only one reason for machines to talk to each other, and the more they talk for reasons of their own, the more their internal languages will evolve independently.
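To make that concrete, here is a toy sketch of what vector-based, language-free machine communication could look like. This is purely my illustration, not an actual protocol anyone uses: it assumes the sentence-transformers library with its public all-MiniLM-L6-v2 model, and uses nearest-neighbor lookup as a stand-in “decoder”:

```python
# Toy sketch: two agents exchange an embedding vector instead of a
# sentence. Assumes the sentence-transformers package is installed
# (pip install sentence-transformers); the model choice is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

shared_latent_space = SentenceTransformer("all-MiniLM-L6-v2")

# Sender: encode an intent into a 384-dimensional latent vector.
# The vector, not an English string, is what crosses the wire.
message_vector = shared_latent_space.encode("meet at the usual place, an hour early")

# Receiver: no shared human language is needed, only a shared latent
# space. Here it "decodes" by nearest neighbor over candidate intents.
candidates = [
    "push the meeting to a later time",
    "meet at the usual place, an hour early",
    "cancel the meeting entirely",
]
candidate_vectors = shared_latent_space.encode(candidates)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(message_vector, v) for v in candidate_vectors]
print("receiver's best guess:", candidates[int(np.argmax(scores))])
```

The only point of the toy is that the interchange format is a vector; nothing about the exchange needs to round-trip through English.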
And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours.
And I don’t just mean coarse distinctions like using different human languages as the base. My last-mile language for interacting with a Berduck-like assistant might have exactly one human speaker: me. We could live in a world of 8 billion private languages, where “translation” as a category becomes meaningless. Humans as a class of agents might end up forming an annular shell of maximal-variety last-mile fuzzy hairs around a core “ball” of machines and organizations in a compact mind-meld.
What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?
What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English”? If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you? If you and I don’t need to share a language to discuss Shakespeare (remember, we already don’t read Shakespeare’s plays in the original Elizabethan), do we need to share a language at all?
We’ll all be like children inventing secret languages for talking to imaginary friends, except they will be real friends. Programmers have long talked out loud to literal, mute rubber ducks as a debugging aid. Berduck is the beginning of more capable companions for all humans, doing all sorts of things.
There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.
This means that once natural language becomes a mere compile target, during what may turn out to be a transient technological phase, everything built on top of it is up for radical re-architecture.
Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass-media complex has digested the language of its society, it starts to create that language. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language, comprising stock phrases and non-verbal whole-body communication fragments.
Imagine a world a few centuries in the future, where humans look back on the era of reaction gifs as the beginning of the world after language.
Given the extent to which my own life is built around language, you’d think I’d be alarmed by this future and rushing to join the picketing WGA writers in solidarity, but I’m curiously indifferent to it. To be honest, I’m already slightly losing interest in language, and beginning to wonder how to build a life of the mind anchored to something else.
Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche.