The Comfortable Burning: Artificial Intelligence and the Slow Homogenisation of Language
“You don’t have to burn books to destroy a culture. Just get people to stop reading them.”
- Ray Bradbury
Ray Bradbury’s Fahrenheit 451 is routinely misread. We tend to remember it as a novel about censorship, about the state’s violent suppression of ideas, and so we imagine its dystopia would announce itself with sirens and smoke. But Bradbury was subtler and darker than that. The firemen in his novel do not arrive first. Apathy does. The books disappear not because a government decrees their destruction, but because people simply stop finding them necessary. The burning is merely the bureaucratic tidying-up of a cultural appetite already extinguished.
Artificial intelligence is embedding itself, quietly, yet with extraordinary speed, into the sinews of how we write. Not how some of us write, on occasion, experimentally, but how language itself is increasingly produced, circulated, and consumed at scale. Emails, reports, articles, social media captions, customer service transcripts, educational materials, legal summaries: the textual infrastructure of contemporary life is passing through a small number of large language models trained and optimised for coherence and fluency, calibrated toward the centre of a bell curve of acceptable expression.
Linguists estimate that roughly 7,000 languages are spoken on Earth today, each encoding a distinct cognitive and cultural repertoire. Whorf famously described Hopi as lacking grammatical tense, a claim later linguists have contested; classical Arabic is often said to distinguish dozens of words for the behaviour of sand. The Sapir-Whorf hypothesis, that language shapes thought rather than simply expressing it, is still debated in its strong form but widely accepted in its weaker one: the structures available to us influence what is cognitively accessible, and what we are inclined to notice.
Optimised Toward the Average
Language has always changed. It has always been pulled toward centres of power and prestige, flattened by trade, and reshaped by technology. The printing press standardised spelling; the telegraph enforced brevity; the telephone homogenised accents across distance. These were genuine losses, and genuine gains, and the argument I am making is not a naive lament for stasis. Languages are not museum pieces; they are living systems, and living systems metabolise.
But there is something categorically different about the current moment, something that makes the printing press analogy feel dangerously comforting. Previous technologies altered the conditions in which language evolved; they did not produce it. The press gave writers a new distribution mechanism; it did not write the books. Large language models do not simply distribute expression. They generate it. Billions of tokens daily, in a register that tends, by design, toward fluency, accessibility, and the avoidance of friction. This is not a flaw. It is a feature. And it is the source of a profound and under-examined cultural risk.
What large language models are optimised to produce is, in essence, the mean of human expression: the sentence that would offend the fewest readers and satisfy the most prompt evaluators. This is not malice. It is mathematics. But the mean of human expression is not where the interesting work of language has ever been done. That work happens at the edges, where the particular cadence of a dialect, a subculture, a generation’s argot carries information that no paraphrase can adequately encode.
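If “it is mathematics” seems hand-wavy, the mechanism can be made concrete with a toy sketch. The Python below is an illustration, not a description of any particular model: the distribution over phrasings is invented, and real systems are vastly more complicated. But it shows the relevant arithmetic, that a common decoding choice, here a sampling temperature below one, concentrates output ever more heavily on the most typical phrasing and starves the rare one.

    import math, random

    # An invented distribution over ways to finish "The sea was ...",
    # standing in for a model's probabilities over next phrases.
    continuations = {
        "calm": 0.50,                  # the unremarkable centre
        "rough": 0.30,
        "glassy and expectant": 0.15,  # the idiosyncratic edge
        "a held breath": 0.05,
    }

    def sample(dist, temperature=1.0):
        # Rescale each probability to p ** (1 / temperature); temperatures
        # below one sharpen the distribution toward its mode.
        weights = {k: p ** (1.0 / temperature) for k, p in dist.items()}
        total = sum(weights.values())
        r = random.uniform(0.0, total)
        for phrase, w in weights.items():
            r -= w
            if r <= 0:
                return phrase
        return phrase  # guard against floating-point edge cases

    random.seed(0)
    for t in (1.0, 0.5, 0.2):
        picks = [sample(continuations, t) for _ in range(10_000)]
        share = picks.count("calm") / len(picks)
        print(f"temperature {t}: 'calm' chosen {share:.0%} of the time")

At a temperature of one, the rare phrasings survive in proportion to their probability; by 0.2 they have all but vanished. Nothing in the arithmetic is hostile to the strange. It is simply indifferent to it.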
What Bradbury Actually Understood
The great insight of Fahrenheit 451 is that the dystopia is not imposed from without but preferred from within. Captain Beatty, the novel’s chief fireman, is one of literature’s most instructive villains precisely because he is not stupid. He is, in fact, widely read. He knows exactly what has been lost. And he makes the case, with a terrible lucidity, for why the loss was freely chosen: books made people unhappy. They presented irresolvable complexity. They demanded the reader hold contradictions in suspension without resolving them. They were, in short, cognitively uncomfortable in a society that had decided comfort was the sovereign value.
We are not living in Bradbury’s America. But I notice, with something between fascination and alarm, that the case for AI-generated language frequently proceeds along Beatty’s axis: it is clearer, faster, more accessible, less likely to alienate, more likely to satisfy. These are real virtues. Clarity is not the enemy of thought. But there is a kind of clarity achieved by the suppression of complexity rather than its illumination, and it is this that a system trained to minimise friction will, over time, tend to produce.
The Biodiversity Argument
There is a parallel with ecological thinking. Biodiversity is valuable not simply for sentimental reasons, not because variety is pleasant or because each species has intrinsic worth, though both may be true, but because a monoculture is catastrophically fragile. The Irish Potato Famine was not caused by the blight alone; it was made catastrophic by the near-total absence of genetic diversity in the potato crop, which meant that when the blight came, there was nothing to resist it. Diversity is resilience. Redundancy is survival.
Language variation operates similarly. Each language, each dialect, each idiosyncratic register carries its own cognitive toolkit, its own structures for encoding time, agency, obligation, desire, ambiguity. When a language dies, it takes with it not simply a vocabulary but an entire architecture of attention, a set of distinctions the world once seemed, from inside that language, to require, and which now, for those who inherit only the dominant tongue, simply will not appear. The Kawésqar people of Tierra del Fuego had words for sea states that have no equivalent in Spanish. Those states of water did not cease to exist when the language retreated. They became, simply, harder to think about.
I am not suggesting that AI will kill minority languages; that process, tragic and complex, has older and crueller causes. I am suggesting something more insidious: that even within dominant languages, a gradual gravitational pull toward machine-legible, frictionlessly fluent prose may quietly erode the edges, the dialect, the idiolect, the deliberately fractured syntax, the pregnant silence, the untranslatable pun, everything that constitutes the living texture of how actual humans have always actually thought.
The Counter-Argument I Take Seriously
If AI democratises access to fluent, navigable prose, removing the socioeconomic barriers that currently determine who can write a persuasive cover letter or a legible grant application, then perhaps the homogenisation is a price worth examining rather than simply lamenting. The romantic attachment to linguistic difficulty can disguise a class interest, and critics of AI-generated prose should be alert to the possibility that what they are mourning is not complexity per se but their own cultural privilege. The essay, the literary novel, the carefully wrought sentence: these have never been equally available to all.
And yet. The democratisation argument tends to assume that what AI produces is a neutral substrate onto which diverse voices can be projected. The fluency that large language models generate is not accentless. It has a voice, a rhythm, a set of implicit assumptions about what constitutes clarity and what constitutes excess. It is, in the broadest sense, a style, and styles are never politically innocent. When we train people to aspire to that style as the standard of competent expression, we are not removing a hierarchy. We are replacing one with another, and then pretending we have replaced hierarchy with neutrality.
Custodianship
It is the small group of people at the margins of Fahrenheit 451 who have committed books to memory: who have become, each of them, a single text, walking through the cold. Bradbury’s answer to homogenisation was not political resistance but intimate custodianship, the preservation of the particular, person by person, against the flattening forces of the ambient culture. I am not sure this is a sufficient answer for the scale of what we now face, but it points toward something true. The antidote to linguistic homogenisation is not a refusal to use the tools available to us, but a fierce, self-conscious commitment to the irreducibly human residue that no model can optimise for. The digression that goes nowhere but arrives somewhere unexpected. The metaphor that fails beautifully. The sentence that breaks its own rules for reasons grammar cannot account for. These are not ornamental; they are epistemological. They are how thought discovers what it did not know it was thinking.
The comfortable burning is already underway. Whether it produces a Bradburian wasteland or something more ambiguous and navigable depends on whether we are willing to maintain, with some stubbornness and inconvenience, a space in which language is allowed to remain strange.