The history of writing is often said to begin with the need to keep accurate records of transactions. Other scholars point to an ancient urge to record dreams and prophecies. Yet another theory, less in vogue these days, suggests an altogether different motivation. Though supported by archaeological evidence, this story of writing evinces a circular logic worthy of Lewis Carroll: it insists that people started writing to prove that they were writing. That story begins with the seal.
Some of the world’s oldest seals have been dated to 8,000 years ago, excavated from Sha’ar HaGolan in northeastern Israel. Each of the five seals discovered at that site bears a different geometrical pattern. When impressed on a soft surface, they would have left marks consisting of grouped lines or circles. As the five seals were found in five different buildings, the archaeologists excavating the site have suggested that they functioned as markers identifying the different families that lived there. These inscribed shapes—humanity’s first steps on the way to full-fledged scripts—suggest that writing began with signatures.
Everywhere writing emerges, one finds a similar theme. Not for nothing was the first standardized form of written Chinese referred to as the “seal script.” For the Minoans, personal seals functioned as primitive ID cards. In the Olmec heartland of southern Mexico, a cylinder seal dated to 2,650 years ago bears an image of a bird alongside two small characters. Among the earliest examples of writing ever found on the continent, these two glyphs spell out the name of the seal’s royal owner: “King Ajaw.”
For hundreds of thousands of years, language was confined by the range of human hearing. In such proximity it was easy to tell who was saying what. But the invention of writing fatally undermined this certainty. Anyone could learn to make marks and sign them with whatever name they liked, be it Shakespeare or St. Paul or Alexander the Great. One way to read the history of writing is as a series of imperfect attempts at overcoming this vexing problem.
Seals served as proof of authorship, and mitigated the suspicions raised by writing’s free-floating nature. But they were not immune to falsification. In imperial China, the harshest punishment (forbiddingly referred to as “Death by Slicing”) was reserved for the most serious crimes: rebellion, treason, and the forging of seals. In a world governed by written documents, uncertainty about their authenticity could upend the entire social order. Together with the technological challenge of replicating increasingly elaborate seals, such draconian laws acted as guardrails, preserving the legitimacy of a society stitched together by bureaucracy.
Today, the millennia-long arms race between writing and authentication is entering another era. Large Language Models (LLMs) and chatbots such as ChatGPT have made it trivially easy to produce text in any style, suited to nearly any purpose. In the wake of the chatbot’s rocket-like rise toward mass adoption, news stories about people losing their jobs to ChatGPT have emerged alongside stories of people losing their jobs for using ChatGPT. This apparent contradiction—that we believe ChatGPT is good enough to replace some workers but also that some workers who use ChatGPT should be replaced—speaks directly to the ambiguous role that writing plays in our society.
Last year two administrators at Vanderbilt University stepped down from their posts after carelessly copy-pasting text from ChatGPT into an email about a mass shooting. Here, again, a contradiction surfaces. We have institutions that often insist that people write as if they were heartless machines, but when people actually use a machine to write for them, they are denounced by those same institutions for being heartless. Does anyone really believe, as one university spokesperson claimed, that the main problem with the email was that it showed a lack of “empathy”? I submit that the email’s real offense lay elsewhere—in its casual demolition of the signature rhetorical style of our age. Beyond the small tagline at the email’s bottom indicating its artificial provenance, the message was indistinguishable from the countless other box-ticking exercises emailed across the country every day. The use of AI-generated text in this context exposed the uncomfortable fact that AI-generated text could be used in every similar context.
As Ethan Mollick has observed, the meaning of our writing does not simply emanate from the words we string together. It also comes from the fact that such words signal that we were willing to “set our time on fire” in order to write them. Each email we write represents a small, unrecoverable chunk of a precious human life—no matter if it consists entirely of tired platitudes. Even the most asinine messages testify eloquently to the minutes sacrificed in their composition. Our inboxes are filled with the ashes of many hours.
But ChatGPT subverts this secondary function of writing. Its speed and ease of use make it possible to produce messages in a matter of seconds, leaving much of our time intact. The sudden ubiquity of LLMs casts a shadow of suspicion over every written communication. How are we to prove that the words we claim as our own were not prompted by the push of a button? The ancient challenge posed by writing—that of linking visible marks to an absent person—has reasserted itself, implicating nearly every interaction in our highly mediated society.
To help dispel such suspicions, many of the world’s top engineers are currently working on tools for detecting AI-generated text. Among the more popular approaches being trialed is the design of digital watermarks.
The term “watermark” is a misnomer. The earliest watermarks in Europe were not made with water at all, but by wires pressed onto paper. One theory is that such watermarks originated with the heretical religious sects of the medieval era, who used these impressions in their forbidden books to communicate “signals of hidden meaning” to fellow initiates.
But the new breed of digital watermarks isn’t designed for human eyes at all. Rather, these watermarks carry signals of hidden meaning for other neural networks. The dominant type of watermark currently being devised makes use of the probabilistic way LLMs select their words, weaving invisible patterns of words and phrases throughout an AI-generated text.
The process itself is straightforward: as an LLM decides the next “tokens” in a sequence (each token is roughly equal to four letters), a hidden rule derived from the preceding tokens divides the candidates into two lists—a “red list” and a “green list.” By favoring words from the green list and shunning words from the red list, the model creates a text whose word choices are skewed in a way no human writer would match by chance. To a human reader the writing appears unremarkable. But to a machine familiar with the watermarking algorithm its artificial origin is as obvious as that of the Vanderbilt email.
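To make that concrete, here is a minimal sketch of the green-list/red-list idea in Python. Everything in it (the `green_list` rule, the constants, the `watermarked_scores` helper) is a simplification invented for illustration, not any lab’s actual implementation: a pseudorandom rule seeded by the previous token splits the vocabulary, and green-list words receive a quiet boost before the model chooses its next word.

```python
import hashlib
import random

GREEN_FRACTION = 0.5   # share of the vocabulary placed on the green list
GREEN_BIAS = 4.0       # score boost given to green-list tokens

def green_list(prev_token_id: int, vocab_size: int) -> set[int]:
    """Split the vocabulary pseudorandomly, seeded by the previous token,
    so anyone who knows the rule can recompute the same split later."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    ids = list(range(vocab_size))
    random.Random(seed).shuffle(ids)
    return set(ids[: int(GREEN_FRACTION * vocab_size)])

def watermarked_scores(scores: list[float], prev_token_id: int) -> list[float]:
    """Nudge the model toward the green list by boosting those tokens' scores;
    sampling then proceeds from the adjusted scores as usual."""
    green = green_list(prev_token_id, len(scores))
    return [s + GREEN_BIAS if i in green else s for i, s in enumerate(scores)]
```

Because the split depends only on the preceding token, anyone who knows the rule can replay it later over a suspect text; the words themselves look ordinary, but the pattern of green-list choices does not.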
These statistical patterns form a kind of silicon seal, a distant descendant of those found at Sha’ar HaGolan. But just as the most elaborate imperial seals proved insufficient to dissuade all falsifiers, it is unlikely that such watermarks will fully succeed in putting to rest the suspicions raised by the advent of LLMs. Silicon seals will inspire seal-breaking algorithms—or simple hacks—that dissolve the watermarks. And some of the many competing LLMs will surely refrain from adopting any such seals at all. Technological solutions are thus unlikely to end this authentication arms race.
So how else are we to prove that we have set our time on fire and dedicated serious thought to our writing? How do we ensure that our words attest to our own creativity?
One strategy might be taken from the correspondence of the last century’s Modernists, who thrived in an age rocked by an earlier generation of writing machines. Typewriters and telegrams inspired an explosion of creative shorthand that shaped the communication styles of some of the leading lights of English letters.
In a letter to his friend Bill Smith, Hemingway once remarked: “Havta carp along wit cheerful facial all diurnal and seek relief in a screed.” Carlos Baker, the editor of Hemingway’s Selected Letters, provides the following gloss: “You have to move along all day with a cheerful face and then find relief in letter writing.” As far as Google is aware, no one else in history besides Hemingway has published the phrase “havta carp.” Indeed, the entire sentence is a sterling example of what LLM watermark-makers would call “high-entropy text.” Knowledge of the sentence’s first words provides little help predicting the words that follow.
From the perspective of an algorithm designed to detect AI-generated text, Hemingway’s letter is dominated by “red” tokens that violate the statistical patterns of a digital watermark. The text’s human origins are affirmed by means of an idiolect that would horrify most proofreaders.
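A correspondingly minimal detection sketch, reusing the hypothetical `green_list` rule and `GREEN_FRACTION` constant from the earlier example, simply replays the split over a suspect text and asks how improbably often the green list was obeyed. A human idiolect like Hemingway’s should land near a score of zero; heavily watermarked output scores far above it.

```python
import math

def detection_z_score(token_ids: list[int], vocab_size: int) -> float:
    """How far the share of green-list tokens exceeds what unwatermarked
    (human) writing would produce by chance, in standard deviations.
    Reuses green_list() and GREEN_FRACTION from the earlier sketch."""
    hits = sum(
        1
        for prev, cur in zip(token_ids, token_ids[1:])
        if cur in green_list(prev, vocab_size)
    )
    n = len(token_ids) - 1
    if n <= 0:
        return 0.0
    expected = GREEN_FRACTION * n
    spread = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / spread
```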
That proofreader’s horror has a history—one that has shaped the English language over the centuries, as various institutions and authorities have sought to tame a tongue all too prone to taking on new forms. Today, the English favored by our institutions is best exemplified by the bloodless verbiage of the Vanderbilt email written by ChatGPT. The success of LLMs in mimicking this official lingua franca means that many people will undoubtedly turn to them when they need to communicate within these institutions. But whenever individuals seek to assert ownership over their own words, to mark their sentences as expressions of their own creative imagination, they will be best served by a language as singularly unpredictable as Hemingway’s.
The novelist Helen DeWitt’s epic story of fighting her editor to preserve her book’s distinctive punctuation is a case study in the corrosive effects of linguistic conformity on contemporary fiction. Now, thanks to popular typing assistants like Grammarly, everyone can enjoy the unthinking enforcement of such lexical norms. Outsourcing one’s writing entirely to an LLM only makes this policing of language even more seamless.
Such acquiescence to an AI’s idea of good writing spares us from the strenuous task (dare I say “responsibility”) of contemplating our own stylistic choices. And if Nietzsche’s equation of great style and great passion holds, then what does that say about the rise of a style based purely on statistical correlations?
Despite the technological sophistication of the tools involved, this situation has a historical precedent. After the Norman Conquest, French became the dominant written language in England. For the following one hundred and fifty years, practically no examples of written English survive. But English didn’t disappear. Most texts may have been written in French, but English was still spoken. And in the absence of any official oversight, the English language got really weird. Once people began writing in English again, in the early thirteenth century, the language that emerged was unrecognizable, not only because of the loss of case markers and gendered nouns, but also because of the astonishing variety of spoken dialects that were preserved in written form for the first time.
As in the period after the Norman Conquest, we may soon face a situation where a moribund official language exists alongside an increasingly bizarre menagerie of unofficial dialects. But whereas the writings of the thirteenth century made a claim for the vitality of English as a mode of creative expression, the high-entropy Englishes to come will be shaped by the extreme selection pressure of testifying to their human origin.
Texts produced in such Englishes are likely to be filled with in-jokes, obscure references, and gossip—any content sure to fall outside the training data of the largest LLMs. Already, many Chinese citizens have taken to using inventive puns to circumvent an automated online censorship regime. In the future, beyond evading AI censors, such wordplay could also function as a compelling sign of authentic creativity.
Multilingual writing may be another marker of human authorship. The most advanced LLMs still struggle with “code switching,” the practice of alternating languages within a single sentence. And although some have posited that AI, with its apps for real-time translation and automated dubbing, may erode the need to study languages, code switching in writing may prove to be one of the few ways writers can credibly signal their authorship.
(One thinks of E.E. Cummings, writing how glad he is that a longtime critic has finally started to attack another poet “poor shawn shay”—turning a facetious phonetic misspelling of the French phrase pour changer, “for a change,” into another high-entropy sequence.)
Such expressions are likely to be categorized as red tokens by watermark-detection algorithms, and to earn the dreaded red squiggly underline in the ubiquitous Microsoft Word. These things are not unrelated. The line between error and originality is faint, and just as artisanal crafts derive some of their value from their defects, so too might the creative texts of the future derive much of their worth from their divergence from a roboticized official English.
The crimson underline in Microsoft Word signaling a mistake, and the classification of certain words as “red” tokens, both call to mind Nathaniel Hawthorne’s The Scarlet Letter. In that novel, the eponymous red A initially signals a transgression. But over time, the meaning of the letter evolves. Speaking of the adulterous protagonist forced to wear the letter A, Hawthorne’s narrator writes that “such helpfulness was found in her,—so much power to do, and power to sympathize,—that many people refused to interpret the scarlet A by its original signification. They said that it meant Able.”
Eventually, perhaps the same will be said of these scarlet tokens. Originally used to mark deviations from hidden statistical patterns, they may yet come to indicate the ability of their authors to write for themselves when others have ceased to do so.
In other words: dans le futur, an argot’s irregularisms shall of an organic Inwit speak, p’rhaps more plainly than the words “plain Dunstable” could ever.