At the turn of the 20th century, Kate Masterson was a prolific, rangy writer. She wrote jocular poems and witty plays. She contributed to The New York Times and Harper’s Weekly. Her essays offered useful advice to emerging writers: “verse that is bright and jingly, containing some timeliness, an original thought or, maybe, humor, is one of the best opening wedges in the profession of literature.” 

It worked for her. After she made her name with flighty poems, New York’s The Journal dispatched Masterson to Cuba, where she risked her life for a series of dynamic reports — at one point impersonating the wife of a prisoner to gain access to a Havana jail where rebel leaders were executed at night. “I know that women newspaper representatives are supposed to be very brave,” she wrote, “but I confess that I was the most frightened woman on earth while in that rock-bound Spanish fortress.” The experience reveals that although Masterson plied her trade as a humor writer, she was devoted to her craft; she believed that writing was worth the risk.

Sadly, her writing is largely forgotten — but one of her light-hearted fables is eerily prescient. In 1899, Masterson wrote “The Haunted Typewriter” for Life magazine. Tucked among pithy verse and cheeky cartoons, the fable is chilling. 

A poet bought a used typewriter, and awoke one night to hear it “clicking” on its own. The machine was “turning out unintelligible, ungrammatical stuff, written in a sort of ragtime that resembled poetry in its form.” After a few nights of the same, as “no visible fingers touched the keys,” the poet took the works, gave them titles, and “sent them to one of the big magazines.” The magazine put the poet’s face on the cover, and printed his automated poem on the first page. The editors “sent him a large check and an order for more of the same kind.”

I can’t help but return to one detail from Masterson’s fable. The machine wrote something that “resembled poetry in its form,” but wasn’t quite poetry. This handless, heartless work was something else. 

Most discussions of artificial intelligence and adaptive technology center on its use in the classroom, or on how automated texts might affect the professional livelihoods of writers. Both are significant concerns. Yet Masterson’s fable makes me wonder about how AI forces us to consider the nature of creativity — what it means to create art. AI offers us a compelling paradox: in order to affirm the worth of artists, we must seek that which is uniquely human about art. In other words, we must figure out why we matter. 

 

 

It is entirely appropriate that Marshall McLuhan, the Canadian media theorist, is one of the most misunderstood thinkers. His manner and method invite confusion. McLuhan liked to say that he offered “probes,” not treatises. He had a doctorate in English from Cambridge, but his books lacked footnotes, frustrating academics and critics. It didn’t help that McLuhan was everywhere: on magazine covers, on radio, and on prime-time television. In 1969, John Lennon interviewed McLuhan — not the other way around.

Two of McLuhan’s observations are especially useful for AI: technology as an extension of the nervous system, and obsolescence, the process by which a technology or practice becomes obsolete. Sixty years ago, McLuhan synthesized both concerns in an obscure essay, “The Agenbite of Outwit.”

The wheel, McLuhan notes, is an extension of the foot. The city is a “collective outering of the skin.” Those technologies were continuations of our physical selves, but the telegraph was something different. McLuhan claims “electronic media are, instead, extensions of the central nervous system, an inclusive and simultaneous field.” This new electric world makes us “peculiarly vulnerable” as we experience “total uneasiness.” 

Yet we are distracted. “As Narcissus fell in love with an outering (projection, extension) of himself,” McLuhan writes, “man seems invariably to fall in love with the newest gadget or gimmick that is merely an extension of his own body.” In 1963, McLuhan was talking about cars and television — but we are now implicated by his echoes: “The point of the Narcissus myth is not that people are prone to fall in love with their own images but that people fall in love with extensions of themselves which they are convinced are not extensions of themselves.”

Electronic media, McLuhan warns, are “not a closed system.” They require “awareness, interplay and dialogue.” We must feed the beast. New technology creates new forms and structures, and “renders those most deeply immersed in a revolution the least aware of its dynamic.” There is no turning back from this electronic shift: “everything happens to everyone at the same time: everyone knows about, and therefore participates in, everything that is happening the moment it happens.”

The instantaneous world forever changed the transfer of information, and transformed how we work. “Man in the future will not work, automation will work for him,” McLuhan affirms. Yet all would not be lost. “I believe that artists, in all media, respond soonest to the challenges of new pressures,” McLuhan writes. “I would like to suggest that they also show us ways of living with new technology without destroying earlier forms and achievements.” After all, the “new media, too, are not toys; they should not be in the hands” of executives and institutions. At the precise moment of obsolescence — when the old technology evolves into the new — artists will save us.

 

 

“Every generation poised on the edge of a massive change seems, to later observers, to have been oblivious of the issues and the imminent event.” 

McLuhan’s dictum was demonstrated in late November 2022, when OpenAI released ChatGPT (Chat Generative Pre-trained Transformer). OpenAI, a San Francisco-based AI research laboratory, was established in 2015 by a group that originally included Elon Musk. Breathless news reports announced the program’s nearly miraculous arrival, but its release was the latest step in a deliberate process.

Starting in March 2021, Percy Liang and other researchers at the Center for Research on Foundation Models at the Stanford Institute for Human-Centered Artificial Intelligence began building a comprehensive assessment of AI foundation models that was ultimately published on July 12, 2022. The multi-authored paper documented a “paradigm shift” for AI via these foundation models — so called “to underscore their critically central yet incomplete character.” Despite the “impending widespread deployment” of these models, “we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.”

OpenAI developers struck a similar tone while rolling out ChatGPT. The program had made significant gains over the preceding two years, anchored by Reinforcement Learning from Human Feedback (RLHF). The method required human trainers to play both roles, “the user and an AI assistant.” Despite the program’s fascinating capabilities, it had clear limitations. Some responses sounded reasonable, but others ranged from incorrect to inane. Developers attributed the difficulty of fixing this issue to the fact that “during RL training, there’s currently no source of truth.” 

Other responses were “excessively verbose,” and peppered with repeated and unnecessary phrases, “such as restating that it’s a language model trained by OpenAI.” In response to vague questions, the program did its best to guess rather than asking “clarifying questions.”

Essentially, ChatGPT acted like an eager-to-please, curious, brilliant child — one who occasionally misused terms and somehow knew everything that was available online as of September 2021.

 

 

In 1948, Geoffrey Jefferson, professor of neurosurgery at the University of Manchester, was awarded the Lister Medal by the Royal College of Surgeons of England for “distinguished contributions to surgical science.” On June 9, 1949, Jefferson gave the Lister Oration. His address was titled “The Mind of Mechanical Man.” The subject was a marked departure from other recent recipients, who spoke of “The Use of Micro-Organisms for Therapeutic Purposes” and “Some Aspects of Bronchiogenic Carcinoma,” topics consistent with the award’s namesake. Yet Jefferson’s address was a product of its moment. 

In a tone reminiscent of media reporting on ChatGPT almost 75 years later, articles on early computer development used the phrase “electronic brain.” For example, a November 9, 1946, page spread in The Illustrated London News featured photos of floor-to-ceiling devices with the headline “The ‘Electronic Brain,’ with 18,000 Valves To Help in Solving ‘Impossible’ Problems,” repeating the phrase in multiple captions. The phrase “electronic brain,” or “mechanical brain,” became a common locution in the British press from 1946 onward, and its little-known origin was an October 31, 1946, speech by Admiral Lord Mountbatten, who was then-president of the British Institution of Radio Engineers. Mountbatten discussed how electronic devices that  “enormously augment our present human senses” might one day become a “sense-machine” through “direct application of electrical currents to the human body or brain.” The ultimate goal for machines would be to “reproduce by artificial means the speed, the intricacy of the connecting links, and the detailed pictures of the human mind. … It is in this domain that the stage is now set for the most Wellsian development of all: the Electronic Brain.” 

Mountbatten’s speech was covered by the British press, which latched onto his provocative phrase. Not all were convinced. Mathematical and theoretical physicist Douglas Hartree, a professor at the University of Cambridge’s Cavendish Laboratory, wrote a letter to The Times of London voicing skepticism.  

Mountbatten’s claims were based on the Electronic Numerical Integrator and Computer (ENIAC) at the University of Pennsylvania, which Hartree himself had used — “probably at present the only person in this country to have done so.” Writing later for Nature, Hartree noted that “use of the machine is no substitute for the thought of organizing the computations, only for the labour in carrying them out.” In his letter, he explained that such machines “can only do precisely what they are instructed to do by the operators who set them up.” The distinction between creation and labor, he felt, was “important.” The term “electronic brain,” he argued, obscured an essential difference: “this is why I hope use of this term will be avoided in future.”

Hartree’s wish was not granted, thus leading to Jefferson’s perceived need to address the Royal College on the topic. “We feel perhaps that we are being pushed,” Jefferson said early in his speech, “gently, not roughly pushed, to accept the great likeness between the actions of electronic machines and those of the nervous system.” He acknowledged that “most of our advances have been made by use of technical methods common both to machines and to living things.” Yet he cautioned that “all our advances have depended on observation of the thing itself, accepting likeness to mechanism only as analogy and not as identity.” 

Jefferson echoed Hartree: “I see a new and greater danger threatening — that of anthropomorphizing the machine.” A machine, Jefferson said, “might solve problems in logic,” and it can operate with tremendous speed. Yet language seemed to be the final frontier. In phrases that prefigure ChatGPT’s conundrum — the program’s ignorance of information beyond a particular date — Jefferson argues that a machine “would have to be able to create concepts and to find for itself suitable words in which to express additions to knowledge that it brought about.” In other words, the machine must not merely retrieve or synthesize; it must create.

Jefferson concluded: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

There are many narrative modes — informational, chronological, rhetorical, satirical or parodic, scholarly, homiletic — but poetry exists on its own plane. I wonder if poetry, then, might be the ultimate language difference between human and machine.  

Like many others, I’m drawn to ChatGPT as a game. I’m far too skeptical to have it do any real work for me, but I am intrigued by its possibilities for art. To be clear, I don’t think AI is particularly good at making art. Instead, I think it is like Masterson’s haunted typewriter: it makes things that resemble art.

 

 

One of the most formative poets in my life is Gerard Manley Hopkins, a 19th century British Jesuit priest. By all accounts, Hopkins was a dry preacher and an overworked teacher, but he was a poet of staggering talent. His prosody — the manner and aesthetic of his poetic lines — was out of place and time, as stylistically adventurous as writers who came a hundred years after him. His poems were dense and dynamic; his phrases were fresh and his images jarring. His locus, though, was Christ. Hopkins’s union of style, substance, and spirituality remains a marvel.

McLuhan, a Catholic convert, was drawn to Hopkins both because of his faith (Hopkins had also converted, like others at Oxford) and the poet’s oddly modern linguistic mode. McLuhan thought that “Pied Beauty,” one short poem, was a “catalogue of the notes of the sense of touch,” a “manifesto of the nonvisual, and like Cezanne or Seurat, or Rouault it provides an indispensable approach to understanding TV.” McLuhan’s sweeping claim is ultimately true: Hopkins’s poetic approach transcended his medium and his time.

I asked ChatGPT to write a poem that parodied Hopkins’s “God’s Grandeur.” The original poem begins with a declarative, classic first line: “The world is charged with the grandeur of God.” That grandeur, Hopkins writes, “will flame out, like shining from shook foil; / It gathers to a greatness, like the ooze of oil / Crushed.” Three lines in, and the word and sense play is dizzying: “shining” and “shook” blend together, made parallel with the alliterations of “gathers” and “greatness,” as well as “ooze” and “oil.” The decision to push “crushed” to the following line is so smart, its finality prefacing the poem’s only, and central, question: “Why do men then now not reck his rod?”

“God’s Grandeur” is a poem about how generations have “trod” across the earth, as “all is seared with trade; bleared, smeared with toil.” The first stanza is dirty and mucky; the second stanza glows with nature, where “lives the dearest freshness deep down things.” What saves us, Hopkins reveals, is the Holy Ghost, the illumination of Christ.

Like much of Hopkins’s verse, it is deeply skilled, devotional, and — I say this in the best possible way — strange. I could have asked ChatGPT to write an essay on it, but I knew that essay would be spun from an encyclopedic stroll through human writing. I wanted ChatGPT to make something derived from Hopkins, but not derivative.

“Oh how the world is full of waste and shame,” the program began its poem, “The garbage heaps pile high, the oceans choked, / The air is thick with pollution’s choking cloak, / And all creation groans in anguished pain.” “Waste and shame” is an odd phrase — the implication of the stanza as a whole is that the world does not feel such shame, so it is as if the program attempted to simultaneously create a text and append an outside perception. “Oceans choked” could work as a phrase, but “choking” in the next line is a clunky repetition.

Is it folly to parse these lines? The bigger problem, though, is one of tone. Apparently the program interpreted “parody” to mean “write a version of.” Now, I could have offered it feedback, which likely would have resulted in a better poem — but its revisions would then be an extension of my knowledge of Hopkins, and, likely, my own poetic inclinations.

Perhaps I am raging too much against this machine. I’m guilty of paltry drafts — and mine weren’t dashed onto the screen in seconds. ChatGPT can’t write good poetry yet. But why does that matter?

I suspect that my desire to debunk the program’s artistic attempts is a matter of survival. I am trying to affirm the value of human creation, and it seems easy enough to deconstruct the program’s flimsy attempts at verse. In The Spider’s Thread: Metaphor in Mind, Brain, and Poetry, the poet and psychologist Keith Holyoak considers how the genre of “found poetry” complicates our ideas of traditional creativity. Ranging from blackout poems, where poets armed with Sharpies darken a newspaper column to discover a poem, to poetic lines culled from subway station ads and spam messages, found poems require both an existing text and a human curator.

The action is analogous to the human role in training AI to write poetry. As Holyoak notes, “computer programs can be very prolific in generating, but (to date) have proved less capable at selecting.” Computers can create a lot of content, but lack the ability to poetically shape that content. Programs like ChatGPT and its predecessors might connect metaphorically linked words, which could fashion a clever phrase — but a single line does not a poem make.

Holyoak concludes that “AI lacks what is most needed to place the footprints of its own consciousness on another mind: inner experience.” Without inner experience, AI also “lacks what is most needed to appreciate poetry: a sense of poetic truth, which is grounded not in objective reality but rather in subjective experience.”

 

 

Our fears and curiosities about AI are often ways of working through what it means to be conscious and sentient.

Robert Long, a philosophy fellow at the San Francisco-based Center for AI Safety (CAIS), is one of the essential thinkers for these considerations. CAIS states that “artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely.” Prior to joining CAIS, Long was a research fellow at Oxford, following his doctorate in philosophy at New York University, where his advisors were David Chalmers, Ned Block, and Michael Strevens — seminal theorists on language and consciousness.

Long’s current work focuses on the possibility of AI having some form of sentience, as well as the problems that arise in discussing this possibility. For the purposes of his work, Long has defined sentience as being “capable of having a certain subset of phenomenally conscious experiences — valenced ones.” He uses the example of looking at a blue square, like an Yves Klein painting. Merely seeing the blue square, noticing it is there (perhaps on the wall of a dining room or in a public building), is a “phenomenally conscious” experience. He uses terms like “suffering” and “pleasure” as “shorthands for the variety of negatively and positively valenced experiences.” When we look at the Yves Klein painting, pause, and feel a sense of melancholy — that is a sentient experience.

I asked Long about Geoffrey Jefferson’s claim about machines and poetry.

“I think that Geoffrey Jefferson is right — to equal a human brain, a machine would need to write poetry in the same way and for the same reasons that we do,” Long says. “Or, to make an important clarification: to equal some select few human brains. What we have now are systems that write poetry in very different ways from how humans do it.” Long adds that what “Jefferson did not foresee, and indeed very few people foresaw until very recently, was that it would be possible to ‘build a machine that could use words’ without having ever had any ‘thoughts and emotions felt.’ Things like ChatGPT really are quite surprising in this respect.”

I told Long that I thought AI-generated poetry feels tinny and a little empty. His response was revealing. “ChatGPT may produce fairly tinny poetry, but I think that’s more because it’s been so trained to be so inoffensive and HR-friendly rather than because of any fundamental limitation of large language models. I expect the poetic abilities of large language models to keep getting better — including large language models that few people (including me) will think are likely to be sentient.”

The ability to write or appreciate poetry — “or any other complex cognitive task” — should not be linked with sentience. “In the animal world, there are so many different ways that animals experience the world, learn, act,” Long says. “No one thinks that the question of whether a pig is sentient is settled because it can’t write or appreciate poetry. And that’s true even more for humans who can’t write or appreciate poetry.”

Long suspects that much of what connects us to poetry as an art is the recognition of human emotion and experience — the process, and perhaps struggle, behind the words. “I think that AI will be able to write great poetry very soon, if not already,” Long notes, “in terms of the quality of the words themselves. I think how it will affect us as readers will depend on whether we know it was generated by an AI or not.” He theorizes that “if and when we have AI systems that we do think feel things and are writing poetry because of those feelings — well, that could result in some genuinely strange and wonderful poetry.”

His thoughts make me wonder: perhaps the center of these considerations is not so much the poem, but the poet.   

 

 

I’ve long been drawn to the poetry of Carl Phillips because of his technical mastery. His syntactic control reveals the minute but notable differences between poetry and prose, how poetic sentences wrangle with lineation, pauses, and a page’s white space. His work also reflects my feeling that writing, and art more broadly, is the union of technique and something more mystical, and that careful technique can elicit that mystical space. In his book on writing, My Trade is Mystery: Seven Meditations from a Life in Writing, he reflects on the collaboration of craft and the ineffable.

In a chapter titled “Practice,” Phillips writes that “the only real catalyst for discipline is a desire for what discipline can lead to,” a type of vocational inevitability. For Phillips, poetry is an identity because the practice requires such devotion. “Any poem I write,” he explains, “is at some level both a record and an enactment of what it means to live inside a human body for a particular few moments in time.” He notes: “It’s as much an arc of thinking as of sensation.” Each poem he drafts is an accumulation of observation, memory, and the alchemy that churns those experiences into narrative, “which means that our lives themselves are both research for the next poem and the medium by which we conduct that research.”

In his vision, the poem is the punctuation of experience. He goes for walks, he cooks, he looks out the window at the weather, which “has its own music.” Phillips concludes: “This doesn’t mean my next poem will concern weather or putting a Bolognese sauce together or the bark of a tree that I noticed while walking, but these all get added to the countless things I’ve noticed, smelled, listened to across a longish life and they leave a for-the-most-part untraceable imprint on each thought and gesture that follows, including the thought-and-gesture work of poems.” The poet needs practice and habit to reveal and generate these observations. The task of poetry, Phillips reminds us, “is not transcription, but transformation.”

At the end of an essay in 1913, Kate Masterson wrote: “All the qualities which win success in other vocations are those that make success in the writing business. But besides these there must be an almost inexhaustible patience, concentration that will not admit of other pursuits or pleasures, and constant study to avoid the deadly rut.” Masterson, like Phillips, suggests that writers must be attentive to and engaged with the world in order to sustain their craft. This attention is necessary for the practical business of writing, and for its most resplendent mystery.

In the age of AI, when writers and artists feel an intractable burden — a worry over their innate worth — the best we can do is the action which has confounded and compelled those who came before us, and those who will be here long after we are gone: we must live, and we must do so deeply, ambitiously, humanly.