While the writing advice to "kill your darlings" has a roster of many
fathers (Faulkner, Wilde, King, Chesterton, Welty, Chekhov), its true sire is
Arthur Quiller-Couch, who coined it in "On Style," a 1914 lecture included in
his published 1913-1914 Cambridge lectures, "On the Art of Writing," where he
inveighed against "extraneous Ornament":
If you here require a practical rule of me, I will present you with this:
Whenever you feel an impulse to perpetrate a piece of exceptionally fine
writing, obey it—whole-heartedly—and delete it before sending your
manuscript to press. Murder your darlings.[1]
These days, a free and competent darling-killer is available to all writers:
ChatGPT. (I'm using version 3.5 with a free account at OpenAI; ChatGPT 4 is
available through the Bing search engine, which, as an aside, is a pleasure
to use for search.)
First, some caveats. ChatGPT does not think, at least not as a human thinks,
which means that when you, as the writer, prompt it to "streamline and
simplify the following text," it makes its editorial decisions by comparing
the text you offer against its training materials, not by drawing on anything
like the lived experience of a multidecade editor with acquired tastes and
biases. In other words, it has no cognitive or aesthetic context for what it
offers back to you, which is only the result of the statistical probabilities
generated by its algorithm.
It follows, then, that ChatGPT does not have a "voice" in the way that word
is understood regarding human writers. If you asked it to describe its voice,
it would likely reply that it is ChatGPT and therefore cannot have a voice,
but it can list for you the constituent elements of a voice and what others
have said about what it is and why it is important.
Given these limits, however, what I've found that it does very well is what
I call "stress-test" my writing, that is, cleanse it of tics and looseness
that get in the way of saying what I want to say as clearly and as quickly as
I can say it.
INTERLUDE
A quick example of what I mean: I ran the above sentence through
ChatGPT with this prompt:
Please use your decades-long editorial expertise to streamline and simplify
the following sentence: "Given these limits, however, what I've found that it
does very well is what I call "stress-test" my writing, that is, cleanse it
of tics and looseness that get in the way of saying what I want to say as
clearly and as quickly as I can say it."
The result:
Despite these limitations, it excels at stress-testing my writing, helping
me eliminate any stylistic quirks or verbosity that hinder clear and
concise communication.
I like this but want to keep something of the original, so this is the revised
sentence:
Despite these limitations, it excels at "stress-testing" my writing,
helping me eliminate any stylistic tics or looseness that hinder clear
and concise communication.
ChatGPT and I have partnered to create something that I think is better
than the original. More on this later.
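A side note for readers who would rather script this than type it into the
chat window: the same request can be sent through the OpenAI Python library.
The sketch below is illustrative only; the model name and the small helper
function are assumptions, not part of the workflow I describe here.

# A minimal sketch of the "streamline and simplify" prompt sent through the
# OpenAI Python library rather than the chat window. The model name is an
# assumption; any chat-capable model would do.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def stress_test(sentence: str) -> str:
    """Ask the model to tighten a single sentence, editor-style."""
    prompt = (
        "Please use your decades-long editorial expertise to streamline "
        f'and simplify the following sentence: "{sentence}"'
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    original = (
        "Given these limits, however, what I've found that it does very "
        "well is what I call \"stress-test\" my writing..."
    )
    print(stress_test(original))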
TO CONTINUE
In other words, it planes down my style in a very Strunk & White way:
"A sentence should contain no unnecessary words, a paragraph no
unnecessary sentences, for the same reason that a drawing should have no
unnecessary lines and a machine no unnecessary parts."
Another and longer example. Here are the first two paragraphs from an
essay I wrote about how the term "participatory fascism" applies to modern
American politics:
It Can't Happen Here by Sinclair Lewis came into the public domain in
2023, so I wanted to see if it could be adapted to something. (It has
been adapted as a stage play but not a movie – and it had great success
as a play.) I romped through the book, whose tone I would describe as
whimsically vicious with a side order of jeremiad, and it certainly gave
a good feel for the fears and frustrations of the times as people
confronted an ascendant authoritarianism and many said, "Hmm, that
doesn't sound so bad."
Much like George Orwell, Lewis conceived of the authoritarian regime
as based on physical punishment and intellectual oppression designed
to flatten all differences while still using those differences to divide and
conquer (such as Windrip's abolition of the Negro). Orwell crystallized
his vision of how this authoritarianism would work in the image of a
boot stamping on a human face forever, and Lewis follows in that same
vein. (163 words)
When I finished the essay, I ran it through ChatGPT, and here is what it did
with those opening paragraphs:
"It Can't Happen Here" by Sinclair Lewis entered the public domain in
2023, prompting my curiosity about potential adaptations. The book,
characterized by a whimsical viciousness with jeremiad undertones,
gives a good feel for the fears and frustrations of an era grappling with
rising authoritarianism. It's akin to George Orwell's vision, depicting a
regime that employs physical punishment and intellectual suppression
to both homogenize and divide, similar to Windrip's abolition of the
Negro. (73 words)
I quite like the changes:
● It deleted digressions or digression-like instances to keep the writing on
the trunk line. For instance, gone is the parenthetical statement about
other adaptations, and it brings the mention of how Windrip abolished the
Negro from behind the parentheses to nicely round off the paragraph.
● It tightened the prose. My third sentence kind of lollops along, trying for
a little humor ("romped"), more or less gets around to describing the
book's tone ("whimsically vicious with a side order of jeremiad"—and why
"side order"? what purpose does the culinary reference serve?), and ambles
its way to a soft landing with a bit of ventriloquism thrown in for a faux
flair.
● And I lollygag into the next paragraph as if the reader has nothing but
time to devote to my maunderings.
ChatGPT made short work of that. It distilled the beginning of the third
sentence into a very nice concentrate, saying better what I had been mucking
about trying to say: "...characterized by a whimsical viciousness with
jeremiad undertones…" Perfect.
Not content with that, it took the rest of that sentence and the whole of the
next paragraph and concisioned them neatly.
Notice that it deleted my reference to Orwell's image of a boot stamping on
a human face, but later, ChatGPT brought that image back in a very neat
way.
I was making a case that Aldous Huxley, not Orwell, had the better
argument about how the repression would happen: through spectacle
rather than assault, through worrying people about losing their comforts
rather than by taking those comforts away. Of course, I was wending my
way through my presentation—wending here, wending there, wending
back and around—but ChatGPT made straight the way and connected
Orwell and the boots sharply to cement my point:
Meanwhile, we can keep all the trappings of a democracy as long as
those trappings are limited to voting, making modest campaign
donations (while dark money floods the system), engaging in elections
as spectacles, and indulging in excessive consumerism while drowning
in debt.
In this scenario, it's not the jackboot on the face but the allure of 50%
off Doc Martens that prevails.
My original sentence was "Not the jackboot on the face but Doc Martens for
50% off while they last." ChatGPT added the bit that my prose needed to
make the point stick: "In this scenario…that prevails."
Does this bother me? Not a bit, for a few reasons.
I've used ChatGPT to conjure material from scratch for my job as a
communications person for a university's development department. Say, for
instance, I want a solicitation email with a particular theme requesting a
donation: with very few exceptions, ChatGPT will generate functional but not
inspired prose no matter how precise I make the prompt.
And no wonder: because it's drawing from thousands of solicitation
examples for its output, ranging from the mediocre to the sublime, it will
give me the useful mean, the serviceable average, without any assessment
of its effectiveness. Remember: ChatGPT is not built to be creative but to
give you the most probable statistical outcome that answers the prompt
you give it.
So, I don't use it for that. Instead, I use it as an assistant to brainstorm
options for phrases, sentences, or documents when I'm stuck. It usually
doesn't unknot the problem outright, but if I tweak the prompt to
regenerate responses, ChatGPT often provides more options than I could
come up with on my own, and out of that welter, something usually arises
that gives me a way forward that I wouldn't have thought of myself.
ChatGPT is a superb summarizer. I am still astounded by the way it can
take a complex document and extract its core points in clean prose. It is
also a good summarizer if you want a range of opinions or options. Let's say
I want to find the five best practices for writing a good solicitation email, so
I prompt it to do that, and it gives me that information (as best as it can
based on its training). You can then ask it to explain more deeply any one
of these practices, and it will (again, limited by its training).
If I wanted to, I could sharpen the prompt to say that not only do I want
the thumbnail descriptions but that I also want ChatGPT to extract from
each practice its top three principles and the top three ways to put those
principles into action, and to put all of that into a table that I can use for a
presentation. And it will do that (again, limited to its training).
These results are not gospel; you need to verify them. But you also save so
much time doing it this way rather than opening up Google and firing off
your search terms and then trawling through the results and so on.
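If you wanted to automate that sharper prompt instead of typing it into the
chat window, a rough sketch with the OpenAI Python library might look like
the following. The model name and the exact wording are assumptions;
verifying the output is still on you.

# A sketch of the "best practices -> principles -> actions -> table" prompt
# described above, sent through the OpenAI Python library. The wording and
# model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "List the five best practices for writing a good solicitation email. "
    "For each practice, give a one-sentence thumbnail description, its top "
    "three principles, and the top three ways to put those principles into "
    "action. Present everything as a single table with the columns "
    "Practice, Description, Principles, and Actions."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed
    messages=[{"role": "user", "content": prompt}],
)

# The reply comes back as plain text (usually a markdown-style table) that
# can be pasted into a presentation once the facts have been checked.
print(response.choices[0].message.content)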
ChatGPT is good for generating email subject lines and pre-header text
based on your parameters: it must say X, or it must say X in the first three
words, or it should have a joke, and so on. And it will generate as many of
them as you want (though, after two or three cycles, it just begins recycling
what it's already produced—even ChatGPT can get bored).
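Again, I do this through the chat window, but the same kind of constrained
request could be scripted. Here is a sketch, with the theme, the required
phrase, and the model name as illustrative assumptions.

# A sketch of generating subject lines and pre-header text under simple
# constraints, through the OpenAI Python library.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def subject_lines(theme: str, must_say: str, count: int = 10) -> str:
    """Ask for `count` subject lines plus pre-headers, each under constraints."""
    prompt = (
        f"Write {count} email subject lines for a fundraising email about "
        f"{theme}. Each subject line must include the phrase '{must_say}' "
        "within its first three words, and each needs matching pre-header "
        "text of no more than 90 characters."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(subject_lines("scholarship support", "Giving Day"))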
In short, ChatGPT, built as it is right now, is a ready and willing editorial
assistant. It helps me keep my writing honest, it takes on some of the more
prosaic tasks in my work (I mean, generating 10 to 12 subject lines for
every email quickly drains me of anything original or catchy), and, when
needed, it can tutor me on some aspect of something more efficiently than if
I do the search myself.
It's difficult to know how to end this essay because no one at this point can
identify all the ripples generative AI will cause as it falls into the pool of
human life. Generative AI is/will be dangerous, magical, disruptive,
surprising, revealing, obscurantist, pedestrian, and so on and so on and so
on. I just know that I have found a tool that, within the limits that bind it,
has prompted me to reconsider things in ways that add value to what I
produce without nullifying my own mind and soul.
Will it become so sentient that it takes over my job? As a writer of
fundraising documents, I usually rely on confined and tested approaches
that are probably algorithmable, and I could see AI simulating what I do
well enough to put me out of work. (Some are even suggesting that AI could
handle the routine bits of being a CEO; that will happen as well.)
The dangers of generative AI, like ChatGPT, lie more in systems that make
decisions about people's lives, as Dan McQuillan points out brilliantly in
his book Resisting AI. In that realm, people should rightly fear how AI will
be used to preserve the current power relations in our society and, quite
literally, determine who will live and who will die.
But at my lower level, I'm content with the technology and look forward to
how it is going to upset and unseat the conventional wisdoms we indulge
about life, the universe and everything.
[1] https://slate.com/culture/2013/10/kill-your-darlings-writing-advice-what-writer-really-said-to-murder-your-babies.html