Interesting column from The Ethicist at the New York Times. Screenwriting is the example here but the response applies to all forms of writing.
It seems the tide is shifting a bit in terms of writers considering it acceptable to use AI chatbots as a tool, whether for research, prompts or drafting. I realize it's still very much forbidden in all literary circles to pass off any AI-assisted work as your own, but the columnist's response is well thought out and, in my view, convincing. It also raises some very good points about TV writing. I'll be interested in hearing what others think!
"I’m a Screenwriter. Is It All Right if I Use A.I.?
I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld
From the Ethicist:
“We’re done here.” Some years ago, sleepless in a hotel room, I flicked through TV channels and landed on three or four shows in which someone was making that declaration, maybe thunderously, maybe in an ominous hush. “We have nothing more to discuss.” “This conversation is over!” Do people really talk like that? Possibly, if they’ve watched enough television.
My point is that a good deal of scripted TV has long felt pretty algorithmic, an ecosystem of heavily recycled tropes. In a sitcom, the person others are discussing pipes up with “I’m right here!” After a meeting goes off the rails, someone must deadpan, “That went well.” In a drama, a furious character must sweep everything off the desk. And so on. For some, A.I. is another soulless contraption we should toss aside, like a politician in the movies who stops reading, crumples the pages and starts speaking from the heart. (How many times have we seen that one?) But human beings have been churning out prefab dialogue and scene structures for generations without artificial assistance. Few seem to mind.
When screenwriters I know talk about generative A.I., they’re not dismissive, though they’re clear about its limits. One writer says he brainstorms with a chatbot when he’s “breaking story,” sketching major plot points and turns. The bot doesn’t solve the problem, but in effect, it prompts him to go past the obvious. Another, an illustrious writer-director, used it to turn a finished screenplay into the “treatment” the studio wanted first, saving himself days of busywork. A third, hired to write a period feature, has found it helpful in coming up with cadences that felt true to a certain historical figure. These writers loathe cliché. But for those charged with creating “lean back” entertainment — second-screen viewing — the aim isn’t achieving originality so much as landing beats cleanly for a mass audience.
So why don’t the writers feel threatened? A big reason is that suspense, in some form, is what keeps people watching anything longer than a TikTok clip, and it’s where A.I. flounders. A writer, uniquely, can juggle the big picture and the small one, shift between the 30,000-foot view and the three-foot view, build an emotional arc across multiple acts, plant premonitory details that pay off only much later and track what the audience knows against what the characters know. A recent study found that large language models simply couldn’t tell how suspenseful readers would find a piece of writing.
That’s why I hear screenwriters talk about A.I. as a tool, not an understudy with ambitions. I realize you’ve got another perspective right now: “We’re not so different, you and I,” as the villain tells the hero in a zillion movies. But don’t sell yourself short. You fed the machine your writing before you asked it to draft a scene. You made it clear what dramatic work was to be done. And so long as you and the studio or production company are consenting parties on this score, you’ll be on the right side of the Writers Guild of America rules. Your employers wanted a script; you’ll be accountable for each page they read. And though generative A.I. was trained on the work of human creators, so were you: Every show you’ve watched, every script you’ve read, surely left its mark. You have no cause to apologize.
Does the entertainment industry? It was hooked on formula, as I’ve stressed, long before the L.L.M.s arrived. Some contrivances endure simply because they’re legible, efficient and easy to execute. Take the one where one character has news to share with another, but is interrupted by the other’s news, which gives the first character reason not to share her own news. Then comes the inevitable: “So what was it you wanted to tell me?” Ulp! Writers have flogged that one for decades; why wouldn’t a bot cough it up? The truth is that many viewers cherish familiarity and prefer shows, especially soaps and franchise fare, to deliver surprises in unsurprising ways. Still, there will always be an audience for work that spurns the template — for writers who, shall we say, think outside the bot.
That’s the bigger story. In the day-to-day life of a working writer, the question is less abstract. If people press you about your A.I. policy, point to the guild’s rules. Tell them that every page you submit reads the way you want it to. Then announce: We’re done here."