• Café Life is the Colony's main hangout, watering hole and meeting point.

    This is a place where you'll meet and make writing friends, and indulge in stratospherically-elevated wit or barometrically low humour.

    Some Colonists pop in religiously every day before or after work. Others we see here less regularly, but all are equally welcome. Two important ground rules…

    • Don't give offence
    • Don't take offence

    We now allow political discussion, but strongly suggest it takes place in the Steam Room, which is a private sub-forum within Café Life. It’s only accessible to Full Members.


Craft Chat: To AI or not to AI


Mel L
Full Member · Blogger
Joined: Aug 24, 2021
Location: Switzerland
LitBits: 10
Interesting column from The Ethicist at the New York Times. Screenwriting is the example here but the response applies to all forms of writing.

It seems the tide is shifting a bit in terms of writers considering it acceptable to use AI chatbots as a tool, whether for research, prompts or drafting. I realize it's still very much forbidden in literary circles to pass off AI-assisted work as your own, but the columnist's response is well thought out and, in my view, convincing. It also raises some very good points about TV writing. I'll be interested to hear what others think!

"I’m a Screenwriter. Is It All Right if I Use A.I.?

I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld

From the Ethicist:

“We’re done here.” Some years ago, sleepless in a hotel room, I flicked through TV channels and landed on three or four shows in which someone was making that declaration, maybe thunderously, maybe in an ominous hush. “We have nothing more to discuss.” “This conversation is over!” Do people really talk like that? Possibly, if they’ve watched enough television.

My point is that a good deal of scripted TV has long felt pretty algorithmic, an ecosystem of heavily recycled tropes. In a sitcom, the person others are discussing pipes up with “I’m right here!” After a meeting goes off the rails, someone must deadpan, “That went well.” In a drama, a furious character must sweep everything off the desk. And so on. For some, A.I. is another soulless contraption we should toss aside, like a politician in the movies who stops reading, crumples the pages and starts speaking from the heart. (How many times have we seen that one?) But human beings have been churning out prefab dialogue and scene structures for generations without artificial assistance. Few seem to mind.

When screenwriters I know talk about generative A.I., they’re not dismissive, though they’re clear about its limits. One writer says he brainstorms with a chatbot when he’s “breaking story,” sketching major plot points and turns. The bot doesn’t solve the problem, but in effect, it prompts him to go past the obvious. Another, an illustrious writer-director, used it to turn a finished screenplay into the “treatment” the studio wanted first, saving himself days of busywork. A third, hired to write a period feature, has found it helpful in coming up with cadences that felt true to a certain historical figure. These writers loathe cliché. But for those charged with creating “lean back” entertainment — second-screen viewing — the aim isn’t achieving originality so much as landing beats cleanly for a mass audience.

So why don’t the writers feel threatened? A big reason is that suspense, in some form, is what keeps people watching anything longer than a TikTok clip, and it’s where A.I. flounders. A writer, uniquely, can juggle the big picture and the small one, shift between the 30,000-foot view and the three-foot view, build an emotional arc across multiple acts, plant premonitory details that pay off only much later and track what the audience knows against what the characters know. A recent study found that large language models simply couldn’t tell how suspenseful readers would find a piece of writing.

That’s why I hear screenwriters talk about A.I. as a tool, not an understudy with ambitions. I realize you’ve got another perspective right now: “We’re not so different, you and I,” as the villain tells the hero in a zillion movies. But don’t sell yourself short. You fed the machine your writing before you asked it to draft a scene. You made it clear what dramatic work was to be done. And so long as you and the studio or production company are consenting parties on this score, you’ll be on the right side of the Writers Guild of America rules. Your employers wanted a script; you’ll be accountable for each page they read. And though generative A.I. was trained on the work of human creators, so were you: Every show you’ve watched, every script you’ve read, surely left its mark. You have no cause to apologize.

Does the entertainment industry? It was hooked on formula, as I’ve stressed, long before the L.L.M.s arrived. Some contrivances endure simply because they’re legible, efficient and easy to execute. Take the one where one character has news to share with another, but is interrupted by the other’s news, which gives the first character reason not to share her own news. Then comes the inevitable: “So what was it you wanted to tell me?” Ulp! Writers have flogged that one for decades; why wouldn’t a bot cough it up? The truth is that many viewers cherish familiarity and prefer shows, especially soaps and franchise fare, to deliver surprises in unsurprising ways. Still, there will always be an audience for work that spurns the template — for writers who, shall we say, think outside the bot.

That’s the bigger story. In the day-to-day life of a working writer, the question is less abstract. If people press you about your A.I. policy, point to the guild’s rules. Tell them that every page you submit reads the way you want it to. Then announce: We’re done here."
 
AI industry uses the excuse (as above): Writers are influenced by every book they've read. AI tools are influenced by every book fed to them. What's the difference?

Well, AI industry, the difference is being inspired by others to create your own work versus using others' words and ideas without their permission and without paying them. If a writer is especially inspired by, say, a Hans Christian Andersen story, they should say so in their acknowledgements. If they use someone else's IP, they need to ask permission first, and may need to pay for it. AI doesn't ask, and doesn't even acknowledge its sources.
 
brainstorms with a chatbot when he’s “breaking story,” sketching major plot points and turns. The bot doesn’t solve the problem, but in effect, it prompts him to go past the obvious
This is an interesting idea. I'd like a tool that pushes me outside the norm, but not at another's expense. As Hannah says, writers aren't being paid to help create the database of ChatGPT. The idea of me putting my writing into it scares me. What will it use my writing for? I don't want to imagine. Besides, pushing yourself to think outside the box is half the fun of writing.
 
My question I guess is then, why write? If you don't want to break the story, and you don't want to push yourself and discover and express your own personal perspective, and you don't care about being original, and you don't care about plagiarism, and you don't care if your research is correct (because ChatGPT does NOT claim accuracy and it doesn't list sources) and you are okay giving your human experience over to a machine because that's easier than doing it yourself or asking other humans...

Why the f*ck are you writing???? Do something else.

But it's no surprise to anyone that my answer to "Is It All Right if I Use A.I." is always going to be a resounding no. No. NO. In no circumstance is it all right to use AI. Using it perpetuates the lowest common denominator of a machine's amalgamation of humanity.
 
My question I guess is then, why write? If you don't want to break the story, and you don't want to push yourself and discover and express your own personal perspective, and you don't care about being original, and you don't care about plagiarism, and you don't care if your research is correct (because ChatGPT does NOT claim accuracy and it doesn't list sources) and you are okay giving your human experience over to a machine because that's easier than doing it yourself or asking other humans...

Why the f*ck are you writing???? Do something else.

But it's no surprise to anyone that my answer to "Is It All Right if I Use A.I." is always going to be a resounding no. No. NO. In no circumstance is it all right to use AI. Using it perpetuates the lowest common denominator of a machine's amalgamation of humanity.

I suppose there are all kinds of writers: some do it only for the money, some for the love, some for the art, and so on.
 
AI industry uses the excuse (as above): Writers are influenced by every book they've read. AI tools are influenced by every book fed to them. What's the difference?

Well, AI industry, the difference is being inspired by others to create your own work versus using others' words and ideas without their permission and without paying them. If a writer is especially inspired by, say, a Hans Christian Andersen story, they should say so in their acknowledgements. If they use someone else's IP, they need to ask permission first, and may need to pay for it. AI doesn't ask, and doesn't even acknowledge its sources.
So it's all about the money? I'm not sure it's as black and white as that. Even if the sources (not just works of writers, but the entire catalogue of human knowledge and creation) were paid or acknowledged by the AI industry, it would still be controversial. As long as we view the chatbots as competitors rather than tools, we will resent them. But I'm beginning to think we cannot afford to ignore them. Perhaps the best way to protect ourselves as masters is to learn how to harness the potential of AI tools?
 
This is an interesting idea. I'd like a tool that pushes me outside the norm, but not at another's expense. As Hannah says, writers aren't being paid to help create the database of ChatGPT. The idea of me putting my writing into it scares me. What will it use my writing for? I don't want to imagine. Besides, pushing yourself to think outside the box is half the fun of writing.
Agree! But what if AI could do some of the grunt work, i.e. research, proofing or administrative support? Tiffany Yates Martin had an interesting post on this (What to Consider in Using AI for Your Writing).
She believes the generative part of creating is still very much up to humans but uses AI for supportive tasks. And she asks every writer to think seriously about their own approach. If it's a hard 'no', that's fine. But perhaps we also have to consider that some kind of 'yes' is okay too.
 
My question I guess is then, why write? If you don't want to break the story, and you don't want to push yourself and discover and express your own personal perspective, and you don't care about being original, and you don't care about plagiarism, and you don't care if your research is correct (because ChatGPT does NOT claim accuracy and it doesn't list sources) and you are okay giving your human experience over to a machine because that's easier than doing it yourself or asking other humans...

Why the f*ck are you writing???? Do something else.

But it's no surprise to anyone that my answer to "Is It All Right if I Use A.I." is always going to be a resounding no. No. NO. In no circumstance is it all right to use AI. Using it perpetuates the lowest common denominator of a machine's amalgamation of humanity.
I write for all the reasons you mention, LJ. But I don't see anything innately evil in using AI for certain tasks (TBD) to save time or spark different ways of thinking. Of course, any information you obtain from a chatbot can be wrong, but all research requires cross-checking references and using several sources.

I haven't used AI yet for any creative work but I'm not ready to say no, never at this stage. I truly am beginning to think of this debate as 'keeping your friends close and your enemies closer.'
 
This is an interesting idea. I'd like a tool that pushes me outside the norm, but not at another's expense. As Hannah says, writers aren't being paid to help create the database of ChatGPT. The idea of me putting my writing into it scares me. What will it use my writing for? I don't want to imagine. Besides, pushing yourself to think outside the box is half the fun of writing.
I prefer people defined as mensches; no tools need apply. The thing is, something was lost with driving to the library, asking the librarian, looking through the card file. With AI, the only history that gets handed to you is what the bots are allowed to know. It is the first step to controlling the narrative. In the US, the narrative that Hitler was the bad guy is already being rewritten. When any evidence to the contrary is erased, what will our historical fiction look like?
AI brings not enlightenment but a new dark age, meaning the loss of knowledge and skills, so that the light in enlightenment dies.
There is more than one way to burn libraries. How many people today know of the torching of the great library at Alexandria by an evangelical crowd protesting a woman mathematician and philosopher? And will that knowledge survive a future controlled by those who control AI?
 
Mel, I would warn about this. Enshittification. When there is no alternative to AI, what do those needing to monetise it do? Because make no mistake: the investment in alternative energies and a different future evaporated when AI investment became a gold rush. The richest men in the world are becoming richer (in make-believe money, true) from this investment, but what happens when they have seized the future and taken control of energy production by bringing back nuclear power, with all its downplayed downsides? What then? Hint: the electrical grid is at peak delivery, and that is why electricity prices are already rising. The power going to data storage is enough to heat every dwelling in the world through any winter. There can be only one electrical beneficiary: consumers, or AI and its masters. Soon to be ours?

 
Mel, I would warn about this. Enshittification. When there is no alternative to AI, what do those needing to monetise it do? Because make no mistake: the investment in alternative energies and a different future evaporated when AI investment became a gold rush. The richest men in the world are becoming richer (in make-believe money, true) from this investment, but what happens when they have seized the future and taken control of energy production by bringing back nuclear power, with all its downplayed downsides? What then? Hint: the electrical grid is at peak delivery, and that is why electricity prices are already rising. The power going to data storage is enough to heat every dwelling in the world through any winter. There can be only one electrical beneficiary: consumers, or AI and its masters. Soon to be ours?

I do agree with the author's premise that enshittification can only be remedied by systemic solutions. We need to fix our broken profit-driven social system, globally, before it destroys the planet. Meanwhile, AI, despite being the scapegoat for so many of the world's problems, marches on. And it will do, with or without us.
 
I do agree with the author's premise that enshittification can only be remedied by systemic solutions. We need to fix our broken profit-driven social system, globally, before it destroys the planet. Meanwhile, AI, despite being the scapegoat for so many of the world's problems, marches on. And it will do, with or without us.
Well, I take the stand that if we don't use it, it will not march on. "It's here to stay, so get on board" is as much a marketing ploy as anything. If we don't stop using it, then who knows where it will go? And I mean that as a rhetorical question, because no one can know where it will go, not even its creators. But I don't think anywhere good.

So in a bigger picture kind of way, I think using it "as a tool" is like doing a little bit of cocaine (or whatever addictive drug) and saying everyone's doing it, and it helps me cope, and I'm not going to abuse it. But the act of buying it keeps it in the market, and eventually, if enough people use it and rely on it, then we have a population of addicted people who no longer think for themselves.

I am not arguing that it's easier to use it. Or that it might be useful in some cases. Or even the ethical implications of where the info comes from, or even the environmental issues.... I'm saying that using it opens the door for so much harm to society and to us as humans. More than we can even imagine.

Mel, I will always support you personally no matter what you decide. But this for me is a soapbox issue, so I will never support AI even just a little.
 
Well, I take the stand that if we don't use it, it will not march on. "It's here to stay, so get on board" is as much a marketing ploy as anything. If we don't stop using it, then who knows where it will go? And I mean that as a rhetorical question, because no one can know where it will go, not even its creators. But I don't think anywhere good.

So in a bigger picture kind of way, I think using it "as a tool" is like doing a little bit of cocaine (or whatever addictive drug) and saying everyone's doing it, and it helps me cope, and I'm not going to abuse it. But the act of buying it keeps it in the market, and eventually, if enough people use it and rely on it, then we have a population of addicted people who no longer think for themselves.

I am not arguing that it's easier to use it. Or that it might be useful in some cases. Or even the ethical implications of where the info comes from, or even the environmental issues.... I'm saying that using it opens the door for so much harm to society and to us as humans. More than we can even imagine.

Mel, I will always support you personally no matter what you decide. But this for me is a soapbox issue, so I will never support AI even just a little.
Thanks for the vote of confidence, LJ. No matter what, I am all for healthy debate.
 
Last year, training one ChatGPT LLM used over 120 times an average home's annual electricity use, and cooling the data storage centres used millions of litres of water, and I haven't even mentioned the carbon footprint, electronic waste and rare-mineral demand. There are so many ethical reasons (stealing IP; destroying the planet) that lead me to choose not to touch AI. If I miss out on any help it could give, so be it.
 
It is the first step to controlling the narrative

Totally this!

training one ChatGPT LLM used over 120 times an average home's annual electricity use, and cooling the data storage centres used millions of litres of water, and I haven't even mentioned the carbon footprint, electronic waste and rare-mineral demand

Truly? I can see this. Too scary. Yet another way to destroy the planet by greedy selfish twats.
 
I use AI for research, marketing and creating promo vids. I tested it for editing and it was atrocious. I decided to create my own AI Sarit that I will train on all my own writing/photos/docs so it will be my private assistant.
That's really interesting, Sarit. I imagine you must be pretty tech-savvy to train your own AI. Do you customize an existing chatbot or create your own? Do you have to pay for the service? I know my daughter uses a paid version of ChatGPT in her work as a clinical pathologist.
 
I don't use AI for writing, but I have no choice but to use it in my day job.
I, like many others, have mixed feelings about this.

If you live in England, your GP surgery will have been using AI to triage and book appointments since October 1st. The surgery where I work is an early adopter and has been using AI to do this since last December. The surgery where I am a patient has been using a different AI system for a few years now. The use of AI is mandated by NHS England; there is no choice.

Our AI is called PATCHS. I think it's the most commonly used AI in English GP surgeries. The patient types in their symptoms, and PATCHS sorts them into priority order for the clinician's attention. PATCHS automatically sends out further questions for more info, depending on the symptoms. If the patient says things like "I can't breathe" or "I think I'm having a heart attack", PATCHS will tell them not to mither the GP surgery, but instead go straight to A&E. This, in itself, can save lives, rather than having the life-threatened patient on hold to speak to a receptionist who may or may not tell them the same thing. Yes, this is a real thing. Some patients are too scared to call an ambulance when they are at risk of dying. They would rather be position... eight... in the queue in the hope that their family doc can cancel the morning's clinic to deliver life-saving interventions in their own home.
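For anyone curious, the flow described above (patient types free-text symptoms, emergencies are redirected, the rest are sorted by priority with follow-up questions queued) could be sketched very roughly in Python. To be clear, this is purely an illustration: the keywords, priority numbers and messages are my own invented assumptions, not how PATCHS actually works, since its internals aren't public.

```python
# Hypothetical sketch of an AI triage flow -- NOT the real PATCHS system.
# All keywords, priorities and messages below are illustrative assumptions.

EMERGENCY_KEYWORDS = {"can't breathe", "heart attack", "chest pain"}

def triage(symptom_text: str) -> dict:
    """Return a routing decision for a free-text symptom description."""
    text = symptom_text.lower()

    # Life-threatening symptoms bypass the GP queue entirely.
    if any(keyword in text for keyword in EMERGENCY_KEYWORDS):
        return {"route": "A&E", "priority": 0,
                "message": "Go straight to A&E or call 999."}

    # Otherwise assign a rough priority for clinician review and
    # queue follow-up questions based on the symptoms mentioned.
    if "rash" in text or "fever" in text:
        return {"route": "clinician", "priority": 1,
                "follow_up": ["How long have you had the symptoms?"]}

    return {"route": "clinician", "priority": 2, "follow_up": []}

requests = [
    "I think I'm having a heart attack",
    "Itchy rash on my arm",
    "Repeat prescription query",
]
# Sort incoming requests into priority order for the clinicians' attention.
decisions = sorted((triage(r) for r in requests), key=lambda d: d["priority"])
```

The point of the sketch is just that the system routes and ranks; as noted above, the actual clinical decisions stay with the two human clinicians overseeing it.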

At the moment, PATCHS is overseen by two clinicians at all times. It doesn't make any diagnostic decisions. Most of the replies a patient receives will be from a human with at least an MSc in clinical diagnostics, if not a GP. The patients assume we are robots, however, and can be very rude to us. But their attitudes and foul language now form part of their medical records in perpetuity, so the joke's on them.
At least it's better than the receptionists having this abuse to their faces on a daily basis, as they did before.
And before you ask, we haven't laid off any receptionists. If anything, there is more work for them to do in marshalling patients to use the AI, because the health secretary has said we need to be more like Amazon.

The patients have no choice but to use PATCHS. Some of them love it. Some of them hate it. Some of them left us to join another practice when we went live with PATCHS last year. Well, everyone's got to use it now, so that was pointless.

A few interesting experiences with PATCHS:
1. A patient put a request in to be seen by a clinician they had seen before about the same issue. This clinician's surname is Wisdom. PATCHS told them that we could not assist and that they needed to see their dentist.
2. PATCHS will flag patients who demand an English or white doctor. We review the request and send a warning or strike them off our list if this is an issue of racism. You may be shocked to know that this isn't unusual. It's always been awkward for reception staff to handle. Especially when patients have the gall to ask this of a receptionist who is not white-British. Now PATCHS can send an automated zero-tolerance message.
Patients are still allowed to request the gender of their clinician. This makes sense for intimate examinations, which patients may prefer to have done by a clinician of the same gender. But we don't yet have a way of filtering out the patients who simply don't trust female doctors.
3. Patients can now book their own appointments and get automated confirmation and/or downloads to their calendars, etc. So why then, on Friday alone, were there fifteen missed appointments? Nearly four hours of wasted clinician time.

Anyway, that's my rant about AI. We have no choice in many areas of life.
But I have a choice to write without AI and, at least for now, that's what I'm doing.
 
I do agree with the author's premise that enshittification can only be remedied by systemic solutions. We need to fix our broken profit-driven social system, globally, before it destroys the planet. Meanwhile, AI, despite being the scapegoat for so many of the world's problems, marches on. And it will do, with or without us.
I think that is a horrifying premise, and one that is not true. I think it stems from the old saying that you cannot stop progress. But how is progress defined? The truth is that AI depends on data storage AND the constant input of data. There are madmen out there convinced that AI is the next evolution on earth, that it is inevitable it will take over and eliminate humans, and that this is a good thing. As a humanist I say that IS insane, as is all nihilism. It is the ultimate conclusion of the current business-school premise that constant growth is necessary for economies, without any examination of the reality that constant growth is just another definition of cancer.
AI taking over is not inevitable. In fact, unless these investors can find a way to monetise it, it is a bubble that will burst. The discussion needs to be about the exorbitant costs it requires just to exist, and the fact that the pollution it creates is far, far greater than any return it can offer.


It is, in effect, just another modern Easter Island head, like the giant apartment buildings in NYC that are unliveable but exist as real-estate investments. There was no inevitability that they be built, no direction from an invisible hand. They exist as a product of marketing strategy.


I would recommend this book for a reminder that statistics are not reason.

 
