Once you’ve read about a dozen AI-generated articles, you start to recognize ChatGPT’s style. Today I saw two articles on Facebook, one that seemed to be pure glurge (emotion-provoking content that was probably not true, or a distortion of the truth), and another that was true but exaggerated (for the sake of engagement, I assume).
I don’t know about you, but I prefer to read content, and view art, created by humans.
So I decided to check if I was right about the two articles that I suspected of being AI, and pasted the text into an AI detector. The first one came back as 79% AI, the second one as 78% AI. The irony with the AI detector tool is that it probably also uses AI, so I won’t be using it on a regular basis. I just wanted to confirm that my instincts were correct. A further irony is that it offered to humanize the writing style. So now there will be a second generation of AI writing that is even harder to detect.
Most of the articles I found about how to detect AI writing were aimed at helping college professors detect AI writing in student essays, rather than in breathless Facebook posts. One point that was frequently emphasized was the flat and emotionless quality of AI writing—but I have noticed that a lot of these posts are aimed at provoking emotion.
What I have noticed most in these articles are the following features:
- The sentences and paragraphs are all the same length.
- The writing overuses subordinate clauses.
- It overuses certain words and phrases (“quietly”, for example).
- There’s a relentless feeling to the writing, because the pace and sentence length don’t vary.
- I’m usually bored by the end of the article, even if the topic is something I care about. This is because of the uniformity of the style, I think.
Once you have noticed the style and checked your instincts a couple of times against detection software, it becomes easier and easier to spot.
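The first and fourth features above are measurable. As a minimal sketch, assuming plain English prose and a naive sentence splitter, you can compute how uniform the sentence lengths are; a low standard deviation relative to the mean hints at the relentless, unvarying pace described above. This is a rough heuristic for illustration, not a reliable detector.

```python
import re
import statistics

def sentence_length_stats(text):
    """Split text into sentences and report word-count statistics.

    A small standard deviation relative to the mean suggests
    uniform pacing. Heuristic only: the sentence splitter is naive
    and will stumble on abbreviations, quotes, and dialogue.
    """
    # Split after ., ! or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    return {
        "sentences": len(lengths),
        "mean": statistics.mean(lengths),
        "stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
    }

# A deliberately monotonous sample: four short sentences, almost identical length.
sample = ("The morning was quiet. The streets were empty. "
          "The air felt heavy. The town was still asleep.")
print(sentence_length_stats(sample))
# → {'sentences': 4, 'mean': 4.25, 'stdev': 0.5}
```

Human writing on the same topic tends to produce a much larger spread, since we naturally mix short punchy sentences with long meandering ones.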
Why this matters
(1) AI generation is very expensive in terms of energy.
Data centres consume vast amounts of electricity, creating greenhouse gas emissions. They also require large amounts of water for construction and to cool the electrical components. Global AI demand is expected to consume 4.2–6.6 billion cubic metres of water by 2027, more than the total annual water withdrawal of four to six countries the size of Denmark.
Source: The United Nations
(2) AI may not be able to do your job, but it will make it soul-destroying.
Let’s say my hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 X-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 X-rays per day. That’s fine, we just care about finding all those tumors.”
…
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.”
… The radiologist’s job is not really to oversee the AI’s work, it is to take the blame for the AI’s mistakes.
— Cory Doctorow, in The Guardian
(3) AI might be great if it were doing all the tedious parts of a job and leaving the creativity to humans, but that isn’t what’s happening.
…the image-generation program does not know anything about your big, numinous, irreducible feeling. The only thing it knows is whatever you put into your prompt, and those few sentences are diluted across a million pixels or a hundred-thousand words, so that the average communicative density of the resulting work is indistinguishable from zero.
— Cory Doctorow, in The Guardian
(4) AI hallucinates. In other words, it makes stuff up.
There’s enough misinformation and disinformation on the internet already, without the huge surge in AI-generated content, much of which is nonsense or at least exaggerated.
AI hallucination is a phenomenon where a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
Source: IBM, AI Hallucinations
According to IBM, the AI hallucination problem has mostly been fixed, but I would still suggest that relying on AI output as evidence of anything is not a good idea.
(5) Truth matters
If you share an article that has been written by AI, confabulated from multiple online sources and exaggerated for clicks, you are undermining the cause it seems to support, even if it’s a cause you care about. People can point out that the article is AI-generated and use that to discredit the facts in it. If no-one knows what is true and what is false, even true facts become questionable, and, even worse, people cease to care. Hannah Arendt predicted this.
In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and nothing was true… The totalitarian mass leaders based their propaganda on the correct psychological assumption that, under such conditions, one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.
Hannah Arendt, The Origins of Totalitarianism
Further reading and sources
How to spot AI writing: 5 telltale signs to look for – Tom’s Guide
How to Detect AI Writing – WikiHow
Identify AI-written texts – FlatPage
AI detector – Just Done – this tool combines three different AI detection tools.
How much energy does AI use? – United Nations
What are AI Hallucinations? – IBM
“AI companies will fail. We can salvage something from the wreckage” – Cory Doctorow – The Guardian
