<https://freedium.cfd/https://ai.gopubby.com/when-ai-detectors-do-more-harm-than-good-4ca5ec916ce9>
“2014 was the year I completed my master's thesis in philosophy after
eight grueling months of work. Writing it was a slow and deeply
reflective process; I sometimes spent hours rewriting the same sentence
until it felt just right.
Perhaps a little obsessive as an approach, but eventually I was proud of the
result. And, most importantly, it felt like me.
Years of study didn't just teach me what to express but also how I wanted to
express it.
Then, after a few years as an academic writer, I shifted toward the field of
AI. That's where I discovered the world of Large Language Models (LLMs), like
ChatGPT, but also their so-called "enemies": AI detectors. Part of my freelance
work involved using these tools during the review process to flag AI-generated
content produced by other freelancers.
I had a hunch early on that these tools weren't entirely reliable. But I wanted
proof, so I decided to put an entire chapter of my master's thesis through a
GPT detector.
Result: 60% AI-generated.
It was rather shocking, really. I thought about all the research and creative
effort I had poured into that work, work that today, in the era of LLMs,
would perhaps be considered cheating, potentially even jeopardizing my academic
path. Who knows?
That moment pushed me to think seriously about the ethical and social
implications of AI detectors. While I understand and even support their general
mission, the way these tools currently function risks inadvertently creating
stylistic alienation and discrimination.”
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics