<https://www.techdirt.com/2025/10/02/stanford-study-ai-generated-workslop-actually-making-productivity-worse/>
"Automation undeniably has some useful applications. But the folks hyping
modern “AI” have not only dramatically overstated its capabilities; many of
them also view these tools as a way to lazily cut corners or undermine
labor. There’s also a weird innovation cult that has arisen around managers and
LLM use, resulting in the mandatory use of tools that may not be helping
anybody — just because.
The result is often a hot mess, as we’ve seen in journalism. The AI hype simply
doesn’t match the reality, and a lot of the underlying financial numbers being
tossed around aren’t based in reality; something that’s very likely going to
result in a massive bubble deflation as reality and the hype cycle collide
(Gartner calls this the “trough of disillusionment,” and expects it to arrive
next year).
One recent study out of MIT Media Lab found that 95% of organizations see no
measurable return on their investment in AI (yet). One of many reasons for
this, as noted in a different recent Stanford survey (hat tip: 404 Media),
is that the mass influx of AI “workslop” requires colleagues to spend
additional time trying to decipher genuine meaning and intent buried in a sharp
spike in lazy, automated garbage.
The survey defines workslop as “AI generated work content that masquerades as
good work, but lacks the substance to meaningfully advance a given task.”
Somewhat reflective of America’s obsession with artifice. And it found that as
use of ChatGPT and other tools has risen in the workplace, it’s created a lot
of garbage that requires time to decipher:
“When coworkers receive workslop, they are often required to take on the
burden of decoding the content, inferring missed or false context. A cascade
of effortful and complex decision-making processes may follow, including
rework and uncomfortable exchanges with colleagues.”
Confusing or inaccurate emails that require time to decipher. Lazy or incorrect
research that requires endless additional meetings to correct. Writing full of
errors that supervisors have to edit or correct themselves:
“A director in retail said: ‘I had to waste more time following up on the
information and checking it with my own research. I then had to waste even
more time setting up meetings with other supervisors to address the issue.
Then I continued to waste my own time having to redo the work myself.’”
In this way, a technology deemed a massive time saver winds up creating all
manner of additional downstream productivity costs. This is made worse by the
fact that a lot of these technologies are being rushed into mass adoption in
business and academia before they’re fully cooked. And by the fact that the
real-world capabilities of the products are being wildly overstated by both
companies and a lazy media.
This isn’t inherently the fault of the AI; it’s the fault of the reckless,
greedy, and often incompetent people high in the extraction class dictating
the technology’s implementation. And of the people so desperate to be
innovation-smacked that they’re simply not thinking things through. “AI”
will get
better; though any claim of HAL-9000 type sentience will remain mythology for
the foreseeable future."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics