LLMs as hostile epistemic environments
Originally posted to Twitter at https://twitter.com/rtk254/status/1667868407486619652
There is a follow up to this (excellent!) piece waiting to be written about LLMs as “hostile epistemic environments”, or “epistemic junk food”. I don’t have time for that yet, so meanwhile just a few thoughts 🧵:
Hostile epistemic envs exploit our cognitive vulnerabilities and weaknesses, which are plentiful in an age where information overload is stretching cognition beyond its limits.
I have a hunch that LLMs, like conspiracy theories, work because they offer an intense dose of clarity/coherence, particularly valuable in an overwhelmingly complex world.
Hostile epistemic envs share important similarities with hostile nutritional envs (junk food): the companies creating them may not be intentionally trying to harm us, they want $$ and harm is a side effect.
Creators of epistemic junk food will ofc say: “but we’re helping people!” A good heuristic might be: helping in the short term or the long term? Loneliness chatbots may alleviate loneliness the way cigarettes relieve stress, but they come with negative side effects (for which corporations aren’t held accountable) and shouldn’t be substitutes for more systemic approaches.
This isn’t to say that there aren’t any non-“junk-food” uses of LLMs - there are many useful applications of the technology. But the “fairtrade/sustainable” uses are still rare compared to the junk uses, and those less exploitative uses are harder to turn a profit from.
Even when the hostility isn’t quite intentional, greed can make you do pretty awful stuff: today we shake our heads in disbelief that we were once duped into thinking 7up for babies was not only ok, but even wholesome. I hope that a few decades from now we’ll look back similarly at junk LLM products.
However, with capitalism reaching deeper and deeper into our psyche in search of new territory to exploit, what will be left of us in a few decades is anyone’s guess 🤷