Why I don't use generative LLMs, and neither should you

Created: 2025-11-29
Updated: 2026-03-05

At the time of writing, I don't have as many sources for these claims as I would like, but I may add them later. (Feel free to send me some yourself.)

Here is my roughly-written list of reasons why I don't use generative LLMs (popularly known as "AI", and being sold to suckers as "AGI" - "Artificial General Intelligence"), and neither should you.

1. They consume water and energy at massive scales.

…both for training and for use.

"Oh, but so many other things already consume so much energy and water! Like cloud services!"

Those cloud services were being used mainly for machine learning. You know, the old, less-fashionable name for "AI", before "AI" became such a buzzword.

The foundations for this scam have been a decade in the making.

2. They DDoS individual- and community-run servers with their scraping.

…which, in turn, forces those servers to resort to privacy-invasive and user-hostile countermeasures like CAPTCHAs.

3. They destroy privacy at a whole new scale.

4. They destroy their sources.

I've read numerous times that LLM output is not suitable for training LLMs. In other words, they rely on human output.

What text, images, and videos will you train on, when all the sources have been replaced by LLMs?

What (say) StackOverflow answers will you train on, when nobody asks or answers questions on StackOverflow anymore?

5. They are monopolist software.

I am an activist for anti-monopolist software - that's my umbrella term for the intersection of free/libre/open software, decentralization, and other related concerns.

Most LLMs in their current form are not anti-monopolist. Therefore, all sensible people who understand why free/libre/open software is important should reject them.

What would constitute an anti-monopolist LLM?

  1. Obviously, the model itself has to be anti-monopolist (free/libre/open) software.
  2. Anybody should be able to train the model independently, which means -
    • (at the very least) a list of all data sources should be published, OR
    • (ideally) all data sources should also be anti-monopolist data.
  3. Anyone should be able to run the training themselves, which means it should be possible on ordinary, commodity hardware. (Which means the models would also have to be vastly more power-efficient than they are now.)
  4. It has to run offline on the user's device, rather than as a centralized network service. (This also resolves their privacy issues.)
  5. The weights have to be released, or it's no different from secret-sauce, binary-only software.
  6. The weights have to be considered derivative works of the training data, and licensed accordingly.

This would kill just about all popular LLM products. Therefore, none of them are anti-monopolist, and by using or funding them, we're taking their bait.

Conclusion

In closing - if you make generative LLMs, fund them, or use them…that tells me -

  1. You don't care about the environment.
  2. You don't care about individual- and community-made websites and resources.
  3. You don't care about creative workers, or any workers at all.

    In other words, you are a corporate bootlicker and a useful idiot for your corporate overlords.

    Maybe you think you're immune to the problems, but wait and watch…it's going to get you too. (If we let it.)

  4. You don't care about your privacy, the consequent power you're giving others over yourself, and the consequent erosion of our democracies.
  5. Really, you don't care about who or what you trample over, as long as you can get your fancy little toys and give in to your peer pressure and FOMO.