Why I don't use generative LLMs, and nor should you
Created: 2025-11-29
Updated: 2026-03-05
At the moment of writing, I don't have as many sources for these as I would like, but I may add them later. (Feel free to send me some yourself.)
Here is my roughly-written list of reasons why I don't use generative LLMs (popularly known as "AI", and being sold to suckers as "AGI" - "Artificial General Intelligence"), and nor should you.
1. They consume water and energy at massive scales.
…both for training and for use.
"Oh, but so many other things already consume so much energy and water! Like cloud services!"
…
Those cloud services were being used mainly for machine learning. You know, the old, less-fashionable name for "AI", before "AI" became such a buzzword.
The foundations for this scam have been a decade in the making.
2. They blatantly violate copyright.
This includes violating freedom-respecting software/data/media licenses, both permissive and copyleft.
Whatever the courts end up deciding, my stance is set - the weights ARE a derivative product of training data, and the piracy of copyrighted creative works to train "AI" will always be a violation.
"But I pirate stuff, and don't care about copyright!"
Do you really not see any difference between individuals pirating big-name media…and multi-billion-dollar companies pirating the entire web and all art ever made, including works by volunteer communities and poor artists, and using it to train tools which put said creators out of business?
In the long term, I'm in favor of abolishing copyright. But while it exists, it's GOOD insofar as it protects individuals and small businesses from big players, and BAD when it allows companies to bully individuals.
So if you care about working-class people (and odds are, that includes you) - you have to employ copyright wherever it protects workers, and fight against it wherever it helps companies.
3. They DDoS individual- and community-run servers with their scraping.
…which, in turn, causes the servers to resort to privacy-invasive and user-hostile techniques like CAPTCHAs.
4. They destroy privacy at a whole new scale.
5. They destroy their sources.
I've read numerous times that LLM output is not suitable for training LLMs. In other words, they rely on human output.
What text, images, and videos will you train on, when all the sources have been replaced by LLMs?
What (say) StackOverflow answers will you train on, when nobody asks or answers questions on StackOverflow anymore?
6. They are monopolist software.
I am an activist for anti-monopolist software - that's my umbrella term for the intersection of free/libre/open software, decentralization, and other related concerns.
Most LLMs in their current form are not anti-monopolist. Therefore, all sensible people who understand why free/libre/open software is important should reject them.
What would constitute an anti-monopolist LLM?
- Obviously, the model itself has to be anti-monopolist (free/libre/open) software.
- Anybody should be able to train the model independently, which means -
- (at the very least) a list of all data sources should be published, OR
- (ideally) all data sources should also be anti-monopolist data.
- The training itself should be feasible for anyone, which means it should be possible on commodity hardware. (Which means the models would also have to be vastly more power-efficient than they are now.)
- It has to run offline on the user's device, rather than as a centralized network service. (This also resolves their privacy issues.)
- The weights have to be released, or it's no different from secret-sauce, binary-only software.
- The weights have to be considered derivative works of the training data, and licensed accordingly.
This would kill just about all popular LLM products. Therefore, none of them are anti-monopolist, and by using or funding them, we're falling for their bait.
Conclusion
In closing - if you make generative LLMs, fund them, or use them…that tells me -
- You don't care about the environment.
- You don't care about individual- and community-made websites and resources.
- You don't care about creative workers, or any workers at all.
- You don't care about your privacy, the consequent power you're giving others over yourself, and the consequent erosion of our democracies.
- Really, you don't care about who or what you trample over, as long as you can get your fancy little toys and satisfy your peer pressure and FOMO.
In other words, you are a corporate bootlicker and a useful idiot for your corporate overlords.
Maybe you think you're immune to the problems, but wait and watch…it's going to get you too. (If we let it.)