How are you using new AI technology? Maybe you're only deploying things like ChatGPT to summarize long texts or draft up mindless emails. But what are you losing by taking these shortcuts? And is this tech taking away our ability to think?
Typically, licensing isn’t a problem for LLMs with FOSS, since pretty much anything and everything is free to use, remix, etc.
What is more of a problem is hallucinations: imagine running a wrong rm -rf ~/ command without understanding the consequences, though arguably that’s hard to predict. What will always be a problem, no matter the model, is how much energy went into it… and all that so that, in the end, it makes the actual documentation and some issues on StackOverflow slightly more accessible, because one can do semantic search rather than full-text search. Does one really need to run billion-parameter models in the cloud, on a remote data center, for that?
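To make that distinction concrete, here is a toy sketch of the difference in kind between full-text and semantic-style search. A real setup would use an embedding model; a bag-of-words cosine similarity stands in here purely for illustration, and the example documents are made up:

```python
# Toy contrast: literal substring matching vs. similarity ranking.
# Bag-of-words cosine is only a stand-in for real embeddings.
import math
from collections import Counter

docs = [
    "Remove files recursively with rm -r",
    "Delete a directory and its contents",
    "List directory contents with ls",
]

def full_text_search(query, docs):
    """Return docs containing the query as a literal substring."""
    return [d for d in docs if query.lower() in d.lower()]

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def semantic_search(query, docs):
    """Rank docs by similarity instead of requiring an exact match."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]
```

A query like "delete directory" finds nothing by literal substring match, but similarity ranking still surfaces the relevant document; actual embeddings generalize this to genuine paraphrases with no word overlap.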
Most licenses require attribution.
This is the real problem. I’m arguing it’s a good tool in the hands of someone who knows what they’re doing.
Despite the ecological costs?
The ecological costs don’t need to be very high. At my company we host our own LLMs on a Mac Mini, which doesn’t use a ton of power and works pretty well.
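For reference, a minimal sketch of what querying such a locally hosted model can look like over HTTP. Several local servers (llama.cpp, Ollama, etc.) expose an OpenAI-compatible chat completions route; the URL, port, and model name below are assumptions to adapt to whatever server you actually run:

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust host, port, and path to your server.
URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt, model="local-model"):
    """Build the HTTP request for a chat completion against a local server."""
    payload = {
        "model": model,  # placeholder name, depends on your server
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Actually sending it would be urllib.request.urlopen(build_request("hello")),
# which of course only works if a server is listening locally.
```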
FWIW I did try a few LLMs locally too (cf my notes on the topic https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence ) but AFAIK that is only the tip of the iceberg: the LLM was trained beforehand, and that training is a significant part of the cost.
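One way to frame that point: the energy attributable to a single query is roughly the training energy amortized over all queries the model will ever serve, plus the marginal inference energy. The numbers below are purely illustrative placeholders, not measurements of any real model:

```python
def amortized_energy_per_query(training_kwh, total_queries, inference_kwh_per_query):
    """Training cost spread over all queries, plus marginal inference cost."""
    return training_kwh / total_queries + inference_kwh_per_query

# Illustrative placeholder numbers only -- not real measurements.
per_query = amortized_energy_per_query(
    training_kwh=1_000_000,        # hypothetical training energy
    total_queries=100_000_000,     # hypothetical lifetime query count
    inference_kwh_per_query=0.001, # hypothetical marginal cost
)
```

With these placeholders the amortized training share (0.01 kWh per query) dominates the marginal inference cost, which is the point being made: measuring only local inference misses most of the footprint.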