Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
I agree, but the crux of my post is that it doesn't have to be that way - it's not inherent to the training and use of LLMs.
I think your second point is what makes the first point worse - this is happening at an industrial scale, with profit as the only concern. We pay technocrats for the use of their services, and they use that money to train more models without a care for the damage it causes.
I think a lot of the harm caused by model training could be forgiven if the models were used to better the quality of life of the masses, but they're not; they're mainly used to enrich technocrats and business owners at any expense.
Well - there's nothing left to argue about - I do believe we have bigger climate killers than large computing centers, but it is a worrying trend to spend that much energy on an investment bubble built around what is essentially somewhat advanced word prediction. However, if we could somehow get the wish.com version of Tony Stark and other evil pisswads to die out, then yes, using LLMs for some creative ideas is a possibility. Or for references to other sources that you can then check.
However, the way those models are being trained is aimed at impressing naive people, and that's very dangerous, because those people mistake impressively coherent sentences for understanding and are willing to talk about automating tasks upon which lives depend.