this post was submitted on 01 Sep 2023
364 points (94.2% liked)
Technology
If there were evidence that AI was heading in that direction at all, that this direction was where society wanted AI to go, and that there was a shared understanding that we absolutely aren't there yet... I'd be significantly more optimistic.
My problem is that currently, Machine Learning and Expert Systems are being implemented quietly by a number of companies, at best to improve their own commercial offerings and at worst to cut their human-staffed support teams to ribbons. Nearly everyone can relate to the frustration of seeking support from an automated system instead of a human. Those experiences have continued to get worse, not better, as this tech has grown.
Additionally, thanks to how convincing LLMs are at appearing intelligent, they've become a fad rather than being evaluated and appreciated for what they actually are. There are countless startups now just trying to cash in on the hype by using the ChatGPT API to offer products that shove GPT at all sorts of entirely unsuitable use cases.
Lastly, there are a great many issues with the currently most popular AI tech, LLMs, that the industry appears to have no intention of addressing in good faith: the complete disdain for copyright, IP, or even fair use when it comes to the data the models are trained on; the recent articles stating that removing material from a dataset would effectively require rebuilding the LLM; the lack of any way to get true sources for the data used in responses; the lack of reproducibility of responses; and the lack of any auditability of these systems, either because that would jeopardize the "secret sauce" or because it's simply impossible on a technical level. And when most people raise these points, they get shouted down by the "true believers" as just not understanding the technology, rather than engaged with in good faith. If you have concerns, you're either stupid or against technological advancement. Don't you see all the good this could potentially do in the future, even though it isn't doing it yet?
I would love to have the kind of trustworthy, helpful digital assistant it sounds like you're describing. I've wanted that technology for well over a decade. We're just not there yet.