How can having more tools to solve problems make things worse? I can't think of a single problem in my life that having more tools and methods would make harder to solve.
Depends on who's using it and for what purpose.
This isn't about you.
It never is, I've noticed.
What problems will get solved? “Our ads aren’t effective enough? We have to pay people to do things when we could be putting it into profits? We’re charging less rent than we think people will pay, but we don’t know how much? People have gotten savvy to my latest scam.”
The capital-holding class will be the ones using AI to their benefit. The dishonest will join them. We may get some concessions here and there, but they own them.
Seriously though you can't think of a single way it will massively benefit regular people?
Not really. It hallucinates so much I don’t use it for factual information. It has massive glaring issues in applications like driverless cars. I suppose that applications like a driverless train would be nice but it’s not something I expect anytime soon. I suspect I’ll be told to like it when it tries to get me to consume more.
Maybe better ai in video games will be nice.
Maybe I’ve just become a cranky old lady, but while I can acknowledge actual theoretical value in it, when I hear AI hype it feels at worst like listening to crypto bros, and at best like listening to an executive telling me I need to implement lean manufacturing while plugging their ears when I want to discuss the costs and risks.
Are you only thinking of current LLMs and not expecting them to improve?
I first rode a train without a driver about twenty-five years ago, so I think you're a little behind on that one. They have pilotless planes too; there's a lot of clever stuff going on.
I totally get that feeling that everything exists to make you consume more, but what if an AI could help you consume less and more healthily? If it could reduce waste by using more efficient ways of doing things? If it could give you access to better things at a lower price and with fewer manufacturing-related environmental issues?
What if it could sum up all the information on a product you need to buy, like saying 'there are 3674 adverts for proprietary models, however consumer testing demonstrates one of these cheaper open source models would be more effective for your needs...'?
If it could actually give you the information you need and filter out all the advertising junk?
If there were evidence that AI was heading in that direction at all, that it was the direction society wanted to move AI in, and that there was an understanding that we absolutely aren't there yet... I'd be significantly more optimistic.
My problem is that currently, Machine Learning and Expert Systems are being implemented quietly by a number of companies, at best to improve their own commercial offerings and at worst to cut their human-staffed support teams to ribbons. Nearly everyone can relate to the frustration of seeking support from an automated system instead of a human. Those situations have continued to get worse, not better, as this tech has grown.
Additionally, thanks to how convincing LLMs are at appearing intelligent, they've become a fad rather than being evaluated and appreciated for what they actually are. There are countless startups now just trying to cash in on the hype by using the ChatGPT API to offer products that shove GPT at all sorts of entirely unsuitable use cases.
Lastly, there are a good deal of issues with the currently most popular AI tech, LLMs, that the industry appears to have no intention of addressing in good faith. The complete disdain for copyright, IP, or even fair use when it comes to the data the models have been trained on. The recent articles stating that removing material from a dataset would effectively require rebuilding the LLM. The lack of any methodology to get true sources for the data used in responses, the lack of reproducibility of responses, the lack of any auditability of these systems because that would jeopardize the "secret sauce" or is simply impossible on a technical level. And when most people discuss this, they get shouted down by the "true believers" as just not understanding the technology, rather than engaged in any attempt at discussion in good faith. If you have concerns, you're either stupid or against technological advancement. Don't you see all the good this could potentially do in the future, even though it isn't doing it yet?
I would love to have the kind of trustworthy, helpful digital assistant it sounds like you're describing. I've wanted that technology for well over a decade. We're just not there yet.
That sounds really nice and we get to the root cause of my issue here: I don’t think that that is what will happen. I’m not saying to ban the stuff or anything but when I see how it’s being sold to the investors I’m not seeing reasonable and achievable plans of action that benefit everyone. I’m seeing gimmicks, ads, and moonshots. All while the dishonest are getting a lot out of it. I’m seeing it at its most effective being a means to increase the power of the capital holding class because that’s who’s investing in it and I don’t think that training such things will get cheaper.
And I expect them to improve, yes, but I’m also concerned with methodological failures. And I’m not saying that it’ll never make life better, but right now in 2023 I’m not impressed by what I’m seeing. And that’s before I get into the tendency for trends like this to blind policy makers and business leaders. Hyperloop was sold as being for autonomous vehicles and was specifically designed not to be cheaply convertible to a known better solution. The whole fucking cloud computing craze comes to mind as well.
I will cede one thing here, though. I do think it has a lot of room for use as one of many engineering tools to help with the design process. Being able to directly compare against known optimization methods is always going to be useful, and if it can automatically plug a layout or process into a model, that would be nice. Idk if I expect that to work as well as everyone seems to think, though.
I guess I just don’t trust the tech industry anymore. When I see something like LLMs it seems gimmicky as hell and a lot of early adoption is either minor or harmful. I see driverless cars getting priority over public transit over and over despite the fact that they’ve been 5 years away since I was a kid. I see people talking about using AI to help the fight against climate change from the same people who won’t quit meat. Meanwhile surveillance increases, wages stay stagnant, and the world keeps getting hotter. Contrary to how I sound I love technology. I’m an engineer for a reason. But there’s just so many reasons to feel skeptical of it. So yeah enjoy your hype. If it winds up useful for someone like me I’ll try it. But I’m not buying into the hype and I’ll be skeptical of it until I start seeing actual results.
Ha, yeah, I agree on all that (well, one thing I disagree with), but yes, people who pretend to care about the environment but eat meat are annoying, and scammers pushing their big money-making ideas in our faces nonstop is infuriating. Honestly, though, it's the same with gardening: I get endless bullshit adverts for garden gadgets which do nothing but make the job harder. Trying to trick people into giving you money is the culture we live in.
What I disagree with is that it's only the rich getting access to this; most of the actually important stuff is open source. I'm not just talking about how Adobe's image gen is trash compared to a well set up SD; the knowledge of how to train and the tools to make NNs are all open source. The cost of training is high, but the cost of writing Wikipedia would be astronomical if it were written by paid staff. ChatGPT cost about ten million to train using current technology, which is a lot of money, but the pet toy market is 7.5 billion annually and video game content revenue is fifty billion a year. As things progress, training will get cheaper and more community projects will get made; hopefully we'll see people learn to support organisations that contribute to the commons rather than create walled gardens.
AI design tools are going to make it incredibly easy for people like me who design 3D-printable things and share them on Thingiverse, and that alone will undermine a lot of shitty corporate monopolies and help change the structure of society for the better. Imagine being able to just ask your computer to find a template for an item you need, then describing how you want it customised, having the AI sort out all the strength and materials stuff, and then being able to print it or farm the job out locally.
An AI that knows the content of a billion adverts, but also the little things posted on random corners of the internet which do exactly what you need and don't come with any bullshit: it could be what we need to cut through the nonsense that floods us.
But yeah, I'm not asking you to like Sam Altman or any of those techbro Silicon Valley capitalist cultists. We need open source and free AI for the people, by the people.
You aren’t the only one with access to these tools. Yeah, if I and I alone had AI, that wouldn’t be bad. But the people who used to run Nigerian Prince scams now have AI. Advertisers have AI. The bosses who want to cut jobs have AI. The cops who want to ensure there are no revolts from the folks getting fucked by the system have AI. So yeah, I don’t think I can get nearly as much out of it as the people who want to use it in ways that will or could negatively impact me will. So I’m not excited for it or happy about it, and I’m terrified because the people who seem really excited about it seem blind to its weaknesses.
"[Eli] Whitney believed that his cotton gin would reduce the demand for enslaved labor and would help hasten the end of southern slavery. Paradoxically, the cotton gin, a labor-saving device, helped preserve and prolong slavery in the United States for another 70 years."
Well, for one, this clearly creates more mechanisms to exploit the poor. Especially if we choose to regulate as slowly as we have with other tech in the past.
If you manage to keep your job then sure, you'll be way more efficient. I guess AI will help you with your job search and resume if you're laid off, but maybe companies won't need as many people as they used to. 🤷
I don't know if it's just me or what, but I don't think AI (and eventually androids) replacing humans doing awful grunt work is really the bad part; the bad part is a system that refuses to figure out a way to tax corporations using AI to support those displaced workers.
For decades it's been the grunt work that was automated and outmoded. Suddenly it's highly educated individuals that are nearing the chopping block.
Nah, my job is already heavily automated. All more automation will do is let me go even faster.
No, the point of AI is not that you work better, faster and more efficiently. The point of AI is that you will not be necessary anymore.
If we were at that point, there would be nothing left to discuss.
Ask the thousands of information laborers, some of whom might've thought the same as you, who no longer have a job because they were laid off when the manager got swindled by OpenAI marketing.
Man a lot of people seem to know what I do for a living without me saying a word about it.
“It doesn't affect me, therefore it is not a problem. Fuck those people, I got mine.” The absolute lack of empathy of some.
I am sorry you do not know what "" means. Maybe ask your strawman to explain it.
The AI is in the hands of interests who think you are the problem.
(They don't like me either.)