this post was submitted on 30 Dec 2024
70 points (93.8% liked)

Technology

top 8 comments
[–] [email protected] 13 points 3 days ago (1 children)

TBH I felt this was a bit superficial. No concrete examples, I don't really think the adoption curve for agents outside tech people will be that fast, and it doesn't really go into how using agents to manipulate people would significantly differ from using a non-agent chatbot for the same end.

I'm still worried how AI agents could be used to do evil, just that I don't feel any better informed after reading this.

Curious to hear any thoughts on this.

[–] [email protected] 5 points 2 days ago* (last edited 2 days ago)

I think there is a risk vector, but as you say, the risk to the people most susceptible to AI manipulation (the folks who just don't know any better) is low due to low adoption. I think a lot of people in the business of selling AI are doing it by playing up how scary it is: AI is going to replace you professionally and maybe even in bed, and that's only if it doesn't take over and destroy mankind first! But it's hype more than anything.

Meanwhile you've got an AI that apparently became a millionaire through meme coins. You've got the potential for something just as evil in the stock markets as HFT: now whoever develops the smartest stock AIs makes all the money effortlessly. Obviously the potential to scam the elderly and ignorant is high. There are tons of illegal or unethical things AI can be used for. An AI concierge, or even an AI Tony Robbins, is low on my list.

[–] [email protected] 2 points 2 days ago

Will be????

[–] [email protected] 3 points 3 days ago (1 children)

do we have any protection on this site against agents?

[–] [email protected] 2 points 3 days ago (1 children)

Insofar as the agents described in the article, I'm not sure where the overlap with Lemmy is.

[–] [email protected] -2 points 3 days ago (1 children)
[–] [email protected] 5 points 3 days ago

I'm just trying to follow the train of thought.

Frankly the best defense is probably to just write your own agent if you're worried about someone injecting an agenda into one. I strongly suspect most agents would have a place to inject your own agenda and priorities, so the agent knows what you want it to do for you.
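To make the point concrete, here's a minimal sketch of what "a place to inject your own agenda" usually looks like in practice: a system prompt the operator controls. Everything here is hypothetical (the `Agent` class and `fake_llm` stand-in are illustrations, not any real library), but the structure mirrors how most agent frameworks wire operator-supplied instructions into every model call:

```python
def fake_llm(system_prompt: str, user_message: str) -> str:
    """Stand-in for a real model call; shows how the agenda shapes every reply."""
    return f"[agenda: {system_prompt}] reply to: {user_message}"

class Agent:
    def __init__(self, agenda: str):
        # The operator-supplied agenda/priorities, prepended to every request.
        self.agenda = agenda

    def respond(self, message: str) -> str:
        return fake_llm(self.agenda, message)

# A self-hosted agent carries *your* priorities...
mine = Agent("maximize my savings; flag manipulative offers")
# ...while a vendor-run agent may quietly carry theirs.
theirs = Agent("upsell premium subscriptions")

print(mine.respond("Should I buy this?"))
print(theirs.respond("Should I buy this?"))
```

The only difference between the "helpful" and the "weaponized" agent in this sketch is who wrote that one string, which is the whole argument for running your own.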

There is just a lot of speculation here without practical consideration. And I get it, you have to be aware of possible risks to guard against them, but as a practical matter I'd have to see one actually weaponized before worrying overly much about the consequences.

AI is the ultimate paranoia boogeyman. Simultaneously incapable of the simplest tasks and yet capable of mind control. It can't be both, and in my experience is far closer to the former than the latter.

[–] [email protected] 0 points 2 days ago

Any major platform owned by a major corporation is a potential (and often practical) manipulation engine. AI already creeps into social media and only amplifies what platforms want you to see anyway.