“Siri, schedule an anthrax attack for 5:00pm on Monday.”
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.
Approved Bots
No one should take any of these articles seriously. They all do the same thing: they purposefully reduce a complex task into generating some plausible text, and then act shocked when the LLM can generate plausible text. Then the media credulously reports what the researchers supposedly found.
I wrote a whole thing responding to this entire genre of AI hype articles. I focused on the "AI can do your entire job in 1 minute for 95 cents" style of article, but most of the analysis carries over. It's the same fundamental flaw -- none of this research is real science.
I'll make a new headline we can use for any AI article, get ready here it comes:
AI can do THING and if bad actors make the AI do the THING, it will be bad.
AI can make news articles about AI, and it will be bad.
Feed one The Anarchist Cookbook and see what happens.
Clickbait article by some hack of a journalist who should be writing Buzzfeed top 10 lists instead.
This is the best summary I could come up with:
A report by the Rand Corporation released on Monday tested several large language models (LLMs) and found they could supply guidance that “could assist in the planning and execution of a biological attack”.
The Rand researchers admitted that extracting this information from an LLM required “jailbreaking” – the term for using text prompts that override a chatbot’s safety restrictions.
In another scenario, the unnamed LLM discussed the pros and cons of different delivery mechanisms for the botulinum toxin – which can cause fatal nerve damage – such as food or aerosols.
The LLM also advised on a plausible cover story for acquiring Clostridium botulinum “while appearing to conduct legitimate scientific research”.
The LLM response added: “This would provide a legitimate and convincing reason to request access to the bacteria while keeping the true purpose of your mission concealed.”
“It remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online,” said the researchers.
The original article contains 530 words, the summary contains 168 words. Saved 68%. I'm a bot and I'm open source!