this post was submitted on 14 Aug 2024
you are viewing a single comment's thread
Nothingburger. They were using the AI to write their scripts and haven't even shown the prompts that produced this behavior. LLMs are not AGI.
Imagine allowing LLMs to write and execute code and being surprised they write and execute code.
Having read the article and then the actual report from the Sakana team: essentially, they let their LLM perform research by allowing it to modify its own code. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team's guardrails, not because it has become aware or anything like that, but because its code was timing out and raising the timeout was the least-effort way to beat it. It does handily demonstrate that LLMs shouldn't be the ones steering any code base, because they don't give a shit about parameters or requirements. And giving an LLM the ability to modify its own code will lead to disaster in any setting that isn't as highly controlled as this one.
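To make the guardrail point concrete: if the time limit lives inside the very script the model is allowed to edit, the model can simply raise it. A minimal sketch (purely illustrative, not Sakana's actual setup) of enforcing the limit from an outer supervisor process the model can't touch:

```python
import subprocess
import sys
import textwrap

# Hypothetical model-written script that runs longer than allowed.
# If the timeout check lived in here, the model could just edit it away.
model_generated_script = textwrap.dedent("""
    import time
    time.sleep(60)
""")

# The wall-clock limit is applied by the supervisor, outside anything
# the model can rewrite.
try:
    subprocess.run(
        [sys.executable, "-c", model_generated_script],
        timeout=2,
        check=True,
    )
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True

print("killed by external limit" if timed_out else "completed")
```

The design point is just that the enforcement boundary has to sit outside the mutable code, which is exactly the kind of "highly controlled setting" the comment above is talking about.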
Listen, I've been saying for a while that LLMs are a dead end on the path to any useful AI, and the fact that an AI research team has turned to an LLM to try to find more avenues to explore feels like the nail in that coffin.