this post was submitted on 22 Nov 2023
158 points (98.2% liked)
Technology
Why the hell can't we just have both? One of the biggest problems with smart speakers and voice assistants is that they're so damn stupid so often. If A.I. were to become smart enough to be what the current assistants/speakers aren't, surely that would drive device sales and engagement astronomically higher, right?
That would be the goal. The tricky part is matching whatever psychobabble the LLM spits out to intents that map onto some API integration.
In other words, the LLM is just predicting the next word, so how do you know when to take an action like turning on the lights, ordering a pizza, setting a timer, etc.? The way that was done with Alexa needs to be adapted to fit the way LLMs work.
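For illustration, here's a minimal sketch of what that adaptation could look like: the model is prompted to emit a structured intent (JSON here) whenever the user asks for an action, and ordinary code parses it and dispatches to a handler. The intent names, fields, and handler functions are all hypothetical.

```python
import json

# Hypothetical handlers for a couple of smart-home intents.
def turn_on_lights(room: str) -> str:
    return f"Turning on the lights in the {room}."

def set_timer(minutes: int) -> str:
    return f"Timer set for {minutes} minutes."

# Dispatch table mapping intent names to handlers.
HANDLERS = {
    "turn_on_lights": turn_on_lights,
    "set_timer": set_timer,
}

def dispatch(llm_output: str) -> str:
    """Parse a structured intent emitted by the LLM and run the matching action."""
    try:
        intent = json.loads(llm_output)
    except json.JSONDecodeError:
        # No parseable intent: treat the output as plain conversation and just say it.
        return llm_output

    handler = HANDLERS.get(intent.get("name"))
    if handler is None:
        return "Sorry, I can't do that yet."
    return handler(**intent.get("arguments", {}))

# Example: the LLM was prompted to answer action requests with JSON like this.
print(dispatch('{"name": "set_timer", "arguments": {"minutes": 10}}'))
```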
Eh, just ask the LLM to format requests in a way that can be parsed into a function call.
It's pretty trivial to get an LLM to do that.
In fact, it's literally the basis for the "tools" functionality in the new OpenAI/ChatGPT stuff!
That "browse the web", "execute code", etc. is all the LLM formatting its output in a specific way.
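For the curious, here's roughly what that looks like with the OpenAI Python SDK's `tools` parameter: you describe a function's schema, and the model replies with a structured tool call instead of prose when it decides an action is needed. This is a sketch under assumptions, not production code; the `set_timer` function, its schema, and the model name are made up for the example.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a hypothetical smart-home function so the model can "call" it.
tools = [{
    "type": "function",
    "function": {
        "name": "set_timer",
        "description": "Set a kitchen timer.",
        "parameters": {
            "type": "object",
            "properties": {
                "minutes": {"type": "integer", "description": "Timer length in minutes."}
            },
            "required": ["minutes"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Set a timer for ten minutes"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    # The model returns the function name plus JSON-encoded arguments;
    # the application is still the thing that actually performs the action.
    args = json.loads(call.function.arguments)
    print(f"Would run {call.function.name} with {args}")
else:
    print(message.content)
```

The key point either way: the LLM never flips the light switch itself, it just emits text in a shape your code can reliably parse and act on.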