I still don’t think companies serve you ads based on spying through your microphone
(simonwillison.net)
battery life would fall through the floor if they did spy
A phone reacting to "OK Google" or the equivalent for the other assistants already requires listening to what you're saying - it doesn't seem to affect battery life all that much.
Passively waiting until a specific pattern of pressure waves impacts a sensor takes way less power than recording and transmitting the voice data
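To make that concrete, here's a toy sketch (the names and threshold are made up, this isn't actual DSP code) of why a fixed-pattern gate is cheap: the always-on check is trivial, and the expensive recording/recognition path only runs when it fires.

```kotlin
import kotlin.math.sqrt

// A cheap always-on check stands in for the DSP's fixed pattern matcher;
// only when it fires does the expensive path (full recording, recognition,
// any networking) get woken up.
const val WAKE_THRESHOLD = 0.2

fun rmsEnergy(frame: DoubleArray): Double =
    sqrt(frame.fold(0.0) { acc, x -> acc + x * x } / frame.size)

fun processAudioFrame(frame: DoubleArray) {
    // The cheap check runs on every frame and costs almost nothing.
    if (rmsEnergy(frame) < WAKE_THRESHOLD) return
    // Only here would the battery-hungry work kick in.
    println("Pattern candidate detected, waking the full recognizer")
}

fun main() {
    processAudioFrame(DoubleArray(160) { 0.01 })                            // ignored
    processAudioFrame(DoubleArray(160) { if (it % 2 == 0) 0.5 else -0.5 })  // wakes it
}
```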
That was once true, but I am now very skeptical of that with on-device processing that can log key words and send them without using much data or power.
and how are key words decided?
AdSense
That's a ton of words
Yes this is how algorithms work
I don’t know what you mean. Every ad campaign has its own keywords. They add up to a ton.
https://devset.ai/blog/revolutionizing-adsense-harnessing-the-power-of-gemini-in-technology-marketing
Gemini is on every pixel 7-9
I don’t see what this clearly AI-generated article about how we’ll have 1-on-1 conversations with ads has to do with what we’re talking about.
Ok here's from the google dev blog
https://blog.google/products/ads-commerce/put-google-ai-to-work-with-search-ads/
Here's how a model can store a localized profile from adsense and learn on it
https://www.researchgate.net/publication/316821039_Convolutional_Dictionary_Learning_via_Local_Processing
Here's how google tensor chips are literally built for that workflow to the detriment of other performance.
https://www.androidauthority.com/google-tensor-g4-explained-everything-you-need-to-know-about-the-pixel-9-processor-3466184/
Google has an AdSense profile on you, the user; that profile is added to by metadata from apps and sensors on the phone, then offloaded to Google cloud servers when the phone is charging. No input from the user required.
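To be clear about what I'm describing, here's a rough, purely hypothetical sketch of that workflow - not Google's code, just the shape of it: hits accumulate in a tiny local profile and nothing leaves the device until it's charging and on Wi-Fi.

```kotlin
// Illustrative model only: keyword hits accumulate into a small local
// profile, and uploads are deferred until charging + Wi-Fi, so the user
// never sees a battery or data spike.
class LocalAdProfile {
    private val keywordCounts = mutableMapOf<String, Int>()

    fun record(keyword: String) {
        keywordCounts[keyword] = (keywordCounts[keyword] ?: 0) + 1
    }

    fun flushIfIdle(isCharging: Boolean, onWifi: Boolean) {
        if (!isCharging || !onWifi) return
        // A real pipeline would upload here; the point is that the payload
        // is a handful of counts, not a stream of raw audio.
        println("Uploading ${keywordCounts.size} keyword counts: $keywordCounts")
        keywordCounts.clear()
    }
}

fun main() {
    val profile = LocalAdProfile()
    listOf("vacation", "sneakers", "vacation").forEach(profile::record)
    profile.flushIfIdle(isCharging = false, onWifi = true)  // held locally
    profile.flushIfIdle(isCharging = true, onWifi = true)   // a few bytes go out
}
```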
Here's a GrapheneOS security feature that prevents persistence and breaks the above workflow.
https://grapheneos.org/features#anti-persistence
I have an honest question. How much android kernel development or tensor chip work have you done?
That Google blog also says the same thing except it's written by a human. I'm not disputing that AI can process audio data into ad statistics; I'm disputing that audio data is constantly recorded and sent.
I interpreted Farts as saying that the device listens for key ad words just like it listens for "Hey Siri", and I asked how it decides which words to listen for. Each ad campaign has its own keywords, and if you want to personalize, you'd have to listen for all of the words from every campaign, which would be equivalent to listening to everything and would severely degrade performance.
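A toy illustration of why that collapses into listening to everything (the campaign lists here are obviously invented):

```kotlin
// Union the keyword lists of many campaigns and the watch list approaches
// the whole vocabulary, so checking spoken words against it means
// recognizing every word anyway - there's no single fixed "Hey Siri"
// pattern to match cheaply.
fun main() {
    val campaigns = mapOf(
        "travel" to setOf("flight", "hotel", "vacation"),
        "footwear" to setOf("sneakers", "running", "trail"),
        "finance" to setOf("mortgage", "refinance", "loan")
        // ...and thousands more campaigns in reality
    )
    val watchList = campaigns.values.flatten().toSet()

    val transcript = listOf("we", "should", "book", "a", "flight", "soon")
    val hits = transcript.filter { it in watchList }
    println("Watch list size: ${watchList.size}, hits: $hits")
}
```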
Reread my response to you. It's not a live stream; it's held locally as tokens and then uploaded over Wi-Fi while charging. I don't see any sources posted by you. I think we're wasting each other's time.
And the tokens would have to be analyzed from audio, which has to be recorded first. Held temporarily or not, it's mostly the same.
Yeah, there's just more effective methods to get essentially the same data.