Yeah as the other person suggested i suspect it's more like "when do these expire?" "does this have mold on it?" "what does this sign say?"
You might get some about "does this match?" but i don't know
The problem is that so many browsers leverage hardware acceleration and offer access to the GPUs. So yes, the browsers could fix the issue, but the underlying cause is the way GPUs handle data that the attack is leveraging. Fixing it would likely involve not using hardware acceleration.
As these patterns are processed by the iGPU, their varying degrees of redundancy cause the lossless compression output to depend on the secret pixel. The data-dependent compression output directly translates to data-dependent DRAM traffic and data-dependent cache occupancy. Consequently, we show that, even under the most passive threat model—where an attacker can only observe coarse-grained redundancy information of a pattern using a coarse-grained timer in the browser and lacks the ability to adaptively select input—individual pixels can be leaked. Our proof-of-concept attack succeeds on a range of devices (including computers, phones) from a variety of hardware vendors with distinct GPU architectures (Intel, AMD, Apple, Nvidia). Surprisingly, our attack also succeeds on discrete GPUs, and we have preliminary results indicating the presence of software-transparent compression on those architectures as well.
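The core principle described above can be seen with any lossless compressor, not just the iGPU hardware the paper targets. The sketch below is an illustration only (it uses zlib in software, not GPU framebuffer compression, and doesn't measure timing like the real attack): the compressed output size depends on how redundant the input is, so observing anything correlated with that size leaks information about the data.

```python
import random
import zlib

# Illustration of the principle behind the attack (NOT the attack itself,
# which observes iGPU hardware compression via DRAM traffic and cache
# occupancy): lossless compression output size depends on input redundancy.
rng = random.Random(0)

redundant = bytes(4096)  # all zero bytes: maximally redundant
incompressible = bytes(rng.randrange(256) for _ in range(4096))  # pseudo-random: little redundancy

small = len(zlib.compress(redundant))
large = len(zlib.compress(incompressible))

# The redundant buffer compresses to a tiny fraction of its size, while
# the random buffer barely shrinks at all -- so compressed size (and the
# memory traffic it causes) is a function of the secret data.
print(small, large)
assert small < large
```

In the paper's setting, an attacker can't read the compressed size directly, but the data-dependent size shows up as data-dependent memory traffic, which a coarse-grained browser timer is enough to detect.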
It sounds distantly similar to some of the canvas issues, where hardware acceleration produces slightly different rendering artifacts, which makes it possible to identify GPUs and fingerprint browsers.
You should 100% lie when you can. You can give every site a different email address, name, birthday, gender, and location and just note all of that in your password manager.
However, there's a lot you just can't control, like other people catching you in their pictures.
Or leave the house 😢
This only sorta works for today, and only if your friends never share images or videos online. With the ever-increasing number of people taking pictures, filming, and posting online, the day is quickly approaching where you could be identified and tracked through other people's content, security & surveillance cameras, etc.
If stores start adopting the in-store tracking used at Walmart and the biometric data collection used at Amazon, social media will be the least of your worries.
Who says there's no innovation in tech companies today? lol
Avatar checks out
I have no idea what their business model is, but this would be a great way to collect more data for training various forms of AI. Arguably without harvesting people's personal data or their creative works.
I also suspect that because it's an assistive tool, it can probably get a fair bit of grant money.
Yes, it's a press release, but I think this is maybe an interesting use of AI to augment the work of volunteers who describe and annotate content for people with vision challenges.
Welcome to the future [of shit]!
I saw some research a while back on giving computers personality traits or having them respond in a more human-like way, and college students found it super creepy. If you watch how people interact with assistants, it's very different from how they interact with humans.
Many sites have had to add a "reveal password" option for people who have complicated passwords but don't use password managers.
It's low risk, but their numbers also come from fairly dated hardware, and this is just a proof of concept. It can almost certainly be sped up significantly.