Natanael

joined 1 year ago
[–] [email protected] 1 points 7 months ago (1 children)

Hash based message authentication code
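As a quick illustration, this is the standard construction using Python's stdlib `hmac` module (the key and message are made-up values):

```python
import hashlib
import hmac

# Hypothetical shared secret and message, just for illustration
key = b"shared-secret-key"
message = b"some message to authenticate"

# The sender computes a tag over the message with the shared key
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time;
# a match shows the message is intact and came from a key holder
valid = hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest()
)
```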

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

It varies; there are definitely generative pieces involved, but they try not to make it blatant

If we're talking evidence in court, then practically speaking it matters more whether the photographer themselves can testify to how accurate they think it is and how well it corresponds to what they saw. Any significantly AI-edited photo effectively becomes evidence as strong as a diary entry written by a person on the scene: it backs up their testimony to a certain degree by checking the witness's consistency over time, instead of being trusted directly. The photo can lie just as much as the diary entry can, so it's a test of credibility instead.

If you use face swap then those photos are likely nearly unusable. Editing for colors and contrast, etc, still usable. Upscaling depends entirely on what the testimony is about. Identifying a person that's just a pixelated blob? Nope, won't do. Same with verifying what a scene looked like, such as identifying very pixelated objects, not OK. But upscaling a clear photo which you just wanted to be larger, where the photographer can attest to who the subject is? Still usable.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (3 children)

You're talking past each other. Some YubiKeys have PGP applets for asymmetric encryption (public/private keypairs), while HMAC is a symmetric single-key algorithm where the YubiKey sends a resulting value to the PC/phone, which is used as one of the key derivation inputs (even though the YubiKey's root key remains secret).
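A rough sketch of how that challenge-response flow feeds key derivation (all values here are hypothetical, and the final hash stands in for the proper KDF that real software would use):

```python
import hashlib
import hmac

# Hypothetical 20-byte root key; on a real YubiKey this never leaves the device
device_root_key = b"\x01" * 20
challenge = b"database-specific-challenge"

# What an HMAC-SHA1 challenge-response slot computes internally:
# only the response crosses the USB wire, never the root key
response = hmac.new(device_root_key, challenge, hashlib.sha1).digest()

# The host mixes the response into its key derivation alongside the
# password (sketch only; real software feeds these into a proper KDF)
password = b"master password"
derived_key = hashlib.sha256(password + response).digest()
```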

[–] [email protected] 9 points 7 months ago* (last edited 7 months ago)

Android has password managers with keyboard app integration so you can paste both fields from the keyboard itself

I use Keepass2Android and its own keyboard app for this. I switch the active keyboard app when the login field shows up to paste, then switch back to my normal keyboard after.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

Depends on implementation. If done properly, and if they don't try to upscale and deblur too much, then that kind of interpolation between multiple frames can be useful to extract more detail. If it's a moving subject then this type of zoom can create false results, because the algorithm can't tell the difference and will treat the motion as an optical artifact. For stationary subjects and a steady camera it can be useful.

[–] [email protected] 1 points 7 months ago

There's different types of computational photography. The ones which ensure enough sensor data is captured to then interpolate in a way which accurately simulates a different camera/lighting setup are in a way "more realistic" than the ones which heavily rely on complex algorithms to do stuff like deblurring. My point is essentially that the calculations done have to be founded in physics rather than in just trying to produce something artistic.

[–] [email protected] 1 points 7 months ago (2 children)

When it generates additional data instead of just interpolating captured data

[–] [email protected] 1 points 7 months ago

I think there's a possibility for long-form video of stable scenes to use ML for higher compression ratios, by deriving a video-specific model of the objects in the frame and then describing their movements (essentially reducing the actual frames to wireframe models instead of image frames, then painting them in from the model).

But that's a very specific thing that probably only works well for certain types of video content (think animated stuff)

[–] [email protected] 8 points 7 months ago* (last edited 7 months ago)

License plates are an interesting case, because with a known set of visual symbols (known fonts used by approved plate issuers) you can often accurately deblur even very blurry text (not with AI algorithms, but by modeling the blur of the cameras and the unique blur gradients this produces for each letter). It does require a certain minimum pixel resolution of the letters to guarantee unambiguity, though.
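A toy version of that idea, with made-up 1-D "glyphs" standing in for a real plate font and a simple convolution standing in for the modeled camera blur: instead of trying to invert the blur, you blur each known character the same way and see which one matches the observation.

```python
import numpy as np

# Hypothetical glyph strokes for a known plate font; a real system would
# render the issuer's actual font at the right scale and in 2-D
GLYPHS = {
    "A": np.array([0, 1, 1, 0, 1, 1, 1, 1], dtype=float),
    "B": np.array([1, 1, 1, 0, 1, 1, 1, 0], dtype=float),
    "8": np.array([1, 1, 1, 1, 1, 1, 1, 1], dtype=float),
}

def apply_blur(signal, kernel):
    """Simulate the camera's blur with a known point-spread function."""
    return np.convolve(signal, kernel, mode="same")

def identify(blurred_observation, kernel):
    """Blur every candidate glyph the same way and pick the closest match."""
    best, best_err = None, float("inf")
    for char, glyph in GLYPHS.items():
        err = np.sum((apply_blur(glyph, kernel) - blurred_observation) ** 2)
        if err < best_err:
            best, best_err = char, err
    return best

kernel = np.array([0.25, 0.5, 0.25])        # assumed blur model
observed = apply_blur(GLYPHS["B"], kernel)  # a blurry capture of "B"
print(identify(observed, kernel))           # prints "B" despite the blur
```

The ambiguity point shows up here too: if two glyphs blurred to nearly identical signals at the observed resolution, the match would no longer be unique.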

[–] [email protected] 9 points 7 months ago (2 children)

But if you don't do that then the ML engine doesn't have the introspective capability to realize it failed to recreate an image

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

There's a specific type of digital zoom which captures multiple frames and takes advantage of motion between frames (plus inertial sensor movement data) to interpolate to get higher detail. This is rather limited because you need a lot of sharp successive frames just to get a solid 2-3x resolution with minimal extra noise.
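A minimal 1-D sketch of that shift-and-add idea, assuming the sub-pixel offsets between frames are already known (from inertial data or frame registration): each low-resolution frame samples the scene at a different offset, so combining them recovers the finer grid without inventing any data.

```python
import numpy as np

# Hypothetical 1-D scene at "true" high resolution
scene = np.sin(np.linspace(0, 4 * np.pi, 64))

# Each frame is the scene downsampled 2x at a different sub-pixel offset,
# mimicking the tiny shifts from hand shake between successive captures
frame_a = scene[0::2]   # offset 0
frame_b = scene[1::2]   # offset of half a low-res pixel

# Shift-and-add: knowing the offsets, interleave the frames to fill in
# the full-resolution sampling grid
reconstructed = np.empty(64)
reconstructed[0::2] = frame_a
reconstructed[1::2] = frame_b

print(np.allclose(reconstructed, scene))  # True: detail recovered, nothing invented
```

Real implementations work in 2-D with fractional offsets, noise, and registration error, which is why the practical gain tops out around the 2-3x mentioned above.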

[–] [email protected] 6 points 7 months ago

This is just smarter post processing, like better noise cancelation, error correction, interpolation, etc.

But ML tools extrapolate rather than interpolate which adds things that weren't there
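A small numeric illustration of that distinction, using made-up sample data: interpolation stays between captured samples, while extrapolation asks a fitted model to produce values where nothing was captured at all.

```python
import numpy as np

# Samples of a "real" signal: y = x^2 at four known points
x = np.array([0.0, 1.0, 2.0, 3.0])
y = x ** 2

# Interpolation: estimating between captured samples stays anchored to the data
mid = np.interp(1.5, x, y)        # lands between y[1]=1 and y[2]=4

# Extrapolation: a fitted model predicts where there are no samples at all,
# so the output is the model's construction rather than captured data
coeffs = np.polyfit(x, y, 2)
beyond = np.polyval(coeffs, 5.0)  # nothing was ever measured near x=5
```

Here the quadratic model happens to match the true signal, so the extrapolation comes out right; an ML upscaler is in the same position, except its "model" is learned priors about what images usually look like.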
