this post was submitted on 23 Aug 2024
Technology
I work at a newspaper as both a writer and photographer. I deal with images all day.
Photo manipulation has been around as long as the medium itself, and throughout the decades people have worried about the veracity of images. When Photoshop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.
So, as a professional, am I worried? Not really. Because at the end of the day, it all comes down to ‘trust and verify when possible’. We generally receive our images from people who are wholly reliable. They have no reason to deceive us and know that burning that bridge will hurt their organisation and career. It’s not worth it.
If someone were to send us an image that’s ‘too interesting’, we’d obviously try to verify it through other sources. If a bunch of people photographed the same incident from different angles, clearly it’s real. If we can’t verify it, well, we either trust the source and run it, or we don’t.
I don't think you can assume this anymore.
Yeah, photo editing software and AI can be used to create images from different points of view, mimicking the styles and quirks of different equipment and keeping everything consistent from one perspective to the next. Unless we get some way for software to identify fabricated images, say a cryptographic fingerprint embedded at capture, it won't be long until they're completely indiscernible from the genuine article. At that point you'd have to prove a negative: that the person who claims to have taken the photo could not have taken it. And that, as we know, is far harder than current discretionary methods.
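The 'fingerprint' idea is roughly what provenance standards like C2PA aim at: the capture device signs the image bytes, and anyone downstream can check whether those bytes were altered. Here's a minimal sketch of that flow, using a stdlib HMAC as a stand-in for the real asymmetric signatures such systems use (the key and image bytes below are made up for illustration):

```python
# Sketch of capture-time image signing and later verification.
# Real provenance systems (e.g. C2PA) use public-key signatures and
# embed the manifest in the file; hashlib/hmac here just show the idea.
import hashlib
import hmac

CAMERA_SECRET = b"per-device signing key"  # hypothetical; real devices hold a private key

def sign_image(image_bytes: bytes) -> str:
    """Camera-side: produce a provenance tag over the captured bytes."""
    return hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Editor-side: check the bytes still match the tag."""
    expected = hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...raw sensor data..."  # placeholder image bytes
tag = sign_image(original)
print(verify_image(original, tag))                # untouched image verifies
print(verify_image(original + b"edit", tag))      # any modification breaks the tag
```

The catch, of course, is key management: the scheme only proves the bytes haven't changed since signing, not that the scene in front of the lens was real.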
The point I'm making isn't really about the ability to fake specific angles or the tech side of it. It's about levels of trust and independent sources.
It's certainly possible for people to put up some fake accounts and tweet fake images from separate angles. But I'm not trusting random accounts on Twitter for that. We look at sources like AP, Reuters, AFP... if they all have the same news images from different angles, it's trustworthy enough for me. On a smaller scale, we look at people and sources we trust and have vetted personally. People with longstanding relationships. It really does boil down to a 'circle of trust': if I don't know a particular photographer, I'll talk to someone who can vouch for them based on past experiences.
And if all else fails and it's just too juicy not to run? We'd slap a big ol' 'this image has not been verified' on it. Which we've never had to do so far, because we're careful with our sources.
Sorry, but if traditional news media loses much more ground to "alternative facts" land and keeps declining against the new media, I have zero faith they won't just give in and go along with it. I mean, if they're going to fail anyway, why not at least see if they can grab a slice of that pie?