this post was submitted on 22 Feb 2024
488 points (96.2% liked)

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[–] [email protected] 1 points 8 months ago (1 children)

Thank you very much. The confirmation bias is crazy - one guy is literally trying to tell me that AI generators don't have knowledge because, when you ask one for a picture of racially diverse Nazis, you get a picture of racially diverse Nazis. The facts don't matter as long as you get to be angry about stupid AIs.

It's hard to tell a difference between these people and Trump supporters sometimes.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (2 children)

It's hard to tell a difference between these people and Trump supporters sometimes.

To me it feels a lot like when I was arguing against antivaxxers.

The same pattern of linking and explaining research, only to have it dismissed because it doesn't line up with their gut feelings and with whatever they read while "doing their own research" guided by that very confirmation bias.

The field is moving faster than any I've seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

A lot of outstanding assumptions have been proven wrong.

It's a bit like physics at the turn of the 20th century, when long-standing assumptions turned out to be wrong over a very short period and the whole field was turned upside down.

[–] [email protected] 2 points 8 months ago (1 children)

Exactly. They have very strong feelings that they are right, and won't be moved - not by arguments, research, evidence or anything else.

Just look at the guy telling me "they can't reason!". I asked whether they'd accept being wrong if I provided a counterexample, and they literally can't say yes. Their worldview won't allow it. If I were sure that no counterexamples to my point existed, I'd gladly say "yes, a counterexample would sway me".

[–] [email protected] -1 points 8 months ago

Yall actually have any research to share or just gonna talk about it?

[–] [email protected] 2 points 8 months ago (1 children)

Just so you know, I can't see that comment from your link.

[–] [email protected] 1 points 8 months ago (1 children)

Weird, works fine for me. It's their response to the comment in this thread with this content:

I think you might be confusing intelligence with memory. Memory is compressed knowledge, intelligence is the ability to decompress and interpret that knowledge.

[–] [email protected] 2 points 8 months ago