this post was submitted on 23 Feb 2024
126 points (92.6% liked)


Tyler Perry Puts $800M Studio Expansion On Hold After Seeing OpenAI's Sora: "Jobs Are Going to Be Lost" — Tyler Perry is raising the alarm about the impact of OpenAI's Sora on Hollywood.

[–] [email protected] 9 points 8 months ago* (last edited 8 months ago) (3 children)

Now say you made that animation with Sora. You have no manipulable assets, just a set of generated frames in which the furry guy looks in the wrong direction.

"Sora, regenerate $Scene153 with $Character looking at $OtherCharacter. Same Style."

Or "Sora, regenerate $Scene153 from time mark X to time mark Y with $Character looking at $OtherCharacter. Same style."

It's a new model: you won't work with frames anymore, you'll work with scenes, and when the tools get a bit smarter you'll be working with scene layers.

"Sora, regenerate $Scene153 with $Character in Layer1 looking at $OtherCharacter in Layer2. Same Style, both layers."

I give it 36 months or less before that's the norm.

[–] [email protected] 6 points 8 months ago

I agree; I don't think people realise how early into this tech we are. There are going to be huge leaps over the next few years.

[–] [email protected] 2 points 8 months ago

This seems like a fundamental misunderstanding of how generative AI works. To accomplish what you're describing you'd need:

  • An instance of generative AI running for each asset.
  • An enclosing instance of generative AI running for each scene.
  • A means for each AI instance to recreate exactly the same asset, tweaked in precisely the manner requested, and then immediately reincorporate that change into its model for subsequent generation.
  • A coordinating AI instance to keep it all working together, performing actions such as mediating asset collisions.

The whole system would need to be able to rewind to specific trouble spots, correct them, and still generate everything that comes after unchanged. We're talking orders of magnitude more complexity and difficulty.
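To make that difficulty concrete, here is a rough sketch of the guarantee such a system would have to provide. Everything in it is hypothetical and grossly simplified: current diffusion-style models offer nothing like a per-asset instance with this determinism contract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetModel:
    """Stand-in for a per-asset generative instance (hypothetical)."""
    name: str
    seed: int  # the same seed must reproduce the identical asset every time

    def generate(self, tweak: str = "") -> str:
        # A real generative model gives no such guarantee: changing one
        # requested attribute can change everything else about the output.
        return f"{self.name}[seed={self.seed}]" + (f"+{tweak}" if tweak else "")

def regenerate_scene(assets: list[AssetModel], edited: str, tweak: str) -> list[str]:
    """Re-render a scene with one asset tweaked; every untouched asset (and
    everything downstream that depends on it) must come out bit-identical."""
    return [a.generate(tweak if a.name == edited else "") for a in assets]

assets = [AssetModel("FurryGuy", 41), AssetModel("OtherCharacter", 7)]
before = regenerate_scene(assets, edited="", tweak="")
after = regenerate_scene(assets, edited="FurryGuy", tweak="look-left")
# Requirement: only the edited asset differs between the two renders.
```

The sketch trivially satisfies the requirement because it is deterministic by construction; the argument above is that nothing in today's generative pipelines provides that property, which is exactly the hard part.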

And in the meantime, artists creating 3D assets the regular way would suddenly look a lot less expensive and a lot less difficult.

If all you have is a hammer, everything looks like a nail. Right now, generative AI is everyone's really attractive hammer. But I don't see it working here in 36 months. Or 48. Or even 60.

The first 90% is easy. The last 10% is really fucking hard.

[–] [email protected] 2 points 8 months ago

Or just "take the frame and replace the head with the same face pointed a different way".