this post was submitted on 15 Feb 2024
428 points (95.3% liked)

[–] [email protected] 7 points 9 months ago (4 children)

The example videos are both impressive (insofar as they exist) and dreadful: two-legged horses everywhere, lots of random half-human-half-horse hybrids, walls constantly changing materials, etc.

It really feels like all this does is generate 60 DALL-E images per second.

[–] [email protected] 9 points 9 months ago

Given the limitations visual AI tends to have, this is still better than anything I've seen. Objects and subjects seem pretty stable from frame to frame, even if those objects are quite nightmarish.

I think "Will Smith eating spaghetti" was only like a year ago.

[–] [email protected] 4 points 9 months ago

This would work very well with a text adventure game, though. A lot of them are already set in fantasy worlds with cosmic horrors everywhere, so this would be a good fit for animating what's happening in the game.

[–] [email protected] 2 points 9 months ago

I mean, it took a couple of months for AI to mostly figure out that hand situation. Video is, I'd assume, a different beast, but I can't imagine it won't improve almost as fast.

[–] [email protected] 1 points 9 months ago

It will get better, but in the meantime you just manually tell the AI to try again or adjust your prompt. I don't get the negativity about it not being perfect right off the bat. When the magic wand tool originally came out, it left tons of jagged edges. That didn't make it useless; it just meant it did a good chunk of the work for you and you manually got it the rest of the way there. With Stable Diffusion, if you get a bad hand, you just inpaint and regenerate until it's fixed. If you don't get the composition you want, you generate parts of the scene separately, combine them in an image editor, then use that as a base image to generate on top of.
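The inpaint-and-fix step described above boils down to a masked composite: keep the original everywhere except the masked region, which is filled from a fresh generation. Here is a toy, stdlib-only sketch of just that compositing step (the `composite` helper and the pixel values are made up for illustration; a real inpainting pipeline like Stable Diffusion's does this on latents with an actual model, not raw pixel lists):

```python
def composite(original, regenerated, mask):
    """Blend two same-sized 'images' (2D lists of pixel values):
    keep `original` where mask == 0, take `regenerated` where mask == 1."""
    return [
        [reg if m else orig for orig, reg, m in zip(o_row, r_row, m_row)]
        for o_row, r_row, m_row in zip(original, regenerated, mask)
    ]

# Toy 3x3 images: only the broken centre pixel gets replaced.
original    = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]   # 9 = the "bad hand"
regenerated = [[5, 5, 5], [5, 2, 5], [5, 5, 5]]   # a fresh generation
mask        = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # redo only the centre

fixed = composite(original, regenerated, mask)
# fixed keeps the original everywhere except the masked centre pixel.
```

In practice you would loop this (regenerate, inspect, re-mask) until the patched region looks right, which is exactly the manual retry workflow described above.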

They're showing you the raw output to show off the capabilities of the base model. In practice you would review the output and manually fix anything that's broken. Sure, you'll get people too lazy to even do that, but non-lazy people will be able to do really impressive things with this even in its current state.