OpenAI introduces Sora, its text-to-video AI model
  • sleepmode@lemmy.world · 9 months ago

    After seeing the horrific stuff my demented friends have made DALL-E barf out, I’m excited and afraid at the same time.

    • Carighan Maconar@lemmy.world · 9 months ago

      The example videos are both impressive (insofar as they exist) and dreadful: two-legged horses everywhere, lots of random half-human-half-horse hybrids, walls that change materials constantly, and so on.

      It really feels like all this does is generate 60 DALL-E images per second.

      • Natanael@slrpnk.net · 9 months ago

        This would work very well with a text adventure game, though. A lot of them are already set in fantasy worlds with cosmic horrors everywhere, so it would be a good fit for animating what’s happening in the game.

      • TheHarpyEagle@lemmy.world · 9 months ago

        I mean, it took a couple of months for AI to mostly figure out the hand situation. Video is, I’d assume, a different beast, but I can’t imagine it won’t improve almost as fast.

      • fidodo@lemmy.world · 9 months ago

        It will get better, but in the meantime you can just tell the AI to try again or adjust your prompt. I don’t get the negativity about it not being perfect right off the bat. When the magic wand tool originally came out, it had tons of jagged edges. That didn’t make it useless; it just meant it did a good chunk of the work for you and you got it the rest of the way there manually. With Stable Diffusion, if I get a bad hand I just inpaint and regenerate that region until it’s fixed. If I don’t get the composition I want, I generate parts of the scene separately, combine them in an image editor, then use that composite as a base image to generate on top of.
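
        Roughly, that loop looks like this. It’s just a sketch using Hugging Face’s diffusers library; the checkpoints, file names, and prompts are placeholders I made up, not anything specific to Sora or my exact setup:

        ```python
        import torch
        from diffusers import (
            StableDiffusionImg2ImgPipeline,
            StableDiffusionInpaintPipeline,
        )
        from PIL import Image

        device = "cuda" if torch.cuda.is_available() else "cpu"

        # 1) Bad hand: mask the broken region and regenerate only that patch.
        inpaint = StableDiffusionInpaintPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-inpainting"
        ).to(device)
        image = Image.open("generation.png").convert("RGB")  # the flawed output
        mask = Image.open("hand_mask.png").convert("RGB")    # white = area to regenerate
        fixed = inpaint(
            prompt="a detailed, anatomically correct hand",
            image=image,
            mask_image=mask,
        ).images[0]
        fixed.save("generation_fixed.png")

        # 2) Bad composition: assemble a rough collage in an image editor,
        #    then let img2img repaint on top of it as a base image.
        img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1"
        ).to(device)
        collage = Image.open("manual_collage.png").convert("RGB")
        final = img2img(
            prompt="a knight facing a dragon at sunset, oil painting",
            image=collage,
            strength=0.6,  # lower = stays closer to the manual layout
        ).images[0]
        final.save("final_composition.png")
        ```

        You just rerun step 1 until the patch looks right; the strength knob in step 2 controls how far the model may wander from your collage.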

        They’re showing the raw output to demonstrate the capabilities of the base model. In practice you’d review the output and manually fix anything that’s broken. Sure, you’ll get people too lazy to even do that, but non-lazy people will be able to do really impressive things with this even in its current state.