  • Anyone have any idea where to start on creating something like the simulated, seemingly self-generating, fractal-esque animations on the characters’ faces and bodies? The use of technology in this film is wild. I have no clue where to start with using AI to create moving images. The most I’ve ever done is create stills with DALL·E. Thanks ❤️

  • Jun 2
    ·
    1 reply

    Check out ComfyUI.

    Haven’t seen the movie, but going off what you said I imagine they use facial detection of some sort.

    Is there a clip I can watch to see?

  • Jun 3
    ·
    1 reply
    nocomment

    Unfortunately it’s not really shown in the trailers:
    https://m.youtube.com/watch?v=CpLTGFgKWE0&pp=ygULYWdncm8gZHIxZnQ%3D
    The best way I can explain it: if you see the gear-like/mechanical patterning under some of the characters’ skin or faces, e.g. on the woman’s body around 1:30, those patterns were actively moving and fractalizing, coming in and out of clarity, self-multiplying, shifting around.

    Will defs check out ComfyUI, tysm for the rec

  • Jun 3
    ·
    1 reply
    awawh

    Ahh ok, it reminds me of Google DeepDream. I’m not sure what that’s called specifically, but I’m sure there’s a term for it.

  • Jun 3
    ·
    1 reply
    nocomment

    fireee
    Another friend of mine told me to check out Runway ML, so adding that to the list too.

  • Jun 3
    ·
    2 replies
    awawh

    Cloud Services:

    RunwayML Gen-2
    PikaLabs
    OpenAI Sora (closed beta; hard to access, long waitlist)
    Domo AI

    Local/Open Source Services:

    Stable Diffusion Video
    Deforum
    EasyAnimate
    T2V-Turbo
    VideoCrafter2

    Local stuff usually requires good hardware: a 10 GB+ RTX 20-series or newer GPU. Deforum only needs a 6 GB GPU, but the others need a lot more VRAM from my testing. You can always use a cloud GPU with these if you don't have the hardware, but you have to pay; you can find free trials, though. Some of these models run in ComfyUI or Auto1111. You can watch YouTube videos to see the workflow of both; they're both Stable Diffusion installs, just different UI layouts. Some of these have standalone UIs too.

    If I had to guess what Harmony used by looking at the clips online (still haven't seen the movie), it looks like EasyAnimate or Deforum, using Video2Video, maybe some Image2Video. It looks like he's using semi-transparent alpha channels; you'd need something like After Effects to do this. You'd have the video on top and then an alpha channel with the AI movement, giving it this trippy look where only part of the scene is animated/moving. You'd need to play with the opacity and the diffusion noise you're injecting, but it doesn't seem too hard. Honestly a pretty cool idea.
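The masked-compositing idea described above can be sketched in a few lines of NumPy. This is purely an illustration, not the film's actual pipeline: the `composite` function, array shapes, and `opacity` value are made up for the example, and the all-white `stylized` frame is a stand-in for what would really be a diffusion-model output of the same frame.

```python
import numpy as np

def composite(original, stylized, mask, opacity=0.6):
    """Blend an AI-stylized frame over the original through a soft mask.

    original, stylized: float arrays in [0, 1], shape (H, W, 3)
    mask: float array in [0, 1], shape (H, W) -- 1.0 where the AI layer shows
    opacity: global strength of the AI layer, like layer opacity in After Effects
    """
    alpha = (mask * opacity)[..., None]  # per-pixel blend weight, shape (H, W, 1)
    return original * (1.0 - alpha) + stylized * alpha

# Toy example: 4x4 frames, with the AI layer visible only in the top-left corner.
h, w = 4, 4
original = np.zeros((h, w, 3))   # stand-in for the live-action frame
stylized = np.ones((h, w, 3))    # stand-in for the diffusion output
mask = np.zeros((h, w))
mask[:2, :2] = 1.0               # "patterning" region, e.g. skin detected by a matte

out = composite(original, stylized, mask, opacity=0.5)
print(out[0, 0, 0])  # 0.5 -> blended where the mask is on
print(out[3, 3, 0])  # 0.0 -> untouched elsewhere
```

Running this per frame, with the mask tracking faces/bodies and fresh noise injected into each stylized frame, would give the "only part of the scene is animated" look described above.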

  • Jun 4
    neon

    🐐❤️ thanks for the thoughtful response

  • Jun 4
    ·
    1 reply
    neon

    And def check out the film; see it in theaters if you can. Intense sensory experience, and the use of tech in the film in general is incredible.

  • OP, is AGGRO DR1FT a good movie? Sorry for not helping you.

  • awawh

    I’ve been meaning to see it, just kinda worried about the flashing. I should be okay though.

    Seems amazing though, huge fan of his.

    Also no problem, been messing with image/video diffusion for almost 2 years and video editing for over a decade. Always happy to answer questions.

  • Crazy how good it's getting in such a short period of time.

    Can you even tell these are AI?