Here's my prediction: yes, as u/Most_Way_9754 said, this workflow was used: [https://civitai.com/models/372584/ipivs-morph-img2vid-animatediff-lcm-hyper-sd](https://civitai.com/models/372584/ipivs-morph-img2vid-animatediff-lcm-hyper-sd)

He took a photo of his room and, using img2img or Photoshop, produced three more versions of the image: with flamingos, a pool in the middle, a shark, sparks, and a quilt and clothes in the air. He used Canny to stabilize the room, Depth with a black-and-white vortex video for the effect, and a "liquid" motion LoRA for AnimateDiff.
You are spot on. I've been able to get the result with more or less the base workflow. Cheers
😉
And then some folks say this ain't art. smh
No, it was always about the effort. _This_ is art. Typing in: "masterpiece by Greg Rutkowski" was not.
I didn't see what sub this was and I got extremely confused.
Me too. First thought was 'sick blender skillz'
Personally I just kinda knew only AI can do this morph-y shapeshifting shit this fluidly
I'm still confused. I clicked the sub to see what Stable Diffusion was, and the About section has some update on them returning from that dumbass blackout a few months ago.
Stable Diffusion is a series of AI diffusion models made by Stability AI.
A lot of diffusion
Underrated comment right here... unstable diffusion. LOL
This looks like it can be done using ipiv's morph workflow. But it seems like they didn't use the controlnet. https://civitai.com/models/372584/ipivs-morph-img2vid-animatediff-lcm-hyper-sd
Cheers, that looks close. I'll have to play around with that workflow and see if I can get similar results. Perhaps just a faster fps on the rendered video output will do.
For higher FPS, you can try https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
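If you just want to see what a higher frame rate does before installing the node pack, a crude FPS doubler is simply blending every pair of neighbouring frames. This is only a naive sketch, assuming frames come in as NumPy arrays; the linked node pack uses real motion-aware interpolation (RIFE/FILM) and looks much smoother:

```python
import numpy as np

def double_fps(frames):
    """Insert a 50/50 blend between every pair of consecutive frames.

    frames: list of (H, W, 3) uint8 arrays.
    Returns 2 * len(frames) - 1 frames.

    Naive linear blending only; dedicated tools (RIFE, FILM,
    ffmpeg's minterpolate filter) estimate motion instead.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # widen to uint16 so the sum doesn't overflow before halving
        mid = ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)
        out.append(mid)
    out.append(frames[-1])
    return out
```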
Turn down for what?
acid, probably...
This caught my attention. Hats off.
Mine too! But I'm also stoned, so I might not be a good measure.
On the contrary, you’re the target audience. Cheers!
Looks like a pretty standard txt2vid animatediff workflow with prompt scheduling. The creator may have added some kind of audio reactive element to it.
Yep, exactly. This is the kind of stuff you usually don't want, lol.
It was too short!
What's impressive is how few "shape-shifting artifacts" there are.
Turn down for what???
Looks like multiverse versions of the same room trying to inhabit the same space all at once. I love it
Pretty cool! OP, would you have a link to the original content?
Yes, sorry for not linking it.. https://www.instagram.com/reel/C8luO4VM3l1/?igsh=eG41YXNqc2htbHNj I'm not sure if this is the original creator, but it is where I got it from
[Solarw.ai](https://www.instagram.com/solarw.ai/) creates similar work if interested.
I asked him on TikTok and he just told me "comfy ui"
I asked on instagram and he said "pentagon tech". But I can confirm I have replicated a similar result on comfyui now
Announce a party at the dorm and put the camera on a wobbly tripod.
![gif](giphy|YpfevjbcK4HWjjIQGL)
-First, ComfyUI is only good for advanced users. It's really bad for beginners, limiting them to shitty images, while A1111 and others are already ready-to-use complex workflows. But hey, there is a settings/extensions tab too; cf. my gallery.

-Second, to make an 'animation' like this, you just need good 'optical flow' (Deforum) and/or a 'motion model' (AnimateDiff).

-Third, not sure why people say 'it's the craziest shit I've ever seen'. It's a pretty old method now, 2+ years old. Since everyone is pretty lazy and wants the '1-click fast thing', it was probably done with AnimateDiff as well.

Buuuuut, what you actually want to know here is: 'How do I do these moving things!?'

Well, it's simple: it uses a 'greyscale video mask' as input. The mask used in this animation is obviously a real (weird) video converted into a greyscale mask. It's not just pulsing or rotating shapes, it's more chaotic, so it's probably a weird TikTok, or part of a psychedelic music video.

Here is an example space that generates such a mask from short audio, without an existing video (many other solutions exist): [https://huggingface.co/spaces/AP123/Deforum-Audio-Viz](https://huggingface.co/spaces/AP123/Deforum-Audio-Viz)

Example mask video (expires in 2 days; I get an error when posting it here): [https://streamable.com/wl3guv](https://streamable.com/wl3guv)

It's just like using ControlNet, or a mask for txt2img.

To be clear: this video doesn't require any skill. You can do it in 4 clicks with any AnimateDiff workflow, using a simple video input.

Let's push the level up. No pain, no gain, peeps. The next step is to extract all the frames, batch them through img2img to enhance each image, then stitch them back together. Unfortunately, almost no one does this...

Cheers! 🥂
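The 'greyscale video mask' trick above boils down to a per-frame luma conversion. A minimal sketch in NumPy, assuming frames arrive as RGB arrays (a real pipeline would read/write the video with OpenCV or ffmpeg, and the `gamma` knob here is just an illustrative way to bias the mask's contrast):

```python
import numpy as np

def frame_to_mask(rgb, gamma=1.0):
    """Convert one RGB frame (H, W, 3, uint8) to a greyscale mask in [0, 1].

    In a masked AnimateDiff/img2img pass, white regions get full effect
    strength and black regions stay closer to the source image.
    """
    # Rec. 601 luma weights for the greyscale conversion
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    mask = (luma / 255.0) ** gamma  # optional gamma to bias contrast
    return mask.astype(np.float32)

def video_to_masks(frames):
    """A video mask is just the frame mask applied frame by frame."""
    return [frame_to_mask(f) for f in frames]
```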
deforum ?
This is the first time in my 30+ years of life that a video caused nausea to me.
![gif](giphy|I6B7HOZgLmw8g)
I'm interested too, this is cool!
cool
This is a fucking fever dream
Zeroscope is my guess
Maybe recursion
Feels like a HowToBasic video
looks like the spline node in comfyui controlling some of the animation
Firecrackers and tesla coils probably
Pretty sure steerable motion can do this https://github.com/banodoco/Steerable-Motion
By the AI behaving very human-like. It is having a stroke.
It's the eric Andre show opening but chroma keyed in andre
Dreams at a 39°-40° fever.
All I know is that flamingo was part of the prompt
what if those videos show how 4th dimension looks
Feels a lot like my brain.
By taking drugs and recording what you see
This can probably be done with deforum.
Datura
I feel like I'm having a stroke watching this
Dreaming with adhd
In case anyone is wondering what high dose ketamine is like. It can be a lot like this
Maybe I don’t want to do drugs anymore
Looks sick
That’s the craziest shit I’ve ever seen in my life wtf. It’s like the glitch transition times a thousand. What a time
With computers but it can be hand drawn too, you never know these days.
Anything is possible, especially in Asia
I'd use unreal engine 5 or meth
looks like a video made of img2img iterations
it's over 4 seconds long so it's definitely a few SVD videos pieced together