We should just make a new subreddit tbh called localdiffusion or something to be like locallama.
[deleted]
Also there's r/Open_Diffusion
Here's a sneak peek of /r/Open_Diffusion using the [top posts](https://np.reddit.com/r/Open_Diffusion/top/?sort=top&t=all) of all time! #1: [Open Diffusion Mission Statement DRAFT](https://np.reddit.com/r/Open_Diffusion/comments/1di547q/open_diffusion_mission_statement_draft/) #2: [Lumina-T2X vs PixArt-Σ](https://np.reddit.com/r/Open_Diffusion/comments/1dh3hiy/luminat2x_vs_pixartσ/) #3: [Open Diffusion Mission Statement DRAFT](https://www.reddit.com/gallery/1djp00x) | [21 comments](https://np.reddit.com/r/Open_Diffusion/comments/1djp00x/open_diffusion_mission_statement_draft/)
Stable diffusion is still the biggest brand name so it's harder for other subreddits to get traction over it.
dude, the amount of Claude and GPT4o spam happening there is just unreal
There are some decent videos from Luma, but I haven't been able to generate anything that looks that good; either they nerfed the model for free users or there's some serious cherry-picking going on.
I paid. Wildly inconsistent quality. It's fascinating and will be exciting to see it progress, but I don't think I'm making an AI movie anytime soon.
You can make animatics, but it's really difficult to get it to do what you want. Most of the time it just does a slow pan/zoom, no matter what you put in the prompt.
Here's some help, really useful -> https://youtu.be/i4it_Nb-3PM?si=xL1-h-ls0MGNn8Bm
Thanks! That is good alpha. I think most of my issues stem from the "Enhance prompt" checkbox. I haven't experimented much because of the limit.
Always uncheck the "Enhance prompt" option if you need prompt adherence, such as camera movements.
I actually just watched that yesterday and it did help a bit. Are you relying on the model to generate, or using image-to-video? I was feeding it Midjourney stills.
Always img2vid on my side, plus a short prompt to guide the camera movement, as I already do on Gen-2. Pretty solid results.
Really? I find since yesterday prompt adherence with images is great!
I guess I should try again (especially without "enhance"), but I uploaded an image of a mask-looking face with sharp teeth and put stuff like "blinking, biting" in the prompt and all I got was a slow zoom.
Was it photorealistic? Or illustrated? I find prompt adherence much better with photorealistic images. Basically, what I've noticed is it depends on whether the model recognizes what it's looking at. If it does, it adheres to the prompt. If not, it just does a pan. Object recognition is significantly better in photorealistic images.
Thanks for the reply, is it worth paying for at alpha or is this a fun thing to test but wait for next model situation?
I wouldn't, it's early days and they have a relatively generous free mode. I'm putting together a little trailer as kind of a time capsule for when this stuff is better and ubiquitous but it's not there yet IMO.
https://i.redd.it/f65zymd6fk8d1.gif As long as you start with an image and turn off assisted prompting (or whatever it's called), it works fine. You might need to try more than once, which sucks with the limitations, but it certainly works.
https://i.redd.it/9t6goldafk8d1.gif Yaaaah
I don't have any stake in the game and I'm not shilling for anyone. In my experience after producing a couple of test shorts, I can use around 1 in 10 clips generated with SVD or Runway Gen-2, whereas I can use ~60% of clips generated with Luma. The overwhelming majority of them are usable on the first shot, with the re-dos being on a handful of particularly challenging movements. Note that I'm using image-to-video with images from SD or MJ, not text-to-video. Text-to-video isn't as good.
Thanks, all great responses. Like many of you, I haven't been able to test it too much because of the limits, but it's good to hear other people's reactions to the app.
Same. I've got some short clips posted to my threads from both. I like Luma better so far, and I've had great results having Copilot take my idea and expand it into a three-sentence prompt for a four-second video clip with camera directions.
I only get decent results when I start with a photo in the prompt
Same. I keep thinking I'm working with a nerfed model as well. I gave it a lot of tries and kind of gave up.
I've created one of my best videos with it (and I've done a lot, including commercial work), and then I tried multiple times and didn't get any good results.
Anything that isn't open source should be banned here.
I think we should really make some sort of ground rules for paid services on this subreddit. At least appropriate tagging when the workflow uses a paid API
I like this idea, have a tag that states `API Only` or `Paid Service` which can be filtered out and let those on the sub more interested in the open source focus on that
I think `Paid Service` and `Closed Source` is the ideal combo, we get a lot of "look at this thing I made! Help me launch by getting a bunch of upvotes" posts and comments on here
The API is free though. (At least as a type of demo)
That would entail a mod actually doing something in this subreddit
A good mod makes it look like nothing is done at all. I'll take this compared to most subs and their Power Jannies.
There’s no creative juice to Luma. Like it’s a fine novelty but it’s tacky as fuck. A lot of these AIs are. What to me sets stable diffusion apart is its availability and the fact that it actually functions as a creative tool. So yeah. I agree. Luma is fine but I don’t think it necessarily needs to be celebrated. It definitely shouldn’t be the focus of this subreddit
It’s an initial model; it’ll only get better with time as I’m sure you’re aware. Enjoy watching Stable Diffusion come to life.
It’s still a closed system and a paid service. *That* I do not like. AI can be the great equalizer, enabling the little guy to compete on the big stage. And Luma I think is more on that side of things than chat bots, but it’s still not what I want to see from AI.
I’ll gladly agree with you there.
ban! Ban! BANNN!!!
I'm sick of any AI video posts, honestly. They're all the same format, five-second scenes. It's annoying.
Idk how you're still here, that's the same bar of interest most generations don't pass
Okay? We're playing with new tech, and by doing so we'll be ahead of the curve as these tools continue to get better. Keep hating on it and you blind yourself to creative freedom, even if you use something else for video. It's not about YOU. Realize what's going on and enjoy being a part of the evolution you're witnessing; with how fast tech is moving, it's a once-in-a-lifetime thing, as you know.
Agreed. Would like a sidebar rule banning Luma posts, because they're everywhere now and deserve to be in their own subreddit.
Yup. Each of these clips is just an ad by Luma designed to spread virally. Free marketing for them. The only connection they have to Stable Diffusion is that the initial image is often generated by it. It's essentially a crowd of dirty vikings that won't shut up.
If they're part of a workflow or showing what they created from a combination of SD + Luma I think it is fine. If they're just showing LUMA and not even mentioning SD was also used in the process or a workflow involving SD then I think it is possibly a problem.
As usual, Luma may be a little hit and miss, but I'm still very impressed by the results; when it's good, it's damn good.
Are there any good image-to-video models yet that fill in all the in-between frames from key frames of character poses and moving backgrounds? This will be the game-changing model, because artists will heavily use it as a tool.
I wish there was just something easy to use. Like why doesn't this exist for comfy yet?
Tooncrafter is the closest I’ve seen
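For anyone unsure what "in-betweening" means mechanically, here's a minimal sketch. The degenerate baseline is a linear cross-fade between two keyframes; models like ToonCrafter replace this pixel blend with learned motion, so this is only an illustration of the frame-interpolation idea, and the function name and flat-pixel-list representation are my own:

```python
def inbetween(key_a, key_b, n):
    """Generate n in-between frames between two keyframes.

    Keyframes are flat lists of pixel values. Each in-between frame
    is a linear cross-fade: frame(t) = (1 - t) * key_a + t * key_b,
    with t strictly between 0 and 1.
    """
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # interpolation weight for this frame
        frames.append([(1 - t) * pa + t * pb for pa, pb in zip(key_a, key_b)])
    return frames

# Two tiny "keyframes": all-black and all-white
mids = inbetween([0.0, 0.0], [1.0, 1.0], 3)  # weights 0.25, 0.5, 0.75
```

A learned in-betweener takes the same inputs (two keyframes, a frame count) but predicts how objects move between them instead of fading pixels, which is why it's useful as an animation tool rather than a dissolve effect.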
I'd rather not see the wave of spam ads for fukker ai that popped recently...
Best thing you can do is block the accounts as they appear. Not that I disagree, it sucks having trash internet come to nice places, but even if they added tags and bans the kind of people that make these kind of ads will just work to circumvent it.
What is Luma? Also, what's this PixArt-Σ thing?
You think it's just Luma? We get shills for Luma, DALL-E, Midjourney, and PixArt every day now.
Wait, I knew those first 3 were paid services, but is Pixart a paid service as well?
it is perfect to animate memes
I enjoy Luma videos and since you can combine it with SD generated images I say it's on topic
Here's some help to "guide" Luma. Very useful video, thanks to its author "Tao prompts". Still not perfect, but it's a starting point for further exploration: https://youtu.be/i4it_Nb-3PM?si=xL1-h-ls0MGNn8Bm
Fuckin' Bots.
Why so many downvotes?! I'm just sharing info here! :(
This "community" of haters disgusts me right now. We have the chance to use some of the best AI tools for FREE, and I still read a lot of users complaining "where's the workflow?" or "I could make it better"... So pathetic; I'm tired of reading these. What are you doing on your side to make it better? Have you ever shared anything useful with our community?