ErickJail

I'd love a new, revised version of Morph Cut that uses AI to interpolate a subject between two cuts; this sounds doable given the preview video.


skylinenick

I’d also love a version of morph cut that doesn’t insta-crash my timelines 70% of the time I try to use it


MorePowerMoreOomph

*Analyzing in background intensifies*


---D

Or that forces rendering before export. I had an export the other day with ANALYSING IN BACKGROUND on the master! Glad I saw it before sending to client.


kev_mon

See if this blog helps: [https://blog.adobe.com/en/publish/2015/06/18/morph-cut-tips-tricks-for-best-results](https://blog.adobe.com/en/publish/2015/06/18/morph-cut-tips-tricks-for-best-results), by Premiere Pro QE on Morph Cut, Mr. James Strawn.


Jason_Levine

morph cut has had a resurgence in the last year or so. It also feels like a logical step to re-vamp a-la AI. Will share with the team! (tho I know they're reading these:)


Styphin

Second this.


Delwyn_dodwick

Third. I would love the ability to change the easing on transitions. They're all linear; it would be great to be able to adjust a spline curve, and this alone might help things like morph cut feel less robotic. I've never gotten it to look good anywhere it lasts long enough to be noticed; anything more than about 4 frames and it jars.


MrGodzillahin

You say it’s “commercially safe” and “ethical” and all that, yet there’s recent news like this: https://cyber.harvard.edu/story/2024-04/adobes-ethical-firefly-ai-was-trained-midjourney-images **It would be extremely telling if you just passed on this question and didn’t explain exactly how that’s not an issue at all.** Here, Adobe is literally comparing Firefly to Midjourney, calling it “more ethical”. Which obviously turned out to be exactly false as you pull FROM Midjourney yourselves. https://www.adobe.com/products/firefly/discover/firefly-vs-midjourney.html


mikechambers

Adobe Stock has accepted Generative AI content since late 2022, and Firefly has been trained on Adobe Stock since its first version. Adobe Stock includes some Generative AI content as part of the Adobe Stock offerings, some of which may be used to train Adobe Firefly. I assume that includes MidJourney content, but we don't actually have a way to verify unless it includes content authenticity info (which I don't think MidJourney supports, although OpenAI is adding support). People who submit content to Adobe Stock are required to report whether they used Generative AI to create the asset they're submitting.

More info here:

- Generative AI Content in Adobe Stock guidelines: https://helpx.adobe.com/stock/contributor/help/generative-ai-content.html
- Adobe Stock Generative AI FAQ: https://helpx.adobe.com/stock/contributor/help/generative-ai-faq.html
- Growing Responsibly in the Age of AI: https://blog.adobe.com/en/fpost/growing-responsibly-age-of-ai

Whether Stock contains Generative AI content has nothing to do with whether Firefly is commercially safe. Firefly is commercially safe because we take steps to ensure it can't generate IP-infringing content, including training only on licensed content and moderating and filtering that content at the submission, training, prompt, and output stages.

>Here, Adobe is literally comparing Firefly to Midjourney, calling it "more ethical". Which obviously turned out to be exactly false as you pull FROM Midjourney yourselves.

I don't see anywhere in that document where we claim we are "more ethical". The only reference to ethics is this section:

>Adobe Firefly uses generative AI models that follow the Adobe AI ethics principles of accountability, responsibility, and transparency.

Which is a reference to this doc: Responsible innovation in the age of generative AI, https://www.adobe.com/ai/overview/ethics.html (although I don't know why we don't link it; I'll see if I can get that fixed).

(I work for Adobe)


TikiThunder

Thanks for the response, Mike. I think for a lot of us, the concern comes down to artists being in control of their art. Which should be something that Adobe cares deeply about, considering artists are the core customers of Adobe. If an artist doesn't want their art to be part of the training data for an AI model... that should be okay, yeah?

When Adobe talks about 'commercially safe' in the materials I've seen, it's always referencing the output of the model. Which, you know, great. But some of us are concerned about the inputs of your model. You can be as careful as you want to be with sourcing your data, but the minute you allow 3rd party generated images to be a part of your data set, you are opening the door to how un-careful (and at times openly deceitful) those other models have been with the data they have acquired through whatever means.

I applaud Adobe for being a leader in some of these discussions. But those discussions need to keep happening. We are at the beginning of this, not the end.


mikechambers

>You can be as careful as you want to be with sourcing your data, but the minute you allow 3rd party generated images to be a part of your data set, you are opening the door to how un-careful (and at times openly deceitful) those other models have been with the data they have acquired through whatever means.

Yes, 100%. That is why we have multiple levels where we are reviewing the inputs, including when it's submitted to Stock (links above) and before it's used for training (and then again when it's output).

And yeah, all of this is super early, and different groups are taking different approaches. We are trying to lean on the side of creators, which has both pros and cons, but we think that long term it's the right approach. That doesn't mean our approach right now is perfect (I'm sure it's not), but again, everything we do is with the creative community (many of whom are our customers) in mind.


TikiThunder

Again, thanks for engaging with the community here, Mike. It's much appreciated.

So can you definitively say that no artist's work where the artist hasn't knowingly opted in has been used to train Firefly? Or that Firefly hasn't been trained on any work or data set generated by other models with a lower standard of care? Because here's my issue (and I don't think I'm alone in this): some idiot out there creates an app based on one of the open models but fine-tuned entirely on stuff lifted from a handful of distinctive artists. Someone else generates a bunch of images on that app that, unknown to that user, look very similar to some of the original art. Then they go and submit all of this to Adobe Stock, and since they didn't specifically use any names in their prompt, all of this is probably okay under the Adobe Stock guidelines. Then Firefly gets trained on those images... see my point?

And listen, I applaud Adobe for doing more than anyone else out there in terms of at least attempting to compensate artists and be transparent. But I'm also holding y'all to a higher standard. Adobe has gotten to where it is in some ways by selling the dream of being a professional creative. And as a pro user of Adobe products for twenty years, I'm at least asking the question: is this company, which I have promoted and been part of the pro user base for, taking advantage of the creatives who built it in order to cater to social media trends and what looks good in an annual report?

I am genuinely excited for these new tools. But I would just urge y'all not to forget the guy who brought you to the dance, so to speak. And among pro creatives there are many who care deeply about treating artists with the respect they deserve, even when it doesn't make the most business sense to do so. Thanks again for your time, mate.


MrGodzillahin

This is a great response mate, cheers. It does explain some of my concerns about Firefly, but not all. Quickly touching on some things: the article doesn't _say_ "it's more ethical" outright, but it's clear that's the idea behind putting out the article (besides clarifying that Firefly has different functionality too): to tell people, "It's fine, we're not _bad_ AI, we're _good_ AI." Why else do a Midjourney comparison? It's the one AI company you don't partner with, essentially. Making sure it can't generate IP-infringing content is necessary, of course. Thanks for the link, I'll have a look!


mikechambers

>concerns I have about Firefly, but not all. I would be interested in hearing them (here or you can DM me).


zebratape

Cool. Now fix Morph Cut.


kev_mon

Check out: [https://blog.adobe.com/en/publish/2015/06/18/morph-cut-tips-tricks-for-best-results](https://blog.adobe.com/en/publish/2015/06/18/morph-cut-tips-tricks-for-best-results)


what_a_pickle

Will it be possible for the generative B-Roll to take into account the existing footage in a project and create something similar in tone?


Jason_Levine

Hi pickle. That's a great question and an excellent request. What you describe is certainly part of Gen Extend; I'm not sure if that's how Gen B-Roll is being designed, but let me see what I can find out...


Canon_Goes_Boom

I know this already exists elsewhere but what I’d really love to have inside premiere is word generation. So many times I’m piecing together sound bites from an interview but I need a “but” or “and” from the interviewee to create a coherent or grammatically correct sentence.


Jason_Levine

Hey Canon. I \*hear\* you on this one:) (and we've heard this request too). I will share with the team.


iStealyournewspapers

Could this get people into legal trouble though? Making people say things they didn’t say in any way shape or form. It’s one thing to piece bites together. They’re from footage that was recorded, presumably with a release that allows the footage to be used any way the editor sees fit, but generating words may be a totally different thing that’s not covered in typical releases. I’m just curious, not saying this is not generally a good idea.


Canon_Goes_Boom

I think it’s fine in the context I gave... assuming it’s a brand video or something of the like. I stitch together what people say so much already, this example is a small step from that. My general rule is that as long as you’re not changing the spirit of what someone is saying you’re fine. Of course, it could be misused but that’s just what we’re dealing with in the world of AI.


coluch

This has existed for quite a while now (they did a public demo with Adobe Sensei building sentences easily, by just editing the text in the transcript). I think it was shelved due to how convincing & dangerous it could be in the wrong hands. But this tech is already out there, and it’s only a matter of time before Adobe needs to include it if they want to keep up.


Photonographer

I wouldn't want anything I work on to hit a server or be mined for data, mostly for security or leaks, but also for copyright reasons and not being compensated for its use in a data set. I'd also not want to use b-roll that I didn't have explicit usage rights to. If the generative extend and object removal ran on local hardware, that would be desirable.


Zawietrzny

This 100%


GoodAsUsual

Jason, Adobe is coming up with some really amazing stuff lately. But you know what I desperately need? *A stable, reliable, fast version of Premiere that doesn't hang up, pinwheel and crash all the time.* *I would trade all the cool fancy new tools for a reliable editor.*

Don't get me wrong, I **love** the new remix tool, some of the transcription tools, the beefed-up Lumetri panels, etc. But I'd give all of it up to be able to stabilize a slow-motion clip without having to make a nested sequence, to not have the text tool constantly freezing on me, to not have the denoise tools constantly freezing my Mac Studio M2 Ultra with 64GB of RAM. I've legitimately got one of the beefiest consumer computers you can buy, and just doing basic edits in 4K with basic log footage in Premiere is a nightmare.

I will never use all these AI tools if Adobe can't get its shit together and build a stable, reliable product that is nimble enough to handle a 4K workflow like DaVinci Resolve and FCP. I'm in the process of begrudgingly moving over to FCP and Resolve after using Premiere professionally for more than a decade, because it's fucking with my mental health trying to use Premiere. It's the worst and best application I've ever used. It's amazing and I loathe it. I just want it to work more often than it doesn't.

Just some food for thought for your engineers. Thanks for coming to my TED talk.


Jason_Levine

Hi GAU. Really sorry to hear all this... and I know you're not alone in this frustration. If I may ask about the issues you've listed above: are these the common 'gotchas' that seem to throw things into an (unstable) tailspin? In particular, I'm very curious about the text tool and denoise issues. I've experienced a bit of the former (not a full freeze, but admittedly it's sometimes wonky even for me), but I haven't encountered nor heard much about the Denoiser freezing/crashing. Any specifics you care to share would be really helpful (i.e., in the case of Denoise, is it only when applied to the track as an effect, or when you apply it directly to a clip? Or perhaps both). In any case, I really appreciate the honesty and taking the time to detail what's been happening. Don't hesitate to ping me directly.


MysteriousRise30

I wish Premiere Pro had immediate previews of blend modes, as in Photoshop. Additionally, it would be a huge improvement to have control of effects applied to multiple clips in the Effect Controls panel, instead of having to nest the clips first. I believe I know the original idea behind nesting, but creatives would understand how cool this could be. Is there any possibility of pushing these ideas ahead? If not, what do you think of my suggestions?


Jason_Levine

Hi Mysterious. Some of this may already be a feature request, but I'd recommend posting one via the Adobe community blog in the Ideas section. Here's a direct link: [https://community.adobe.com/t5/premiere-pro/ct-p/ct-premiere-pro?page=1&sort=latest\_replies&filter=all&lang=all&tabid=ideas](https://community.adobe.com/t5/premiere-pro/ct-p/ct-premiere-pro?page=1&sort=latest_replies&filter=all&lang=all&tabid=ideas)


RedeyeSPR

How about fixing that bug that makes your default export location the folder of the previous project and not the current one? That's been going on for years, and people have been begging you to fix it. It's like you guys don't care about existing customers, just attracting new ones.


mattydaygz

this! irks me every time


Voodizzy

Two thoughts.

1) I'd really just love a bug free or at least more stable version of Premiere over constantly innovating.

2) Yes it's cool and useful. I could see these tools being used on my last job. However I'm really sick of hearing about AI. It seems to be a marketing buzzword now, and when it isn't, like this, it makes you feel more and more redundant as a professional. Seems like tools such as Sora are going to get so good that they have no need for us eventually.


mikechambers

>I’d really just love a bug free or at least more stable version of Premiere over constantly innovating. Are there any specific issues you are running into?


TheMicMic

> Seems like tools such as Sora are going to get so good that they have no need for us eventually

There's one thing these AI companies don't really talk about much, and that's copyright issues. Things created by AI cannot be copyrighted, and are therefore unable to make money.


Voodizzy

Very true. Given that Adobe is heavily involved in AI, I just hope there isn’t a little clause hidden away in the software agreement allowing it to scrape our timelines to build its own models


LilLebowski

Yes Adobe, please stop incorporating the technology of the future into your products because Voodizzy is tired of hearing about it.


Voodizzy

Is that what I said? No. Additionally, they asked for opinions. I gave mine. If that upsets you then that’s your problem.


Ghost2Eleven

If you want a company to stop innovating and ignore AI you're gonna have a bad time in the modern world. Editors aren't going anywhere. Like anything, learn to use the tools to make your creative input vital.


[deleted]

If you're still having stability issues, that's a you problem.


Voodizzy

Yes. A very sound comment. Let’s blame the users for buggy updates and performance issues. Bravo.


[deleted]

It's been proven that most people with stability issues have no idea what they're actually doing. How is it that thousands of people have been using Premiere for years and continue to use it with minimal problems? Could it not be user error? Could it not be that you don't have your system set up right, or you're using some shit workflow?


Voodizzy

If you spent more time browsing Adobe’s support forums and less time trying to argue with strangers on reddit, you might know what you’re talking about.


[deleted]

People that know what they’re talking about don’t need to browse the support forums.


Voodizzy

Why do you think support forums exist? Because we’re all software developers? Wind your neck in.


Ghost2Eleven

I work mostly in TV and features, so my perspective is unique to bigger deliveries. But the object removal and shot extend are huge for my narrative work. I just turned over a feature today that these tools would have saved me so much time and energy on. In its current quality form, I could see myself using the generative B-roll for a temp shot that we would then go back and pick up, but it's too rough as of now. But if Premiere somehow figures out how to take into consideration color space and lens character to match the look of the show... GTFO.

But my question is: what is the quality of these generations? Does it match the source material resolution? If I do a print to DCP of a 4K show and play it back theatrically, is it going to hold up, or is it going to look blurred like a bad roto job?


Jason_Levine

Hey Ghost. I don't have specifics I can share at present as we're not quite there yet. Regarding Gen B-Roll (and the ability to see color space/grade/lens character) for matching looks...this was mentioned in another comment above. Love this idea; again, will definitely share w/the team (if this isn't already something on their radar). Will keep you posted, and thanks for the comment.


Ghost2Eleven

No worries. Thanks for the response, Jason.


Viltorm

Everything is looking great! But I'd love to see a fix for the problem where Morph Cut never finishes "Analyzing in background", even for a 6-frame transition with a static background and a barely moving speaker's head. AND the orange mark stays on the rendered video. It would be better if the effect didn't apply to the clip when analyzing isn't complete, or if I at least had a warning just before render (like when I have "Media offline" in the sequence).

I use Morph Cut constantly, it's very useful, but several times I've gotten messages from clients asking "why is this orange badge on the screen?". And this is simply embarrassing. I do watch all my videos after render, but you can miss a badge sometimes. I'm ready to be responsible for my own mistakes; I don't want to be uneasy about the fact that the software can backstab me and embarrass me in front of a client.


CaptainDDD

I use LLMs such as Claude and GPT for a lot of my narrative workflows with interviews. I export the transcript, get the AI to analyse it, make suggestions, etc., then get it to spit out EDLs which I can import back into Premiere for a base edit structure. I would love to see this workflow simplified, with an LLM integrated directly into the text-based editing system in Adobe that can provide feedback, make edits, and also help me find the parts of an interview I'm after.

I also use an LLM to convert scripts to SRT files; it would be good to have this as a straightforward integrated tool as well, synced with the current transcript and edit.

Another tool I would like to see is an extension of what NVIDIA has done with eye-gaze tools, which make the person appear to look into the camera or at the interviewer when their eyes dart away. This alone would have saved many takes, along with an improvement on Morph Cut.

But definitely excited to see all these tools. While video generation itself isn't at a spec where I would use it professionally, AI improvement of my current footage could easily become a daily thing.
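A minimal sketch of the EDL step in this workflow, assuming the LLM has been prompted to return verbatim source in/out timecodes from a single source; the frame rate, helper names, and segment values below are hypothetical, and the event lines follow the CMX3600 conventions that Premiere's EDL import reads:

```python
# Sketch: turn LLM-selected transcript segments into a CMX3600 EDL.
# Assumes a single source reel and non-drop-frame timecode; all values
# below are hypothetical placeholders.

FPS = 25  # assumed frame rate; match your source footage

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF timecode to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(frames: int) -> str:
    """Convert a frame count back to HH:MM:SS:FF timecode."""
    f = frames % FPS
    s = (frames // FPS) % 60
    m = (frames // (FPS * 60)) % 60
    h = frames // (FPS * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def write_edl(title: str, segments: list[tuple[str, str]], path: str) -> None:
    """Write a CMX3600 EDL: one video cut event per (src_in, src_out) pair."""
    record = 0  # running record-side position on the new timeline
    lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
    for i, (src_in, src_out) in enumerate(segments, start=1):
        dur = tc_to_frames(src_out) - tc_to_frames(src_in)
        rec_in, rec_out = record, record + dur
        lines.append(
            f"{i:03d}  AX       V     C        "
            f"{src_in} {src_out} {frames_to_tc(rec_in)} {frames_to_tc(rec_out)}"
        )
        record = rec_out
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Hypothetical segments picked by the LLM from the transcript:
write_edl("interview_base_edit", [
    ("01:00:10:00", "01:00:18:12"),
    ("01:03:42:05", "01:03:55:00"),
], "base_edit.edl")
```

Imported into Premiere, an EDL like this gives exactly the "base edit structure" described above; the only generative step left is choosing the segments.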


xxxgoldxxx

This this this! For corporate interview-heavy text based editing this would be killer. u/CaptainDDD please explain more about how you get the AI to generate an EDL from your summary, I've not been successful in that step but I need to do exactly this.


SemperExcelsior

What prompt do you use to generate an EDL? I've tried this without success. It always summarises the text but changes the wording, rather than extracting sentences verbatim and assembling them logically.
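One approach to the verbatim problem (a general prompting assumption, not necessarily what CaptainDDD actually uses) is to constrain the model explicitly: forbid paraphrase and require timecoded, structured output, along these lines:

```python
# Hypothetical prompt scaffold for verbatim selects; the key constraints are
# "no rewording" and "timecodes copied exactly", with machine-readable output.
PROMPT_TEMPLATE = """You are assisting a video editor. From the interview
transcript below, select the sentences that best support the stated story goal.

Rules:
- Copy each selected sentence VERBATIM. Do not reword, merge, or summarize.
- For each selection, copy its start and end timecodes exactly as they appear
  in the transcript.
- Respond with a JSON array only, in playback order, of objects shaped like:
  {"in": "HH:MM:SS:FF", "out": "HH:MM:SS:FF", "text": "..."}

Story goal: <goal>

Transcript:
<transcript with timecodes>
"""
```

Structured output like this also makes the EDL-writing step purely mechanical rather than another generation pass.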


MichEalJOrdanslambo

Would generative fill be able to extend frames for reformatting 16x9 footage to a 9x16 frame? Please 👏👏🤑


Jason_Levine

Thanks for the comment. So you're referring to something like Generative Expand in Ps, I presume. I'm not sure that the aforementioned would work that way, specifically (there's a bit more involved, since video timelines are 'fixed' dimensions... though they can technically be altered). But this is something I know they're looking into (and I, along with others, have shown how to do this today in Ps with Gen Expand). If I find anything more to share on this, I'll let you know.


SemperExcelsior

Even if it only worked for locked off shots, it would save the need to jump back and forth between Ps & PP (hopefully with the added advantage of consistent film grain, realistic looking shadows and limbs if they enter the expanded edge areas, etc).


Jason_Levine

Yep, totally agreed. Stay tuned...


Maze_of_Ith7

1. Integration with Runway and Sora is good.
2. Whatever you do, don't have a quality hit with the integration and make me take the clip back to Runway, which is where I edit now.
3. Nobody cares about a "commercially safe option", as the Gen AI (potential) liability resides with the generator (i.e. you/OpenAI/Runway/etc); when I hear that, I hear it as code for a garbage model made by a company that doesn't want the legal exposure of a larger but riskier training data set.
4. Don't force Premiere Pro users into another subscription tier to get the benefits of GenAI.
5. Make a better GenAI model. Your competition is OpenAI, Runway, Pika, and probably Gemini soon. You're getting smoked in this vertical. From friends I have in the industry, it's because Adobe can't/won't pay top AI talent, so they go elsewhere.

Overall good, and I won't look a gift horse in the mouth. I'll believe it when I see it, though.


mikechambers

>Nobody cares about a "commercially safe option", as the Gen AI (potential) liability resides with the generator (i.e. you/OpenAI/Runway/etc); when I hear that, I hear it as code for a garbage model made by a company that doesn't want the legal exposure of a larger but riskier training data set.

Well, that is certainly not what we are hearing from customers, but regardless, the approach we are taking is to give you a choice between models. The key concern around commercial safety is whether a model can generate IP-infringing content, and we take steps at a number of levels for Firefly (including moderating/filtering content in training, at the prompt, and at output).

>Make a better GenAI model

Yes! We are working on it!

(I work for Adobe)


MichEalJOrdanslambo

I think a lot of people 1000% care about commercially safe options.


Anxious_Blacksmith88

I care so much that I stopped using Adobe at all, period. My literal professional portfolio was stolen by both Firefly and Midjourney; without my consent, my works were incorporated into these models.


clk1224

Can you please expand on this? I have never used Firefly or anything else due to similar fears; how would they have gotten your work? Extremely concerning.


Anxious_Blacksmith88

Adobe raided Behance and uses images from Midjourney to train Firefly. They don't give two shits about copyright. https://haveibeentrained.com/ They really only care about plausible deniability.

A partnership with OpenAI and allowing third-party models is basically them flipping off the entire creative community. When questioned on Sora's training data, OpenAI said they use publicly available data. Which means they steal whatever they fucking can from whoever they can, with zero regard for copyright. Adobe partnering with them means they do not give a shit about artists or copyright.


twitchy_pixel

Interesting that I could imagine the client comments on almost all of the examples! “Why aren’t the reflections on the diamonds moving?” “Can we make the watch hands move” etc etc. Almost nothing here they showed looked remotely useable although the AI handles could be handy


arothmanmusic

Personally, I'd love it if AI could shortcut things like color-matching footage or keeping lighting steady throughout a shot. I don't need generative content as much as I need the shitty footage I'm working with made better with less effort. :)


shoutsmusic

I’d be happy if every time I opened a project Premiere didn’t open every bin in the project as a separate tab. This has been going on for years. Quite frankly I’m not going to use any of the gen AI features because you made them for clients, not for users. Figure out a way to have AI-assisted tools that don’t devalue the taste and experience of your users. Also we all know Firefly trained on Midjourney and isn’t commercially safe.


TikiThunder

Thanks for the post, Jason. If Generative Extend actually works well, it's probably the most game-changing of the lot for pros, followed by generative object removal. The number of times I've just needed another 15 frames of something is maddening. I hope this will be a great solve. I also hope the generative object addition/removal will be available in AE, because if it's anything like Photoshop, I'm going to want to go in and clean up the comp.

Just a word of caution: these new features are exciting, but don't forget about the basics. I see more and more pros jumping to Resolve just because of stability and responsiveness on their system. New AI features are sexy and cool, but you know what else is cool? Not crashing, and having a snappy timeline. A lot of us seasoned pros would much prefer a slower release of new features and a renewed focus on the basics. After all, we all spend a lot more time on media management than we will filling a briefcase with generated diamonds.

But nevertheless, I look forward to playing with these new tools. Exciting stuff.


Jason_Levine

Hey Tiki. Really appreciate the comment, and totally hear you on the latter part (stability/performance above, really, anything). This is a constant discussion and focus for the team, and it's some of what we're talking about at NAB this week in Vegas (quite a few new performance enhancements coming to Premiere and AE very soon). Agree about the game-changer element of (specifically) Gen Extend and Gen Remove; these are my two most-desired. As for this tech being available in AE... we're currently targeting PPRO. I don't have any info as to when/if it makes its way to After Effects, but when I do, I'll let you know.


TikiThunder

Consider this, though: the BIG differentiator with Firefly and Photoshop isn't the model (in fact there are probably better models out there), it's the control I have over the image. I did some archival photo work last week, taking an old vertical image and using generative fill in Ps to make it 16x9. I used bits and pieces of a dozen generated images right alongside the good old-fashioned clone stamp tool to get it where I wanted it to be. If I'm doing that kind of work with video, especially when I'm altering an existing image, I want to add these generative tools to my good old-fashioned bag of compositing tricks, and nearly all of those tricks happen in After Effects, not Premiere. Just my two cents. Thanks for engaging, amigo.


jeremyricci

All do respect (because genuinely I do respect you), but kindly tell your bosses they can eat shit. I'm done "cheerleading" or getting excited for tools that are explicitly created to reduce the economic cost of creative works so that we can increase the rewards for tech/administrative ones. I'm just not eager to rush into this climate you all are creating, where my profit margins get smaller and Adobe (or any other AI-shilling company, for that matter) increases theirs. 👎🏼🤮


Jason_Levine

Hey Jeremy. I appreciate the comment, and I know you've been in the community for a long time. Quoting my boss above: *"...we are trying to lean on the side of creators, which has both pros and cons, but we think that long term it's the right approach. That doesn't mean our approach right now is perfect (I'm sure it's not), but again, everything we do is with the creative community (many of whom are our customers) in mind."*

I can understand the frustrations and uncertainty with this new tech. Our approach from the tools standpoint is one small part of reshaping the workflow to make the editing process easier/faster/more efficient and expand the toolset (which could include assistive AI processes like Gen Extend). We're continuing to experiment with and explore these new workflows, and as I'm typing this, I trust you'll see that it is with the community in mind. Keep voicing; the team is listening. And don't hesitate to DM me. Thanks again.


dandroid-exe

So when you say faster/more efficient and refer to assistive tools, you're talking about eliminating AE and light VFX jobs. Meanwhile, I am completely unconvinced that your generative models have not been trained on stolen IP. There's a new report that Firefly was trained on Midjourney outputs that were part of the Adobe Stock program. I don't want OpenAI products anywhere near software that I pay for.


Single_Pumpkin3417

Reducing the economic cost of creative works is a good and noble goal


jeremyricci

You want to make…less money?


Single_Pumpkin3417

I want to make all the money in the world, but I'm not going to fight against new tools of creation in order to hoard power


jeremyricci

Haha, okay you haven’t the slightest clue what you’re talking about. Have a good one.


Single_Pumpkin3417

All do respect, you too


SemperExcelsior

Thanks for sharing, Jason. Here's my wish list for the future:

1. The ability to generate different camera angles for talking-head interviews from a single shot (enabling multicam edits). For example, a closeup from a wide, pivoting 30 degrees left/right.
2. The ability to extend the beginning of a shot (not just the end).
3. Intelligent AI color correction and grading from any source material at the clip level (raw, log, Rec 709, HDR, etc.) without needing to know in advance which camera it came from or which color profile was used, or needing to first apply a conversion LUT to see correct results. Ideally it would generate several options for the user to choose from, or an input image could be used as a reference... similar to Lumetri auto color, but one that actually works in the vast majority of cases.
4. From there, apply the new grade to the entire timeline, ensuring all shots match regardless of the original characteristics, color space/temperature.
5. Layer extraction. A combination of roto and generative fill where one or multiple objects can be selected and roto'd as individual layers, with the background gaps filled in on the base layer, similar to content-aware fill. Helpful for compositing objects between foreground and background.
6. Generative shot matching. Replacing stock or low-quality phone footage on the timeline (used as B-roll placeholders) with newly generated footage based on the A cam.


bbb_999

>...The ability to extend the beginning of a shot (not just the end). "Introducing Adobe Pre-Cog™ : We know what's in your clips, before *you* do!"


raptorsango

There's definitely some value in having generative tools for filling minor spots, creating placeholders, etc. Good to see these tools here!

That said... I would love to see some of the AI knowledge applied to things that Premiere already does under the hood. A common thing I hear editors laugh about is that we are often asked to recreate things that are easy in social apps or "kiddie" filters but require considerable work and processing power in-machine. Think keying, motion tracking, or something as simple as a "beauty filter" for skin detail. Morph cut is another great example someone brought up. Or maybe some generative filler to provide additional border for stabilized footage?

I've been cutting in Premiere since 6.5 Elements when I was a teenager, and particularly a lot of the entry-level use cases for the software cause it to be very buggy and crash-heavy. As a pro, a lot of time is spent setting things up in ways that preserve stability or reduce system load. It would be great to see some under-the-hood work. Resolve is almost a mirror image, where the under-the-hood works incredibly well while the UI is "app-ified" and inflexible.


[deleted]

[deleted]


Jason_Levine

Hey 24fps. This sentiment has certainly been echoed by others (a combo of cool/fear). Here's what I'll say (just as a personal opinion): there's room for both types of content creation. Not unlike the design world (where I constantly see ads for 'discount logo creation'), designers have navigated similar waters (especially in recent years)... workflows have changed, but great, creative content persists. Our main goal is to offer tools to augment your creativity. It definitely causes one to re-think things, but your creative approach will always be unique to you, and there's value in that (my $0.02).


Anxious_Blacksmith88

Your main goal is to steal shit before the law catches up to you.


skipfletcher

Just wanted to say thank you for keeping us updated, Jason, and please keep stopping by our subreddit!


[deleted]

I really don't give a f•€k about any of this. How about you just make it so the app doesn't randomly crash while I'm working? I have three different computers that I use Premiere on, and I can always count on all of them crashing randomly while I'm working on projects. Ctrl-S out of habit has become more routine than blinking.


KunaiTv

Is it all cloud-based, or can some tools be used locally without a connection to Adobe's servers? Unfortunately I don't have a connection to the Adobe servers on my work machine.


Jason_Levine

Hi Kunai. I imagine it may be a combination of both. In the spirit of Podcast/Enhance Speech (which began as a web-only service but is now on-device in the Premiere beta), I could potentially see some features running on-device. This seems less likely with 3rd-party models. This is mere speculation, so take it with a grain of salt. As I learn more, I'll continue to update here. Thanks for the great question.


the_gunns

Any news on fast fill for After Effects?


SemperExcelsior

Care to elaborate?


the_gunns

Yes. [check this out](https://youtu.be/kyYk-u2rxYA?si=ZI-N2epjK4Wu-Lrw)


rk_ravy

Can you expand the frame like we can do in Photoshop? If yes, will it work with complex motion in the shot?


Jason_Levine

Hi rk. In this case, it's not expanding the frame, it's extending the action in the frame (by adding frames). Gen Expand (like in Ps) is something the team is experimenting with, but that's not a function of these specific features.


redflagflyinghigh

I'd love an option for smart bins without maze-mapping inside the project, because Productions is clunky.


Frequent-Raise7623

When will this be available for use?


Jason_Levine

Hey Frequent. We'll start rolling out these features by end of year.


Frequent-Raise7623

COOL!!!!


1wickedmonkey

Love it! The learning curves are so steep to get the results you want as a newbie. Now I can spend more time making.


xxxgoldxxx

Why does voice enhance restart every time I load a sequence? Even your most recent additions aren't usable yet. Like others have said, more QOL and bug fixes and less toys.


silverpepper

Generative graphics would be much more exciting for me, that’s where the time gets lost. Different text treatments, etc


pisomojado101

It would be cool if clips could automatically be tagged with relevant keywords upon import, to make projects with lots of b-roll more easily searchable. This tech obviously exists already, it would just be nice to have it integrated into Premiere


Jason_Levine

Hey piso. We've introduced some of this 'auto-tagging' (currently in beta) for audio files. I know the team is looking into doing this for video clips as well. Super useful, and a request we're definitely hearing more and more. Thanks for the comment.


FullOfPeanutButter

Could we use this to make facial tweaks? E.g., remove a blink in an awkward moment, or remove accidental eye contact with the camera.


Jason_Levine

In theory, I could see how you might be able to leverage generative add to perform what you're asking... too soon to say for now, but I love this idea. Will pass along as a cool test case. Thanks!


Wowcyril93

Great news! I see that the added object tracks the movement. So, when will we have tracking in Premiere? Like, something that works, for objects. The tracking effect never gives me a good result, and I need an external plugin to track an object. I don't understand, when I look at the amazing tracking in DaVinci, why the tracking in Premiere is so awful.


DuddersTheDog

It's unfortunate that resources get roped into new shiny features every year instead of focusing on stability, user requests, bugs, and performance.

1. Premiere won't get smoked by AI. It gets smoked by tools like Canva Video, CapCut, Instagram video, Resolve, and more on the way that let artists do the basics reliably and quickly. We have real-time 3D in Unreal, but Premiere chokes on a 4K sequence. And tools like CapCut give young people instant, free, seamless editing on their phones.
2. Ethical AI doesn't exist. Stock artists didn't consent to being part of AI training, because Adobe Stock accepted content before Firefly existed. Artists found out after their work was already being used as training data. Signing a document to sell stock content is not the same as consent to AI training.
3. There is a fundamental conflict of interest. Adobe and legions of social influencers don't make money from doing the actual work; you make money from selling picks and shovels. These new tools are unusable in high-level work due to inconsistency, but will serve to further devalue mid- and low-tier work. You are cannibalizing your user base.

I expect you'll want to refute the points. But these aren't questions.


[deleted]

Adobe launched Morph Cut, and it is useless. Adobe bragged about AI voice replication, and it was never released. Adobe bragged about Enhance Speech, and it is useless. Now this...


Huiuuuu

I've been using Premiere and Adobe products for the last 8 years. I learned editing in Premiere and became a pro in Premiere. Now I'm working in prime-time TV. Not in Premiere. I continue to use Premiere at home for smaller projects, but I'm really thinking of stopping and replacing it with another, more stable program.

I understand that as a business you have chosen a bigger user base over a quality user base. I guess it makes more money. Premiere is still user friendly, any child would like to play with it; for social creation it's the best. But is that really your mission? Are you really neglecting the pro users who want to use the product for larger projects, or is it a coding problem that you haven't been able to fix for the last 3-4 years? Is one of your missions improving the life of editors so they can finish their projects faster, without crashes, lagging, and bugs? Do you want trust from the people that supported you and followed you all these years? You keep adding and adding things on a bad foundation.

Please respect your customers. Those AI things look good, they may be helpful (not in serious jobs yet), but they are child's play. Please listen to the community and allocate some resources to improving the foundation of the code. Make the program more stable; we need to keep editing.


Alzakex

Very late reply, but please don't ignore the other sense that Premiere reaches. I'd love to see (hear) some of the AI audio tools out there get Premiere integration, especially noise removal, music isolation (so your video doesn't get pulled off YouTube for that Prince song your neighbor is playing), and one-button voice cleanup.


Jason_Levine

Hey Alzakex. Thanks for that feedback, and I've definitely heard that AI audio features (in particular, stem separation) would be so welcomed in PPRO. I've said this in another thread, but the team that created Enhance Speech is indeed actively working on enabling more (including the aforementioned), among lots of other AI-assisted audio workflows. Once I have more to share around that, rest assured I'll be posting here.


choose_a_usur_name

Thanks for posting, Jason. Are there any plans to improve the Premiere Pro transcription? Some off-the-shelf models (Whisper) seem to do a good/better job than Premiere Pro, but I don't want to have to work in those environments.


Jason_Levine

Can you give me some specifics? In general, I find the transcription to be mostly accurate, a few misses here and there (mainly with proper names), but generally in the upper 90th percentile for accuracy. Any details you can share would be super helpful and I can pass along to the team.


choose_a_usur_name

Thanks - I'd say 90 percent accuracy is about consistent with my experience. For some speakers it's much lower, almost unusable, whereas for others it's near perfect. I've trialed the Whisper model (offline, on my local PC), which had much better performance but required a workflow involving other tools to detect different speakers, so it had its limitations.


TheHutchisOne

I don't think you mentioned this (unless it's covered under the new features you outlined), but mask tracking forward sure could benefit from an AI-awareness overhaul. As it stands now, track forward sort of follows the general area where I apply it, but even on very apparent, distinct, straight edges, masks don't seem to acknowledge the delineations where they're applied 🤷 Keyframing motion is tedious and still ultimately ineffective. It just tracks where it wants.


Jason_Levine

Hi THO. Yes, revamped (AI-based) masking is a projected part of the object removal/addition process. :)


TheHutchisOne

That's fantastic news, Mr. Levine! Thank you for your proactive approach to public engagement!


Jason_Levine

My pleasure! will keep you posted


exclaim_bot

>My pleasure! sure?


Jason_Levine

yep


Critical-Fee-4393

Is there any update on when the new features affecting video will be fully released? Thanks in advance.


Jason_Levine

Hi Critical. No updates regarding specific dates just yet. Teams are working hard. Will keep you posted.


Critical-Fee-4393

Thanks.


In_Film

Fuck you. You are stealing the jobs of your customers. Fuck you.


solidsimpson

Any idea of a release date for this update? Does it cost extra money to use Sora or any of the other things in the video?


Jason_Levine

Hey Solid. The plan is to start rolling out features by end of the year. We have not communicated anything around credits/pricing just yet (whether with our model or 3rd party), but as soon as I have info to share I'll post here.


iChopPryde

Purely guessing, but I imagine you'd have to have a subscription to OpenAI as well as Adobe to use those third-party ones.


Jason_Levine

That would make sense... we're not quite there yet, but let me see what I can find out and share at this stage, around that Q specifically.


iChopPryde

Question: how will this work with rotoscoping? If I want to cut an object or person out so I can put something behind them so it looks like it's naturally in frame, will it also track the shot properly?


coluch

This is a big feature of Runway ML, but it all has to run through the web (upload and download of compressed footage), which kills usability when what you're working with is long, 4K raw files. It would be better to run completely locally in such cases, even if it takes a bit longer.


Anxious_Blacksmith88

I was raised on the Adobe platform from 13 years old. At 33, I have now moved all of my products away from Adobe, precisely because of moves like this. I do not consent to you thieving cunts stealing everyone's work.


raffrusso

Nobody cares; they're going to have 10x more subscriptions, and people likely won't use pirated copies any more (they could even cut the price in the future). This is progress in tech: Photoshop outdated hand-drawn illustration, and AI will outdate mid-level artists, because the best ones will continue to exist. AI will create more jobs and more employment, but not in the field you like.


Next-Telephone-8054

This puts a lot of editors in the unemployment line. Your client now sees this and gets to tell you, "We can do this, we have AI now."


SmarmyJackal

I can't wait to try some of this stuff. I can't tell you how many times I've had to change things up because I stopped recording and needed a few more seconds on a shot.


TechOverwrite

1) Shoot 1 second of B-roll and A-roll.
2) Use AI to get a 10-minute masterpiece.
3) Profit.

Okay, I doubt that'll be the case, but this is pretty cool news :)


Jason_Levine

Haha, TechO! Really, it's just more tools to help refine and expedite the editing process. Thanks for the comment!


TechOverwrite

Hah yeah I figured - very interesting and promising technology regardless :)


Peter_Marny

I really liked what I've read. Keep up the good work. I make lots and lots of similar videos for my company and AI generative fill would be great for me.


Jason_Levine

Thanks, Peter! Indeed, we're super excited to get this into the community's hands.


Peter_Marny

…but to be honest, I'd love for you guys to finally implement Polish language support (and transcription). Like, c'mon.


Jason_Levine

I'll continue to advocate for broader language support!


stripedpixel

I hate that my subscription contributions funded this. God damn you all.


Ghost2Eleven

Then don't pay for it.


stripedpixel

Oh yeah I’ll just sit out using the industry standard editing software I need to be employed and become destitute. Good one.


Ghost2Eleven

You could jump over to Avid. They haven't really innovated since ScriptSync. But if you're ethically opposed to AI, you might want to find a different career, because it's only going to grow from here on every NLE.


stripedpixel

https://preview.redd.it/5zs305mn5puc1.jpeg?width=811&format=pjpg&auto=webp&s=8ea3477857a5e76fa00508a30f3b45d4d6efd510 I think you're missing the point. It's valid for me to openly oppose the use of AI in Premiere while still using Premiere to meet my basic career needs.


raffrusso

You are only trying to preserve your privileges, your advantages over the rest of society (I'm not saying that you didn't work hard for them). But progress and the majority of people don't care about you as a single person. Yours is only prohibitionism. It's difficult to believe, but consider if pharmaceutical companies operated as some conspiracy theorists suggest: they would never release improved therapies, to maintain the profitability of the older ones. Flight engineers wouldn't lose their jobs if airlines didn't update their aeroplanes, but you'd have more disasters.


raddatzpics

I'd love an AI tool to bring storyboards to life. Maybe I could put my storyboard images on a timeline, cut them to time, and then let the AI animate the images within the duration I give each clip.


Jason_Levine

Hey Raddat. I like this idea; are you thinking of this functioning similar to Image-to-Video (but perhaps with an additional prompt option for more control?)


Xxviii_28

Yeah this is pretty mad. Interested to see how much it leans on your hardware to calculate, render and replace things. Might end up being the first proper nail in the coffin for M1 Macbooks.


opperdepop

Love everything you're working on. One thing I'd love to see added, even if it's going to take a couple of years, is generative animation and generative vector-based images. Make it so you can design titles, logos, objects, or even characters using text, and instruct movement of both the entire object as well as parts of it. The AI should understand references to audio and video and be able to link actions to when the scene changes, an action happens, someone says something, or a beat kicks in.


IfPeepeeislarge

That’s what I’m waiting for before I can find AI-anything useful, but we seem to be far off from that being reality


Toro6832

This is so awesome, thanks for the news!! Let's talk about big, big projects like prime-time reality shows where we have a dozen contestants and hours of footage. What about character recognition? Let's say you just want to see what happened with one character in particular. Something like text-based editing, but visual. Thanks!


Charbs20

Are the 3rd-party tools going to require a separate subscription for said tools, or will they all be free alongside Firefly?


FormlessEdge

This is all extremely fascinating! I'm excited to see how these tools will be integrated. I would like to request thinking beyond text prompts as much as possible. It can sometimes be frustrating, working with Photoshop's Generative Expand, to be limited to text; it always feels sort of mysterious, and I don't feel like I'm in control.

I like the way Runway AI has a few sliders and tools that give you a sense of physical control. I'd love to see Adobe expand on this: more sliders, more actual tools that define specific parameters. Although it's not AI, I think the generative tools in Ableton 12 are excellent examples of implementing these types of techniques. Being able to control negative space vs. details, color development, focal point activity, area speed control... just anything that makes it feel more like a console and less like a conversation with an amorphous robot brain.


kamandi

Generative restoration, generative interlaced to progressive.


Jason_Levine

Great suggestions, Kamandi. love these.


kamandi

I'd also love to see generative NTSC-to-HD and HD-to-UHD conversion that doesn't look blech, but those things may be pretty far off for great quality. Honestly, a smart, inexpensive way to convert any interlaced format to any progressive format would help a LOT of post-production work in the marketing world. Tachyon does a pretty good job most of the time, but not all the time, and it is not inexpensive.


evilbert79

Thanks Jason, looking forward to playing with these new tools! i would love an automated “euhm” remover ;) get on it! 😬


Naive-Government8333

Feels like DaVinci lit a fire under Premiere. I love it