Adobe introduces Firefly AI video generator with three new tools

Adobe has finally stepped up its AI game by introducing an AI video generator. The company recently announced that it is expanding its family of creative generative AI models to video. The Firefly Video Model, now in limited public beta, is the first publicly available video model designed to be commercially safe, the company stated. Adobe's lineup already includes several generative AI models, such as the Image Model, Vector Model and Design Model, and the Firefly Video Model now gives the company an extra edge. Within a year of its launch, Firefly was integrated into Photoshop, Express, Illustrator, Substance 3D and more, supporting various workflows across Creative Cloud applications. Firefly also supports text prompts in over 100 languages, enabling users around the world to create stunning content that is designed to be safe for commercial use.
The company’s Firefly Video Model, which Adobe has teased since earlier this year, is launching with a handful of new tools, including some right inside Premiere Pro, that allow creatives to extend footage and generate video from still images and text prompts.
The first, the Generative Extend tool for Premiere Pro, is now in beta. It allows users to extend clips by up to two seconds at either 720p or 1080p, at 24 FPS. The feature is ideal for minor adjustments to footage, such as correcting eye-lines or unexpected movements, potentially eliminating the need for retakes. It can also extend audio, stretching sound effects and ambient noise by up to ten seconds, though it does not support spoken dialogue or music. Overall, it is designed for small tweaks in video and audio editing.
Two new video generation tools from Adobe are launching on the web: Text-to-Video and Image-to-Video, now available in limited public beta within the Firefly web app. Text-to-Video operates like other generators such as Runway and OpenAI’s Sora, allowing users to input a text description to create videos. It can mimic various styles, including traditional film, 3D animation, and stop motion. Users can further refine the generated clips using “camera controls” that replicate camera angles, motion, and shooting distances.
Image-to-Video enhances the video generation process by allowing users to include a reference image along with a text prompt, giving them greater control over the output. Adobe envisions this feature as useful for creating b-roll from photos or visualising potential reshoots by uploading stills from existing videos. Overall, Image-to-Video is a step forward in video editing technology, but users should be aware of its current limitations.
Text-to-Video, Image-to-Video, and Generative Extend each take approximately 90 seconds to produce their outputs, but Adobe is actively developing a “turbo mode” to speed up this process. While these tools have some limitations, Adobe emphasises that they are “commercially safe” because they are built on content that the company has the rights to use. This contrasts with models from other providers like Runway, which face scrutiny for allegedly being trained on a vast number of scraped YouTube videos, and Meta, which may have used personal videos without consent. For many users, the assurance of commercial viability could be a significant advantage when choosing a video generation tool. This focus on compliance and safety may make Adobe’s offerings more appealing to those concerned about copyright and usage rights in their projects.
The Firefly Video Model is in limited public beta on Adobe's official website; to use the new tools, users currently have to join a waitlist. During the limited public beta, generations are free. Adobe will share more information about Firefly video generation offerings and pricing when the model moves out of limited public beta.