Adobe Firefly Generates More Realistic Images

Adobe has unveiled enhanced capabilities that let Firefly generate more lifelike images.

At its annual creativity conference, MAX, Adobe revealed updates to the underlying models powering Firefly, its AI-driven image-generation service. These updates include the introduction of the Firefly Image 2 Model, designed to significantly improve the rendering of human subjects, including facial features, skin, bodies, and hands, areas that have posed challenges for previous models.

Adobe also disclosed that Firefly users have collectively generated a staggering three billion images since the service's launch just half a year ago, with one billion generated in the past month alone. It's worth noting that the majority of Firefly users, 90%, are entirely new to Adobe's product ecosystem. This surge in usage led Adobe to transform the previously experimental Firefly demo site into a full-fledged Creative Cloud service.

Alexandru Costin, Adobe's VP for generative AI and Sensei, highlighted that the new model wasn't solely trained on more recent images from Adobe Stock and other authorized sources but is also significantly larger. 

Costin explained,

"Firefly is an ensemble of multiple models, and I think we've increased their sizes by a factor of three. So it's like a brain that's three times larger and that will know how to make these connections and render more beautiful pixels, more beautiful details for the user." 

Additionally, the training dataset was nearly doubled, improving the model's comprehension of user requests.

While the larger model demands more resources, Costin assured that it would maintain a similar processing speed to the initial model. Adobe is actively working on optimizing resource utilization to ensure a consistent user experience without incurring exorbitant cloud costs. Currently, Adobe's primary focus is on improving image quality.

The updated model will initially be accessible through the Firefly web app and is slated to be integrated into Adobe Creative Cloud apps, such as Photoshop, in the near future. Costin emphasized Adobe's approach to generative AI, which prioritizes generative editing over pure content creation. He stated,

"What we've seen our customers do, and this is why Photoshop generative fill is so successful, is not generating new assets only but it's taking existing assets—a photo shoot, a product shoot—and then using generative capabilities to enhance existing workflows. So we're calling our umbrella term for defining generative as more generative editing than just text-to-image because we think that's more important for our customers."

The updated model also introduces several new controls in the Firefly web app, enabling users to set depth of field, motion blur, and field of view for their images. Furthermore, users can now upload an existing image and have Firefly match its style. The web app also features a new auto-complete function to assist users in crafting more effective prompts for image generation.
