Adobe introduces GenStudio and Structure Reference for Firefly AI


Adobe was one of the first and largest companies to jump on the generative AI bandwagon, releasing its commercially safe Firefly AI image generation and editing model almost a year ago, in March 2023.

Now, as it celebrates the model’s first anniversary and several months after the release of Firefly 2, the creative software giant is introducing an all-new “GenStudio” application designed to help enterprise users and brands create generative AI assets for campaigns and publish them online or to their digital distribution channels.

It’s also introducing a new feature that it hopes will give customers even more control over – and therefore a greater reason to use – AI image generation.

The new feature, called “Structure Reference,” allows users of Adobe Firefly’s standalone text-to-image application to upload an image that will guide subsequent image generations – not in style or content, but in structure: the arrangement of the objects and figures in it.

The features were first officially unveiled to the public at Adobe Summit, the company’s annual conference taking place this week (March 25-28, 2024) at the Venetian Convention and Expo Center in Las Vegas.

How Adobe’s new GenStudio works

GenStudio is designed to serve as a central hub for brands, offering a comprehensive set of tools for planning marketing/advertising/promotional campaigns, creating and managing content, activating digital experiences across channels and measuring performance.

Adobe’s vision for the new app – part of its Creative Cloud subscription suite – is to simplify and streamline content generation.

It enables brands and enterprise users to track and view campaigns, manage briefs and view assigned tasks, and is integrated with Adobe Workfront, Adobe’s project management software.

Users can generate different variations of marketing assets for different distribution channels, ensuring that content is audience-focused and on-brand. GenStudio also alerts users if content deviates from brand standards, offering suggestions for adjustments.

It also serves as a content hub, connecting to Adobe Experience Manager resources and allowing users to search for resources and create personalized variations using Firefly in Adobe Express.

In addition to GenStudio, Adobe announced Adobe Experience Platform AI Assistant, a conversational interface in Adobe enterprise software designed to increase productivity and drive innovation in teams.

This assistant is able to answer technical questions, automate tasks, and generate new audiences and funnels.

Adobe’s commitment to integrating generative AI capabilities also extends to specific applications, such as variant generation with Adobe Experience Manager and Adobe Content Analytics. These innovations enable brands to instantly create personalized variations of marketing assets and align AI-generated content with performance goals.

The importance of these updates cannot be overstated: they underscore Adobe’s role as a global leader in digital experience platforms and a trusted partner to enterprises around the world. With 11,000 customers worldwide spanning many industries, Adobe hopes to make its generative AI tools available to a wide range of users.

How Adobe Firefly’s new Structure Reference feature works

Video demo of the new Structure Reference feature in Adobe Firefly. Source: Adobe

Adobe’s vice president of generative AI and Sensei, Alexandru Costin, and Firefly group product manager Jonathan Pimento briefed VentureBeat on a video call yesterday about Structure Reference.

As shown in the animated GIF above, you can click a button, upload an image of a rock formation or hill in the desert, type in “a castle found deep in the forest with moss growing on the stone walls,” and it will generate a castle in the same position, matching the shape, size, and placement of the original rock formation.

This is essentially what Structure Reference makes possible: taking one image and generating new ones that may be completely different stylistically, but whose internal elements match the arrangement and size of those in the first image.

For the VentureBeat demo, Adobe executives chose an image of a living room. They then entered the text “cathedral,” and the AI model generated a new image that looked like a living room inside a cathedral, complete with stained glass windows and lavish couches.

This is an advanced feature for users who want more control over the image generator’s results – beyond text prompts or “style reference,” which can still be used. (The “style reference” feature makes the images generated by Firefly try to maintain the color scheme and artistic style of the uploaded images; “structure reference,” by contrast, ignores style and preserves only the arrangement and size of objects.)

Additionally, users can actually upload initial, simple, hand-drawn concept sketches — the Adobe team we met with showed us a colleague’s sketch of a tiny house on a crescent moon — and Firefly can use them as a “structure reference,” allowing the user to go from concept art to a fully realized, colored and shaded illustration in seconds.

Who is Firefly’s Structure Reference intended for?

These capabilities may be particularly useful for artists and designers working in creative agencies, independently for clients, on marketing and advertising campaigns, on film storyboards, or on other creative work in the enterprise where consistency, repeatability, and precision are important.

A big problem with AI image generation models in general – from Midjourney to OpenAI’s DALL-E and Firefly itself – is that, because of the way they work, they inherently generate very different images each time they are used, even if some of the same keywords are reused in text prompts.

Other AI image generators have attempted to give users greater control over AI-generated results through various methods. For example, Midjourney recently added a “character reference” feature that aims to consistently reproduce characters from a single generation across multiple images, similar to its earlier “style reference” parameter.

Adobe’s Structure Reference is a new way to solve this problem, and based on our early look, it seems extremely promising.

Some Firefly features are also built into various Adobe Creative Cloud apps, including Adobe Photoshop, Illustrator, Express, and Substance 3D – but unfortunately, the new Structure Reference feature is limited to the standalone Firefly app for now.

However, Adobe’s claims that Firefly is commercially safe – and its policy of offering users compensation or some legal assistance if they are challenged or sued over use of its outputs – make the AI model extremely attractive to enterprises. Adobe notes that unlike other AI models trained by scraping the web, including Midjourney and DALL-E, it trained Firefly only on images it already had a license to use: more than 400 million images from Adobe Stock.

However, Adobe Stock contributors have previously expressed to VentureBeat their disappointment and dismay that their images and photos were used to train Firefly, which they believe competes with them.

VentureBeat uses AI image generators, including those mentioned in this article, to create images of article headlines and other resources.