By Prya
Adobe Firefly custom models enter public beta, letting designers train AI on their own work to replicate character, illustration, and photographic styles.
Announced on March 19, 2026, the feature asks designers to upload between 10 and 30 of their own images — JPG or PNG, at a minimum resolution of 1,000 pixels — and Firefly trains a personalized model aligned to that aesthetic. The process costs 500 credits. Once built, the model becomes a reusable asset: generate new work from it across projects and campaigns without losing the visual consistency that took years to establish.
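Adobe has not published a programmatic upload API for this beta, so the only automation that can be shown here is a hypothetical local pre-flight check. The sketch below, in Python with Pillow, validates a folder of candidate images against the requirements stated above (10 to 30 files, JPG or PNG, at least 1,000 pixels); the folder name, the helper name, and the assumption that the 1,000-pixel minimum applies to the shorter side are illustrative, not Adobe's specification.

from pathlib import Path
from PIL import Image

# Hypothetical pre-upload check, not an Adobe API call.
# Assumes the 1,000-pixel minimum applies to the image's shorter side.
ALLOWED_SUFFIXES = {".jpg", ".jpeg", ".png"}
MIN_PIXELS = 1000
MIN_COUNT, MAX_COUNT = 10, 30

def validate_training_set(folder: str) -> list[str]:
    """Return a list of problems found; an empty list means the set looks uploadable."""
    problems = []
    images = [p for p in Path(folder).iterdir() if p.suffix.lower() in ALLOWED_SUFFIXES]

    if not MIN_COUNT <= len(images) <= MAX_COUNT:
        problems.append(f"found {len(images)} images; expected {MIN_COUNT}-{MAX_COUNT}")

    for path in images:
        with Image.open(path) as img:
            if min(img.size) < MIN_PIXELS:
                problems.append(f"{path.name}: {img.size[0]}x{img.size[1]} is below {MIN_PIXELS} px")

    return problems

if __name__ == "__main__":
    for line in validate_training_set("style_samples") or ["training set looks ready to upload"]:
        print(line)

Running it before a training session is a cheap way to avoid spending the 500 credits on a set that would be rejected or underperform because of undersized images.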
The three style categories currently supported tell the story. For illustration, Firefly custom models preserve stroke weight and fill consistency — the difference between output that looks on-brand and output that just looks like AI. For character design, the model holds the same figure reliably across scenes, angles, and contexts. For photographic styles, a specific visual look repeats across any number of images without manual re-prompting.
How Adobe Firefly custom models scale creative output
For individual designers, the trained model is a reusable foundation that carries their signature into new work. For brand teams producing high volumes of content, Firefly custom models are a consistency mechanism — every asset generated, regardless of who submits the prompt, holds to the same look. The model is private by default, so generated content stays owned by the creator.
This update coincides with Firefly adding more than 30 industry models to its platform, including Google's Nano Banana 2 and Veo 3.1, Runway Gen-4.5, Kling 2.5 Turbo, and Adobe's own Firefly Image Model 5, now generally available. Adobe positions Firefly as the only platform where a designer can generate with one model, refine with another, and continue editing in Adobe's professional tools without switching apps.
The same release shipped Quick Cut, a video tool that turns raw footage into a structured first cut in minutes, alongside expanded image editing capabilities for adding and removing objects and extending scenes. Project Moonlight — Adobe's conversational agentic interface — also moved into wider private beta, working across Photoshop, Express, and Acrobat to execute real edits from natural language instructions.
Adobe says Firefly custom models will expand beyond the three current style categories as feedback from real-world creative workflows shapes the feature's next iteration. For designers who have spent years building a distinctive visual identity, anchoring AI output to that work — rather than fighting its tendency toward generic results — is a meaningful shift in what AI can actually do for a creative practice.