The creative software landscape is shifting from isolated applications to interconnected, conversational systems. Both Adobe and Canva are introducing generational updates—Adobe with its Firefly AI Assistant and Canva with Canva AI 2.0—that place an orchestration layer above individual tools. Instead of toggling between different programs and menus, users will be able to describe an outcome in natural language and let the underlying AI coordinate editing, generation, and export across apps. This article examines how those agents work, what they mean for makers, and the broader market forces that are accelerating the change.
How the new assistants operate across apps
The central idea is to treat the suite as a single, intelligent workspace. Adobe’s assistant was previewed under the codename Project Moonlight and will appear as the Firefly AI Assistant, able to call on Photoshop, Premiere Pro, Lightroom, Illustrator, Express and Frame.io to complete compound tasks. Canva’s AI 2.0 plays a similar role inside its browser-first canvas, coordinating image generation, layout, and export across Canva’s toolset. Both systems maintain session context so that a project’s rules, style choices, and prior edits follow the work rather than being re-entered each time. The intent is to reduce friction by letting a single conversational thread trigger a chain of edits, format conversions, and delivery steps.
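The session-context idea can be sketched in a few lines. Everything here is hypothetical: the class, fields, and function names are invented for illustration and do not correspond to any real Adobe or Canva API; the point is only that project rules and prior edits accumulate and travel with the thread instead of being re-entered at each step.

```python
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    """Project-level rules that persist across a conversational thread (hypothetical)."""
    brand_palette: list[str] = field(default_factory=list)
    history: list[str] = field(default_factory=list)

    def remember(self, step: str) -> None:
        self.history.append(step)

def run_step(ctx: SessionContext, instruction: str) -> str:
    # Each step sees the accumulated context instead of starting cold.
    prompt = f"{instruction} | palette={ctx.brand_palette} | prior={len(ctx.history)} steps"
    ctx.remember(instruction)
    return prompt

ctx = SessionContext(brand_palette=["#0B3D91", "#F9A602"])
run_step(ctx, "color-grade hero image")
print(run_step(ctx, "generate three thumbnail variants"))
```

A second instruction in the same thread automatically carries the palette and the count of prior edits, which is the friction reduction the vendors are promising.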
Orchestration, models, and integrations
Under the hood these assistants act as an agentic layer: they determine which app or model is best suited for each step and then execute the sequence. Adobe has said the assistant will integrate not only its own Firefly models—like the newly announced Firefly Image Model 5 and Custom Models—but also partner models from providers such as Anthropic’s Claude, Google, OpenAI and others. Adobe is also building a visual workflow system called Project Graph, a node-based editor that lets teams assemble reusable, AI-powered pipelines. Canva has invested heavily in its own research unit and leveraged the Leonardo acquisition to embed design-focused models throughout its platform. The result is an ecosystem where multiple models and tools can be chained together automatically.
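A minimal sketch of that agentic layer follows. The tool names, routing rules, and dispatch table are all assumptions made for illustration; real assistants would route with a planning model rather than keyword matching, and nothing here reflects Adobe's or Canva's actual dispatch logic.

```python
from typing import Callable

# Hypothetical tool registry: each entry stands in for an app or model
# the orchestrator can call on for one step of a compound task.
TOOLS: dict[str, Callable[[str], str]] = {
    "raster_edit": lambda task: f"[raster editor] {task}",
    "color_grade": lambda task: f"[video tool] {task}",
    "generate":    lambda task: f"[image model] {task}",
}

def route(task: str) -> str:
    """Pick the best-suited tool for a step (toy keyword rules)."""
    if "grade" in task:
        return "color_grade"
    if "generate" in task or "create" in task:
        return "generate"
    return "raster_edit"

def run_pipeline(tasks: list[str]) -> list[str]:
    # Execute the sequence, dispatching each step to its chosen tool.
    return [TOOLS[route(t)](t) for t in tasks]

print(run_pipeline(["grade the footage", "generate a thumbnail", "retouch skin"]))
```

A node-based editor in the style of Project Graph would essentially let teams author and reuse such pipelines visually instead of in code.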
Practical effects on creative processes
For many creators the new assistants change the role they play: from hands-on maker to project director. A video editor, for example, could instruct the assistant to color-grade footage to a brand palette, generate thumbnail variations, and produce social assets—without manually exporting between apps. That accelerates delivery, but it also surfaces tough trade-offs. The speed and scale enabled by agentic workflows can introduce what some call AI slop: uniform, formulaic output that dilutes an individual's stylistic fingerprint. Creatives will need to decide where to place breakpoints—moments where human judgment intervenes—to preserve authorship while benefiting from automation.
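The breakpoint idea can be made concrete with a short sketch. The step names, the review flag, and the approval callback are invented here; the pattern is simply human-in-the-loop gating, where authorship-sensitive steps pause the automated chain until a person signs off.

```python
from typing import Callable

def run_with_breakpoints(
    steps: list[tuple[str, bool]],      # (step name, needs_human_review)
    approve: Callable[[str], bool],     # human decision, supplied by the caller
) -> list[str]:
    done = []
    for name, needs_review in steps:
        if needs_review and not approve(name):
            # Breakpoint hit: the chain records the hold instead of proceeding.
            done.append(f"{name}: held for revision")
            continue
        done.append(f"{name}: automated")
    return done

steps = [
    ("resize assets", False),
    ("final color pass", True),  # authorship-sensitive: a human decides
]
print(run_with_breakpoints(steps, approve=lambda name: False))
```

Where those flags go is exactly the editorial decision the article describes: too few breakpoints and output drifts toward the formulaic; too many and the throughput gains evaporate.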
Brand control and customized styles
To address consistency concerns, both vendors offer ways to lock down visual identity. Adobe’s Custom Models let teams train private models on their image libraries so that automated outputs follow a specific aesthetic without exposing assets publicly. Canva likewise emphasizes enterprise controls and design systems embedded in its AI layer. These mechanisms aim to balance speed with fidelity: companies can scale asset production while maintaining brand-safe appearances, but the burden of final quality assurance remains with human teams who must vet algorithmic choices.
Market forces and what to watch next
The rollouts come amid intense competition and strategic repositioning. Canva claims hundreds of millions of monthly users and has reengineered its platform after acquiring Leonardo.Ai in 2024, while Figma dominates many UI/UX workflows. Adobe, meanwhile, is betting that the connective tissue between apps—rather than individual interfaces—will be the lasting value of its platform. It has built partnerships across a wide model ecosystem and added features like session memory and Frame.io integration to support review cycles. Organizationally, Adobe is navigating a leadership transition announced in March 2026, which adds another dimension to its strategic shift toward AI-native workflows.
Adoption is optional, and creative professionals will choose their stance: some will cling to artisanal, hands-on processes, others will embrace agentic tools to increase throughput. Either way, these assistants are likely to reshape expectations—what can be produced quickly and what demands a human signature. The coming months, as public betas and previews expand, will reveal whether conversational orchestration elevates creative work or merely changes how output is generated. For now, the message from both companies is clear: the future of creative software is moving toward agentic workflows that blur the lines between tool and collaborator.

