I’ve been thinking a lot about design systems lately.
It started while shipping a new version of SolanaUI. We rebuilt the library with portability in mind, added new components, and layered an AI skill on top so agents could better understand how to work with the system.
That process changed how I think about design systems. The question I kept coming back to was this: what does a design system look like when UI is no longer just assembled by people, but generated by AI?
For a long time, design systems followed a pretty clear model.
Back in my agency days, we built a lot of them. Designers created systems in Figma with tokens, spacing rules, interaction states, and composition patterns. Engineering turned those into code. Components were built carefully, and Storybook sat alongside it all as the place where the system was documented, tested, and made usable in practice.
That was already more than documentation. The rules of the system were encoded in the designs, the components, and the structure around them.
But even with all that, the model was still built around a stable assumption: most UI would be made by assembling a known set of parts.
That assumption starts to break once AI enters the picture.
Agents can now generate new UI components, modify existing ones, and explore variations far faster than a person working manually. And once that becomes possible, it changes what the design system needs to do.
It is no longer enough for the system to say, in effect, “here are the 20 components you can use.”
Now it also has to say: “here are the rules for creating new components that still feel like they belong here.”
The job of the design system used to be making a fixed set of building blocks coherent, reusable, and easy to assemble. That still matters. But in a more generative workflow, the bigger challenge is teaching both humans and agents how to extend the system without losing the thread.
At first, I was mostly focused on the library itself. The components mattered. Their structure mattered. Their portability mattered. But the more I worked on it, the more obvious it became that the components alone were not the whole system anymore.
What mattered just as much was the context around them: how they were intended to be used, what made a component feel like SolanaUI, how patterns should be combined, where flexibility was okay, and where consistency actually mattered.
That is what the AI skill started to capture: not just what exists, but how to make more of it.
In a traditional workflow, a design system gives you the pieces and expects people to assemble them correctly. In an AI workflow, the system also needs to help generate new pieces that follow the same logic. It has to carry enough intent, structure, and pattern knowledge that generated UI still feels coherent.
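To make that concrete, a skill is essentially a structured instructions file the agent loads alongside the components. The excerpt below is a hypothetical sketch of what that context can look like, not the actual SolanaUI skill; the rule wording and headings are my assumptions.

```markdown
# ui-skill (hypothetical excerpt)

## When creating a new component
- Start from the closest existing component and keep its prop shape.
- Use design tokens only; never hard-code colors or spacing values.
- Every interactive element needs hover, focus-visible, and disabled states.

## What makes a component feel like it belongs
- Flat structure over deep abstraction: one file, copyable as-is.
- Compose existing primitives before inventing new ones.
```

The point is that rules like these are written for generation, not just for human review: they tell the agent how to make a new piece, not only which pieces exist.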
Without that, output drifts. You may still get something functional. You may even get something attractive. But it won’t really feel like it belongs to the system.
With the right context, agents can generate UI that is not just usable, but aligned. The system starts acting less like a shelf of approved parts and more like a guide for producing new work inside a defined set of constraints.
That also changes how I think about tools like Storybook and Figma.
They are still design system tools, obviously. But in a more generative environment, they start to matter not just as references for people, but as sources of structure for agents. Storybook’s MCP points in that direction. It makes components, examples, and usage patterns more legible to AI. The role of the system starts shifting from “here is the library” to “here is how this library thinks.”
That changes what good component design looks like too.
In SolanaUI 2, one of the goals was simplicity. Components should be easy to copy, easy to understand, and easy to adapt. That is useful for developers, but it also turns out to be useful for agents. In a generative workflow, rigid abstractions can get in the way. Simpler components make the system easier to extend without breaking its character.
So the design system becomes more flexible, but not looser.
Flexibility does not mean abandoning standards. If anything, AI makes standards more important, because generated UI can drift so quickly. The difference is that consistency can no longer live only in a closed set of finished components. It also has to live in the rules for making new ones.
That includes visual language, accessibility, interaction patterns, composition logic, and brand constraints. If AI is going to generate production UI, those things have to be encoded in a way that can shape generation, not just review it afterward.
The system still provides components. But components are no longer the whole story. Now the system also has to provide the logic for extension.
The more I think about it, the more this feels like a broader shift in creative systems generally.
Back in my agency days, we built similar systems for creative production too. Figma templates, Photoshop templates, and sometimes full tools to help teams produce on-brand work. Social posts, ads, motion graphics. The model was similar: define the approved assets, hand them off, and rely on people to use them correctly.
That is starting to change in the same way.
Recently, instead of building a dedicated social asset generator, we put together a set of strong Remotion examples in code and paired them with agent instructions. The goal was not to lock people into a fixed set of outputs. It was to give them a system they could generate from — flexible enough to adapt quickly, but structured enough to stay on brand.
The most useful systems are no longer just collections of finished assets. They are frameworks for producing new ones.
That is why I think design systems are becoming more generative.
Not because components no longer matter. They do. And not because the old model was wrong. It wasn’t. It worked well for a world where most interfaces were built by manually assembling a known set of parts.
But AI changes the pace and shape of interface creation. When new UI can be generated instantly, the center of gravity shifts. The design system can’t just define what already exists. It also has to define how new things should be made.
It still feels early, and the best patterns are not settled yet. How much should be explicit? How much flexibility is too much? What is the right balance between clear constraints and creative range?
But the direction feels hard to ignore.
Design systems used to be mostly about shipping a stable set of components people could piece together. Now they also need to encode the rules for creating new UI at the speed of generation.
That feels like a pretty fundamental change.
And once you start looking at design systems that way, it becomes hard to see them as just component libraries anymore.