Can design tools keep up?
Are we making software or pictures of it?
Earlier this week, Apple unveiled its newest design language: Liquid Glass. While I'm holding back from diving into the specifics of how good or accessible it might be, especially considering it's still in developer beta (and those are notoriously rough around the edges), one critical question stands out to me:
What does Liquid Glass mean for our current design tooling?
Liquid Glass isn't just the familiar Gaussian blur effect we've seen before. This new design language is fundamentally a shader: intricate refraction and dynamic visuals computed in real time.
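To make the "it's a shader" point concrete, here's an illustrative sketch of the core math a fragment shader runs per pixel to bend the background "through" a glass layer. This is just the standard GLSL-style refraction formula transcribed to TypeScript, not Apple's implementation; the vectors and index-of-refraction ratio are my own example values.

```typescript
type Vec2 = { x: number; y: number };

// GLSL-style refract(): bends a normalized incident direction I through a
// surface with normalized normal N, where eta is the ratio of refractive
// indices (air -> glass is roughly 1 / 1.5).
function refract(I: Vec2, N: Vec2, eta: number): Vec2 {
  const d = I.x * N.x + I.y * N.y;       // cosine of the incident angle (negated)
  const k = 1 - eta * eta * (1 - d * d); // Snell's law discriminant
  if (k < 0) return { x: 0, y: 0 };      // total internal reflection: no refracted ray
  const f = eta * d + Math.sqrt(k);
  return { x: eta * I.x - f * N.x, y: eta * I.y - f * N.y };
}

// A flat surface hit head-on bends nothing; curved edges are where the
// distortion (and the "liquid" look) comes from.
const straight = refract({ x: 0, y: -1 }, { x: 0, y: 1 }, 1 / 1.5);
```

A real shader evaluates this for every pixel, every frame, with the normal coming from the glass shape's curvature, which is exactly why a static blur can't fake it.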
Something we keep seeing is that whenever Apple moves, especially in design, the market moves with it. I don't expect glass everything everywhere, but the notion that UI has a material quality to it is something I'm honestly looking forward to.
We're entering an era where smartphone processing power is no longer the limiting factor in UI design. Honestly, this felt impossible a few years ago, when most smartphones couldn't render animation at 60fps. On the web, though, it's already becoming reality with tools like Rive, which let designers build interactive, motion-rich components with runtime behaviors embedded directly into their products. This approach brings design and implementation together in a way that feels more like programming motion than illustrating it. For game designers using engines like Unity, this level of physicality and real-time rendering has long been standard practice. The question is: is traditional product design tooling, the kind made for apps, SaaS, and mobile interfaces, beginning to converge with these more dynamic design paradigms? It sure looks like the lines are starting to blur (pun intended).
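To show what "programming motion rather than illustrating it" means in practice, here's a minimal sketch (not Rive's actual API — the class, numbers, and loop are my own illustration) of what a runtime-driven component does that a static artboard can't: it steps a physical spring every frame, so the motion adapts to wherever the target is right now instead of replaying baked keyframes.

```typescript
// Hypothetical runtime-motion sketch: a damped spring integrated per frame.
class Spring {
  position = 0;
  velocity = 0;
  constructor(
    private stiffness = 170, // pulls position toward the target
    private damping = 26     // bleeds off velocity (~critically damped here)
  ) {}

  // Advance one frame toward `target`; dt is the frame time in seconds.
  step(target: number, dt: number): number {
    const force =
      this.stiffness * (target - this.position) - this.damping * this.velocity;
    this.velocity += force * dt;   // semi-implicit Euler integration
    this.position += this.velocity * dt;
    return this.position;
  }
}

// The "runtime" loop: the target could change mid-flight and the spring
// would simply adapt, which is the whole point of motion-as-behavior.
const spring = new Spring();
for (let frame = 0; frame < 120; frame++) spring.step(100, 1 / 60);
```

A click-through prototype can only show one pre-recorded path through this motion; the behavior itself is what ships.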

Still, there's a real tension here. Tools like Figma may not be able to keep up with how dynamic and physical interfaces are getting. Half of Twitter already has mockups of how this could work, but all of them, so far, fall a bit flat. Static mocks simply can't convey the tactile, responsive nature of shader-driven interfaces. Framer, perhaps because of its closer ties to actual implementation, might bridge this gap sooner.
Honestly, this shift isn't new. What I've been writing about since Cursor became standard practice is that static click-through prototypes are obsolete, and we have no reason to keep making them. We're witnessing a necessary transformation, one that pushes tools toward real-time, interactive prototyping closer to actual product fidelity. And that's a good thing.
The pace isn't slowing. Cursor writes React and Three.js better every day. v0 spits out live pages in seconds. Our current set of design tools is starting to feel like a waiting room. The next wave likely skips the artboard entirely: real software, not pictures of it. That's where we're heading.
What's left for us is to keep building, stay sharp, and not get stuck worshipping the old tools.


