Sync, Scrub, Simulate

Two POCs for Bridging Behavior and Design in Figma #

Lately I’ve been questioning whether this even makes sense anymore.

Not the product idea — the whole direction I’ve been exploring in earlier posts: sketching interaction logic in plain text, upstream of visuals. A way to figure out how a product behaves before you decide how it looks.

It felt promising — until AI started spitting out prototypes in seconds. When the machine can generate an entire UI from a sentence, do we really need slower tools for thinking through flow, state, and behavior?

I don’t have a strong answer. But I’m still curious.

Curious whether interaction modeling still has a place in the design process.

Curious if structure — real structure — can actually help you design, not just document. So instead of writing up a long argument, I built a couple things. Just small pieces. Prototypes.

Side note: I’m not a programmer. Most of the actual coding here was done with the help of Gemini 2.5; I guided it one prompt at a time.

This post shares two of them:

  1. One that syncs a behavior script into a Figma file
  2. One that lets you scrub through interaction steps, like scenes in a timeline

They’re not polished. They don’t fully connect.
But they gave me new questions to think about. And right now, that feels more useful than trying to predict where the field is going.


🧪 Proof of Concept #1: DSL-to-Figma Sync #

This first prototype explores a basic but powerful idea:
What if your Figma file came from a behavior script — not a drawing?

Right now, the plugin uses a placeholder scenario in the simulator front end that mimics a parsed .interactiondsl file. Behind the scenes, I’ve built a Node-based parser that turns structured DSL lines into a JSON model. That model is currently hardcoded into the simulator (not yet pulled from a live file), but the pipeline is in place.
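
To give a sense of the shape, here’s a minimal parser sketch. The real .interactiondsl syntax isn’t shown in this post, so the “Screen > State : note” line format below is purely illustrative.

// Minimal sketch of the Node-side parser. The "Screen > State : note"
// line format is an illustrative stand-in, not the real DSL syntax.
interface DslStep {
  screen: string;
  state: string;
  note: string;
}

export function parseInteractionDsl(source: string): DslStep[] {
  return source
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith("//"))
    .map((line) => {
      const match = line.match(/^(\w+)\s*>\s*(\w+)\s*:\s*(.+)$/);
      if (!match) {
        throw new Error(`Could not parse DSL line: "${line}"`);
      }
      const [, screen, state, note] = match;
      return { screen, state, note };
    });
}

// parseInteractionDsl("RepositoryPage > Active : Initial page load")
// → [{ screen: "RepositoryPage", state: "Active", note: "Initial page load" }]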

Here’s what it does so far:

✅ Parses a simple state flow from a hardcoded script
✅ Renders top-level screens as top-level frames (I still call them artboards) inside a Figma Section titled // sample.interactiondsl
✅ Creates a matching State Notes: [ScreenName] frame under each artboard
✅ Appends simulator messages as text elements into the notes frame (see the sketch below)
🧪 Establishes a one-way WebSocket bridge from the browser-based simulator → Figma plugin → canvas

Why sections? Coming from old-school Sketch, I still think in “artboards.” Figma Sections gave me a way to organize screens cleanly without wrestling with auto layout — and they’re easier to scan and rearrange when you’re just sketching.
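
To make that concrete, here’s roughly the canvas setup from the checklist above, sketched against the Figma plugin API. The model shape, frame sizes, and positions are placeholders, and in the real plugin the parsed model is still hardcoded.

// Minimal sketch of the canvas setup. Sizes and positions are placeholders.
interface ScreenModel {
  name: string;
  notes: string[];
}

async function renderModel(screens: ScreenModel[]) {
  await figma.loadFontAsync({ family: "Inter", style: "Regular" });

  const section = figma.createSection();
  section.name = "// sample.interactiondsl";
  section.resizeWithoutConstraints(screens.length * 500, 1400);

  screens.forEach((screen, i) => {
    // One top-level frame ("artboard") per screen.
    const artboard = figma.createFrame();
    artboard.name = screen.name;
    artboard.resize(390, 844);
    section.appendChild(artboard);
    artboard.x = i * 500;
    artboard.y = 0;

    // Matching notes frame under each artboard.
    const notesFrame = figma.createFrame();
    notesFrame.name = `State Notes: ${screen.name}`;
    notesFrame.resize(390, 400);
    section.appendChild(notesFrame);
    notesFrame.x = i * 500;
    notesFrame.y = 900;

    // Simulator messages appended as text elements into the notes frame.
    screen.notes.forEach((message, j) => {
      const text = figma.createText();
      text.characters = message;
      notesFrame.appendChild(text);
      text.x = 16;
      text.y = 16 + j * 24;
    });
  });
}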

It’s still early, but this setup lays the foundation for treating interaction logic as a source of truth — something you can write once, then visualize instantly. Not a diagram about the UI. A UI from the diagram.


🧪 Proof of Concept #2: Simulator → Stepped Scrubber → Figma #

The second prototype adds motion to the model. If the first POC was about generating screens from behavior, this one’s about scrubbing through those screens like a movie — one simulation step at a time.

In the browser, a lightweight simulator emits structured step events like this:

{
  "type": "simulation-step",
  "payload": {
    "screen": "RepositoryPage",
    "state": "Active",
    "fromEvent": "initial_load",
    "context": {
      "repo_name": "owner/repo",
      "branch": "main"
    },
    "visible": ["RepositoryContent", "Header", "Sidebar"],
    "note": "Initial page load for the repository.",
    "stepIndex": 0,
    "totalSteps": 5
  }
}

These are streamed into the plugin UI via WebSocket.
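
On the receiving end, the socket lives in the plugin’s UI iframe, which is the usual place to hold a connection like this. A sketch, where the URL is an assumption and the payload shape follows the event above:

// ui.ts -- runs in the plugin's UI iframe, which holds the WebSocket.
interface SimulationStep {
  screen: string;
  state: string;
  fromEvent: string;
  context: Record<string, string>;
  visible: string[];
  note: string;
  stepIndex: number;
  totalSteps: number;
}

const steps: SimulationStep[] = [];
const socket = new WebSocket("ws://localhost:8080"); // port is an assumption

socket.onmessage = (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "simulation-step") {
    const step: SimulationStep = message.payload;
    steps[step.stepIndex] = step; // buffer every step for offline scrubbing
    onLiveStep(step);             // hand off to the scrubber logic (next sketch)
  }
};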

Here’s how it works today:

✅ Users manually start the sync — once connected, they see Listening… and a [Detach] button
✅ The scrubber is always interactive: users can jump to any step, regardless of what the simulator is doing (see the sketch after this list)
✅ When the simulator sends a new step, the scrubber updates in real time
✅ The current step’s note is displayed in the plugin UI (not on canvas)
✅ Users can reset to the simulator’s latest step or detach to explore states offline
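
Behind that list is a small amount of state. A sketch, continuing the same ui.ts; renderStep is a hypothetical stand-in for the real slider and note rendering:

// Scrubber state, continuing ui.ts from the previous sketch.
let currentIndex = 0;
let detached = false; // set by the [Detach] button

function onLiveStep(step: SimulationStep) {
  // Live simulator steps move the scrubber in real time, unless detached.
  if (!detached) {
    currentIndex = step.stepIndex;
    renderStep(step);
  }
}

function scrubTo(index: number) {
  // Always interactive: jumping to a buffered step works at any time.
  if (steps.length === 0) return;
  currentIndex = Math.max(0, Math.min(index, steps.length - 1));
  renderStep(steps[currentIndex]);
}

function detach() {
  detached = true; // explore buffered steps offline
}

function resetToLatest() {
  if (steps.length === 0) return;
  detached = false;
  currentIndex = steps.length - 1;
  renderStep(steps[currentIndex]);
}

function renderStep(step: SimulationStep) {
  // Placeholder: move the slider and show step.note in the plugin UI.
  console.log(`Step ${step.stepIndex + 1}/${step.totalSteps}: ${step.note}`);
}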

🛠 Important caveat:
This POC doesn’t interact with the Figma canvas yet. The plugin receives simulation data and updates the UI — but there’s no logic to show/hide frames based on the current state. That logic lives in POC #1. The next milestone is connecting the two.

So for now, this is a UI-based scrubber, not a canvas controller. But the scaffolding is in place.
The plugin knows the current step. The DSL defines which frames exist. The next step is to make those two things talk.
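
For what it’s worth, the hookup itself is probably small. Here’s a sketch of what it might look like once step events reach the plugin code; none of this exists yet, and matching frames by screen name is my assumption:

// code.ts -- doesn't exist yet; a sketch of the POC #1 / POC #2 hookup.
// Assumes frame names match the screen names the DSL produced.
function applyStep(step: { screen: string; visible: string[] }) {
  const section = figma.currentPage.findOne(
    (node) => node.type === "SECTION" && node.name === "// sample.interactiondsl"
  );
  if (!section || section.type !== "SECTION") return;

  for (const child of section.children) {
    if (child.type !== "FRAME" || child.name.startsWith("State Notes:")) continue;
    // Show only the frames the current step marks as visible.
    child.visible = child.name === step.screen || step.visible.includes(child.name);
  }
}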


Next Steps #

These two POCs don’t form a product. They don’t solve a grand problem.
But they’ve already changed how I think.

They’ve helped me see interaction logic not as documentation, but as material. Something you can build with — not just describe after the fact.

And maybe that’s the point.

I’m not trying to predict where design is headed. I’m not competing with AI tooling or trying to outpace it. I’m just trying to stay close to the work — to explore what still feels useful when things get faster, fuzzier, and more automated.

Right now, that means building slower tools.
Ones that make you think before you draw.
Ones that treat behavior like a first-class citizen.

What’s next: connecting the two POCs, so the scrubber from POC #2 actually shows and hides the frames that POC #1 creates.

It’s early. It’s messy. But it’s clarifying.

And that’s more than I expected.

🙏🙏🙏

Since you've made it this far, sharing this article on your favorite social media network would be highly appreciated 💖! For feedback, please ping me on Twitter.
