ATLASSIAN – LOOM WORKFLOWS
/Product Design /AI Prototyping /Design Sprint
I solo-drove this project as a 2-week design sprint in March 2026. I went wide in Week 1, rapidly exploring a variety of concepts by iterating in AI design workflows. At the end of the week, the team aligned around a single, high-conviction path forward that I then started manually refining in Figma in Week 2.
Partway through Week 2, I was impacted by Atlassian's 10% layoffs. Despite not knowing the sprint's end result, I've written this case study as the best and most recent representation of the impact I can have in just 7 working days, and of how AI continues to transform my approach to product strategy and design.
Mar 2026, 2 week sprint
Async Workflows
Product Designer
Figma, Figma Make, Claude
AI tools like Claude and Cursor are transforming how people get work done – but the input is still always a written prompt. Users type out what they want, copy-paste screenshots, and manually link to Jira tickets and Figma files to give AI agents enough context to act. It's slow and lossy, and the specialized alternatives have a high barrier to entry. You'd spend minutes writing a prompt to describe something you could show in seconds.
Now, enter video. Loom was founded on the idea that showing always beats telling. A single Loom screen recording naturally captures things that text prompts struggle to convey. This means that Loom is uniquely positioned to be more than just a video communication tool – it could be the fastest, richest way to give an AI the context it needs to actually do the work. But what would that actually look like? I embarked on a design sprint to find out.
The initial ask was to explore a range of possibilities for what "Loom for vibe coding" could look like. Using our product requirements as a foundation, I guided Claude to shape two design specs – a blue-sky exploration that would test the boundaries of current tech, and a fully-feasible MVP anchored on quick execution. I then fed the design specs into Figma Make to create interactive demos to show in leadership updates.
The blue-sky demo explored the idea of seeing your vibe code update the actual UI in near real-time. For example, a UX designer could start recording a Loom over their product, and then dictate their desired UI changes. AI would then code the UI changes in a staging layer and display them on-screen, visible in the recording but leaving any actual code untouched until a post-record approval process.
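To make the "staging layer" idea concrete, here's a rough sketch of how staged changes could be kept separate from real source code – purely my own illustration with hypothetical names, not anything we built:

```ts
// Hypothetical sketch: staged UI changes live in an overlay, never in source,
// until the user approves them after the recording ends.
interface StagedChange {
  id: string;
  targetSelector: string;               // element the change applies to, e.g. ".nav-bar"
  cssOverrides: Record<string, string>; // visual-only overrides shown on screen
  transcriptExcerpt: string;            // the dictated command that produced this change
  approved: boolean;
}

class StagingLayer {
  private changes: StagedChange[] = [];

  // Render a change on screen during the recording; the underlying code stays untouched.
  stage(change: StagedChange): void {
    this.changes.push({ ...change, approved: false });
  }

  // After recording, only approved changes are handed off for a real code edit.
  approvedChanges(): StagedChange[] {
    return this.changes.filter((c) => c.approved);
  }
}
```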
While the concept was fascinating and bold, the technological investment required would far exceed our resources, and in a world where AI capabilities are progressing at lightspeed, we wanted to keep time-to-value to a minimum. That brings us to the feasible MVP.
To get around the heavy investment required to display real-time code updates, I explored the concept of capturing the user's dictated commands and on-screen context (URLs, cursor gestures, key screenshots). Package it all up after recording, and it becomes a viable prompt for Atlassian's own AI coding tool (Rovo Dev) to execute. At this stage, leadership was pushing for us to build primarily with Rovo Dev in mind.
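For illustration, the captured context might be packaged into a structure like the one below before being handed to a coding agent. This is a hypothetical sketch of my own; the field names aren't Loom's or Rovo Dev's actual schema:

```ts
// Hypothetical shape of the post-recording "context package".
interface ContextPackage {
  recordingId: string;
  transcript: string;      // the user's dictated commands
  visitedUrls: string[];   // pages shown during the recording
  cursorGestures: Array<{ timestampMs: number; description: string }>;
  keyScreenshots: Array<{ timestampMs: number; imageUrl: string }>;
}

// Flatten the package into a single prompt a coding agent could execute.
function toPrompt(pkg: ContextPackage): string {
  return [
    `Transcript:\n${pkg.transcript}`,
    `Pages referenced:\n${pkg.visitedUrls.join("\n")}`,
    `Key screenshots:\n${pkg.keyScreenshots.map((s) => s.imageUrl).join("\n")}`,
  ].join("\n\n");
}
```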
The MVP concept seemed promising, but as I squinted at it further, it fell short of three key fundamentals:
To address these goals, I dove back into Figma Make to experiment with a third option: the Context Engine.
This demo expands upon the MVP in some important ways:
My favorite aspect of this exploration actually isn't even in the demo: Loom as a context engine wouldn't be limited to tech use cases. My starting point of "how might I vibe code with Loom" had evolved into "how might I accomplish more with AI through Loom?"
I gave daily async updates throughout the design sprint, but towards the end of Week 1, we held a larger review. I walked leadership through my three demos, expecting one of the first two to gain traction. After all, they hewed closest to the original brief: to explore vibe coding with Loom.
Instead, product leadership took a strong interest in the context engine. Our new goal was to focus on Loom as a metadata-gathering tool, while avoiding the deep complexity of handling the actual execution. In short, I needed to design Loom to be the best way to talk to AI.
Specific feedback guided my next steps:
After successfully aligning the team on a defined direction, I began fine-tuning the design in Figma, while still relying on Figma Make for quick brainstorming and animations.
Loom's recorder UI is the highest-trafficked part of its experience, so any modifications are high-stakes. We needed a scalable way to present multiple recording modes: the default "Record a Video", the recently added "Bug Report" mode (from my other case study), and the proposed context engine (now titled "Make with AI"). I leaned on Figma Make to generate 15+ variations, and then jammed with design teammates to narrow them down. Here's where we landed:
"Make with AI" would be a brand-new way to use Loom, and users needed to understand that the end value was going to be a context file instead of just a video. To provide early clarity, I leaned on my prior pattern from Bug Report mode to outline initial steps for users to find success. Now, switching into a new mode would come with adequate explanation for first-time users.
Loom's in-record UI is traditionally subtle, mainly used for stopping/pausing the recording. In our case, it needed to have a much bigger role:
I manually designed a few key states in Figma, and then took it into Figma Make to animate. The end result was a little finickier than I wanted, but still a strong representation of the motion experience I had imagined. Motion design that would've taken me hours in Figma or Principle took only a few minutes and prompts in Figma Make.
The biggest net-new task from the feedback round was to design a "user review" step after recording. Users needed to be able to scan, edit, and delete individual pieces of captured context before executing the prompt in their AI tool. I reframed the problem for myself: how might I take a jumble of metadata and present it in an intuitive way?
I anchored my approach on the idea of bucketing – by categorizing metadata based on the user's verbalized goals, I could present it in a way that aligned with the user's own mental model. This would also allow users to quickly verify whether Loom's interpretation of their dictated commands was correct.
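A minimal sketch of what bucketing could look like under the hood – again, my own hypothetical illustration rather than production logic:

```ts
// Hypothetical sketch of grouping captured metadata under the user's spoken goals.
interface ContextItem {
  type: "url" | "screenshot" | "gesture" | "transcriptSnippet";
  content: string;
  relatedGoal: string; // which verbalized goal this item supports, per the AI's interpretation
}

// Group items by goal so the review screen mirrors the user's own mental model.
function bucketByGoal(items: ContextItem[]): Map<string, ContextItem[]> {
  const buckets = new Map<string, ContextItem[]>();
  for (const item of items) {
    const group = buckets.get(item.relatedGoal) ?? [];
    group.push(item);
    buckets.set(item.relatedGoal, group);
  }
  return buckets;
}
```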
In addition, I experimented with visualizing the context as piles of metadata. While a bit of a risk, I felt that it was an intuitive representation of how "Make with AI" works, and would fit nicely when fanned out into an organized array. Compared to my early revs that displayed the dense prompt, laying out the context in the form of sorted clusters felt much more digestible, powerful, and genuinely exciting.
I shaped the final narrative around a typical debugging use case: a developer would use "Make with AI" to resolve a customer-reported bug, update the front-end according to Figma, and then update their task board accordingly, just by recording with Loom. Below is the last end-to-end prototype that I created for this project, starting from within an AI agent (Rovo Dev) and ending in Jira with work items created.
After 6+ async daily updates, 3 reviews with leadership, and many long nights of prototyping over 7 working days, I was impacted by layoffs. My hard work would come to an end just two days before the final review with our SVP of Product. And despite how it ended, this project had completely re-energized me and pushed my AI design skills to new heights.
AI prototyping with Figma Make + Claude was an absolute game-changer. Rather than spending days pushing pixels on three separate concepts, I was able to articulate each direction through working prototypes in a fraction of the time. Leveraging Claude to create design specs for Figma Make allowed me to get 90% of the way there with the first prompt. I enabled my team to react to experiences instead of static mockups – and made the tradeoffs between directions viscerally clear in a way that would have taken days manually.
If given more time, I would have built my prototype into a local version of Loom.com using Cursor. We had also planned an in-record annotation tool that would let users directly highlight key UI elements to capture. I truly believe there is something revolutionary behind "Make with AI," and I hope to work on another promising, bold project in the future.