

Case Study: Bringing Clarity to Complexity — Designing PLASK AI's First End-to-End Workflow
Overview
As the first UX designer at PLASK AI, I joined a small, fast-moving team building AI-powered motion capture software. The technology was cutting-edge — you could upload a webcam video and turn it into animatable 3D motion — but for many early users, the experience felt unintuitive and incomplete. My goal was to transform the tool into something creators could not only test, but trust and return to.
The Problem I Discovered
Shortly after joining, I noticed a trend: many users would upload a video but never finish a complete mocap-edit-export workflow. Through usability interviews, product analytics, and customer support insights, I realized users didn’t struggle with the core tech — they struggled with knowing what to do next.
The motion data was raw and overwhelming. There was no clear guidance on how to clean, adjust, or export it properly. This gap created friction for animators and blocked teams from evaluating PLASK for real production use.
My Approach
I sketched out the user journey from first landing on the tool to exporting motion files. This helped expose weak points, especially the editing step, where most people gave up.
I then proposed a new structure that simplified the process into a three-step flow:
Upload → Edit → Export
Each step had a focused UI and toolset, with contextual help baked in. The goal was to reduce decision fatigue and help users build confidence in the system.
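To make the structure concrete, here is a minimal sketch of how a three-step flow like this could be modeled: a single ordered chain of steps, each carrying its own contextual help. The identifiers and help copy are illustrative assumptions, not PLASK's actual implementation.

```typescript
// Hypothetical model of the three-step flow: an ordered chain of steps,
// each with its own contextual help. Names and copy are illustrative only.

type Step = "upload" | "edit" | "export";

interface StepConfig {
  title: string;
  help: string;       // contextual help surfaced inside the step
  next: Step | null;  // null marks the end of the flow
}

const FLOW: Record<Step, StepConfig> = {
  upload: {
    title: "Upload",
    help: "Drop in a webcam or phone clip. Shorter videos process faster.",
    next: "edit",
  },
  edit: {
    title: "Edit",
    help: "Clean up the extracted motion, or run Motion Fix for one-click repair.",
    next: "export",
  },
  export: {
    title: "Export",
    help: "Pick an output format and download the result.",
    next: null,
  },
};

// There is exactly one way forward from each step, so users never face a branch.
function advance(current: Step): Step | null {
  return FLOW[current].next;
}
```

Keeping `next` as a single value rather than a set of branches is what enforces the one-decision-at-a-time feel the redesign was after.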
While reviewing user sessions and support feedback, I noticed a recurring pain point: the mocap output often contained jittery or broken joint data, making animations unusable without significant manual cleanup. For many users — especially those newer to 3D workflows — this became a dead end.
I proposed a “Motion Fix” feature: a one-click tool that would automatically smooth and repair common issues using PLASK’s existing AI models. To make the case, I presented findings from user interviews that showed clear frustration at the editing stage, paired with visual examples of what a clean output could look like. By reframing the problem through the user’s eyes, I helped shift the conversation from technical limitations to user outcomes, and got the team aligned on prioritizing the feature.
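The shipped Motion Fix feature leaned on PLASK's existing AI models; the sketch below is only meant to illustrate the class of cleanup it automates, using a hypothetical frame format and a basic exponential moving average to damp per-joint jitter.

```typescript
// Illustrative only: the real Motion Fix uses PLASK's AI models.
// This sketch shows the kind of problem it automates, smoothing per-joint
// jitter with an exponential moving average over a hypothetical frame format.

interface JointFrame {
  // Rotation per joint as Euler angles in degrees, keyed by joint name.
  rotations: Record<string, [number, number, number]>;
}

function smoothJitter(frames: JointFrame[], alpha = 0.4): JointFrame[] {
  if (frames.length === 0) return [];

  const smoothed: JointFrame[] = [frames[0]];
  for (let i = 1; i < frames.length; i++) {
    const prev = smoothed[i - 1].rotations;
    const curr = frames[i].rotations;
    const rotations: JointFrame["rotations"] = {};

    for (const joint of Object.keys(curr)) {
      const p = prev[joint] ?? curr[joint];
      // Blend each axis toward the previous smoothed value to damp jitter.
      rotations[joint] = curr[joint].map(
        (angle, axis) => alpha * angle + (1 - alpha) * p[axis],
      ) as [number, number, number];
    }
    smoothed.push({ rotations });
  }
  return smoothed;
}
```

In practice a cleanup pass would operate on quaternions to avoid Euler wrap-around, and would also interpolate over broken or missing joints, but the shape of the problem is the same: noisy per-frame values that need temporal consistency before an animator can use them.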
Why I Made These Decisions
The most important insight I learned early on was that even highly technical users still wanted predictability and a clear sense of progress. That insight informed every design decision.
A significant challenge was timing: I often had to design flows while the underlying features were still in development. I learned to stay flexible and to use prototypes as a shared space for quickly aligning design, product, and AI teams around one vision.
Defining Success
Rather than wait for assigned KPIs, I helped define success around user outcomes: could a creator get from raw video to a usable, exported animation without getting stuck?
To track this, I monitored drop-off points, reviewed session recordings, and conducted interviews with repeat users. This gave us a clear signal when changes made the product more usable, and when they didn’t.
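As an illustration of the kind of signal we watched, here is a small hypothetical sketch of measuring drop-off across the three steps from raw analytics events; the event names and schema are assumptions, not PLASK's actual analytics.

```typescript
// Hypothetical funnel measurement: count distinct users reaching each step and
// express Edit/Export as a share of the users who started at Upload.

type FunnelStep = "upload" | "edit" | "export";

interface AnalyticsEvent {
  userId: string;
  step: FunnelStep;
}

function funnelConversion(events: AnalyticsEvent[]): Record<FunnelStep, number> {
  // Distinct users who reached each step.
  const reached: Record<FunnelStep, Set<string>> = {
    upload: new Set<string>(),
    edit: new Set<string>(),
    export: new Set<string>(),
  };
  for (const e of events) reached[e.step].add(e.userId);

  const started = reached.upload.size || 1; // avoid divide-by-zero
  return {
    upload: 1,
    edit: reached.edit.size / started,
    export: reached.export.size / started,
  };
}
```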

Outcomes & Impact
After we launched the redesigned flow and the Motion Fix tool, the product didn't just look better: it became genuinely usable, which unlocked traction for the business.
Final Reflections
This project taught me the value of deeply understanding the user journey before solving surface-level UI problems. It also reminded me that design can be a powerful force for alignment, not just for users, but across teams. By visualizing the flow, advocating with evidence, and focusing on outcomes, I helped turn a promising tool into a product that creators could actually use.


