2025
Feature
My Contributions
Stakeholder-driven Research
Feature Design
Interactive Prototypes
Impact
Error Clarity
Faster Reviews
Reduced User Confusion
Skills & Tools
Figma & Framer
Quick Sketching
Error Handling UX
When AI flags the wrong errors or misses real ones in unfamiliar languages, humans have to step in. I designed a lightweight review flow that gives translation teams control over AI audio proofing results without needing to understand the spoken language. It helps them clean up AI mistakes quickly and consistently.
Fewer false positives, faster reviews
The interface lets teams handle errors flagged by AI audio proofing with confidence. Reviewers can now dismiss false positives, flag missed issues, and teach the system over time. Everything happens in one place. In early adoption, the AI-assisted flow cut review time by 75% and reduced noise from irrelevant errors.
The final design shows color-coded proofing results with options to resolve each error, helping teams scale detection while staying in control.
The problem
To verify audio translations in languages they don’t speak, teams rely on AI to compare audio recordings against written scripts. The system highlights mismatches, but it’s far from perfect. It flags errors that aren’t real and misses ones that matter. Without a way to review or correct these results, reviewers were stuck with unreliable data and no way to improve the process.
Raw results from the AI.
Understanding the context
We started with stakeholder meetings to define needs and constraints. From there, I ran contextual inquiries to understand how reviewers approached audio proofing. Interviews revealed confusion around the AI feedback itself, how to interact with the system, and what to do with flagged errors. Usability testing sessions then helped fine-tune interactions for clarity and speed.
Turning insights into design
The research surfaced a few key questions:
How might we reduce hesitation when interacting with flagged errors?
How might we simplify the process of correcting AI mistakes without creating new confusion?
How might we help users trust AI results they cannot verify themselves?
These questions shaped the design of clear entry points, simple controls, and color-coded feedback to support confident decisions even without knowing the language.
We simplified actions while building on existing habits, helping users dismiss or edit flagged errors with greater confidence and less hesitation.
Users can select any text to verify and hear the corresponding recorded audio.
Conclusion
Even small features can have high stakes. Designing for unknown languages meant leaning hard on clarity, feedback, and trust. With just one meeting and minimal iteration, I translated a complex problem into a simple, scalable solution that fit right into an existing tool.
About me
I'm originally from Brazil and moved to the United States to expand my horizons. I'm a passionate, hard-working designer who strives for growth and success, continuously deepening my understanding of how the human brain works so I can craft effective designs for everyone, guided by problem solving and simplicity. What I enjoy most is collaborating with others to share ideas and deliver impactful solutions for stakeholders.