My UX process is iterative and user-centered, moving from discovery through research, synthesis, ideation, design, validation, handoff, and post-launch measurement. I begin by aligning stakeholders on business goals, success metrics, constraints, and target users, then run qualitative and quantitative research such as interviews, analytics, surveys, and competitive reviews to surface user needs, pain points, and mental models. I synthesize findings into personas, journey maps, prioritized problem statements, and "How Might We" opportunities, then ideate broadly with sketches and low-fidelity wireframes before developing high-fidelity designs and interactive prototypes that follow accessibility and design system standards. I validate solutions with usability testing and A/B experiments, iterate based on evidence, and deliver engineering-ready assets with clear specs and documentation while supporting QA. Post-launch, I monitor KPIs and user behavior, run further tests, and iterate. Throughout, I emphasize cross-functional collaboration, testing the riskiest assumptions first, inclusive design, and measurable outcomes to ensure we build the right thing and implement it effectively.
Interviews: Use interviews to explore users’ motivations, needs, and mental models in depth. They surface qualitative insights, context, and stories you cannot get from numbers, help uncover unmet needs and pain points, and validate assumptions early in the design process. Best for discovery, complex problems, and learning the “why” behind behavior.
Usability Testing: Use usability testing to evaluate how easily real users can complete tasks with your designs or product. It reveals friction, confusing interactions, and accessibility issues, and shows where users make errors or take unexpected paths. Best for validating interaction design, refining flows, and catching problems before launch.
Surveys: Use surveys to collect structured feedback at scale and quantify attitudes, preferences, and self-reported behavior. They are efficient for measuring satisfaction, prioritizing features, segmenting users, and validating hypotheses across a larger sample. Best for broad validation, benchmarking, and tracking trends over time.
Analytics: Use analytics to observe actual user behavior in production, measure funnels, conversion rates, and drop-off points, and identify where problems occur at scale. Analytics provide objective, quantitative evidence to prioritize research and design work, validate impact post-launch, and guide A/B testing. Best for performance measurement and discovering patterns you should investigate further with qualitative methods.
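As a minimal sketch of the kind of funnel analysis this implies, the snippet below computes step-to-step conversion and drop-off from event counts; the step names and numbers are hypothetical placeholders, not data from a real product.

```typescript
// Minimal funnel analysis sketch: hypothetical step names and counts,
// not data from any real product.
interface FunnelStep {
  name: string;
  users: number; // unique users who reached this step
}

function analyzeFunnel(steps: FunnelStep[]): void {
  for (let i = 1; i < steps.length; i++) {
    const prev = steps[i - 1];
    const curr = steps[i];
    const conversion = curr.users / prev.users;
    const dropOff = 1 - conversion;
    console.log(
      `${prev.name} -> ${curr.name}: ` +
        `${(conversion * 100).toFixed(1)}% convert, ` +
        `${(dropOff * 100).toFixed(1)}% drop off`
    );
  }
}

// Example: a hypothetical checkout funnel.
analyzeFunnel([
  { name: "View product", users: 10000 },
  { name: "Add to cart", users: 3200 },
  { name: "Start checkout", users: 1800 },
  { name: "Complete purchase", users: 1260 },
]);
```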
When time or budget is limited, I validate assumptions with fast, low-cost methods and by leveraging lessons from analogous projects with similar objectives or functionality, even if they are in a different domain. First, I run a quick competitive and analog analysis to extract patterns, constraints, and interaction conventions that worked elsewhere. I treat those findings as provisional hypotheses to guide early concepts. Then I prioritize the riskiest assumptions and validate them with lightweight techniques: one or two short contextual or guerrilla interviews, rapid paper or click-through prototypes tested with 5–8 users, and unmoderated remote tests for quick task success rates. I also use analytics or product telemetry from similar features to estimate expected behavior and set benchmarks. When possible, I run very small experiments like gated releases, A/B tests, or smoke tests to measure real user response. Finally, I document confidence levels and stop conditions so we can move forward quickly but iterate based on early feedback and data. This approach lets me make informed decisions fast by combining learnings from analogous projects with focused, cheap validation.
I balance usability with visual appeal by treating them as complementary goals rather than trade-offs. I start with a clear hierarchy and user flows so the interface solves core tasks first, then apply visual design to reinforce that hierarchy through typography, spacing, color, and motion. I use established patterns and design-system components to ensure consistency and reduce cognitive load while adding visual polish to enhance clarity and delight. Accessibility and performance are non-negotiable constraints that guide aesthetic choices: for example, I select color palettes with sufficient contrast and avoid heavy effects that harm load times. I validate decisions with usability testing and visual reviews: if a beautiful treatment reduces task success or increases errors, I simplify it; if a usable layout feels bland, I iterate on microinteractions, imagery, and refined details. Finally, I collaborate closely with product and engineering to balance feasibility and maintenance, document trade-offs, and prioritize refinements that deliver the most user value.
When designing for accessibility, treat standards like WCAG and relevant laws as the baseline. Ensure content is perceivable with text alternatives, captions, sufficient color contrast, and adaptable layouts, and make interfaces operable by supporting full keyboard navigation, visible focus indicators, alternatives to time-based or gesture-only controls, and a clear tab order. Use simple, predictable language and helpful error messages to keep content understandable, apply correct semantic markup and ARIA so assistive technologies can interpret controls, and provide input alternatives for voice, touch, switch devices, mouse, and keyboard with large touch targets.
Account for visual design needs such as readable type sizes, line length, and spacing, and avoid relying on color alone to convey meaning. Respect motion preferences by offering reduced-motion options and avoiding animations that can trigger vestibular discomfort or seizures. Make forms accessible with clear labels, instructions, error prevention, and autosave where helpful, and ensure performance and responsive layouts for users on assistive technology or constrained devices. Combine automated audits with manual keyboard and screen reader testing and with usability testing involving people with disabilities, document accessibility requirements and keyboard shortcuts in specifications, and consider cultural and language differences so accessibility is built into every phase and validated with real users.
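To make the contrast requirement concrete, here is a small sketch of the WCAG 2.x contrast-ratio calculation (relative luminance of each color, then (lighter + 0.05) / (darker + 0.05)); the hex colors checked at the end are illustrative, not taken from any particular palette.

```typescript
// WCAG 2.x contrast ratio between two sRGB colors (6-digit hex strings).
// The example colors below are illustrative, not from a specific design system.
function relativeLuminance(hex: string): number {
  const channels = [0, 2, 4].map((i) => {
    const c = parseInt(hex.replace("#", "").slice(i, i + 2), 16) / 255;
    // Linearize the sRGB channel per the WCAG definition.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  const [r, g, b] = channels;
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Normal body text needs at least 4.5:1 for WCAG AA.
const ratio = contrastRatio("#767676", "#ffffff");
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA" : "fails AA");
```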
I use a mobile-concurrent approach, designing for mobile and larger screens in parallel so decisions scale rather than being retrofitted. I start by defining a clear information hierarchy that prioritizes the most important content and actions for small screens, then extend that hierarchy outward to tablet and desktop layouts so additional details are progressively disclosed rather than overwhelming the user. I work with fluid grids, scalable typography, and responsive components that adapt their layout and behavior based on available space, ensure touch targets and input differences are accounted for, and optimize assets and loading strategies for performance on constrained networks. Throughout, I build reusable components in the design system with clear states and constraints, create prototypes that demonstrate flow across breakpoints, validate on real devices and orientations, and collaborate closely with engineering to implement CSS patterns and accessibility behavior so the hierarchy and core tasks remain clear at every screen size.
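One way I keep breakpoint behavior consistent between design and code is to sketch the breakpoints as shared tokens and listen for them with matchMedia, as below; the token names and pixel values are assumptions for illustration, not a prescribed scale.

```typescript
// Illustrative breakpoint tokens; names and pixel values are assumptions,
// not a prescribed scale.
const breakpoints = {
  tablet: "(min-width: 768px)",
  desktop: "(min-width: 1200px)",
} as const;

type Layout = "mobile" | "tablet" | "desktop";

function currentLayout(): Layout {
  if (window.matchMedia(breakpoints.desktop).matches) return "desktop";
  if (window.matchMedia(breakpoints.tablet).matches) return "tablet";
  return "mobile";
}

// React to breakpoint changes, e.g. to progressively disclose secondary detail
// that stays behind a tap on small screens.
function onLayoutChange(callback: (layout: Layout) => void): void {
  Object.values(breakpoints).forEach((query) => {
    window
      .matchMedia(query)
      .addEventListener("change", () => callback(currentLayout()));
  });
  callback(currentLayout());
}

onLayoutChange((layout) => console.log(`layout is now: ${layout}`));
```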
I design complex workflows by first clarifying business goals, user goals, and success metrics, then mapping end-to-end user journeys to surface every step, decision point, dependency, and exception. I decompose the workflow into smaller tasks and entry/exit states, define the ideal happy path and alternative paths, and prioritize steps by user value to minimize cognitive load. Information architecture and a clear hierarchy guide what to show when, using progressive disclosure, chunking, and contextual help to keep screens focused. I design explicit affordances for state, progress, and undo/recovery, and handle errors with clear messages and remediation paths. I iterate with low-fidelity flows and clickable prototypes to validate assumptions, run usability tests focused on task completion and edge cases, and instrument key steps to track drop-offs. Throughout, I collaborate closely with product, engineering, and domain experts to ensure feasibility, align on data and validations, and capture design specs, decision logs, and component patterns for consistent implementation and future iteration.
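When I need to align with engineering on states and transitions, I sometimes capture the flow as a simple state map like the sketch below; the step names and transitions are hypothetical, intended only to show how explicit states make progress, undo, and error recovery concrete and discussable.

```typescript
// Hypothetical multi-step workflow expressed as explicit states and allowed transitions.
type Step = "details" | "review" | "submitted" | "error";

const transitions: Record<Step, Step[]> = {
  details: ["review"],              // happy path forward
  review: ["details", "submitted"], // allow stepping back (undo) or submitting
  submitted: [],                    // terminal state
  error: ["details", "review"],     // recovery paths back into the flow
};

function canTransition(from: Step, to: Step): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("review", "details"));    // true: user can step back
console.log(canTransition("details", "submitted")); // false: cannot skip review
```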
I primarily use Figma for wireframing and prototyping because it combines collaborative real-time editing, easy component and design system management, responsive constraints, and interactive prototyping in one cloud-based tool, which speeds iteration and simplifies handoff to engineering. The tool does not drive the design process; research, problem framing, user journeys, information architecture, and validation come first, and any tool that supports those activities, whether pencil and paper, whiteboards, or another app, can be effective. I choose tools to enable collaboration, versioning, and fidelity appropriate to each phase, but the core process, decisions, and user insights remain independent of the software used.
I choose prototype fidelity based on the goal, audience, stage of the project, and available time. I use low fidelity for early discovery, concepting, and validating flows or information architecture because it is fast, encourages divergent ideas, and reduces attachment to details. I use mid-fidelity to test interaction patterns and task flows where layout and basic interactivity matter but visual polish is not required. I use high fidelity when I need realistic visual design, microinteractions, content localization, accessibility checks, or to validate performance and support handoff to engineering. I also consider who I am testing with: stakeholders and executives often need higher fidelity to judge look and feel, while usability testing with real users can work well with low-fidelity prototypes focused on tasks. I prioritize testing the riskiest assumptions with the cheapest effective fidelity, aim for 5–8 participants for quick usability rounds, and increase fidelity as evidence accumulates or implementation nears.
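The 5–8 participant guideline follows from the commonly cited problem-discovery model P = 1 - (1 - p)^n; the quick calculation below assumes the often-quoted average detection probability p ≈ 0.31 per participant, which is an assumption rather than a universal constant.

```typescript
// Probability of observing a usability problem at least once across n participants,
// assuming each participant independently encounters it with probability p.
// p = 0.31 is the often-cited average; treat it as an assumption, not a constant.
function discoveryProbability(n: number, p = 0.31): number {
  return 1 - Math.pow(1 - p, n);
}

for (const n of [3, 5, 8]) {
  const pct = (discoveryProbability(n) * 100).toFixed(0);
  console.log(`${n} participants: ~${pct}% of such problems observed`);
}
// 3 -> ~67%, 5 -> ~84%, 8 -> ~95%
```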
How I conduct usability testing depends on the design stage and the questions we need to answer. Early on, I run quick moderated or guerrilla tests with low-fi prototypes to validate flows and find major friction; later, I run moderated or unmoderated tests with hi-fi prototypes or live features to validate visual design, microinteractions, and accessibility. Typical process: define goals and success criteria, recruit representative users, create realistic tasks and a test script, run sessions (moderated remote or in person, or unmoderated remote), record sessions and observations, synthesize findings into prioritized issues and recommendations, then iterate and retest. I focus on both quantitative and qualitative metrics: task success or completion rate, time on task, error rate, and task efficiency for performance; System Usability Scale or simple satisfaction scores for overall perceived usability; first click or path analysis for navigation; and qualitative data such as observed pain points, mental models, quotes, and suggested improvements. For launched features, I add behavioral metrics from analytics, such as conversion rates, drop-offs, and retention, and use A/B tests to measure impact.
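For perceived usability, SUS scoring is mechanical enough to sketch: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. The responses below are invented for illustration.

```typescript
// System Usability Scale scoring: ten responses on a 1-5 scale.
// Odd-numbered items are positively worded, even-numbered items negatively worded.
// The example responses are invented for illustration only.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 responses");
  }
  const sum = responses.reduce((acc, r, i) => {
    // Index 0, 2, 4... corresponds to items 1, 3, 5... (the odd-numbered items).
    const contribution = i % 2 === 0 ? r - 1 : 5 - r;
    return acc + contribution;
  }, 0);
  return sum * 2.5; // scale the 0-40 raw range to 0-100
}

console.log(susScore([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])); // 85
```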
I collaborate closely with product managers and engineers from kickoff through launch to ensure feasibility and alignment. Early, we align on goals, success metrics, constraints, and technical risks, and I involve engineers in discovery and design reviews so implementation implications surface before decisions are locked. I frame designs as prioritized hypotheses and trade-offs, deliver incremental artifacts from sketches to interactive prototypes, and use component-based patterns and the design system to reduce unknowns. I document requirements, edge cases, and acceptance criteria, annotate interactions and data contracts, and run regular syncs and handoff sessions to clarify questions. I also build lightweight prototypes or proofs of concept with engineering when a solution is risky, iterate on feedback from technical reviews, support QA with test scenarios, and track post-launch metrics to catch implementation gaps. This continuous, pragmatic collaboration balances user needs, business priorities, and engineering constraints.
When stakeholders disagree with my design recommendations, I start by listening to understand their concerns, constraints, and underlying goals, then restate the shared product objectives to find common ground. I present evidence from research, analytics, or usability tests that supports my proposal, and I show alternatives and the trade-offs for each option so the team can evaluate risk, cost, and impact. If the disagreement is about unknowns, I propose quick validation, such as a prototype test or an A/B experiment, to get data rather than rely on opinion. I keep the conversation collaborative and empathetic, document the decision, agreed success metrics, and fallbacks, and escalate only when necessary with a clear rationale and impact. Finally, I follow up after implementation with metrics and user feedback to learn from the outcome and inform future decisions.
I present and defend design decisions to leadership by tying each recommendation to clear business goals and user outcomes, presenting key research and usability test findings that validate the approach, and showing the metrics we expect to move. I use a concise narrative with visuals and artifacts such as the problem statement, user insights, journey or flow, wireframes or prototype screenshots, and test results or analytics to illustrate the user need, the proposed solution, and the evidence that it will work. I call out trade-offs, implementation effort, risks, mitigation plans, and alternative options with their expected impact so leaders can evaluate cost versus benefit. When uncertainty remains, I propose lightweight validation such as a prototype test, pilot release, or A/B experiment and define success criteria and measurement plans. Finally, I document the decision, agreed KPIs, and follow-up checkpoints, and commit to sharing post-launch results so we can learn and iterate.
Yes. I have worked with design systems and actively contribute to and maintain them by creating and documenting reusable components, design tokens, layout rules, and interaction patterns, ensuring accessibility and responsiveness are built in, and keeping components aligned with product needs. My process includes auditing existing components, proposing and prototyping new patterns, running cross-functional reviews with designers, engineers, and product, and updating the system with clear usage guidelines and examples. I coordinate implementation by pairing with front-end engineers, publishing coded components to a shared component library documented in Storybook, and maintaining a single source of truth in Figma or a similar tool. Version control is essential, so I manage releases with semantic versioning and a changelog, use branching or feature flags for risky updates, and publish release notes and migration guidance so teams can adopt changes safely. I also track usage, collect feedback, run periodic audits, and schedule roadmap work to evolve the system while minimizing breaking changes.
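To show what I mean by tokens as a single source of truth, here is an illustrative sketch of a token module carrying its own semantic version; the token names, values, and version number are assumptions made for the example, not taken from a real system.

```typescript
// Illustrative design token module; names, values, and version are assumptions
// for the example, not from a real design system.
export const tokens = {
  version: "2.3.0", // per semantic versioning; a breaking token rename bumps the major
  color: {
    textPrimary: "#1a1a1a",
    textOnAccent: "#ffffff",
    accent: "#0a6cff",
  },
  space: { xs: 4, sm: 8, md: 16, lg: 24 }, // px steps on a 4px base grid
  font: { body: 16, caption: 13 }, // px sizes
} as const;

// Components consume tokens instead of hard-coded values, so a token change
// propagates consistently and can ship with a changelog entry.
function buttonStyle() {
  return {
    background: tokens.color.accent,
    color: tokens.color.textOnAccent,
    padding: `${tokens.space.sm}px ${tokens.space.md}px`,
    fontSize: `${tokens.font.body}px`,
  };
}

console.log(buttonStyle());
```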
I measure UX success after launch by tracking both quantitative and qualitative signals tied to agreed KPIs. Quantitative measures include task completion and success rates, conversion and funnel metrics, drop-off points, time on task, error rates, retention and engagement, performance metrics like load time, and A/B test results. Qualitative inputs include post-launch usability sessions, customer interviews, NPS or SUS scores, user feedback channels, and support ticket trends. I instrument key flows with analytics and event tracking, set up dashboards and alerts for regressions, and run experiments to validate improvements. I also monitor accessibility compliance and cross-device performance. Finally, I synthesize findings into regular reports, prioritize fixes by impact and effort, and feed outcomes back into the product roadmap for iterative improvement.
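As a sketch of the kind of instrumentation I ask for, the snippet below shows a thin tracking helper for key flow events; the event names, properties, and the sendToAnalytics transport are hypothetical placeholders rather than a specific analytics SDK.

```typescript
// Thin event-tracking helper; event names, properties, and the transport
// function are hypothetical placeholders, not a specific analytics SDK.
interface UXEvent {
  name: string;
  properties: Record<string, string | number | boolean>;
  timestamp: number;
}

function sendToAnalytics(event: UXEvent): void {
  // Placeholder transport: a real implementation would batch and POST to a backend.
  console.log(JSON.stringify(event));
}

function track(name: string, properties: UXEvent["properties"] = {}): void {
  sendToAnalytics({ name, properties, timestamp: Date.now() });
}

// Instrumenting a key flow step so dashboards can watch completion and errors.
track("checkout_step_completed", { step: "payment", durationMs: 5400 });
track("checkout_error_shown", { step: "payment", code: "card_declined" });
```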