No shared components. No documented standards. No agreed way of building anything. Mortgage Automator was scaling fast and the cracks were showing. I led the effort to build the foundation it needed.
My Role
Lead Product Designer
Team
Mike Hutchison
Design Head
Responsibilities
Systems Audit, Component Architecture, Pattern Documentation, Design Standards
Project Timeline
3 Months

Key Outcomes
Every major UI component audited & standardized
~30%
Reduction in design task overhead per project
~67%
Reduction in designer onboarding time
1
Centralized source of truth for the entire team
Context
Mortgage Automator is a Fintech SaaS platform for private lenders across North America — a loan origination and servicing system where users manage complex lending workflows, compliance documents, investor portfolios, and payment processing within a single product. It was dense, data-heavy software where precision mattered. In this context, a UI that behaved unpredictably wasn't just a design problem — it was a trust problem.
When I joined, the team comprised two designers — myself and the Design Head — supporting a product built largely by engineers with minimal design involvement from the start. The software worked — but it carried the weight of that history. Design debt had accumulated quietly over years of shipping without a component governance layer, and it was compounding faster than the team could address it. There was no shared source of truth, no component library, and no documented rationale for why patterns looked or behaved the way they did.
Research & Discovery
The problem wasn't invisible — but understanding it fully required looking beyond what was visible inside the product. Discovery drew from three sources.
Internal Design Observations
Working within the product daily had built deep familiarity with its structural shortcomings. A systematic audit of every major UI component mapped inconsistency patterns across the platform, identifying where the system was failing users and costing the team time. These observations formed the baseline — the internal signal that something needed to change.
Customer Onboarding & Support Team Feedback
The customer onboarding team walked every new client through the platform during setup. Settings-related questions and interaction confusion were a recurring theme in support interactions, and the team had developed informal verbal guides to compensate for the interface's lack of clarity. Their accumulated knowledge of where users got stuck — and what required repeated explanation — provided the most direct signal of what the product wasn't communicating on its own.
Public Review Audit
User reviews across G2, Capterra, and GetApp confirmed the same signal from the outside. Users described the interface as visually overwhelming, flagged unpredictable behaviour following product updates, and noted a steep learning curve despite rating the platform highly. The absence of clear navigational structure and missing interaction feedback — cited consistently across sources — made clear that the product's interaction model had no documented foundation to build from.
The research didn't surface new problems — it confirmed and sharpened the ones already visible from the inside.
| User Signal | Source | Design Decision |
|---|---|---|
| Interface feels visually overwhelming | Public reviews | Component structure, information hierarchy, spatial token system |
| Onboarding team explains self-explanatory interactions | Internal — onboarding team | Standardized interaction patterns to reduce cognitive load and accelerate learnability |
| Support calls generated by UI confusion | Internal — customer care | Consistent feedback states to eliminate ambiguous system responses |
| Unpredictable UI behaviour after product updates | Public reviews | Design system as single source of truth |
| Data loss from missing interaction feedback | Public reviews | Form pattern documentation, feedback and alerting standards |
| Limited navigational structure in complex workflows | Public reviews + onboarding team | Standardized UI component anatomy with defined zones and hierarchy rules |
The Problem
The inconsistency had three distinct costs — each compounding the others.
The Team Was Doing the Wrong Work
For every new project, designers spent time reverse-engineering existing patterns, hunting for the closest available component, then reconstructing it without a shared reference. Time that should have gone toward solving user problems went toward pattern archaeology instead.
Users Couldn't Trust the System
Similar interactions produced different outcomes depending on which module the user was in. Forms submitted silently — no confirmation state, no error feedback, no validation response. The system completed the action without communicating it, quietly eroding user confidence. Legacy users had built workarounds around these inconsistencies; for new users, they were friction from day one.
The Product Couldn't Scale
Without a standardized component taxonomy or usage guidelines, every new person — designer or end user — had to learn the system by excavation rather than by reference. There was no governance mechanism to contain the accumulating debt, and no foundation to build confidently on top of. Without intervention, the debt would keep compounding until it became too expensive to repay.
Old screenshots from the system

How I Built It
Restructuring the page, fixing the logic, building for what comes next.
The Approach
Before building anything, I audited the entire product — mapping every major UI component, cataloguing variant proliferation, and identifying where the system was failing users and costing the team time. I prioritized components using two criteria: frequency of appearance across the product, and severity of the usability damage their current state caused. That prioritization framework produced a clear, defensible build order.
I researched and benchmarked established design-system principles to identify the standards most applicable to our product context, grounding component decisions in proven conventions rather than inventing conventions from scratch. I worked in close collaboration with my manager throughout — weekly reviews to pressure-test decisions and ensure every component solved a real problem. His direction was the original catalyst for the project.
Three UI components received deep-dive treatment — each audited instance by instance across the product, then rebuilt with documented standards. A fourth deliverable standardized the action vocabulary that governed how all of them behaved.
Modals — Layout & grouping inconsistency
Modals - Audit: The audit revealed inconsistent footer anatomy, conflated navigation and action controls, and no shared logic for button grouping across modal instances — creating an unpredictable experience for users navigating high-stakes workflows. Action groups applied equal visual weight to primary and destructive actions. Navigation controls (Back, Next) were placed inside the action group rather than treated as a separate logical group. Footer zones mixed unrelated content types — status messages, download links, and confirmation actions — with no visual separation or hierarchy between them.

Navigation controls placed inside the action group alongside destructive and primary actions — conflating flow control with form submission.

Two Destructive Primary actions placed side by side with no spatial buffer or confirmation safeguard — treating irreversible consequences as visual equals.

Modal title lacks hierarchy to anchor context. Primary and dismiss left-aligned, breaking the spatial convention users expect.

Category labels carry no visual dominance over child items — flattening IA across a configuration-heavy modal.

Critical financial constraint communicated through subdued helper text with no visual prominence. Equal weight on Cancel and Confirm removes action hierarchy for a financially significant decision.
Modals - Build & Document: A standardized modal anatomy was established across three zones — header, body, and footer — based on the most consistently used patterns identified in the audit. Clear rules were defined for margin values, spacing tokens, and button group logic per zone. Navigation controls were separated from action groups with defined placement rules. Destructive actions were given explicit spatial and visual treatment standards. Best practices documentation covered do's and don'ts, variant rules, and three corrections identified during the drafting process before the document was adopted by the team.
Modal - Build and Document — Figma
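The footer rules above can be sketched as a small validation check. This is an illustrative sketch, not the team's actual spec: the type names, roles, and rules below are assumptions paraphrasing the documented anatomy.

```typescript
// Hypothetical sketch of the standardized modal footer anatomy.
// Zone and role names are illustrative, not the shipped token names.
interface FooterSpec {
  // Navigation (Back / Next) lives in its own left-aligned group,
  // kept separate from the action group.
  navigation: string[];
  // Action group, right-aligned: at most one primary action, and
  // destructive actions never placed adjacent without a buffer.
  actions: { label: string; role: "primary" | "secondary" | "destructive" }[];
}

function validateFooter(spec: FooterSpec): string[] {
  const problems: string[] = [];
  // Rule: a footer carries at most one primary action.
  const primaries = spec.actions.filter((a) => a.role === "primary");
  if (primaries.length > 1) {
    problems.push("Footer must contain at most one primary action.");
  }
  // Rule from the audit: never place two destructive actions side by side.
  for (let i = 1; i < spec.actions.length; i++) {
    if (
      spec.actions[i].role === "destructive" &&
      spec.actions[i - 1].role === "destructive"
    ) {
      problems.push("Destructive actions must not sit adjacent without a buffer.");
    }
  }
  // Rule: flow control (Back / Next) never appears inside the action group.
  if (spec.actions.some((a) => a.label === "Back" || a.label === "Next")) {
    problems.push("Navigation controls belong in the navigation group, not the action group.");
  }
  return problems;
}
```

Encoding the rules as checks rather than prose is one way such an anatomy could be enforced at the point of component creation.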
Forms — Missing feedback and interaction logic
Forms - Audit: Input component states were absent or inconsistently applied across the product — no defined error, success, or disabled states, and no validation feedback to confirm or reject a submission. File input components were built independently per module with no shared interaction model. Spatial token values between fields, field groups, and sections were undefined, resulting in inconsistent density across form layouts. Action group placement at form footers varied with no governing rule.
Forms - Build & Document: A complete input component state system was designed — default, focused, error, success, and disabled — applied consistently across all form contexts. Validation was redesigned to surface at the field level on interaction rather than deferred to submission, with descriptive inline messaging specifying the nature of the error rather than relying on color emphasis alone.
File input components were unified into a single pattern with defined variants for different upload contexts. Drag-and-drop affordance, file type constraints, size limits, and error states were standardized across all instances. Upload error messaging was redesigned to be prominent and actionable — specifying what failed and how to resolve it.
A spatial token system was established for field gaps, group gaps, and section gaps — creating consistent density and clear visual separation between field groups in complex forms. Label-to-field alignment rules and typographic standards for inline contextual content were documented, ensuring high-complexity forms maintained a scannable hierarchy regardless of field density. Usage guidelines covered when each input state applied, how validation errors were surfaced, and how success confirmation was communicated across different form types.
Form - Build and Document — Figma
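The state system and spatial token scale described above can be illustrated with a minimal sketch. The token names, pixel values, and validation message are assumptions for illustration, not the shipped values.

```typescript
// Illustrative sketch of the input state system and spatial token scale.
type InputState = "default" | "focused" | "error" | "success" | "disabled";

// Spatial tokens: gaps grow with structural level, so dense forms keep
// clear visual separation between fields, groups, and sections.
const formSpacing = {
  fieldGap: 8,    // between one label/field pair and the next field
  groupGap: 24,   // between related field groups
  sectionGap: 40, // between form sections
} as const;

// Field-level validation surfaces on interaction rather than at submit,
// with a descriptive message instead of relying on color alone.
function validateRequired(
  value: string,
  fieldName: string
): { state: InputState; message?: string } {
  if (value.trim() === "") {
    return {
      state: "error",
      message: `${fieldName} is required and cannot be left empty.`,
    };
  }
  return { state: "success" };
}
```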
Master Table — Toolbar anatomy inconsistency
Master Table - Audit: Across all master table instances in the product, the toolbar zone lacked a defined anatomy — search, pagination, filters, primary actions, export controls, and bulk actions were distributed without placement rules or zone separation, producing a different header layout for every table in the system. Navigational controls competed visually with data manipulation actions, and filter controls appeared across unstructured single or double rows with no consistent grouping logic or order. Bulk action controls surfaced inconsistently — present in some instances, absent in others, with no standardized trigger pattern or visual relationship to existing toolbar controls. The absence of a shared header anatomy meant every new table instance resolved the same structural problem independently, compounding inconsistency across a component that appeared more frequently than any other in the product.
Master Table - Build & Document: A standardized table header anatomy was established with clearly defined zones — title, primary action, search, filters, export, and pagination — each assigned a fixed position within the header structure. Navigational controls were separated from data manipulation actions, with placement rules governing the left-to-right hierarchy across the toolbar. Filter controls were grouped into a dedicated zone with a defined order, removing the need for each table instance to resolve filter placement independently. Bulk action controls were standardized with a consistent trigger pattern, visual treatment, and placement rule — surfacing predictably relative to existing toolbar controls whenever a row selection was made. Best practice documentation covered the full header anatomy, edge cases for empty states and bulk selection, column hierarchy rules for data-dense tables, and overflow handling for tables with variable column counts.
Master Table - Build and Document — Figma
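The fixed zone order above lends itself to a one-line rule: given whatever controls a table instance needs, lay them out in canonical order. A minimal sketch, with illustrative zone names:

```typescript
// Canonical left-to-right toolbar zone order for the master table header.
// Zone names are illustrative, not the documented identifiers.
const toolbarZones = [
  "title",
  "primaryAction",
  "search",
  "filters",
  "export",
  "pagination",
] as const;
type ToolbarZone = (typeof toolbarZones)[number];

// Given the controls a table instance needs, return them in canonical
// order, so no table resolves placement independently.
function orderToolbar(present: ToolbarZone[]): ToolbarZone[] {
  return toolbarZones.filter((z) => present.includes(z));
}
```

A table that only needs search and export still places them in the same relative positions as the densest table in the product.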
Action Vocabulary — Standardizing How the Product Communicates Intent
Action Vocabulary - Audit: One of the root causes behind the interaction inconsistency identified across the modal and form audits wasn't just layout or structure — it was language. The same action was labelled differently across modules depending on which engineer had built the feature or which designer had last touched it. Cancel appeared where Close was correct. Save and Apply were used interchangeably. Confirm and Apply resolved the same interaction type in adjacent screens with no consistent logic.
Action Vocabulary - Build & Document: 15 core action types were defined, documented, and standardized across the platform — Add, Apply, Cancel, Close, Confirm, Copy, Delete, Download, Duplicate, Edit, Export, Next / Back, Preview, Reset, Save. For each action the documentation established what it does and doesn't do, where it sits in the button hierarchy, when to use it versus a similar action, and real product examples. The distinctions that had previously been invisible were now explicit, reasoned, and documented — giving engineers and designers a shared reference and giving users buttons that behaved predictably regardless of which module they were in.
Action Vocabulary - Build and Document — Figma
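The shape of the vocabulary can be sketched as a lookup table. The entries below paraphrase the documented distinctions for a subset of the 15 actions; the exact wording and hierarchy values are assumptions.

```typescript
// Illustrative subset of the standardized action vocabulary.
// Definitions are paraphrased assumptions, not the documented text.
interface ActionDef {
  does: string;
  doesNot: string;
  hierarchy: "primary" | "secondary";
}

const actionVocabulary: Record<string, ActionDef> = {
  Save: {
    does: "Persists changes and keeps the user in context",
    doesNot: "Dismiss the view",
    hierarchy: "primary",
  },
  Apply: {
    does: "Commits settings without closing the surface",
    doesNot: "Create or persist a record",
    hierarchy: "primary",
  },
  Cancel: {
    does: "Abandons in-progress changes",
    doesNot: "Close a view with nothing to discard",
    hierarchy: "secondary",
  },
  Close: {
    does: "Dismisses a view with nothing to discard",
    doesNot: "Abandon unsaved changes",
    hierarchy: "secondary",
  },
};

// With a shared reference, any screen resolves the correct dismiss label:
// a read-only modal closes with Close, an edit flow abandons with Cancel.
function dismissLabel(hasUnsavedChanges: boolean): "Cancel" | "Close" {
  return hasUnsavedChanges ? "Cancel" : "Close";
}
```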
Supporting Components
Six additional UI components were audited and standardized as part of the library. Each followed the same audit → build → document process, though with less complexity than the four deep dives above.
| Components | Audit | Build & Document | Figma Links |
|---|---|---|---|
| Notifications | Close to nothing existed; scattered, incomplete, inconsistent | Designed a complete notification system covering all system states and feedback requirements | |
| Empty States | Barely existed; where present, lacked useful information or guidance | Created a template covering all empty state types: first use, no results, error, permission denied | |
| Loaders & Progress Bars | Existed but gave users no information about what was happening | Added contextual messaging to loaders; defined progress indicator variants for different operation types | |
| Tooltips | Tooltips were used inconsistently across the product — appearing in contexts where inline text or help text would have been more appropriate, with no rules governing when to use them | Defined clear usage rules for tooltips — when to use them, what content belongs inside, and when inline text or help text is the better choice instead | |
| Chips & Tags | Inconsistent design and usage — chips and tags used interchangeably with no distinction | Separated chips (interactive) from tags (informational); standardized design and usage rules for each | |
| Side-Panels | Multiple layouts built independently; no shared anatomy or decision logic for which variant to use | Five side-panel variants defined — Facets, Navigation, Switcher, Loan View, and Form Builder — each with documented anatomy, components, and usage rules for when to use which |
Notification usage guide
Impact
The library extended its value beyond the design team — across the business, the product, and the people building it.
For the Business
Faster delivery with the same headcount — with a centralized component library in place, general design tasks per project required an estimated 30% less time. Decisions that previously required debate were already resolved. Components that previously required reconstruction from scratch were available and documented.
UI-related support overhead reduced — patterns that had previously required explanation during onboarding were standardized into predictable, self-explanatory interactions. Standardizing 15 core action types eliminated the label ambiguity — Cancel vs Close, Save vs Apply vs Confirm — that had been a consistent source of user confusion across modules.
Design team scalable through rapid growth — the library enabled new designers to onboard faster and maintain interaction consistency as new modules were built, removing design as a bottleneck during a period of rapid expansion.
Design system migration reduced from weeks to days — when Figma introduced its token system, the team rebuilt the entire design system from DS3 to DS4. Components that had taken approximately a week each to build originally were recreated in 1-2 days — because the logic, anatomy, and documentation were already established. The foundation didn't just survive the migration. It accelerated it.
For the Product
Every major UI component standardized across the platform — for the first time, there was a single answer to the question: how should this work? Every component had defined variants, documented interaction states, and clear usage guidelines.
Consistent, predictable interactions users could trust — similar interactions now produced the same outcome regardless of which module the user was in. Forms communicated feedback. Actions confirmed completion. The silent, ambiguous system behaviour that had been quietly eroding user confidence was replaced with a reliable, self-explanatory interaction model.
For the Team
Onboarding time reduced by ~67% — when a new designer joined the team, she reached full working independence within one week — building screens using documented components without reverse-engineering existing project files. What previously required several weeks of on-the-job pattern-matching now had a centralized reference.
Component governance replaced ad hoc decision-making — ad hoc, undocumented pattern creation dropped significantly after the library launched. When a new component was needed, it was added to the library at point of creation rather than left as an orphaned project file. The contribution model shifted the team's default from building in isolation to building for the system.
Reflection
This project confirmed something that's hard to learn until you've built it: the most valuable design work isn't always the most visible. The output isn't the screen — it's the system that makes every future screen better.
Looking back, I would have involved a frontend developer earlier — not just to validate component specs, but to build shared ownership of the system from the start. A component library that only designers trust delivers half the value it could.
I would also have formalized the contribution model sooner. The library was strong at launch — maintaining that strength required structure: a clear protocol for how new components were proposed, reviewed, added, and deprecated. That process developed organically, but formalizing it earlier would have made the system more resilient as the team and product grew.
Almost a year after the library launched, Figma introduced its token system — and the team rebuilt the entire design system from scratch. What had taken a week per component in DS3 took one to two days in DS4. Not because the work was easier, but because the groundwork was already done. The logic held. The patterns translated. The documentation answered questions before they were asked.
The broader lesson: component governance isn't a project with a delivery date. It's infrastructure. Its real value doesn't appear at launch — it compounds in everything built on top of it afterward.