hack-a-thon
March 2026
Dec 2025

Casting light on lending decisions

Tools

Figma Design

Wireframing + Prototyping

Figma Make

Ideation + Prototyping

Perplexity AI

Research + Copywriting
Role

UX Lead

Research + UI Design + Design Systems + Prototyping
Team

6 people

Timeline

3 days

Mar 2026

Overview

Shining a light on the black box of mortgage approvals.

A cross-disciplinary sprint to make AI-driven lending decisions legible to the people most affected by them.

During BrainStation's sprint, I joined a team of six to tackle a real-world challenge. The starting point: a broad "How Might We" around AI transparency and human oversight. This could have taken us anywhere. We chose to aim it directly at one of the most consequential and least transparent experiences in American financial life: the mortgage approval process.

Shimon Brathwaite

Front-end

Charles Cai

Data Science

Karim Elshazly

Back-end

Hillary McLean

UX Lead

Divya Natarajan

Front-end

Eric Robinson

Data Science

The Problem

The algorithm decided their future. Nobody told them why.

We began with the sprint's original HMW prompt: "How might we help distributed teams turn AI-generated outputs into trustworthy, actionable decisions without losing human oversight?" It was broad by design. Our job was to find a human reality inside it.

When we applied a fintech lens, one question kept surfacing: who are the most vulnerable stakeholders in an AI-driven decision? Not the lenders, who have access to the model logic. Not the analysts, who can interrogate the data. The most vulnerable person is the one sitting on the other side of a rejection letter with no explanation attached.

Research

A transparency crisis.

In 2024, the mortgage rejection rate was nearly double its 2019 level.

Yet most applicants never understand why. The algorithm that decided their future remains invisible to them.

13M

Americans apply for a mortgage every year

20.7%

Rejection rate in 2024 — nearly double the 2019 rate

defining our user

Keeping the user at the center of the process.

To keep our design grounded, we developed a target user to represent the person at the center of this problem.

Sam is nervous, not unintelligent. She doesn't need the system dumbed down; she needs it made visible. That distinction shaped everything we built.

Sam

TARGET USER: FIRST-TIME APPLICANT

Sam is in her 30s and recently married. She and her partner want to buy their first home, but she's been following the news. She knows rejection rates are climbing. She's scared to approach a mortgage broker because she doesn't want to feel embarrassed if she gets declined — and she has no idea how any of this actually works. She wants to understand the process before she's inside it.

Now that we knew our user better, we could refine our How Might We statement:

"How might we give nervous mortgage applicants the same visibility into AI decision-making that lenders already have, so the approval process becomes a conversation rather than a black box?"

The solution

Giving mortgage applicants the same visibility into AI decision-making as lenders.

In language applicants use. Not financial jargon. Not a credit score with no context. Plain language, clear variables, and real insight into what the model is actually weighing.

Built on real HMDA data and trained on over 4.6 million applications, Prism lets applicants enter their financial picture, see their approval likelihood, and — crucially — explore how changing any single variable shifts their odds in real time. The approval process stops being a verdict and starts being a conversation.

Describing design is one thing.

Feeling it is another.

Note: this is a prototype built in Figma Make. It does not include the trained model we presented at the end of the hack-a-thon.

Try it yourself

insights

Two root causes, one consequence.

Our data scientists led the research into how mortgage AI systems actually work.

They drew on HMDA (Home Mortgage Disclosure Act) data and published literature on explainable AI in financial services. What emerged was a clear structural picture of why people like Sam are left in the dark.

Our research spanned peer-reviewed literature on explainable AI in financial services, FCA guidance on AI in credit decisions, and HMDA datasets covering millions of real applications. Perplexity and Claude were used to synthesize sources, but all findings were verified by me and our data scientists.

Mortgage AI engines are opaque gatekeepers

Consumers and their advisors are systematically excluded from lending systems until the moment they submit a high-stakes application. There's no preview, no simulation, no rehearsal — just a verdict.

A "data hostage" situation

Teams can't collaboratively interrogate AI logic or provide human oversight to correct data nuances before a decision is made. The model has already run. The window to intervene is closed.

Blind financial leaps that erode trust

Without the ability to simulate outcomes before applying, applicants must either take a shot in the dark or disengage from the process entirely. Both outcomes harm them — and harm lenders who want qualified borrowers in the pipeline.

the end result

Prism: the mortgage approval calculator.

The product is built on real HMDA data, using a logistic regression model trained on over 4.6 million applications, and designed to feel more like a conversation than a calculator.
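For readers curious what "trained on a logistic regression model" means in practice, the fitting loop below is a minimal from-scratch sketch on synthetic data. It is not the actual HMDA pipeline our data scientists built; the feature setup and hyperparameters are illustrative only.

```python
import math
import random

def train_logistic(rows, labels, lr=0.1, epochs=200):
    """Fit logistic regression by stochastic gradient descent on (features, 0/1 label) pairs."""
    n_features = len(rows[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = bias + sum(w * xi for w, xi in zip(weights, x))
            pred = 1 / (1 + math.exp(-z))   # sigmoid -> predicted approval probability
            error = pred - y                # gradient of log-loss with respect to z
            bias -= lr * error
            weights = [w - lr * error * xi for w, xi in zip(weights, x)]
    return weights, bias

# Synthetic "applications": approve when (standardized) credit score outweighs DTI.
random.seed(0)
rows = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(500)]
labels = [1 if x[0] - x[1] > 0 else 0 for x in rows]
weights, bias = train_logistic(rows, labels)
# The fitted weights recover the pattern: positive on feature 0, negative on feature 1.
```

The appeal of logistic regression for this product is exactly this simplicity: the learned weights are directly inspectable, which is what makes the explanations in the features below possible.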

feature 01

Your financial picture

Users enter the key variables that drive mortgage decisions: loan amount, down payment, income, debt-to-income ratio, property value, credit score, and loan type. Each field includes a plain-language tooltip explaining why it matters. Nothing is hidden behind jargon.
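The financial picture above maps naturally onto a small input structure plus plain-language tooltips. Everything in this sketch (field names, tooltip copy, Sam's numbers) is illustrative, not Prism's production schema.

```python
from dataclasses import dataclass

@dataclass
class FinancialPicture:
    """The key variables Prism asks for (illustrative names, not the production schema)."""
    loan_amount: float      # total amount requested, in dollars
    down_payment: float     # upfront cash toward the purchase
    income: float           # gross annual income
    dti_ratio: float        # debt-to-income ratio, 0.0-1.0
    property_value: float   # appraised value of the home
    credit_score: int       # FICO-style score, 300-850
    loan_type: str          # e.g. "conventional", "fha", "va"

# Plain-language tooltips keyed by field name -- hypothetical copy.
TOOLTIPS = {
    "dti_ratio": "How much of your monthly income already goes to debt. Lenders prefer under 43%.",
    "down_payment": "A larger down payment lowers the lender's risk, which can raise your odds.",
}

sam = FinancialPicture(
    loan_amount=320_000, down_payment=40_000, income=95_000,
    dti_ratio=0.34, property_value=360_000, credit_score=702,
    loan_type="conventional",
)
print(TOOLTIPS["dti_ratio"])
```

Pairing every field with a tooltip in the same structure is what keeps "nothing hidden behind jargon" enforceable: a field can't ship without its explanation.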

feature 02

Your approval likelihood

Based on the HMDA-trained model, Prism returns a clear probability score alongside a breakdown of which factors are working in the applicant's favor and which are working against them. The goal is not just a number — it's understanding.
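A logistic regression makes this breakdown cheap to compute: each factor's contribution to the log-odds is just its coefficient times its standardized value, so "working for" versus "working against" falls out of the sign. The coefficients and scaling constants below are made-up placeholders, not the trained HMDA weights.

```python
import math

# Placeholder coefficients -- the real HMDA-trained weights are not reproduced here.
COEFFS = {
    "credit_score": 0.9,   # higher score -> higher approval odds
    "dti_ratio": -1.2,     # more existing debt -> lower odds
    "ltv_ratio": -0.8,     # loan-to-value: borrowing closer to full price -> lower odds
}
INTERCEPT = 0.4
# Rough population means and standard deviations for standardizing inputs (illustrative).
SCALING = {
    "credit_score": (700, 60),
    "dti_ratio": (0.36, 0.10),
    "ltv_ratio": (0.80, 0.12),
}

def approval_breakdown(applicant):
    """Return (probability, per-factor log-odds contributions)."""
    contributions = {}
    for name, coef in COEFFS.items():
        mean, std = SCALING[name]
        z = (applicant[name] - mean) / std
        contributions[name] = coef * z
    log_odds = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-log_odds))
    return probability, contributions

prob, parts = approval_breakdown({"credit_score": 702, "dti_ratio": 0.34, "ltv_ratio": 0.78})
working_for = [k for k, v in parts.items() if v > 0]      # factors helping the applicant
working_against = [k for k, v in parts.items() if v < 0]  # factors hurting them
```

Note the framing this enables: a below-average DTI shows up as a positive contribution, so the UI can say "your low debt load is working for you" rather than just reporting a score.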

feature 03

The "What if?" sandbox

This is the heart of Prism. Users can adjust any variable — income, down payment, credit score, loan term — and see their approval likelihood update in real time. A "Scenario Impact Summary" shows exactly what changed. This transforms a scary one-time submission into an ongoing, exploratory conversation.
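Mechanically, the sandbox reduces to: copy the applicant's inputs, change one variable, rescore, and report the delta. A minimal sketch, with a toy stand-in for the trained model and made-up weights:

```python
import math

def score(applicant):
    """Toy stand-in for the trained model: sigmoid of a weighted sum (placeholder weights)."""
    log_odds = (
        0.4
        + 0.9 * (applicant["credit_score"] - 700) / 60
        - 1.2 * (applicant["dti_ratio"] - 0.36) / 0.10
    )
    return 1 / (1 + math.exp(-log_odds))

def what_if(applicant, variable, new_value):
    """One 'What if?' step: rescore with a single variable changed and summarize the impact."""
    before = score(applicant)
    scenario = {**applicant, variable: new_value}
    after = score(scenario)
    return {
        "variable": variable,
        "old_value": applicant[variable],
        "new_value": new_value,
        "before": round(before, 3),
        "after": round(after, 3),
        "delta": round(after - before, 3),  # the "Scenario Impact Summary" line
    }

summary = what_if({"credit_score": 680, "dti_ratio": 0.42}, "dti_ratio", 0.30)
print(f"Paying down debt moves approval odds from {summary['before']:.0%} to {summary['after']:.0%}")
```

Because scoring is a pure function of the inputs, every slider change can rescore instantly on the client, which is what makes the real-time feel possible.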

outcomes + Next steps

Where Prism goes from here.

Prism in its sprint form is a proof of concept — and a strong one. The model works, the interface is functional, and the core UX argument holds: when applicants can see inside the process, it stops being a black box and starts being something they can prepare for.

The design decisions that matter most for business impact are the ones that reduce application drop-off from anxiety. Users like Sam don't hold back because they can't afford a home — they hold back because they're afraid of feeling embarrassed. A tool that lets her rehearse privately before she ever talks to a broker fundamentally changes that dynamic.

Complete the Playground

Full scenario comparison mode, so users can save and contrast multiple "what-if" configurations side by side — not just view one at a time.

AI Assistant

A chat interface that lets users ask follow-up questions in plain language: "What would happen if I paid off my car loan first?" — and get a real, model-informed answer.

Demographic context

Layer in aggregated HMDA demographic data so users can understand how their profile compares to applicants with similar characteristics — giving context to their results, not just a score.

Collaborative mode

Couples in different locations, family members across states, and remote financial advisors should all be able to use Prism together — sharing a live session and making decisions collaboratively.

reflection

What the sprint taught me.

Designing at the intersection of AI and anxiety

The biggest design challenge wasn't the interface — it was the emotional context. Sam doesn't just need information; she needs to feel less afraid. That meant every decision about copy, structure, and interaction had to be evaluated not just for clarity, but for emotional tone. A progress bar that feels clinical can be just as alienating as one that's opaque.

Cross-disciplinary collaboration is a design skill

Working with data scientists and engineers from day one changed how I approach design. I had to understand what the model actually computed before I could decide how to present it. That constraint wasn't limiting — it made the design more honest. The variables we surfaced in the UI are the variables the model actually uses. There's no gap between what Prism shows and what Prism knows.

What I'd do differently

Given more time, I would have run usability tests with real first-time home buyers — ideally people who'd recently been denied. The persona of Sam was grounded in research, but user testing would have revealed which specific moments in the flow trigger anxiety versus confidence. I'd also push harder on the Results screen: I think there's a more emotionally intelligent way to deliver a low approval score than a number on a page.

The thing I'm most proud of

Using Figma Make to build a working prototype — not a click-through mockup, but something with real interactions and live inputs — was a stretch for this sprint format. It made our presentation tangible in a way that slides alone couldn't. And honestly, seeing the Playground respond to real slider input, updating a live score in real time, made the whole concept click in a way that no wireframe could replicate.

I started with a question:

How might we help distributed teams turn AI-generated outputs into trustworthy, actionable decisions without losing human oversight?

the answer:

A tool that gives mortgage applicants the same visibility into AI decision-making that lenders already have, in language they already use.

Next case study

Finding flow in the lululemon app