Company: RetailNorm
Role: Founder & Designer
Key outcome: Live in production
Commerce / Data Systems / End-to-End

Reduced cross-platform reporting from 2–4 hours to 3 minutes by designing a normalization system that made incomparable advertising data trustworthy for the first time.

End-to-end ownership. Research, service design, IA, interface design, growth strategy. AI-compressed execution: validated concept to live product in weeks. No engineering team.

40x Reporting time reduction
18.6% Attribution inflation exposed
+$17.4k Revenue unlocked per cycle
$0 Engineering team. AI execution.
RetailNorm Normalization Engine
Try the live product
Upload sample CSVs. Watch the engine work.
Open RetailNorm →
01 Context

The data wasn’t wrong. It was structurally incomparable.

Amazon Ads reports ROAS on a 14-day last-click window. Walmart Connect uses 30-day multi-touch. Criteo uses 7-day first-click. Same $10K campaign, same 500 conversions, three different performance numbers. Agencies compare them as if they share one measurement system. Budget decisions worth hundreds of thousands of dollars flow from structurally invalid comparisons.

Role: Founder & Product Designer
Scope: Full product (research to production)
Users: Mid-size media agencies
Work: Research, service design, IA, interface, growth, AI execution
Attribution windows

Three platforms, three measurement systems. The ruler changes depending on who made it.

"I need accurate cross-platform performance without spending half my Monday in Excel." The functional job. The real job was emotional: stop feeling like a spreadsheet operator in front of senior clients.
Persona + experience map
Ecosystem map
02 The Real Problem

The data wasn’t wrong. It was structurally incomparable. And nobody downstream knew.

Three problems were silently breaking every cross-platform report:

  • Attribution windows differ across platforms (7-day, 14-day, 30-day). Same campaign, different numbers. Not a bug, but a structural incompatibility.
  • Media planners spend 2–4 hours per client per week on manual Excel normalization. Nobody trusts their own macros.
  • No engineering team. Solo designer. Every architectural decision had to be simple enough to maintain alone.
Market positioning

Enterprise tools: $3K–10K/mo. Mid-size agencies: priced out. Their alternative: Excel. RetailNorm sits in the gap nobody was serving.

Competitive UX audit

Speed is the moat. Output beats input. Zero configuration wins.

03 Key Insight

Agencies don’t buy tools. They buy deliverables.

The PDF report wasn’t a feature; it was the product. Concierge validation (manually delivering Excel reports) taught me this before I opened Figma. If I’d started with the dashboard, I would have built the wrong thing.

Evidence: 4-week validation sprint. 8 interviews (6+ confirmed the pain). Landing page: 20+ signups in 5 days. Concierge MVP: agencies requested normalized reports again the following week. 3+ agencies committed at $200–600/month before a product existed.
Validation framework
Research to decisions

Every feature traces to a validated signal. The bottom line: nothing was built on assumption.

04 What I Changed

Four interventions. Each targeted a structural gap.

1. Built a normalization engine

Upload CSVs from any platform. Get comparable ROAS in seconds. Normalizes to 7-day last-click (strictest standard). Every normalized number is lower than what agencies are used to, and that is intentional. The explanation of why is the strongest trust signal.
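A minimal sketch of the window-normalization idea, assuming an exponential conversion-decay model. The half-life, the decay formula, and the platform configs here are illustrative assumptions, not RetailNorm's actual parameters:

```javascript
const PLATFORMS = {
  amazon:  { windowDays: 14 }, // 14-day last-click
  walmart: { windowDays: 30 }, // 30-day multi-touch
  criteo:  { windowDays: 7 },  // 7-day first-click
};

const BASELINE_DAYS = 7;   // normalize everything to 7-day (strictest)
const HALF_LIFE_DAYS = 3;  // assumed conversion half-life (illustrative)

// Fraction of conversions expected within `days` under exponential
// decay: F(t) = 1 - exp(-lambda * t), with lambda set by the half-life.
function decayShare(days) {
  const lambda = Math.log(2) / HALF_LIFE_DAYS;
  return 1 - Math.exp(-lambda * days);
}

// Rescale conversions reported under a platform's window to an
// estimated 7-day last-click count.
function normalizeConversions(platform, reportedConversions) {
  const { windowDays } = PLATFORMS[platform];
  const factor = decayShare(BASELINE_DAYS) / decayShare(windowDays);
  return reportedConversions * factor;
}

// Normalized ROAS: revenue scaled by the same factor; spend is unchanged.
function normalizedRoas(platform, revenue, spend) {
  const { windowDays } = PLATFORMS[platform];
  const factor = decayShare(BASELINE_DAYS) / decayShare(windowDays);
  return (revenue * factor) / spend;
}
```

Because every window is at least 7 days, the correction factor is always ≤ 1, which is why every normalized number comes out lower than the platform's reported one.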

2. Designed dual-view architecture

Executive view (3 numbers, plain-language insight) for directors. Technical view (decay parameters, z-scores, corrections) for planners. Same engine, different cognitive loads. Resolved 80% of the IA debate.

3. Made uncertainty visible

Every analysis shows confidence: 71%, with 2 flags. Transparent imperfection is more credible than polished certainty. Most counterintuitive bet. It was correct.
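One way a score like "71%, with 2 flags" could be derived: start from 100% and subtract a penalty per data-quality flag. The flag names and penalty weights below are hypothetical, chosen only to illustrate the mechanism:

```javascript
// Illustrative penalty table; these are NOT the shipped rules.
const FLAG_PENALTIES = {
  missing_columns: 12, // required columns absent, values imputed
  window_mismatch: 17, // attribution windows differ across files
  sparse_data: 12,     // too few rows for stable estimates
  currency_mixed: 15,  // multiple currencies detected in one upload
};

// Sum the penalties for the raised flags; unknown flags cost a
// default 5 points. Score is clamped at zero.
function confidence(flags) {
  const penalty = flags.reduce(
    (sum, flag) => sum + (FLAG_PENALTIES[flag] ?? 5),
    0
  );
  return { score: Math.max(0, 100 - penalty), flagCount: flags.length };
}
```

With these example weights, an upload raising `window_mismatch` and `sparse_data` surfaces as a 71% score with 2 flags, and the UI can list exactly which flags cost the points.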

4. Invested in the PDF report as retention mechanism

Agencies judge the product by its output artifact. If it looks like an Excel export, they rebuild in PowerPoint. Branded template with institutional typography. Report quality drives repeat usage, not feature depth.

Branded report

This PDF is not a feature. It is the reason agencies come back every Monday.

Prioritization matrix

MVP = 4 must-haves. Everything else ships after paying users validate retention. Scope discipline was the competitive advantage.

Service blueprint

User emotion: Curious, Hopeful, Anxious, Relieved, Confident, Loyal. I designed for the anxious moment.

Journey: before vs after

6 steps, 2–4 hours. Became 4 steps, 3 minutes.

Normalization pipeline
Architecture

Deliberately simple. The architecture is the feature.

Design process
IA
Task flow

Critical path: Upload, Review, Export. Everything else is progressive disclosure.

Design system
Engine
Simulator
Alerts
Reports
Mobile Technical
Technical
Mobile Executive
Executive

Monday morning. Train. Check the numbers.

GTM
Retention loops
Metrics framework

North star: reports exported per week per active user. A product without a growth model is a prototype.

AI roadmap

Three layers of AI intelligence, shipped incrementally. Each phase proves retention before the next ships.

05 Key Decisions

What trade-offs did I make?

Every decision involved choosing between conventional wisdom and what the research actually showed:

Decision | Chosen | Rejected | Why
Normalization baseline | 7-day last-click (strictest) | 14-day (Amazon default) | Lower numbers force the “why” conversation, the strongest trust signal
Uncertainty display | Visible confidence scores (71%) | Hidden/clean 100% | Transparent imperfection is more credible than polished certainty
Architecture | Single HTML file, no framework | React + microservices | Solo maintainer. Ship fix → live in 30 seconds. Every complexity layer is a liability
Data input | CSV upload | API integrations | OAuth, credentials, and rate limits would triple scope. CSV takes 15 seconds. APIs are v2
06 Results

Structurally incomparable data, made trustworthy.

40x Reporting time reduction
18.6% Average attribution inflation exposed
+$17.4k Revenue unlocked per analysis cycle
71% Average data confidence score
Validation: Live in production at retailnorm.com. Concierge validation before any code. Pre-sales at $200–600/month before the product existed. Growth system: cold-email pipeline (38% open-rate target), SEO glossary pages, 3 retention loops (weekly habit, multi-client expansion, report-driven word-of-mouth).
07 Takeaway

The most impactful work was deciding what I refused to build.

The pattern: Design the output artifact before the interface. Make uncertainty visible instead of hiding it. Validate the business model before opening a design tool. Treat architecture as a design decision, not an engineering detail. The dashboard is not the product. The artifact the dashboard produces is.
What failed: The first version was an API (upload CSV, get JSON), but agencies don’t use endpoints. They use screens on Monday morning. Pivoted to dashboard-first in sprint 1. CSV parsing was budgeted at 1 week and took 3, because Amazon headers change by report type. And the dark-theme default was designer preference, not user context: users needed light mode for client presentations.
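The header drift that blew the parsing budget can be absorbed with an alias map: every known column name variant resolves to one canonical field. The alias lists below are a partial, illustrative sample, not the full table the product ships:

```javascript
// Canonical field -> known header variants across report types.
// Illustrative sample only.
const HEADER_ALIASES = {
  spend:  ["Spend", "Cost", "Total Spend", "Spend(USD)"],
  sales:  ["7 Day Total Sales", "14 Day Total Sales", "Attributed Sales"],
  clicks: ["Clicks", "Click-throughs"],
};

// Map a raw CSV header row to canonical field names. Unknown columns
// keep their original (trimmed) name so nothing is silently dropped.
function canonicalizeHeaders(rawHeaders) {
  return rawHeaders.map((raw) => {
    const key = raw.trim().toLowerCase();
    for (const [canonical, aliases] of Object.entries(HEADER_ALIASES)) {
      if (aliases.some((a) => a.toLowerCase() === key)) return canonical;
    }
    return raw.trim();
  });
}
```

The design choice is the fallback: passing unknown headers through untouched means a new report format degrades gracefully instead of failing the whole upload.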
Next case study: GetYourGuide Checkout →