Reduced cross-platform reporting from 2–4 hours to 3 minutes by designing a normalization system that made incomparable advertising data trustworthy for the first time.
End-to-end ownership. Research, service design, IA, interface design, growth strategy. AI-compressed execution: validated concept to live product in weeks. No engineering team.
The data wasn’t wrong. It was structurally incomparable.
Amazon Ads reports ROAS on a 14-day last-click window. Walmart Connect uses 30-day multi-touch. Criteo uses 7-day first-click. Same $10K campaign, same 500 conversions, three different performance numbers. Agencies compare them as if they share one measurement system. Budget decisions worth hundreds of thousands of dollars flow from structurally invalid comparisons.
Scope: Full product (research to production)
Users: Mid-size media agencies
Work: Research, service design, IA, interface, growth, AI execution
Three platforms, three measurement systems. The ruler changes depending on who made it.
And nobody downstream knew.
Three problems were silently breaking every cross-platform report:
- Attribution windows differ across platforms (7-day, 14-day, 30-day). The same campaign yields different numbers. Not a bug, but a structural incompatibility (see the sketch after this list).
- Media planners spend 2–4 hours per client per week on manual Excel normalization. None trust their macros.
- No engineering team. Solo designer. Every architectural decision had to be simple enough to maintain alone.
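A minimal sketch of that first problem, using hypothetical conversion lags and window-only attribution (the multi-touch vs. last-click differences are elided for brevity). `roasUnderWindow` and the sample numbers are illustrative, not RetailNorm’s engine:

```typescript
// Illustrative only: how one campaign produces three ROAS numbers.
// Conversions and lag days are hypothetical sample data.
interface Conversion {
  revenue: number;  // revenue attributed to this conversion
  lagDays: number;  // days between ad click and conversion
}

// Count only conversions that land inside the platform's window.
function roasUnderWindow(
  spend: number,
  conversions: Conversion[],
  windowDays: number,
): number {
  const attributed = conversions
    .filter((c) => c.lagDays <= windowDays)
    .reduce((sum, c) => sum + c.revenue, 0);
  return attributed / spend;
}

const spend = 10_000;
const conversions: Conversion[] = [
  { revenue: 18_000, lagDays: 3 },  // converts fast: every window sees it
  { revenue: 9_000, lagDays: 10 },  // inside 14d and 30d, outside 7d
  { revenue: 6_000, lagDays: 22 },  // only the 30d window sees it
];

console.log(roasUnderWindow(spend, conversions, 7));   // 1.8 (Criteo-style)
console.log(roasUnderWindow(spend, conversions, 14));  // 2.7 (Amazon-style)
console.log(roasUnderWindow(spend, conversions, 30));  // 3.3 (Walmart-style)
```

One campaign, three numbers. No window is wrong; they are simply incomparable.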
Enterprise tools: $3K–$10K/mo. Mid-size agencies: priced out. Their alternative: Excel. RetailNorm sits in the gap nobody was serving.
Speed is the moat. Output beats input. Zero configuration wins.
Agencies don’t buy tools. They buy deliverables.
The PDF report wasn’t a feature; it was the product. Concierge validation (manually delivering Excel reports) taught me this before I opened Figma. If I’d started with the dashboard, I would have built the wrong thing.
Every feature traces to a validated signal. The bottom line: nothing was built on assumption.
Four interventions. Each targeted a structural gap.
1. Built a normalization engine
Upload CSVs from any platform. Get comparable ROAS in seconds. The engine normalizes everything to 7-day last-click, the strictest standard. Every normalized number is lower than what agencies are used to, and that is intentional: explaining why is the strongest trust signal.
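How normalization might look under one simplifying assumption: conversion lag follows an exponential decay, so a reported ROAS can be rescaled to the share of conversions a 7-day window would have captured. The decay rate and function names are placeholders, not the shipped math:

```typescript
// Illustrative normalization: rescale a platform-reported ROAS to a
// common 7-day last-click baseline. ASSUMPTION: conversion lag follows
// an exponential decay; the rate below is a placeholder, not a
// calibrated parameter.
const DECAY_RATE = 0.15; // hypothetical per-day decay of conversion lag

// Expected fraction of a campaign's conversions arriving within `days`.
function fractionWithin(days: number): number {
  return 1 - Math.exp(-DECAY_RATE * days);
}

// Scale a reported ROAS down to what a 7-day window would have captured.
function normalizeTo7Day(reportedRoas: number, reportedWindowDays: number): number {
  return reportedRoas * (fractionWithin(7) / fractionWithin(reportedWindowDays));
}

console.log(normalizeTo7Day(4.0, 14).toFixed(2)); // ≈ 2.96: lower, by design
```

With these placeholder numbers, an Amazon-style 14-day ROAS of 4.0 normalizes to roughly 2.96. Lower, by design.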
2. Designed dual-view architecture
Executive view (3 numbers, plain-language insight) for directors. Technical view (decay parameters, z-scores, corrections) for planners. Same engine, different cognitive loads. Resolved 80% of the IA debate.
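A sketch of the dual-view idea, assuming hypothetical field names: one result object from the engine, two renderers carrying different cognitive loads:

```typescript
// One engine output, two presentations. Field names are hypothetical.
interface AnalysisResult {
  normalizedRoas: number;
  confidence: number;    // 0–1
  flags: string[];       // data-quality warnings
  decayParams: number[]; // planner-level detail, hidden from executives
  zScores: number[];
}

// Executive view: three numbers and a plain-language line.
function renderExecutive(r: AnalysisResult): string {
  return `Normalized ROAS ${r.normalizedRoas.toFixed(2)} · ` +
    `${Math.round(r.confidence * 100)}% confidence · ${r.flags.length} flags`;
}

// Technical view: the same object at full depth.
function renderTechnical(r: AnalysisResult): string {
  return JSON.stringify(r, null, 2);
}
```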
3. Made uncertainty visible
Every analysis shows its confidence, e.g. 71% with 2 flags. Transparent imperfection is more credible than polished certainty. The most counterintuitive bet. It was correct.
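One way such a score could be produced, with made-up checks and weights; the real rubric is not shown here:

```typescript
// Hypothetical heuristic: start from full confidence, deduct a fixed
// penalty per data-quality issue, and surface every issue as a flag.
// The checks and the 0.12 weight are illustrative, not the real rubric.
function scoreConfidence(
  rows: Record<string, string>[],
): { confidence: number; flags: string[] } {
  const flags: string[] = [];
  if (rows.length < 100) flags.push("Small sample: fewer than 100 rows");
  if (rows.some((r) => Object.values(r).some((v) => v.trim() === ""))) {
    flags.push("Missing values detected in upload");
  }
  const confidence = Math.max(0, 1 - flags.length * 0.12);
  return { confidence, flags };
}
```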
4. Invested in the PDF report as retention mechanism
Agencies judge the product by its output artifact. If it looks like an Excel export, they rebuild in PowerPoint. Branded template with institutional typography. Report quality drives repeat usage, not feature depth.
This PDF is not a feature. It is the reason agencies come back every Monday.
MVP = 4 must-haves. Everything else ships after paying users validate retention. Scope discipline was the competitive advantage.
User emotion arc: Curious → Hopeful → Anxious → Relieved → Confident → Loyal. I designed for the anxious moment.
6 steps, 2–4 hours. Became 4 steps, 3 minutes.
Deliberately simple. The architecture is the feature.
Critical path: Upload, Review, Export. Everything else is progressive disclosure.
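The same idea as data, with illustrative panel names: the critical path is fixed, and everything else defaults to closed:

```typescript
// The critical path as a type: three steps, in order, nothing between.
type Step = "upload" | "review" | "export";
const CRITICAL_PATH: Step[] = ["upload", "review", "export"];

// Everything else defaults to closed. Panel names are illustrative.
const disclosurePanels = {
  methodology: { openByDefault: false }, // decay params, z-scores
  flagDetails: { openByDefault: false },
  exportOptions: { openByDefault: false },
};
```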
Monday morning. Train. Check the numbers.
North star: reports exported per week per active user. A product without a growth model is a prototype.
Three layers of AI intelligence, shipped incrementally. Each phase proves retention before the next ships.
What trade-offs did I make?
Every decision involved choosing between conventional wisdom and what the research actually showed:
| Decision | Chosen | Rejected | Why |
|---|---|---|---|
| Normalization baseline | 7-day last-click (strictest) | 14-day (Amazon default) | Lower numbers force the “why” conversation, the strongest trust signal |
| Uncertainty display | Visible confidence scores (71%) | Hidden/clean 100% | Transparent imperfection more credible than polished certainty |
| Architecture | Single HTML file, no framework | React + microservices | Solo maintainer. Ship fix → live in 30 seconds. Every complexity layer is a liability |
| Data input | CSV upload | API integrations | OAuth, credentials, rate limits would triple scope. CSV takes 15 seconds. APIs are v2 |
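The CSV decision pairs with the zero-configuration principle: detect the platform from the file’s headers instead of asking the user. A sketch with hypothetical header signatures, not the platforms’ actual export schemas:

```typescript
// Zero-configuration ingest: infer the source platform from CSV headers
// instead of asking the user. Header signatures below are hypothetical
// stand-ins, not the platforms' actual export schemas.
const SIGNATURES: Record<string, string[]> = {
  amazon: ["Campaign Name", "14 Day Total ROAS"],
  walmart: ["Campaign", "Attributed Sales (30 Day)"],
  criteo: ["Campaign", "ROAS (7d, First Click)"],
};

function detectPlatform(headers: string[]): string | null {
  for (const [platform, signature] of Object.entries(SIGNATURES)) {
    if (signature.every((col) => headers.includes(col))) return platform;
  }
  return null; // unknown format: fall back to manual column mapping
}
```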