Company Confidential (Enterprise CPG)
Role: Product Design Lead
Scope: End-to-end system design
Key outcome: 1 brief → 36 content packages
B2B SaaS / AI-Assisted Workflow / Content Operations

The workflow operating system for retail content at scale.

One brief in, 36 retailer-ready content packages out. AI accelerates the work. Humans own every decision that ships.

Retail content operations were slow, fragmented and expensive. Every product launch meant weeks of manual coordination across SKUs, retailers and markets, with approvals scattered across email threads and spreadsheets. I designed ShelfFlow: a structured workflow platform that turns a single product brief into retailer-ready content through AI-assisted generation, gated human approvals and multi-retailer deployment.

I led product design end-to-end: workflow architecture, state machine design, decision authority mapping, AI orchestration boundaries and the full operator interface.

6 workflow stages, 4 decision gates
36 content packages from 1 brief
18 granular human approvals per launch
6 embedded AI agents, zero autonomous shipping
ShelfFlow platform overview
ShelfFlow Commerce Workspace dashboard

Commerce Workspace. The single surface where operators track active briefs, pipeline status and AI agent activity across all launches.

01 The Operational Breakdown

Every product launch triggered the same cascade of manual work. Teams weren't failing at content creation. They were drowning in coordination.

A single product launch at an enterprise CPG company generates content across multiple SKUs, multiple retailers, and multiple markets. Each retailer has different spec requirements, character limits, image guidelines, and compliance rules. Amazon needs bullet-point formats within strict character counts. Sephora requires ingredient narratives in a different tone. Walmart has its own taxonomy and claim restrictions.

The content team was producing all of this by hand. One brief would take weeks to turn into retailer-ready packages. Errors compounded silently. Versioning happened in file names. Approvals moved through email threads that no one could reconstruct. When a launch slipped, no one could point to where the breakdown actually happened.

This wasn't a creativity problem or a tooling gap. It was a coordination overhead problem that got exponentially worse with every new SKU, retailer, or market added to the matrix.

  • 6 SKUs × 6 retailers = 36 content packages from a single product brief. Each with unique spec requirements, character limits, and compliance constraints.
  • No single source of truth. Content lived in spreadsheets, email threads, and shared drives. Version conflicts were constant. Teams regularly shipped outdated copy because they couldn't tell which file was current.
  • Approval chains were invisible. No one knew who had approved what, or what was still pending. A single missing sign-off could block an entire launch, and the team often wouldn't discover it until the deadline.
  • Multi-market duplication. Teams in different regions recreated the same content independently, introducing inconsistencies across markets that took days to reconcile.
  • AI tools existed but weren't trusted. Teams experimented with generative AI but had no governance model. Without clear boundaries for what AI could and couldn't decide, nothing AI-generated shipped without full manual rewriting.
02 Why This Problem Mattered

The real bottleneck wasn't content quality. It was decision architecture: who approves what, when, and with what authority.

Every enterprise content workflow has the same hidden structure: a series of decision gates where someone must say yes or no before work moves forward. The teams I studied didn't lack tools. They lacked a system that made decision authority explicit and enforceable.

Faster drafting wouldn't solve this. The team could generate copy in hours, but approvals took weeks because accountability was distributed across email threads, Slack messages and informal handoffs. No one could answer "who needs to sign off on this?" without asking three other people first.

AI could accelerate the generation phase, but only if humans retained clear authority over what ships. The strategic insight: design the decision architecture first, then embed AI as an accelerator within it. Not the other way around.

Evidence: Stakeholder interviews revealed that roughly 70% of launch delays originated not in content creation, but in approval ambiguity. "Who needs to sign off on this?" was the most frequently asked question. The workflow needed explicit gates with clear ownership, not faster drafting tools.
03 Product Strategy

One brief in, 36 retailer-ready packages out. Six stages, four decision gates, zero ambiguity about who owns what.

ShelfFlow is a structured workflow platform that takes a single product brief and coordinates the entire content production pipeline across SKUs, retailers, and markets. The system is built around 6 workflow stages with 4 mandatory decision gates where human approval is required before content advances.

The architecture follows a deliberate logic: separate generation from adaptation, separate adaptation from approval, and make every transition between phases explicit and auditable. This staging exists because each phase has different owners, different risks, and different criteria for "done".

Six-stage pipeline

S1 · Brief Intake · Product brief parsed, SKU matrix generated
S2 · AI Draft Generation · ◆ Gate 1: Brief approval
S3 · Human Review & Edit · ◆ Gate 2: Draft quality
S4 · Retailer Adaptation · Specs, character limits, compliance
S5 · Compliance & Legal · ◆ Gate 3: Compliance sign-off
S6 · Publish & Distribute · ◆ Gate 4: Final launch approval
ShelfFlow workflow architecture: 6 stages, 4 decision gates

End-to-end workflow architecture. Six stages, four decision gates. AI operates between gates. Humans own every gate transition.
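The stage-and-gate sequence above can be expressed as a small declarative table with one guarded transition function. This is an illustrative sketch, not the production implementation; the stage and gate names mirror the diagram, everything else (function names, data shapes) is hypothetical.

```python
# Hypothetical sketch of the six-stage pipeline. A package may only enter a
# stage once the gate guarding that stage (if any) has an explicit approval.
PIPELINE = [
    ("S1", "Brief Intake",         None),
    ("S2", "AI Draft Generation",  "Gate 1: Brief approval"),
    ("S3", "Human Review & Edit",  "Gate 2: Draft quality"),
    ("S4", "Retailer Adaptation",  None),
    ("S5", "Compliance & Legal",   "Gate 3: Compliance sign-off"),
    ("S6", "Publish & Distribute", "Gate 4: Final launch approval"),
]

def advance(current_stage: str, approved_gates: set[str]) -> str:
    """Move a package to the next stage, refusing if the guarding gate
    has no explicit human approval on record."""
    ids = [stage_id for stage_id, _, _ in PIPELINE]
    nxt = ids[ids.index(current_stage) + 1]   # raises IndexError past S6
    gate = {stage_id: g for stage_id, _, g in PIPELINE}[nxt]
    if gate is not None and gate not in approved_gates:
        raise PermissionError(f"{gate} required before entering {nxt}")
    return nxt
```

Because the gates live in data rather than scattered conditionals, there is no code path that skips one, which is the "no bypass" property the gate model demands.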

04 Core Workflow

The operator interface: one screen to track 36 packages across 6 stages. No drilling, no email, no ambiguity.

The core interface is a workflow board where operators see every content package, its current stage, who owns it, and what's blocking it. Each package card surfaces its SKU, target retailer, current gate status, and AI confidence score at a glance.

The design goal was specific: at any moment, an operator should be able to answer "what's blocking this launch?" in under 5 seconds. That constraint shaped everything from information density to colour-coding to the decision to surface time-in-stage on every card. When you're managing 36 parallel packages, the interface has to surface problems, not require you to hunt for them.

  • Package cards show SKU, retailer, stage, owner, gate status, and time-in-stage. Colour-coded by urgency.
  • Batch operations let operators approve, reject, or reassign multiple packages at once. Critical when managing 36 packages per launch.
  • Filters by retailer, SKU, or gate status so operators can focus on what needs attention now.
Brief intake form · S1 · Brief intake / empty state
Brief with business context · S1 · Brief filled / markets, brands, retailers
Brief validation side-by-side diff

Gate 1: Brief validation. Side-by-side diff showing user brief vs. AI-enriched brief. Human confirms or edits before lock.

Completeness check scanning SKUs · S3 · Completeness scanning / AI analysing SKUs
Completeness results with gaps · S3 · Results / 9 ready, 3 with gaps
SKU detail panel with gap flags

SKU detail panel. Field-level gap detection with AI recommendations. One-click delegation to the ops team for resolution before the package can advance.

Generate content empty state · S4 · Generate / select SKUs, trigger AI
Building PDPs in progress · S4 · Generating / A/B variants per retailer
Amazon PDP preview · Amazon · Full-fidelity PDP preview
Sephora PDP preview · Sephora · Adapted to retailer format
Walmart PDP preview · Walmart · Marketplace specs applied
Amazon enhanced content · Amazon · Enhanced variant (Option B)

Same SKU, three retailers, completely different specs. Previews render in each retailer's actual page layout so operators review what the customer will see, not an abstraction.

Review and approve matrix

Gate 4: Full matrix review. Per-cell approval across 18 human decisions, every SKU × retailer combination. "Approve for Launch" is the final gate.

Push to retailers launch screen

S6: Launch. Package assembled, SKU names mapped to retailer IDs, push to retailers via API. Point of no return.

Detailed user journey with happy path and exception handling

Full user journey mapping. Happy path, branching decisions, rejection loops and recovery flows. Rejection is not an error state. It is a structured feedback loop that routes content back with reviewer notes.

05 State Architecture

Every content package has a state machine. No package moves without an explicit transition, and no transition happens without a record.

Each of the 36 content packages operates as an independent state machine. States are explicit, transitions are guarded by gate conditions, and every state change is logged with who triggered it and why.

This was a deliberate architectural choice. In the existing workflow, packages got stuck in ambiguous in-between states: "probably approved", "waiting on someone", "I think legal saw it". The state machine eliminates ambiguity. A package is either Draft, In Review, Approved, Published, Rejected, or Blocked. Nothing else. Every transition requires an explicit human action or a system event, and the audit trail is permanent.

Content package states

Draft · AI-generated · Awaiting review
In Review · Human editing · Owner assigned
Approved · Gate passed · Ready for next stage
Published · Retailer-live · Audit trail complete
Rejected · Gate failed · Returns to previous stage with notes
Blocked · Dependency unmet · Waiting on external input
Design rationale: Every state transition is logged: who, when, why. Rejected packages carry reviewer notes back to the previous stage. No silent failures. No packages stuck in limbo. The state machine is the accountability layer.
SKU lifecycle state machine diagram

SKU lifecycle: 8 possible states, 2 loops (fix + re-scan), per-retailer approval. 36 parallel state machines converging to a single launch.
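A minimal sketch of the per-package state machine described above: the six states come from the case study, while the allowed-transition set, class and field names are assumptions for illustration.

```python
from datetime import datetime, timezone

# Transitions a package may legally make; anything else raises. The pairs
# below are an illustrative reading of the diagram, not the shipped rules.
ALLOWED = {
    ("Draft", "In Review"), ("In Review", "Approved"), ("In Review", "Rejected"),
    ("Rejected", "In Review"), ("Approved", "Published"),
    ("Draft", "Blocked"), ("In Review", "Blocked"), ("Blocked", "In Review"),
}

class ContentPackage:
    def __init__(self, sku: str, retailer: str):
        self.sku, self.retailer = sku, retailer
        self.state = "Draft"
        self.audit = []   # permanent trail: every transition records who/when/why

    def transition(self, to: str, actor: str, reason: str) -> None:
        if (self.state, to) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {to}")
        self.audit.append({
            "from": self.state, "to": to, "who": actor,
            "when": datetime.now(timezone.utc).isoformat(), "why": reason,
        })
        self.state = to
```

Note that a rejection is just another logged transition carrying the reviewer's reason, which is what makes it a structured feedback loop rather than an error state.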

06 What Shaped the System

Every design decision came down to the same tension: how fast can we move without losing control over what ships?

In enterprise commerce, shipping wrong content to a retailer is more expensive than a delayed launch. Incorrect claims create regulatory exposure. Mismatched specs trigger portal rejections that delay an entire product line. These aren't hypothetical risks. They're the operational reality that shaped every trade-off in the system.

The decisions below weren't made in a vacuum. Each one responded to a specific failure mode I observed during research, or a structural constraint imposed by the multi-retailer, multi-market reality of the workflow.

Decision trade-offs

  • AI authority. Chosen: Draft-only (never ships). Rejected: Auto-publish with confidence threshold. Why: Enterprise compliance requires human sign-off on every claim that reaches the shelf. AI that ships autonomously creates regulatory liability. The value of AI here is speed-to-draft, not autonomy.
  • Gate model. Chosen: 4 mandatory gates, no bypass. Rejected: Flexible approval chains. Why: Flexibility introduces ambiguity. When a retailer or legal team asks "who approved this?", the system must return one unambiguous answer. Configurable chains make that impossible to guarantee.
  • Package granularity. Chosen: 1 package = 1 SKU × 1 retailer. Rejected: Grouped by SKU or by retailer. Why: Retailer specs diverge too much for grouping to be safe. Amazon character limits differ from Sephora's by 40%+. Grouping hides critical differences that cause portal rejections downstream.
  • State machine. Chosen: Explicit states, logged transitions. Rejected: Implicit progress tracking. Why: Every state change records who, when, and why. This isn't just UX. It's the audit trail that makes the system legally defensible when a claim is questioned months after launch.
  • Staged generation. Chosen: Brief → Draft → Adapt → Review. Rejected: Single-step generation. Why: Generating retailer-adapted content in one pass produces output that looks right but fails spec validation. Staged generation lets operators catch problems at each layer before they compound.
  • Batch operations. Chosen: Batch approve only after individual review. Rejected: Unrestricted batch approval. Why: v1 allowed batch-approving packages that hadn't been individually reviewed. Testing revealed operators rubber-stamped to save time. Added a "reviewed" flag requirement. Speed without review is liability.
Decision authority map

Decision authority map: who owns which decisions across the workflow. AI agents act autonomously between gates. Humans own every approval that ships.

07 AI Layer

Six AI agents, each with a bounded scope. Every agent accelerates work. None can ship content.

ShelfFlow embeds 6 specialised AI agents across the pipeline. Each agent operates within a clearly defined boundary: it can draft, suggest, check, or flag, but it cannot approve or publish. This distinction is fundamental to the system's design.

The AI layer exists to eliminate the repetitive, high-volume work that made manual workflows collapse at scale: parsing briefs into structured SKU matrices, generating retailer-adapted copy that respects character limits, cross-checking claims against compliance databases, and detecting inconsistencies across 36 parallel content packages. These are tasks where AI creates genuine operational value. The critical decisions, the approvals and sign-offs that carry legal and brand accountability, remain with humans.

Embedded AI agents

BRIEF PARSER · Extracts SKU matrix, claims, and key messages from product brief
COPY DRAFTER · Generates retailer-adapted copy per SKU. Respects character limits and tone.
SPEC CHECKER · Validates against retailer spec sheets. Flags violations before human review.
COMPLIANCE SCANNER · Checks claims against regulatory databases. Flags risk, never clears it.
CONSISTENCY AUDITOR · Cross-checks messaging across all 36 packages. Detects contradictions.
DIFF REPORTER · Summarises what changed between versions. Enables faster re-approval.

Every agent drafts or flags. No agent approves. Zero autonomous shipping.
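The "drafts or flags, never approves" boundary can be made structural rather than procedural: the agent interface simply has no approve or publish output. A minimal sketch, with all class and field names assumed for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentOutput:
    kind: str      # one of: "draft", "suggestion", "flag"
    payload: str

class Agent:
    """Base class for all pipeline agents. The set of emittable output
    kinds deliberately excludes anything that advances a gate."""
    ALLOWED_KINDS = {"draft", "suggestion", "flag"}

    def emit(self, kind: str, payload: str) -> AgentOutput:
        if kind not in self.ALLOWED_KINDS:
            # "approve", "publish", etc. are structurally impossible outputs
            raise PermissionError(f"agents cannot emit '{kind}'")
        return AgentOutput(kind, payload)
```

Under this shape, "zero autonomous shipping" is a type-level guarantee the orchestrator enforces, not a policy that reviewers have to remember.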

AI agent orchestration flow

AI agent orchestration. The workflow orchestrator routes tasks to the right agent, schedules generation jobs and enforces gate rules. No agent can bypass a gate.

Human-in-the-loop decision model

Human-in-the-loop decision model. Three authority zones: AI acts freely (parse, scan, format), AI proposes (enrichments, variants), human decides (validate, approve, ship). The boundary between zones is the product's trust architecture.

08 Operational Scale

36 packages. 18 approvals. 200+ state transitions. The combinatorial reality that makes manual workflows collapse.

The core complexity of ShelfFlow isn't any single content package. It's the combinatorial explosion when you multiply SKUs by retailers by markets. A "simple" 6-SKU launch across 6 retailers generates 36 unique content packages, each needing at least 3 human touch points.

This is where manual workflows break. Not at package #1, but at package #27, when reviewer fatigue sets in and the differences between Amazon and Walmart specs blur together. ShelfFlow's job is to make package #36 as reviewable as package #1.

Scale math: 6 SKUs × 6 retailers = 36 packages. Each package passes through 4 gates. That's 144 gate evaluations per launch. With roughly 50% requiring at least one revision cycle, the system manages 200+ state transitions per product launch. Without structured workflow, this volume becomes untrackable within the first week.
Edge cases and constraints: The system had to handle partial approvals (3 of 6 retailers ready, 3 still in review), mid-cycle brief changes (product claims updated after generation), and cross-package dependencies (a compliance flag on one SKU potentially affecting all packages sharing the same claim). These weren't theoretical scenarios. They were weekly occurrences in the existing workflow.
Content matrix explosion: 1 brief becomes 36 packages

The content matrix explosion: 1 brief → 12 SKUs × 3 retailers × 2 variants = 36 content packages → 18 approval decisions → 1 launch.

09 Ecosystem Integration

ShelfFlow sits between the brand team and the retailer. It's the coordination layer, not another content tool.

Enterprise content teams don't work in isolation. Product data lives in PIM systems. Brand assets live in DAMs. Compliance rules live in regulatory databases. Finished content ships through retailer portals with their own APIs and validation rules.

ShelfFlow doesn't replace any of these systems. It orchestrates the workflow between them: pulling structured product data at brief intake, validating claims against compliance databases during review, and pushing retailer-formatted packages to distribution portals at launch. The design assumption was that integration points would be messy and unreliable, so the system had to handle partial data, format mismatches and failed API calls gracefully without blocking the entire pipeline.

  • Upstream: Product briefs, PIM data, brand guidelines, regulatory databases
  • Core: ShelfFlow workflow engine, AI agents, decision gates, state machine
  • Downstream: Retailer portals (Amazon, Walmart, Target, etc.), content delivery, audit logs
ShelfFlow in the commerce content ecosystem

Integration map. Upstream sources (PIM, DAM, brand guidelines) flow in. Retailer-ready content flows out to Amazon, Sephora, Walmart and other retail channels. ShelfFlow is the coordination layer, not the data layer.

10 Roles and Authority

Four roles, clear authority boundaries. The system enforces who can approve what. Process docs don't.

In the existing workflow, anyone with email access could informally "approve" content. That created accountability gaps. When something shipped with an incorrect claim, no one could determine who had actually signed off on it.

ShelfFlow defines four operator roles, each with explicit permissions mapped to specific workflow stages and gate authority. A Content Lead can approve briefs and drafts, but cannot sign off on compliance. A Compliance Officer owns Gate 3, but has no authority over launch. Role boundaries are enforced by the system, not by process documentation or team agreements.

Authority mapping

CONTENT LEAD · Owns briefs, approves drafts (G1, G2)
COMPLIANCE · Owns regulatory review (G3)
BRAND MANAGER · Final launch authority (G4)
OPERATOR · Manages workflow, no gate authority
Service blueprint: 5 operational layers across 6 workflow stages

Service blueprint. Five operational layers mapped across all six workflow stages. This diagram drove conversations with engineering about where AI runs server-side vs. where humans interact in the UI.

11 Product Principles

Five rules that constrained every design decision. Not aspirational values. Operational guardrails.

These principles emerged from research, not from brainstorming. Each one addresses a specific failure mode I observed in existing workflows, or a structural constraint that the system had to respect to work at enterprise scale.

  • Keep AI assistive, not autonomous. No AI agent can advance content past a decision gate. AI drafts, suggests and flags. Humans validate, approve and ship. This isn't a philosophical position. It's a compliance requirement in enterprise retail.
  • Every state transition must be auditable. If a retailer or legal team asks "who approved this claim?", the system returns a timestamped answer with the approver's name. No ambiguity. No reconstruction from email threads.
  • The board is the single source of truth. If it's not on the board, it doesn't exist. No side channels, no email approvals, no Slack-based sign-offs. The system must be the canonical record of what happened and when.
  • Design for operational scale, not ideal conditions. Package #36 must be as reviewable as package #1. The interface assumes reviewer fatigue, not reviewer enthusiasm. Urgency signals, comparison tools and batch operations all exist to fight decision quality decay.
  • Structure complexity without slowing teams down. 36 packages, 144 gate evaluations, 200+ state transitions per launch. The operator sees what needs attention now. The system absorbs the combinatorial complexity so the human can focus on judgement calls.
12 Reflection

The hardest design problem wasn't the AI. It was making 18 approval moments feel fast instead of bureaucratic.

The pattern: AI in enterprise workflows fails when it's positioned as a replacement for human judgement. It succeeds when it's positioned as an accelerator within an explicit decision architecture. The value of ShelfFlow isn't that AI writes copy. It's that the system makes the coordination overhead of 36 packages manageable, traceable and auditable. The workflow is the product. The AI is infrastructure.
What failed: v1 had 6 decision gates, one per stage. Testing revealed that gates at Brief Intake and Retailer Adaptation added friction without adding value. Content leads were rubber-stamping them within seconds. I reduced to 4 gates positioned where genuine decision-making happens. The lesson: more governance doesn't mean better governance. Gates only work when they protect against real risk.
What success looks like: A content team launches a new product across 6 retailers in days instead of weeks. Every approval has a timestamp and an owner. Every AI-generated draft has been reviewed by a human before it reaches a retailer portal. When compliance asks "who approved this claim for Walmart?", the answer takes seconds, not a forensic email search. The system doesn't make the work disappear. It makes the work structured enough to scale.