I'm Dolly Kapadia, a senior UX · UI · AI · HX designer who makes magic through product & brand.

Case Study · Benchmark Gensuite

Enterprise Forms — from 60 minutes to 15.

Redesigning a global compliance platform around user confidence, not system logic.

Company
Benchmark Gensuite
Role
Senior Product Design Lead
Industry
Enterprise SaaS · EHS Compliance
↓70%
Form creation time reduced
5→2
UAT revision cycles
12+
Design system components shipped
3
User mental models unified
01
Context

A compliance platform nobody wanted to open.

The problem in short

500+ compliance forms. A user base that avoided opening them.

Benchmark Gensuite's Enterprise Forms platform housed 500+ compliance forms used daily across global manufacturing, energy, and logistics operations — but it was hemorrhaging time at every touchpoint.

The numbers told a clear story. Administrators were spending 45–60 minutes building forms that should take 15. UAT cycles were running 5 rounds deep. EHS managers were running compliance programs on spreadsheets — not inside the platform.

02
The Problem

Three different users. One broken surface.

Root cause

Jakob's Law — the interface was organized around system logic, not user cognition.

Administrators spent 45–60 minutes building forms that should take 15. Templates missing or unusable. Every form started from scratch.

EHS Managers had zero visibility into form lifecycle. Active, rejected, pending — all buried in a separate reporting module nobody opened.

Frontline Users were completing the wrong form versions. Scope assignment unclear. System architecture exposed to users who never needed to see it.

03
Research

What I found before I opened Figma.

Methods

Contextual inquiry · Usability testing · Task analysis · Mental model mapping · Competitive audit

Structured discovery across three user types — observed in real workflows, not conference rooms. I ran contextual inquiry with EHS managers and system admins on-site, moderated usability testing across all three user groups, and mapped each journey against the interface.

Applied frameworks

Double Diamond · Jobs-to-be-Done · Mental Model Mapping · Heuristic Evaluation · Affinity Mapping

Key insight

"Users weren't creating forms — they needed confidence that what they built would work the first time, without a developer, without rework."

04
Design Decisions

Five decisions. All argued for.

01

Re-architected the home surface

Hick's Law · Jakob's Law

Split the home screen into two clear intent zones: scope navigation (operational) and Open Access Forms (public-facing). Administrators land in the right mode instantly — no ambiguity about where to start.

Pushed back on adding Smart Suggest in this sprint — it would have added cognitive weight at the most critical entry point. We deferred it.
02

Made status visible inside the workflow

Nielsen #1 · Visibility of System Status

Real-time status dashboard — Active · Pending · Achieved · Draft · Rejected — as a permanent fixture inside Form Management. Not a widget. Not collapsible. Always on.

Engineering pushed for a collapsible panel. Research said 8 of 10 administrators didn't know if things were up to date. Collapsed panels don't solve that.
03

Turned empty states into entry points

Error Prevention · Recognition Over Recall

Redesigned template library zero-results as an active onramp — warm illustration, clear voice, direct CTA. From "nothing here" → "this is where you begin."

04

Rebuilt the form builder as a feedback loop

Direct Manipulation · Miller's Law

Live field preview. Real-time Properties sidebar. Control panel chunked into named categories — Basic, Static, Date-Time, System, Entity — keeping each group within cognitive grasp.

05

Absorbed complexity so users never felt it

Tesler's Law

Every decision applied one consistent principle: move complexity from the user into the system. Someone always carries the complexity. We chose the system every time.

05
The Turn

When the design proved itself.

UAT moment

Previous cycles averaged 5 UAT revision rounds. This one completed in 2.

During UAT, an EHS manager building a 10-field form from template said — unprompted:

"I actually know what this is going to look like when it's done."

Not "it's faster." Not "it looks better." Certainty. That was the design north star.

06
Outcome

What shifted — and how we measured it.

Area · Result
Form creation time: 45–60 min → under 15 min · ~70% reduction
UAT revision cycles: 5 rounds → 2 rounds
Compliance visibility: First real-time status dashboard embedded in workflow
Open Access Forms: New capability — public data collection outside registered users
External workarounds: Spreadsheet dependency eliminated
Design system: 12+ reusable components contributed
07
Reflection

What this taught me about senior design.

Takeaway

Senior design is about knowing which questions to protect.

"Senior design is not about having all the answers. It's about knowing which questions to protect."

The hardest part wasn't the interface — it was holding the line on cognitive simplicity when every stakeholder had a legitimate reason to add something. Tesler's Law in practice: someone always carries the complexity. We chose the system every time.

What I'd do differently

Frontline field workers were underrepresented in our research. The mobile submission experience deserved its own dedicated sprint.

Case Study · Enterprise SaaS / EHS Compliance

Enterprise Document Manager — compliance at the speed of work.

Redesigning compliance workflows at scale — from fragmented document handling to an AI-enabled, governed operational layer.

Company
Enterprise EHS Platform
Role
End-to-end Product Design Lead
Industry
Enterprise SaaS · EHS Compliance
↓35–45%
Workflow efficiency improved — reduced operational delays
↑25–30%
Higher user confidence in system reliability and accuracy
↓30%
Version-related errors reduced — minimized compliance risk
30+
Enterprise compliance workflows impacted across multi-role teams
01
Context

A platform users worked around, not with.

Why this matters

Audit readiness, compliance outcomes, and operational speed were being throttled by documents users couldn't find.

Enterprise compliance teams were executing high-stakes workflows against fragmented document storage scattered across multiple systems. Retrieval during audits was slow. Teams leaned heavily on manual workarounds sitting outside the product.

Direct impact showed up everywhere: audit readiness slipped, compliance risk rose with missing traceability, and operational execution slowed across every team that touched a document. The scaling challenge was existential — the platform wasn't designed for the volume enterprise teams were generating.

02
The Problem

Three user types. One fragmented system.

EHS Managers

Running compliance programs across large, unstructured repositories. Slow retrieval during audits. Heavy reliance on memory over system navigation.

Compliance Auditors

Inconsistent metadata reducing search accuracy. Limited version traceability introducing significant audit and compliance risk.

Operations Teams

Approval workflows executed outside the system, causing delays and lack of visibility across review cycles.

Constraints

Legacy system architecture. Regulatory requirements. High volume of structured + unstructured data.

Root cause. The system was file-based, not metadata-driven. Users had to know where a document lived before they could use it. At enterprise scale — thousands of active documents, dozens of workflows, multiple regulatory bodies — that broke down fast.

03
Research

What the data actually told us.

Methods

1:1 interviews · Workflow shadowing · Legacy usability testing · Behavioral analysis of search patterns.

Structured discovery across EHS leaders, auditors, and operational users. I ran 1:1 interviews, shadowed real compliance and audit processes, and conducted usability testing on the legacy retrieval and approval workflows to see where the product broke down under real pressure.

Observed breakdown points

  • High friction retrieving documents across large repositories
  • Inconsistent metadata reducing search accuracy
  • Limited version traceability increasing audit risk
  • Approval workflows executed outside the system

Findings that shaped the redesign

  • Users relied on memory due to lack of structured navigation
  • Search was inefficient because of weak metadata
  • Approvals happened off-platform, reducing visibility
  • Version ambiguity introduced compliance risk
Key research insight

"The issue wasn't lack of functionality — it was lack of system structure, visibility, and workflow coherence. Users were forced to rely on manual workarounds instead of the product itself."

04
Design Decisions

Five decisions. From file-based to governed.

01

Re-architected around metadata, not folders

Information Architecture · Jakob's Law

Shifted from file-based organization to a scalable, metadata-driven architecture — enabling consistent classification, faster retrieval, audit traceability, and governed access across enterprise-scale datasets.

Leadership wanted a folder view kept as fallback for familiarity. I argued that reintroducing folders would reintroduce the original problem. We shipped metadata-only. Users adapted within 2 weeks.
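In data terms, the shift reads roughly like this: documents carry structured metadata, and retrieval becomes a query over attributes rather than a path you have to remember. A minimal sketch, with hypothetical field names rather than the platform's actual schema:

```python
# Illustrative documents: each record is attributes, not a folder location.
docs = [
    {"id": "D-101", "site": "Plant A", "doc_type": "SOP", "status": "Published", "version": 3},
    {"id": "D-102", "site": "Plant A", "doc_type": "Permit", "status": "Expired", "version": 1},
    {"id": "D-103", "site": "Plant B", "doc_type": "SOP", "status": "Published", "version": 2},
]

def find(docs, **criteria):
    # Metadata-driven retrieval: match on any attribute, no path required.
    return [d["id"] for d in docs if all(d.get(k) == v for k, v in criteria.items())]

print(find(docs, doc_type="SOP", status="Published"))  # ['D-101', 'D-103']
print(find(docs, site="Plant A", status="Expired"))    # ['D-102']
```

The folder model would have forced the user to know which of the three sites a document was filed under before they could touch it; the query model makes "every Published SOP, any site" a one-step question.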
02

Embedded AI into the workflow, not next to it

Automation · Progressive Disclosure

AI was positioned at critical interaction points — tagging on upload, predictive search, summarization before review — rather than bolted on as a standalone feature. Intelligence amplifies existing workflows instead of creating new ones.

Product initially wanted a "dedicated AI page" for visibility with buyers. I pushed for embedded AI instead — buyers care about outcomes, not chrome. Decision held; outcomes backed it.
03

Made status visible across the lifecycle

Nielsen #1 · Visibility of System Status

Status visibility across every stage — Draft · Under Review · Published · Archived · Expired — surfaced inline so users stop context-switching to reporting modules to confirm where a document sits in its lifecycle.

04

Designed for density, not white space

Miller's Law · Scan-optimized density

Enterprise teams scan hundreds of documents a day. Dense, structured layouts with inline actions let users scan, act, and navigate without leaving flow. Every click saved compounds.

05

Brought approvals back into the system

Workflow Integrity · Governance

Approval workflows — previously executed in email, chat, or spreadsheets — were rebuilt natively with version control and role-based access. Every approval is now auditable, versioned, and governed. Compliance stopped being a report people ran and became a state the product enforced.

05
The Turn

When confidence replaced memory.

Validation moment

The behavior shift users reported was the one I had been designing for.

During UAT, one auditor summed up the behavior change:

"I don't have to remember where things live anymore — I trust the system to surface them."

Not "it's faster." Not "it looks better." Trust in the system. That was the north star the research had pointed to from the start.

06
Outcome

Measurable impact across the business.

Area · Result
Workflow efficiency: 35–45% improvement — reduced operational delays across compliance teams
User confidence: 25–30% increase in system reliability trust scores
Version errors: ~30% reduction in version-related compliance errors
AI-assisted tagging: Standardized metadata on upload — reduced manual tagging dependency
AI summarization: Rapid document evaluation without full-file read — faster review cycles
Contextual search: Predictive, role-aware filtering — reduced reliance on exact metadata
Workflow coverage: 30+ compliance workflows re-architected end-to-end
07
Reflection

What this project taught me about enterprise AI.

Takeaway

The best AI in enterprise isn't a feature. It's a quieter version of the product.

"AI in enterprise design should reduce cognitive load at every step — not create a new one to learn."

The hardest work was resisting the temptation to surface AI as a feature. Every AI capability we shipped was invisible until it had work to do — tagging on upload, summarizing on review, predicting on search. Users rarely named it as "the AI." They just noticed the system felt sharper.

What I'd do differently

I'd invest earlier in a lightweight governance dashboard for compliance leads. The system produced better data than any previous version — but we shipped without a first-class way to see the health of it. That's the next chapter.

Case Study · Benchmark Gensuite · Sustainability & Disclosure Management

Disclosure Director — the ESG system of record.

Designing the disclosure platform sustainability teams stop apologizing for. 400+ KPIs, five global frameworks, one governed surface where the audit trail is the product.

Role
Senior Product Design Lead · End-to-end
Domain
Enterprise ESG SaaS · Disclosure Management
Scope
Research · Personas · Journeys · IA · UI System · Expression Builder
Frameworks
CDP · GRI · CSRD · SASB · TCFD · 400+ KPIs
400+
KPIs structured into one searchable, framework-tagged library
5
Global frameworks unified — CDP, GRI, CSRD, SASB, TCFD
4+ wks
Annual manual collection effort reclaimed per program team
End-to-end
Owned product design across discovery, research, journeys, IA, UI, design system, and a modern Expression Builder
01
Context

ESG reporting stopped being a deck. It became infrastructure.

The shift

CSRD passed. CDP scoring tightened. Investors started reading footnotes. Sustainability went from narrative to ledger overnight.

Disclosure Director is the platform Benchmark Gensuite customers run their ESG program on — 400+ KPIs across CDP, GRI, CSRD, SASB, and TCFD, owned by people in five different functions, audited by a sixth, and signed by a seventh. The old product was built when ESG reports were brochures. By 2024 they were filings.

My design brief, in one line: make this feel less like a survey tool and more like a system of record.

02
Users

Three roles. One filing.

Program Owner

Sustainability lead accountable for the whole filing. Lives in portfolio view: which material topics are red, which KPIs are stale, which framework deadline is coming first.

Data Owner

Operations, HR, finance — the people who actually have the number. Needs a scoped task, a clear input field, and zero exposure to framework taxonomy they didn't sign up to learn.

Reviewer / Auditor

External or internal sign-off. Needs the chain: who entered the value, what evidence backs it, what changed since last quarter, who approved.

The IA pivot

Killed the old "modules" structure. Rebuilt navigation around material topics — because that's how regulators ask, and how owners think.

The original IA was a list of features: KPI Library, Frameworks, Evidence, Reports. Users had to assemble the journey themselves. I rebuilt it around material topics — Climate, Water, Workforce, Governance — so a Program Owner can drop into "Climate" and see every KPI, owner, framework tag, and open task without crossing four screens.

Mental Model Mapping made this obvious. Tesler's Law made it non-negotiable. Someone has to carry the cross-framework complexity. The system carries it now, not the user.

03
Research

Density wasn't the problem. Ambiguity was.

Methods

Stakeholder interviews · persona modeling · journey mapping · workflow shadowing · heuristic review of the existing product · competitive teardown of Workiva, Persefoni, Watershed.

I sat with sustainability leads at three customers during a real CDP submission cycle. The pattern was identical at every site: people weren't drowning in data, they were drowning in questions about the data — Is this number current? Whose number is it? Which framework needs it? Has anyone signed off?

That ruled out the obvious move (simplify the UI). Enterprise users were going to look at thousands of KPIs no matter what we did. The job was structured density — give every cell a clear owner, status, framework tag, and evidence link, then let users filter their way to the question they actually had.

What broke confidence

  • No way to see KPI Finalization Status without opening each record
  • Framework tags hidden three clicks deep — owners didn't know if their input mattered for CDP or GRI or both
  • Calculated KPIs lived in a black box — formulas existed but nobody could read them
  • Audit trail was a separate report, not a thing you could glance at

What that meant for the UI

  • Tables had to be decision surfaces, not record lists — owner, status, framework, evidence count, all at scan-level
  • Status had to split: workflow state (Draft / In Review / Final) separated from risk state (Stale / At-Risk / Verified)
  • Expression Builder needed a plain-English preview, not just a formula box
  • Audit history had to live in a side panel beside the work, not a separate module
Key research insight

"They didn't need fewer KPIs. They needed every KPI to answer four questions on sight: who owns it, what state is it in, which framework cares, and where's the proof."

04
Design Decisions

Five decisions. All argued for.

01

Pivoted IA from features to material topics

Mental Model Mapping · Jakob's Law

Old nav: KPI Library, Frameworks, Evidence, Reports. New nav: Climate, Water, Workforce, Governance, Supply Chain — with framework views as a lens, not a destination. Owners think in topics. Regulators ask in topics. The IA finally matched.

PM wanted to keep the framework-first nav for "compliance customers." I pulled the customer interviews — even the compliance leads were filtering to a topic before doing anything. We shipped topic-first.
02

Made KPI Finalization Status a permanent column

Nielsen #1 · Visibility of System Status

Every KPI row shows its workflow state and its risk state, side by side, always. Draft / In Review / Final next to Stale / At-Risk / Verified. No drilling. No "open the record to find out." The table became the dashboard.
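The underlying data-model point is that the two states are independent axes, not one status field. A minimal sketch of that split, with illustrative names rather than the product's actual model:

```python
from dataclasses import dataclass
from enum import Enum

class WorkflowState(Enum):
    DRAFT = "Draft"
    IN_REVIEW = "In Review"
    FINAL = "Final"

class RiskState(Enum):
    STALE = "Stale"
    AT_RISK = "At-Risk"
    VERIFIED = "Verified"

@dataclass
class KPIRow:
    name: str
    workflow: WorkflowState
    risk: RiskState  # independent axis: a Final KPI can still be Stale

# The case a single status field can't express: signed off, but out of date.
row = KPIRow("Scope 1 emissions", WorkflowState.FINAL, RiskState.STALE)
print(f"{row.name}: {row.workflow.value} · {row.risk.value}")
# Scope 1 emissions: Final · Stale
```

Collapsing both into one enum would force exactly the ambiguity the research flagged: "Final" would hide whether the number was still trustworthy.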

03

Built the modern Expression Builder

Tesler's Law · Recognition Over Recall

Derived KPIs (Scope 1 emissions, water intensity ratios, gender pay gap calculations) used to live in a developer-style formula box that no sustainability lead could read. I designed a token-based builder: drag in variables, pick operators from a typed menu, see the formula render in plain English underneath, see the live preview value on the right.

Eng wanted to ship a Monaco-style code editor "because it's faster to build." It would have been faster to build and impossible to use. I held the line on tokens with a plain-English summary. Customers built derived KPIs in the first UAT session without help.
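The token pattern can be sketched as a tiny renderer: the same token list drives the plain-English summary and the live preview value. Token names and sample values here are hypothetical, not the shipped implementation:

```python
import operator

# Variables a user can drag in, bound to sample data (hypothetical values).
VARS = {"scope1_emissions": 1250.0, "revenue_musd": 480.0}
LABELS = {"scope1_emissions": "Scope 1 emissions", "revenue_musd": "revenue (M USD)"}
WORDS = {"/": "divided by", "*": "multiplied by", "+": "plus", "-": "minus"}

def plain_english(tokens):
    # Render the formula as the readable sentence shown under the builder.
    return " ".join(LABELS.get(t, WORDS.get(t, t)) for t in tokens)

def preview(tokens):
    """Live preview value: substitute sample data and fold left to right
    (a real builder would also respect operator precedence)."""
    ops = {"/": operator.truediv, "*": operator.mul, "+": operator.add, "-": operator.sub}
    value = VARS[tokens[0]]
    for op, var in zip(tokens[1::2], tokens[2::2]):
        value = ops[op](value, VARS[var])
    return value

formula = ["scope1_emissions", "/", "revenue_musd"]
print(plain_english(formula))      # Scope 1 emissions divided by revenue (M USD)
print(round(preview(formula), 3))  # 2.604
```

One token list, two projections: that is what lets a non-technical user verify the formula by reading the sentence instead of parsing the syntax.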
04

Unified the My Tasks panel across modules

Hick's Law · Single Source of Truth

Data Owners had three separate inboxes — KPI inputs, evidence requests, review comments — each with its own list and notification. I collapsed them into one My Tasks panel with type filters. One place to start the day, one queue to clear.

05

Made the audit feed always-visible, never modal

Auditability · Progressive Disclosure

Audit trail used to be a separate report. I moved it into a persistent right-side panel beside every KPI and disclosure record — every change, comment, approval, evidence upload, in time order. Reviewers stop asking "what changed?" because the answer is right there.

06
Outcome

What shifted — and how we measured it.

Area · Result
Information architecture: Pivoted from feature-led nav to material-topic-led IA — owners and regulators now share a model
KPI Finalization Status: Workflow state and risk state visible at row-level — no drill-in required
Expression Builder: 0→1 token-based pattern shipped — non-technical users built derived KPIs in first UAT session
My Tasks panel: Three separate inboxes collapsed into one cross-module queue
Audit feed: Always-visible side panel — reviewers stopped requesting "what changed" reports
Time saved: ~4+ weeks of annual manual collection effort reclaimed per program team
Design system: 15+ governed components: framework badges, KPI cards, owner chips, evidence drawers, audit timelines, expression tokens
07
Reflection

What this taught me about regulated UX.

Takeaway

In regulated software, the audit trail isn't a feature. It's the product.

"The best enterprise design doesn't hide complexity. It organizes it so users can act with confidence — and prove they did."

The hardest argument was the simplest one: that an audit trail belongs in the same view as the work, not in a quarterly export. Tesler's Law again — someone always carries the complexity. Putting it on the system meant fighting for screen real estate every sprint. Worth it.

What I'd do differently

I'd have run the framework-mapping research with the auditors earlier. We learned which tags actually mattered for assurance opinions in week ten. That should have been week one.

Case Study · Internal Enterprise Tool

Champions Builder — the shared brain for sales pursuits.

An internal lead-collaboration tool for Marketing Incubator teams. Built from zero to replace the spreadsheet, the Slack thread, and the "wait, who was talking to her?" — all at once.

Product
Internal lead-collaboration workspace
Role
End-to-end Product Design · 0→1
Platform
Mobile-first — used between meetings, not at desks
Focus
Open tickers · Contact capture · Conversation log · Distribution lists
0→1
Built from scratch — replaced spreadsheets, Slack threads, and tribal memory
1
Surface for capture, conversation, affiliation, and follow-up
Mobile
Phone-first — designed for the elevator, not the desk
10+
Reusable component patterns: status pills, ticker cards, log entries, list builders
01
Context

The pursuit was happening. The record-keeping wasn't.

The setup

Marketing Incubator teams ran lead pursuits across conferences, calls, intros, and warm reach-outs. The work was fast. The handoffs were not.

Champions Builder is the internal product Marketing Incubator teams use to collaborate on leads — capture new contacts in the moment, find existing contacts before someone else cold-emails them again, log conversations so the next person knows what was said, and manage targeted distribution lists without rebuilding them every campaign.

The brief: a mobile-first workspace that's faster to update than a Slack message and more reliable than anyone's memory.

02
The Problem

Three failure modes. One missing surface.

Scattered context

Notes lived in DMs. Contact details in spreadsheets. Status in someone's head. Three teammates would each cold-email the same person before realizing two others had already met her.

No continuity

When someone went on leave or rotated off a pursuit, the relationship reset to zero. No record of what was promised, by whom, or when the next nudge was due.

List work was a tax

Building a targeted distribution list meant a fresh export, a fresh filter, a fresh "wait, is this list current?" — every single time a campaign went out.

The opportunity

A lightweight CRM surface — without becoming Salesforce. Fast to update, ambient to read, status-first by default.

The trap with internal tools like this is the slow drift toward becoming a real CRM — fields beget fields, and within six months nobody updates it. The product had to stay aggressively narrow: capture, search, log, affiliate, list. Five verbs. Status visible without a click.

03
Research & Mapping

Mapped the journey before opening Figma.

Approach

Stakeholder interviews · workflow shadowing · persona modeling · entity mapping · task-flow analysis.

The journey turned out to be a loop, not a funnel: meet someone → check if they're already in the system → add or update → log what was said → tag the affiliation → drop them on a list → follow up before someone else does. Every stage failed in the same way — context loss between people.

The bigger finding wasn't about screens. It was about workflow architecture: which actions deserved the home screen, which fields could wait behind a "more" tap, and how status had to read at a glance without anyone opening a record.

User groups

  • Marketing owners — running active pursuits and open tickers
  • Program stakeholders — checking status without doing the typing
  • Coordinators — owning contact data quality and list accuracy
  • Sales/partnership collaborators — picking up where someone else left off

Outputs

  • Lead journey map across discovery, qualification, logging, follow-up
  • Persona needs split by owner, collaborator, stakeholder
  • Entity model: contacts, companies, affiliations, conversations, lists
  • Task flows for search, add, log, affiliate, manage
Key research insight

"The contact record wasn't the product. The product was the relationship history — and it had to travel with the contact across every person who touched it."

04
Design Decisions

Five decisions. All in service of speed and shared memory.

01

Open tickers own the home screen

Nielsen #1 · Visibility of System Status

The first thing you see on launch isn't a search bar or a feed — it's your open tickers, sorted by what's overdue. Past-due and in-progress states are scan-level signals, not metadata you have to drill in to discover.

PM wanted a "Recently Viewed" feed at the top because "it's what people scroll to anyway." Recently Viewed is a vanity surface — it tells you what you did, not what you owe. Open tickers tell you what to do next. We led with the tickers.
02

Search before create — every time

Jakob's Law · Error Prevention

The "Add Contact" flow opens with search, not a blank form. Type the name, see if they're already there, then either reuse or create new. Eliminates the duplicate-contact problem at the entry point — where it actually starts.

03

Conversation log = the team's shared memory

Continuity · Recognition Over Recall

Every contact has a chronological log: what was said, who said it, what was promised, what's next. Quick-add at the bottom, expandable detail above. The handoff that used to take a 15-minute call now takes a scroll.

04

Affiliations and distribution lists as one data model

Data Model · Tesler's Law

Affiliations (contact ↔ company ↔ program) and distribution lists are two views of the same underlying graph. Build a list once, the affiliations keep it current. No more "is this list still good?" exports.
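That "two views of one graph" idea can be sketched in code: a distribution list is a saved query evaluated on read, so new affiliations surface without a re-export. Entity and field names are illustrative, not the product's actual schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Affiliation:
    contact: str   # contact id
    company: str
    program: str

@dataclass
class AffiliationGraph:
    affiliations: set = field(default_factory=set)

    def add(self, contact, company, program):
        self.affiliations.add(Affiliation(contact, company, program))

@dataclass
class DistributionList:
    """A list is a saved query over the graph, not a snapshot export."""
    name: str
    program: str

    def members(self, graph):
        # Evaluated at read time, so the list is always current.
        return sorted({a.contact for a in graph.affiliations
                       if a.program == self.program})

graph = AffiliationGraph()
graph.add("dana", "Acme", "incubator-2024")
outreach = DistributionList("Incubator outreach", "incubator-2024")
print(outreach.members(graph))  # ['dana']
graph.add("lee", "Globex", "incubator-2024")
print(outreach.members(graph))  # ['dana', 'lee'] — no rebuild needed
```

A static export would have frozen the first result; the query stays correct as the graph changes, which is exactly the "is this list still good?" problem dissolving.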

05

Reusable mobile patterns over bespoke screens

Design Systems · Miller's Law

Ticker cards, status pills, search bars, tabbed accordions, inline CTAs — defined once, reused everywhere. Kept the interface compact and predictable as scope grew. Adding a new module became a configuration job, not a redesign.

06
Outcome

From scattered activity — to a shared brain.

Area · Design result
Open work: Tickers and overdue states became the home screen — no more digging to find what you owe
Contact capture: Search-first add flow killed the duplicate-contact problem at the source
Collaboration: Conversation log replaced individual recall — handoffs survived rotations and PTO
Distribution lists: Lists became a query over the affiliation graph, not a static export — always current
Mobile-first IA: Tabs, bottom nav, accordions, contextual CTAs — usable in an elevator
Design system: 10+ reusable patterns for status, search, cards, forms, lists, and workflow actions
07
Reflection

What this taught me about internal tools.

Takeaway

Internal tools fail by becoming too useful — every "small" addition is the start of a CRM. Saying no is the design.

"The product wasn't the form. It was the continuity of the relationship — and the discipline to keep the surface narrow enough that people actually used it."

Champions Builder is end-to-end 0→1: discovery, personas, journey, IA, workflow modeling, mobile UI density, component scale. The decision I'm proudest of is the one that doesn't show on screen — the features I argued against. Stages, custom fields, opportunity values, forecast dashboards. All requested. All declined. The product stayed a shared brain instead of becoming Salesforce-Lite.

What's next

The natural v2 is ambient intelligence — follow-up nudges based on log timestamps, owner-based queues, and a prompt when a conversation says "I'll send it next week" but next week came and went.

Case Study · Enterprise AI Platform

TeamGPT — where AI becomes a teammate.

End-to-end UX/UI design for a unified AI workspace — streamlining agent discovery, interaction, and management across enterprise teams.

Team
Enterprise AI Platform
Role
End-to-end UX/UI Design Lead
Industry
Enterprise SaaS · AI Agents · Copilot
↑ Adoption
Centralized agent discovery drove higher AI tool adoption across enterprise teams
↓ Context switching
Unified GPT + Copilot workflows eliminated tool-hopping
↑ Reuse
Prompt reuse, task execution, and knowledge retention built into the workspace
↑ Monetization
Subscriber-ready AI workspace enabled external revenue path
01
Context

AI tools everywhere. Governance nowhere.

Business & system challenges

AI usage was fragmented, ungoverned, and invisible — scaling risk faster than scaling value.

Enterprise teams were using AI tools across disconnected surfaces — GPT instances, Copilot apps, custom agents — with no centralized ecosystem. Governance didn't exist, so outputs were inconsistent and risk exposure grew every month.

Agents were hard to discover, capabilities were unclear, and users had no structured way to reuse prompts or continue prior conversations. Every session started from zero.

02
The Problem

Operational impact of ungoverned AI.

Adoption Gaps

Reduced efficiency in AI adoption because users couldn't find the right agent for the task, and had no trust that the outputs were governed.

Operational Friction

Increased operational friction from context-switching between tools and re-creating prompts that had already been written 20 times across the team.

Missed Monetization

Missed opportunity for a scalable, customer-facing AI platform — internal infrastructure could have been the product, but wasn't packaged for external subscribers.

Key breakdowns

No continuity across conversations. No control over AI responses. No reuse of prompts or outputs.

At its core: users had power without structure. They could generate anything, but couldn't reliably retrieve, reuse, or govern what they had generated. The product was an engine without a workspace around it.

03
Research

Dogfooded with real enterprise workflows.

Dogfooding

Internal teams used agents for real workflows: project planning, code evaluation, task execution — not synthetic scenarios.

I shadowed engineers, product teams, and internal enterprise users running real AI workflows — then interviewed them about friction points. Mapped actual interaction patterns across prompts, responses, and reuse behavior to ground every design decision in observed reality.

Data insights

  • High drop-off after initial prompt usage
  • Low reuse of previous conversations and prompts
  • Frequent switching between agents and tools
  • Lack of control over AI responses and outputs

Findings

  • AI workflows require continuity, not one-off interactions
  • Discovery must be fast and contextual (recents, favorites)
  • Users need control over models, parameters, and outputs
  • Prompt reuse and history are critical for efficiency
  • AI systems must balance power + simplicity
Key research insight

"Users weren't asking for more AI. They were asking for somewhere to put the AI they already had — with memory, structure, and guardrails."

04
Design Decisions

Five decisions. From chaos to workspace.

01

Built a centralized Agent Marketplace

Discoverability · Jakob's Law

Designed a centralized agent marketplace (All, Recents, Favorites) with a card-based layout for quick scanning and selection. Inline actions (open, favorite, expand) let users act without losing context. Agent capability was surfaced on the card, not hidden behind a click.

Engineering wanted a searchable list view only, arguing cards took too much space. Cards won because scanning agent capability at a glance was the entire point — speed comes from visibility, not density.

02

Designed a structured conversational interface

Clarity · Visual Hierarchy

Structured conversational interface for prompt → response workflows. Clear separation of user input vs AI output. Enabled contextual responses with "Remember Conversations" so continuity carried across sessions — not lost every time a tab closed.

An "infinite canvas" conversation model was proposed. Dogfooding showed users got lost fast. Structured thread with clear turn boundaries won — speed of comprehension > apparent flexibility.

03

Built Prompt & Knowledge Management

Memory · Reuse Efficiency

Designed a bookmark system for saved prompts and reusable knowledge. Folder-based organization for scalability. Quick access to frequently used workflows. Users stopped rewriting the same prompts across teams — reuse became the default, not the exception.
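
The folder-based reuse model can be sketched as a tiny repository keyed by folder and title. The class and method names here are illustrative, not the shipped implementation:

```python
from collections import defaultdict

# Illustrative sketch of a folder-based prompt bookmark store.
# Names and structure are assumptions, not the shipped data model.
class PromptRepository:
    def __init__(self):
        self._folders = defaultdict(dict)  # folder -> {title: prompt text}

    def save(self, folder: str, title: str, prompt: str) -> None:
        self._folders[folder][title] = prompt

    def get(self, folder: str, title: str) -> str:
        # Reuse beats rewriting: retrieval is a lookup, not regeneration.
        return self._folders[folder][title]

    def list_folder(self, folder: str) -> list[str]:
        return sorted(self._folders[folder])

repo = PromptRepository()
repo.save("Code Review", "Security pass", "Review this diff for injection risks...")
repo.save("Code Review", "Style pass", "Review this diff for naming and clarity...")
print(repo.list_folder("Code Review"))  # ['Security pass', 'Style pass']
```

The design point is that a saved prompt is addressable, so reuse can become the default path instead of the exception.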

04

Introduced Governance & Controls

Transparency · Control

Introduced model selection, parameters, and response controls (temperature, token limits, memory). Provided transparency and control over AI behavior. Enabled enterprise-level configuration across tools — turning AI from a black box into a governed layer.
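
The governance layer described above is essentially a policy that clamps each user's choices to an approved envelope. A minimal sketch, assuming hypothetical policy fields and model names:

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-workspace governance policy: enterprise
# admins set the bounds, individual users pick values within them.
@dataclass(frozen=True)
class GovernancePolicy:
    allowed_models: tuple[str, ...]
    max_temperature: float
    max_tokens: int
    memory_enabled: bool

@dataclass
class RunConfig:
    model: str
    temperature: float
    max_tokens: int

def apply_policy(policy: GovernancePolicy, cfg: RunConfig) -> RunConfig:
    """Clamp a user's run configuration to the enterprise policy."""
    if cfg.model not in policy.allowed_models:
        # Fall back to the first approved model rather than erroring out.
        cfg.model = policy.allowed_models[0]
    cfg.temperature = min(cfg.temperature, policy.max_temperature)
    cfg.max_tokens = min(cfg.max_tokens, policy.max_tokens)
    return cfg

policy = GovernancePolicy(("gpt-internal", "claude-internal"), 0.7, 4096, True)
cfg = apply_policy(policy, RunConfig("gpt-4", temperature=1.2, max_tokens=16000))
print(cfg)  # model and limits clamped to the governed envelope
```

The "black box to governed layer" shift is exactly this: the controls are visible, and the bounds are enforced centrally rather than per conversation.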

05

Added History & Continuity as a first-class surface

Cognitive Load · Miller's Law

Designed conversation history with resume capability. Users could continue workflows without restarting context. Simplified workflows into structured, repeatable patterns. Reduced ambiguity with clear system feedback and states. Balanced flexibility with guided interactions — power users got depth, new users got the rails.

05
The Turn

When AI stopped feeling like a lottery.

Validation moment

The behavior shift users named was the one the research pointed to from day one.

During internal rollout, a product manager summed up the shift:

"I used to gamble on prompts. Now I build on them."

Not "the AI is smarter." Not "the interface is cleaner." Compounding trust. Prompts had become assets — reusable, governed, and stacked across workflows.

06
Outcome

Final design — unified AI workspace at scale.

Area: Result
AI workspace: Unified platform integrating agent discovery, interaction, and management
Workflow continuity: Seamless flow: prompt → response → reuse → continuation
Governance: Centralized control — model selection, parameters, response controls embedded into configuration layer
Scalability: Supports internal enterprise use and external subscribers
Adoption: Increased adoption of AI tools across enterprise teams
Multi-agent friction: Reduced through prompt reuse and workflow continuity
Platform readiness: Established the foundation for a scalable, customer-facing AI product

07
Reflection

What this taught me about designing for AI.

Takeaway

AI design isn't about the model. It's about the workspace around it.

"Users don't need a smarter AI. They need a place to put it — with memory, structure, and trust."

The hardest decisions were restraint decisions — holding the line against AI feature creep. Every new capability risked adding cognitive load without adding value. Choosing memory over novelty, choosing governance over velocity, choosing reuse over generation. Each one required trading perceived "AI magic" for real workspace utility.

What I'd do differently

I'd ship a team-level knowledge graph earlier. Prompt reuse worked at the individual level — but the collective intelligence of a team should compound in a visible surface. That's the next chapter.

Case Study · Benchmark Gensuite · EHS/ESG Enterprise SaaS

From Legacy to Digital Home.

End-to-end UX redesign of a global enterprise EHS/ESG platform — trusted by 4M+ users across 35+ industries. Transformed a site-first, cognitively overloaded legacy system into a personalized, AI-augmented Digital Home experience across web, tablet, and mobile.

Company
Benchmark Gensuite
Role
Senior Product Design Leader · End-to-end
Industry
Enterprise EHS · ESG · Sustainability SaaS
4M+
Global users on the redesigned platform
300+
Enterprise clients across 35+ industries
230K+
Mobile users on the redesigned app
25+
Platform modules unified under one system
01
Context

A powerful platform. A user experience that hadn’t kept pace.

25 years of product. One legacy UX.

Organized around the system’s own architecture — not around how users think about their work.

Benchmark Gensuite had built a powerful EHS platform over 25 years — trusted by Fortune 500 clients for compliance-critical operations. But its interface had drifted into a site-first structure: users had to understand the system before they could complete their work.

For frontline workers operating in high-stakes environments — safety inspectors, field teams, infrequent users — this created dangerous friction where there should have been clarity. The business signal was equally sharp: client churn risk, low frontline adoption, a mobile experience broken for field use, and a competitive gap against modern EHS platforms.

Design opportunity: If users had to understand the system before they could complete their work, the system was organized for the product — not the person. The redesign had one north star: make the work come to the user, not the other way around.

02
The Problem

Three user archetypes. One platform failing all of them differently.

EHS Manager · Power User

Daily platform user. Needs cross-program visibility, reporting, and team action management. High digital literacy but time-pressured.

Design implication: Needs a command center, not an app launcher. Dashboard-first, real-time status, no hunting.

Frontline Worker · Field User

Infrequent or task-specific user. Accesses platform primarily on mobile to log concerns, complete audits, or action assigned tasks. Low tolerance for friction.

Design implication: Every extra tap is a failure. Tasks must be reachable in one action. AI assistance essential for complex record completion.

System Admin · Client Configurator

Manages platform setup, user permissions, and client branding. Infrequent deep-use. Needs control without complexity.

Design implication: Personalization must be powerful enough for admins to configure but invisible enough that end users never feel it.

The cost of one-size-fits-all

Designing only for the EHS Manager meant failing the frontline worker and the system admin.

All three personas had to succeed on the same platform — but each had fundamentally different relationships to it. The legacy experience treated them identically. The redesign had to treat them as three different products with one shared foundation.

03
Research & Discovery

Ten methods. One thesis per finding.

Research leadership

Every design decision in this project traces back to a research finding. Methods were chosen to surface the gap between what users said, what they did, and what the system provided.

I owned research end-to-end — from method selection through synthesis through stakeholder communication. The highest-leverage finding didn’t come from interviews. It came from contextual inquiry in field conditions — watching a safety inspector try to log a concern on mobile while wearing gloves in variable lighting. Mobile-first wasn’t optional; it was a compliance risk if we didn’t deliver it.

Methods applied

  • Heuristic Evaluation (Nielsen’s 10)
  • Contextual Inquiry — field observation
  • User Interviews — across roles & industries
  • UX Journey Mapping — end-to-end workflows
  • Task Analysis & Flow Mapping
  • Jobs-To-Be-Done framework
  • Affinity Mapping — synthesis
  • Competitive & Analogous Analysis
  • UX Writing Audit
  • Persona Development (behavioral, not demographic)

Four pain themes surfaced consistently

  • Navigation overload — too many apps, no hierarchy
  • Task visibility — no status, no priority, no surface
  • Personalization gaps — one UI for every role
  • AI-assist absence — manual, multi-step workflows

These four themes became the four solution pillars of Digital Home.

Reframing insight — Jobs-To-Be-Done

"I need to access the Incident Management app" became "I need to log a concern before I forget the details." Different statement. Entirely different design response.

04
Design Decisions

Six principles. Every decision argued for.

Six principles, established before screens

Task-first · Role-aware · AI-augmented · Brand coherent · Frontline-ready · Clear signal over noise.

The biggest leverage in enterprise UX wasn’t the interface — it was establishing shared design principles before a single screen was drawn. Principles created alignment speed, gave the team a basis for saying no to scope creep, and kept a 25+ module platform coherent across years of iteration.

01

Replaced site-first navigation with a task-first homepage

Jakob’s Law · Hick’s Law

Designed Start Record · Look Up · Take Action · Quick Access as the four primary entry points. Replaced app-by-app access with one routing action. "Start Record" takes the user into Concern, Injury, Action, Audit, or Observation without requiring system knowledge. "Take Action" surfaces a prioritized, status-aware task list (Past Due, Open) so users triage at a glance.

Engineering pushed for retaining the app grid as a fallback for familiarity. I held the line with research: the app grid was the problem. Keeping it would reintroduce the friction we were rebuilding the platform to remove. Task-first shipped.

02

Built a two-layer personalization architecture

Personalization · Governance

Designed a client-level layer (brand colors, logo, background imagery per enterprise client) and a user-level layer (pinned apps, Quick Access shortcuts, role-based defaults). The platform feels like a native product of the client’s organization — not generic SaaS dropped into their environment. Powerful enough for admins to configure, invisible enough that end users never feel it.

03

Initiated Genny AI placement & interaction model

AI-augmented UX · Persistent Assistance

I initiated and designed how Genny AI exists within Digital Home — persistent left-side positioning, default-on, with proactive alerts, action summaries, generative AI features, and natural-language search from a single consistent entry point. AI assistance that requires users to seek it out gets ignored. Genny is always in peripheral view — available when needed, never intrusive when not.

"Hide it behind an icon" was the default proposal — cleaner visually, safer politically. Competitive research showed the best enterprise AI assistants are persistent and proactive, not menu-hidden. I argued for persistent placement. It shipped that way.

04

Rebuilt mobile as the platform, not a reduced version

Frontline-ready · Progressive Disclosure

The legacy mobile app was a fragmented, leaky experience. I rebuilt it end-to-end as a unified, persona-aware mobile platform with the same Digital Home logic — task-first, role-aware, client-branded, with Genny AI accessible on every screen. Critical flows rebuilt with clear progress, error recovery, and offline capability for field use. The mobile app is not a reduced version of the platform — it is the platform, optimized for the device.

05

Owned UX writing as a core workstream — not an afterthought

Plain Language · Outcome-oriented copy

System-centric jargon was a primary source of cognitive load. I rewrote the platform’s core navigation and action vocabulary from scratch — replacing system language with human, outcome-oriented phrasing. Every label written to meet users where they are, not where the system expects them to be. "Start Record" instead of "Incident Management App." "Look Up" instead of "Global Search." UX writing was scoped as a core workstream, with ownership.

05
The Turn

When users named the shift in their own words.

Post-launch validation

The redesign shipped as "My Home" to the full subscriber base after a successful beta. Community response was immediate.

From real EHS leaders using the product post-launch:

"Loved the ability to configure the UI in a way that works best for me."
— HSE Manager · Benchmark Gensuite subscriber
"This layout allows me to see the things I want to see all at once, and not have to go searching through different applications."
— EHS Leader · Benchmark Gensuite subscriber

Users named the two design principles the redesign was built on — personalization and task-first visibility — in their own words, unprompted. That’s validation the research predicted, and the design delivered.

06
Outcome

The shift — measurable, structural, shipped.

Before → After
Site-first navigation requiring system knowledge → Task-first homepage — work surfaces immediately
Uniform experience for all roles → Role-aware, client-personalized Digital Home
No AI — manual, multi-step workflows → Genny AI persistent across all screens
Fragmented legacy mobile app → Unified persona-driven mobile redesign
No brand identity — 25 years of visual drift → Cohesive design system at enterprise scale
System-centric jargon throughout → Human UX writing — owned end-to-end

UX metrics instrumented for ongoing measurement: task completion rate, time-on-task reduction, navigation error rate, feature adoption by role, mobile session depth, Genny AI engagement rate per user group.

07
Reflection

What I’d push further — and what this taught me.

Design leadership insight

The biggest leverage in enterprise UX is shared design principles — established before a single screen is drawn.

"Designing for enterprise at scale means designing for a spectrum of technical literacy within a single product. The same interface must succeed for a VP reviewing dashboards and a field worker logging a safety concern on a job site."

Senior design leadership means knowing what you’d push further — not just celebrating what shipped. Principles create alignment speed, give teams a basis for saying no to scope creep, and keep a 25+ module platform coherent across years of iteration.

What I’d validate further

Deeper usability testing with infrequent users — particularly those accessing the platform monthly for compliance tasks — to measure whether Digital Home truly eliminates re-learning friction at scale across diverse industries and geographies.

What I’d push harder on

The Genny AI onboarding flow — specifically designing role-aware first-run experiences where Genny proactively introduces itself based on the user’s persona, reducing time-to-value for new platform subscribers.

08
Design Ownership

End-to-end — research through system governance.

Deliverable: Ownership
UX research & synthesis: Led end-to-end
UX writing & content strategy: Owned end-to-end
Personalization architecture: Owned end-to-end
Genny AI placement & interaction model: Initiated & led
Mobile app redesign: Owned end-to-end
Visual design & brand system: Owned end-to-end
Design system & component library: Owned end-to-end
Stakeholder alignment & design leadership: Led throughout

Case Study · 0→1 AI Experience Layer · Enterprise EHS

Ultra — from navigation to execution.

Defined and designed Ultra — an AI-first experience layer that transforms enterprise software from navigation-heavy interfaces into intelligent work execution surfaces. Ultra sits on top of existing applications and answers one question exceptionally well: "What do I need to do right now?"

Company
Benchmark Gensuite
Role
Product Design Lead · AI Experience Architect
Industry
Enterprise SaaS · EHS · Agentic AI
0→1
New AI experience layer defined, designed & shipped
3
Personas with distinct AI experience architectures
↓ Cognitive load
Context-switching eliminated across EHS workflows
↑ Execution speed
Inline actions replaced multi-step, multi-app flows
01
Context

Site-first software in an agentic AI world.

The framing shift

Users don’t operate in systems. They operate in jobs-to-be-done. Software still asks them to learn the system first.

Enterprise EHS work lived across fragmented systems — Incident Management, Action Tracking, Concern Reports, Calendar, Inspections. Every one of them asked users to first understand where their work lives before they could act on it.

For frontline operators executing under time pressure, functional leaders tracking accountability across programs, and executives needing risk visibility at a glance — navigation had become the bottleneck. Not the features. Not the data. The interface itself.

Ultra was a 0→1 product definition — a new experience layer sitting on top of existing applications. The strategic restraint mattered: we weren’t rebuilding 25+ modules. We were replacing the question users ask when they open the product.

02
The Problem

Three personas. Three entirely different relationships to work.

Operational User · Frontline

Technicians, operators, field workers executing tasks safely. Don’t care which system work lives in — they care about getting it done. Low tolerance for navigation. Mobile-first. Context switching kills productivity.

Functional Leader · Control Tower

Site leaders, program owners accountable for outcomes. Need cross-program visibility with clear ownership and status. Move between oversight and action throughout the day. Need drill-down without friction.

EHS Executive · Source of Truth

Senior leaders owning enterprise risk and narrative. Need clear, credible visibility into risk posture. Care about trends, not transactions. Need assurance, not operations. Mobile-accessible, presentation-ready.

The friction

High cognitive load. Manual review overhead. No prioritization. Missed or delayed actions.

The same platform had to serve all three — but at completely different abstraction levels. Designing one default homepage and assuming it would work for everyone would fail all three differently. Ultra had to adapt to the user, not the other way around.

03
Research & Discovery

The reframe — "Where do I go?" → "What should I do?"

Methods

Contextual interviews · Workflow decomposition · Task analysis · Heuristic audit · JTBD framing.

I ran contextual interviews across Operational, Functional, and Executive users. Decomposed actual workflows across ATS, IM, CR, and CC. Measured time-to-action and dependency mapping. Audited the legacy IA, feedback loops, and prioritization gaps.

The synthesis surfaced one reframe that reset the entire problem space:

What users actually need

  • Users operate in job-to-be-done flows, not systems
  • Work requires decision → action, not navigation
  • Cognitive load comes from scattered data + missing prioritization
  • AI is effective only when embedded, context-aware, and actionable

Design implications — four pillars

  • Intelligent prioritization — surface what matters first
  • Inline execution — act without switching context
  • Persistent AI co-pilot — always available, never intrusive
  • Role-aware experience — one platform, three defaults

The design thesis

"From navigation system to execution system. From 'Where do I go?' to 'What should I do?'"

04
Design Decisions

Six decisions. Each one an argument for execution over navigation.

01

Made Ultra the entry point — replacing app navigation entirely

Jobs-To-Be-Done · IA Transformation

Ultra became the starting point of work — surfacing Tasks · Insights · Records directly. Users no longer need to know which app holds which record. Ultra understands role, permissions, and ownership, then routes intent into the right place. The app grid still exists — but users rarely need it.

Stakeholders proposed keeping the app launcher as the default landing page "for familiarity." I held the line: the launcher was the friction we were rebuilding to remove. Ultra shipped as the default entry. The launcher moved to a secondary surface for admins and power users.

02

Designed "My Live Feed" — proactive intelligent prioritization

Progressive Disclosure · Behavioral Signals

The Live Feed surfaces a dynamic stream of critical actions, risks, and deadlines powered by role context, behavioral signals, and AI ranking logic. Every card has a priority state (In Progress · High Priority · Completed) and — crucially — an inline action. Users triage and execute from the same surface.

A filter-heavy inbox pattern was proposed. Research showed filters add cognitive load when users already don’t know what they’re looking for. I argued for AI ranking with transparency — the system decides priority, users see why. Feed shipped with confidence states ("Genny AI: Investigation typically takes 3 days").
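
The "AI ranking with transparency" pattern can be sketched as a scoring function that returns both an order and the human-readable reason shown on each card. The weights and field names below are assumptions, not the production logic:

```python
from dataclasses import dataclass

@dataclass
class FeedCard:
    title: str
    days_to_deadline: int
    severity: int          # 1 (low) .. 3 (high)
    owned_by_user: bool

def rank_feed(cards: list[FeedCard]) -> list[tuple[FeedCard, str]]:
    """Score each card and attach the 'why' the system shows the user.
    Weights are illustrative; the point is that priority is explained."""
    def score(c: FeedCard) -> float:
        s = c.severity * 10 - c.days_to_deadline
        if c.owned_by_user:
            s += 5
        return s
    ranked = sorted(cards, key=score, reverse=True)
    return [
        (c, f"Severity {c.severity}, due in {c.days_to_deadline} day(s)")
        for c in ranked
    ]

cards = [
    FeedCard("Monthly inspection", 14, 1, True),
    FeedCard("Open incident in B11", 1, 3, True),
]
top, why = rank_feed(cards)[0]
print(top.title, "→", why)  # Open incident in B11 → Severity 3, due in 1 day(s)
```

Attaching the reason string to every ranked card is what separates "the system decides" from "the system decides, and users see why."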

03

Built an inline Action Layer — execution without navigation

Direct Manipulation · Fitts’s Law

Every Live Feed card has a clear execution path — Verify · Resolve · Log · Escalate — directly from the surface. No searching, no switching systems, no multi-step flows. Paired with a dashboard of workflow counts ("8 Open · 2 Past Due") that are themselves entry points into prioritized views.

04

Designed Genny AI as a persistent, proactive co-pilot

HX Design · Conversational UX · Embedded Intelligence

Genny AI is contextually embedded, not hidden behind an icon. She initiates — reaching out first ("I see your lighting concern in B11 was resolved. Can you also let me know if that Zone C rail repair is still solid?") — not waiting for users to ask. Interaction is conversational, context-aware, and embedded within tasks.

"Hide it behind an icon" was the default proposal — cleaner visually, safer politically. Competitive analysis showed the best enterprise AI co-pilots are persistent and proactive. I designed her with a pin state, quick-response chips ("Yes, It’s Good · Check Status · All Prompts"), and a conversation history that threads across the incident lifecycle.

05

Role-based experience — three personas, three AI defaults

Personalization · CX + EX + HX integration

Ultra loads differently for each persona. Operational users see task-first, mobile-optimized layouts. Functional leaders get cross-program visibility with bottleneck detection. Executives see AI summaries, risk signals, and drill-down only when needed — never required. Same platform, three entirely different first impressions. Layout presets are determined by Role APIs; the content presented is AI-determined.
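
The role-aware loading reduces to a mapping from persona to layout preset, with a safe fallback. The preset and role names here are invented for illustration:

```python
# Hypothetical sketch of role-driven layout presets: the role API decides
# the layout shell; an AI layer fills in the content (not shown here).
LAYOUT_PRESETS = {
    "operational": ["my_tasks", "live_feed", "quick_log"],           # task-first
    "functional":  ["program_overview", "bottlenecks", "live_feed"], # oversight
    "executive":   ["ai_summary", "risk_signals", "trends"],         # assurance
}

def layout_for(role: str) -> list[str]:
    # Unknown roles fall back to the operational (safest, task-first) default.
    return LAYOUT_PRESETS.get(role, LAYOUT_PRESETS["operational"])

print(layout_for("executive"))  # ['ai_summary', 'risk_signals', 'trends']
```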

06

AI-threaded workflows — continuity across the incident lifecycle

Agentic AI · System Continuity

Genny AI persists across lifecycle stages — Incident → Investigation → Closure → Summary. She tracks state, surfaces next steps, and maintains continuity across the personas handing work off to each other. When an Operational User provides proof of closure, Genny proactively notifies the Functional Leader. When the Leader rejects it, Genny brings the feedback back to the originator. The AI becomes the connective tissue between handoffs.
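
The handoff continuity amounts to a small state machine in which stage transitions trigger notifications to the next persona. Stage names and routing below are assumptions for the sketch:

```python
# Illustrative state machine for the lifecycle continuity described above.
# Stage names and the notification routing are assumptions, not the shipped flow.
class ThreadedWorkflow:
    def __init__(self):
        self.stage = "Incident"
        self.notifications: list[str] = []

    def submit_closure(self, by: str) -> None:
        self.stage = "Closure"
        # Genny notifies the next persona in the handoff chain.
        self.notifications.append(f"Notify Functional Leader: {by} submitted closure proof")

    def reject_closure(self, feedback: str) -> None:
        self.stage = "Investigation"
        # Rejection routes the feedback back to the originator.
        self.notifications.append(f"Notify Operational User: closure rejected ({feedback})")

wf = ThreadedWorkflow()
wf.submit_closure("Operational User")
wf.reject_closure("photo does not show the repaired rail")
print(wf.stage)  # Investigation
```

The connective tissue is the notification on every transition: no persona has to remember to tell the next one.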

05
The Turn

When navigation stopped being a question.

The behavior shift

Users stopped asking "where do I go?" and started asking "what’s next?"

The goal of Ultra was never features. It was this exact moment:

"I don’t have to think. Ultra tells me what matters, and I can just do it."
— Operational user · target success statement
"I know where my attention is needed and I can act immediately."
— Functional leader · target success statement
"I can answer the question with confidence, without digging."
— EHS executive · target success statement

Three personas. Three different relationships to the same platform. One experience layer designed to meet each of them where they are.

06
Outcome

The shift — measurable, structural, agentic.

Dimension: Shift
User impact: Reduced cognitive load across workflows · Faster task execution · Increased confidence in decisions
Product impact: Navigation system → Execution system · Increased cross-application engagement · Reduced workflow fragmentation
AI impact: Tool → Embedded workflow intelligence · Real-time assistance · Predictive task surfacing
Entry point: App launcher → Ultra Home as the default start-of-day surface
Prioritization: Manual interpretation → AI-ranked Live Feed with transparency on why
Execution: Multi-step, multi-app flows → Inline actions from the surface itself
Continuity: Disconnected handoffs → AI-threaded workflows across the incident lifecycle

07
Reflection

What this taught me about designing for agentic AI.

The HX insight

Designing for AI is designing for a new relationship between user and system — not a new feature.

"The hardest part of agentic AI design isn’t the model. It’s earning the right for the AI to initiate — and knowing when to stay quiet."

Ultra is a 0→1 product definition — not a UI refresh. The work that moved it from concept to shipped platform happened in three layers: IA transformation (from site-first to job-first), interaction architecture (inline execution replacing navigation), and AI orchestration (Genny as a persistent, proactive co-pilot across the lifecycle).

The senior design leverage in AI products is resisting the temptation to bolt AI on. The best AI experiences are the interface. Users shouldn’t need to find the AI — the AI should find the work.

What I’d push further

Onboarding for agentic AI is still an unsolved problem. First-run experiences where Genny proactively introduces herself based on persona — and earns trust by demonstrating value within the first minute — is the next frontier. Without that, even the best AI co-pilot risks being dismissed as a chatbot.

What I’d validate further

Measuring behavior change at scale: tracking the specific moment a user shifts from navigating to Ultra to starting their day in Ultra. That behavioral shift is the real success metric — not feature adoption, not click-through, not time-on-task. It’s whether users reach for Ultra first.

08
Design Leadership & Ownership

0→1 definition — through cross-functional delivery.

Area: Ownership
0→1 product definition & vision: Led end-to-end
UX research across Operational, Functional, Executive personas: Led end-to-end
IA transformation (site-first → job-first): Owned end-to-end
AI interaction architecture (Genny placement & behavior): Initiated & led
Role-based experience layer system: Owned end-to-end
AI-threaded workflow design (lifecycle continuity): Owned end-to-end
Cross-functional alignment (Product, Engineering, AI, Stakeholders): Led throughout
HX maturity framework & principles: Defined & led

Case Study · ESG Disclosure Management · AI → Agentic

Responsio — from hours of manual effort to minutes of meaningful work.

Led ground-up research, product design, and AI interaction architecture for a new AI-powered disclosure management platform. Built from the ground up with AI at the core — evolved from a recommendation-based "Suggestions" model into a fully agentic "Automated Intelligence" system that drafts, applies, and files responses at scale. Winner of three industry awards including TITAN Gold & E+E Leader.

Company
Benchmark Gensuite
Role
Senior Product Designer · UX Researcher · AI Experience Lead
Industry
Enterprise SaaS · ESG · Sustainability · Generative AI
4 hrs
Saved per disclosure request (documented outcome)
$80K
Annual productivity savings per organization
3 Awards
TITAN Gold · E+E Leader · BiBA 2025 Silver
4+ weeks
Productivity saved annually at corporate level
01
Context

Disclosure requests were eating sustainability teams alive.

The signal from the field

Spreadsheets. Lost threads. Missed deadlines. Repeated answers across 40+ stakeholder types.

ESG, Sustainability, and EHS teams at enterprise scale are under relentless pressure. Customer questionnaires, investor ESG surveys, regulatory filings, supplier audits, DEI disclosures — requests come from every direction, in every format (Word, Excel, PDF), demanding consistent, high-accuracy responses under tight deadlines.

The pre-Responsio workflow was broken: teams built disclosure answers from scratch every time, hunted through email threads for prior responses, and manually copy-pasted across spreadsheets. The same factual answer was being rewritten four, five, six times a year across different templates. Lean teams. Rising demand. No system.

Responsio was a new product definition — conceived, researched, designed, and shipped as a ground-up solution. Built with AI at the core from day one, not bolted on after.

02
The Problem

Three user types. One broken workflow failing all of them.

Sustainability Lead · Coordinator

Owns the disclosure queue. Logs incoming requests, assigns ownership, tracks deadlines. Today: spreadsheets + email + status pings. Design implication: Needs a dashboard that shows request state across the lifecycle at a glance, with clear ownership and priority signals.

Subject Matter Expert · Drafter

Answers specific domain questions — emissions, governance, supplier practices. Today: asked the same questions across requests. Design implication: Needs AI-powered recall of prior approved answers and the ability to customize them, not re-write from scratch.

Reviewer · Sign-Off Lead

Final approver accountable for accuracy and brand voice. Today: reviews drafts across email with zero traceability. Design implication: Needs a consistent review surface with version history, approval workflows, and automated notifications.

The risk layer

Inaccurate or inconsistent disclosures create business and reputational risk — the stakes aren’t just efficiency.

Sustainability disclosures are investor-grade data. An inconsistent answer across two customer questionnaires isn’t just embarrassing — it’s a potential compliance and reputational risk. The existing workflow optimized for throughput. Responsio had to optimize for throughput and consistency, simultaneously.

03
Research & Discovery

Ground-up research — listening before architecting.

Research leadership

I led end-to-end research — stakeholder interviews, workflow decomposition, document audit, competitive scan, and jobs-to-be-done framing — before a single pixel was designed.

This wasn’t a feature refresh. It was a new product, which meant research had to earn the shape of the solution from scratch. The research had two jobs: (1) define the product surface, and (2) define the AI interaction model — specifically, how much to let AI initiate in a domain where accuracy is legally and reputationally consequential.

Methods applied

  • Stakeholder interviews — ESG, Sustainability, EHS leads across multiple enterprise subscribers
  • Contextual workflow mapping — receipt to delivery lifecycle
  • Document & artifact audit — real customer questionnaires (Word, Excel, PDF)
  • Competitive analysis — disclosure and RFP response tools
  • Jobs-to-be-done framing — reframing the ask from the SME’s perspective
  • AI trust research — how far users will let AI go before requiring oversight
  • Co-design sessions — pressure-testing mockups with real users
  • Iterative usability testing — pre-launch and post-pilot

Four findings that shaped the product

  • Questionnaire ingestion is the #1 friction — 30–60 minutes lost just formatting raw docs before any work begins
  • Every SME is answering the same questions repeatedly — "single source of truth" became a north star
  • Users trust AI suggestions more than AI actions — phased rollout required (recommend → apply → auto-draft)
  • Transparency beats speed — users want to see why AI recommends an answer, not just the answer
The reframe that shaped the AI architecture

"Disclosure response isn’t a writing problem. It’s a retrieval & re-use problem with an accuracy gate. Which means AI’s job is to surface and adapt prior approved answers — not generate new prose from scratch."
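
The reframe above implies a retrieval pipeline with an approval filter in front of ranking. A minimal sketch, using naive token overlap as a stand-in for the real retrieval model and invented field names:

```python
# Sketch of "retrieval with an accuracy gate": only answers that passed
# human approval are candidates, ranked by simple token overlap.
# The overlap metric and repository fields are illustrative stand-ins.
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def suggest(question: str, repository: list[dict], top_n: int = 3) -> list[dict]:
    approved = [r for r in repository if r["approved"]]  # the accuracy gate
    q = tokenize(question)
    scored = sorted(
        approved,
        key=lambda r: len(q & tokenize(r["question"])),
        reverse=True,
    )
    return scored[:top_n]

repo = [
    {"question": "What are your scope 1 emissions?", "answer": "...", "approved": True},
    {"question": "Describe supplier audit practices", "answer": "...", "approved": True},
    {"question": "Draft scope 1 emissions estimate", "answer": "...", "approved": False},
]
best = suggest("Report your scope 1 emissions for 2024", repo, top_n=1)[0]
print(best["question"])  # What are your scope 1 emissions?
```

The gate runs before ranking, so unapproved drafts can never be surfaced no matter how well they match — surfacing and adapting approved answers, not generating new prose.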

04
Design Decisions

Six decisions. Each one a bet on how far AI should go.

01

Designed the Admin Dashboard around lifecycle state — not record lists

Progressive Disclosure · Task-first IA

The dashboard surfaces New Requests · In Progress · Past Due · Delivered as the primary information scent — matching how coordinators actually think about their work. Paired with "My Tasks" by stage (Draft Pending Completion, Sign-Off, Closure, Delivery, Review) so users see exactly what’s waiting on them, not the whole queue.

An early proposal was a single unified inbox like email. I argued against it: research showed the cognitive cost wasn’t "finding a request" — it was "knowing what stage each request is at." Stage-based IA shipped.

02

Built the Response Repository as the system’s source of truth

Single Source of Truth · Data-first UX

Every approved response becomes a repository asset — searchable, reusable, taggable by topic and source. The repository isn’t a feature hidden in a menu; it’s the spine of the product. AI retrieval, response suggestions, and drafts all trace back to this single library, which means users trust what AI surfaces because they authored it originally.

03

Shipped Genny AI in v1 as Suggestions — not as Automated Intelligence

AI Trust Gradient · Recommendation-first UX

The first AI surface in Responsio was intentionally conservative: "Genny AI Suggestions" with a "Use Response" button. Each suggestion came with a visible rationale — "GennyAI recommends this AI-generated response based on a holistic analysis of all responses in the Response Repository." Users chose. AI didn’t act.

Product leadership wanted a more ambitious agentic launch. I pushed back with research: users in disclosure — where inaccuracy is a compliance risk — need to see the AI working before they let it act. Building trust first, autonomy second.

04

Evolved v2 into Automated Intelligence — the agentic pivot

Agentic AI · Progressive Autonomy

Post-launch, with trust established, I shipped the agentic evolution. "Suggestions" became "Automated Intelligence" with Apply Response — the AI now surfaces "4 Response suggestions found" and users can apply directly. The rationale layer stayed, but the interaction shifted: from "AI proposes, user composes" to "AI drafts, user approves." Same repository, same transparency, different autonomy level. This is the agentic pivot.

This required careful sequencing: I held the line that the agentic version could ship only after subscribers validated the suggestion model. It went live once usage data showed 80%+ of suggestions being accepted without edit — the trust threshold.
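The trust threshold reads naturally as a simple gate. The sketch below is a hypothetical illustration in TypeScript, not the product's actual telemetry code; the event shape, function names, and the idea of computing the rate client-side are my own assumptions.

```typescript
// Hypothetical sketch: agentic mode unlocks only once enough
// suggestions have been accepted without edits.
interface SuggestionEvent {
  accepted: boolean; // user clicked "Use Response"
  edited: boolean;   // user modified the text before accepting
}

const TRUST_THRESHOLD = 0.8; // 80%+ accepted without edit

// Fraction of suggestions accepted as-is, out of all suggestions shown.
function unEditedAcceptanceRate(events: SuggestionEvent[]): number {
  if (events.length === 0) return 0;
  const clean = events.filter((e) => e.accepted && !e.edited).length;
  return clean / events.length;
}

function agenticModeUnlocked(events: SuggestionEvent[]): boolean {
  return unEditedAcceptanceRate(events) >= TRUST_THRESHOLD;
}
```

The point of the sketch is the sequencing logic itself: autonomy is a function of observed acceptance, not a launch-date decision.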

05

AI-powered questionnaire ingestion — killing the #1 friction

Automation · Workflow Consolidation

Research surfaced that 30–60 minutes was wasted just formatting incoming Word / Excel / PDF questionnaires into actionable question lists. I designed the ingestion flow: upload → AI parses structure → user confirms question boundaries → ready to respond. The first 30 minutes of every disclosure became 3 minutes.

06

Integrated with Disclosure Director & Sustainability Reporting

Platform Continuity · Shared Data Model

Responsio doesn’t live in isolation. I architected the integration surface so responses pull from (and push to) Disclosure Director, Sustainability Reporting, and Dashboards & Analytics. One shared data layer. A disclosure answer drafted in Responsio updates the enterprise’s central ESG data library — closing the loop between request-response and platform-wide reporting.

05
The Turn

When a subscriber named the outcome in their own words.

Post-launch validation

Responsio shipped and earned three awards in its first year. But the validation that mattered came from real users.

From a Benchmark Gensuite subscriber, unprompted, describing the shift:

"We get requests from every single stakeholder we can think of. This is a nice tool to be able to upload the questions and then send them out to the right SME to make sure that it’s getting answered correctly. We are also able to house those questions so that we can pull those responses and give a quick reply back to similar requests."
— Benchmark Gensuite subscriber

And from internal leadership, naming the design intent exactly:

"Responsio shows what’s possible when we apply AI with purpose — turning hours of manual effort into minutes of meaningful work."
— Amanda Petzinger, VP Sustainability, Stewardship & Supply Chain Solutions

06
Outcome

The shift — documented, awarded, compounding.

Dimension · Measured outcome
Productivity per request: 4 hours saved per disclosure request (documented average)
Annual productivity savings: Up to $80,000 per organization using Responsio
Corporate-level savings: 4+ weeks of productivity saved annually at the corporate level
Industry recognition: TITAN Innovation Awards — Gold Winner (Emerging Technology) · E+E Leader Awards — Winner (Top Product: Software + Cloud) · BiBA 2025 — International Silver
Product evolution: v1 Suggestions → v2 Automated Intelligence (agentic) shipped
Integration footprint: Unified with Disclosure Director, Sustainability Reporting, Dashboards & Analytics
07
Reflection

What Responsio taught me about shipping AI into high-stakes domains.

The AI design insight

In compliance-adjacent domains, AI autonomy is earned — not granted. Ship the trust layer before the automation layer.

"The agentic pivot wasn’t a feature release. It was a trust release. The Suggestions model trained users to believe the AI — which is what earned the system the right to act on their behalf in v2."

Responsio is the clearest case I’ve shipped of progressive AI autonomy as a design strategy. Starting conservative (recommend + rationale), proving value, then evolving to agentic (apply + autonomous draft) once the trust data was in. This sequencing mattered more than any individual feature.

What I’d push further

Agentic question clustering — surfacing, unprompted, "You’ve answered this question 14 times across 8 questionnaires. Would you like me to draft all future occurrences automatically?" This would move Responsio from a reactive tool to a proactive platform agent.

What I’d validate further

Cross-subscriber learning. Every Benchmark Gensuite subscriber has a unique Response Repository today. The next step — with consent and privacy boundaries — is sector-level pattern learning so a first-time CDP responder can benefit from anonymized industry patterns. The research and governance work here is as much a part of the design problem as the UI.

08
Design Leadership & Ownership

End-to-end — from discovery to agentic evolution.

Area · Ownership
UX research across ESG, Sustainability, EHS personas: Led end-to-end
Product definition & workflow architecture: Led end-to-end
Response Repository as single source of truth: Owned end-to-end
Genny AI interaction design (v1 Suggestions): Owned end-to-end
Agentic evolution (v2 Automated Intelligence): Initiated & led
Questionnaire ingestion flow: Owned end-to-end
Cross-product integration architecture: Owned end-to-end
Stakeholder alignment & design leadership: Led throughout
Case Study · Enterprise Admin UX · License & Seat Management

User Management — making invisible limits visible.

Pure product design — no research phase. Redesigned the User Administration portal and license seats monitor for Benchmark Gensuite admins, introducing progressive capacity alerts, module-specific gauges, and in-flow seat-limit messaging that prevents failed add-user actions before they happen.

Company
Benchmark Gensuite
Role
Senior Product Designer · UX/UI end-to-end
Industry
Enterprise SaaS · Admin Tooling · License Management
3 States
Progressive capacity alerts: healthy · approaching · blocked
↓ Failed Actions
Seat-limit errors surfaced pre-submit, not post-submit
Module-aware
Per-application gauges replaced one-size-fits-all view
↓ Support Load
Capacity-driven tickets addressed in-product
01
Context

Admin tools hide the information admins need most.

The product design brief

Not a research project. A pure UX/UI redesign — solving a concrete, recurring admin frustration.

Benchmark Gensuite admins manage user access across multiple modules — Disclosure Director, Responsio, DXP, Risk AI, and more. Each module has its own seat license. The legacy interface showed a single blended number that obscured module-level exhaustion. Admins would only discover they’d hit a seat limit after trying to add a user — creating frustration, failed actions, and support tickets.

The design job: make capacity visible before it blocks work. Show admins exactly where they stand per module, warn them progressively, and communicate seat availability in every place a user might be added.

02
The Problem

Capacity was invisible until it failed.

Blind aggregation

A single blended license count across all modules. An admin at 78% overall might be at 95% in Disclosure Director without knowing it — until an add-user action failed unexpectedly.

Post-submit failure

The Add User modal happily collected First Name, Last Name, Email, and Application selections — then threw a seat-limit error on submit. A classic failed-transaction anti-pattern that destroys trust.

No escalation path

When an admin did see they were out of seats, there was no clear next step. Just a blocked button and a dead end. No CTA to contact an account manager or request additional seats.

03
Design Decisions

Five decisions. All built around surfacing capacity before it blocks.

01

Redesigned License Seats Monitor as a dual-layer view — Overview + Per-Module

Progressive Disclosure · Information Scent

The Overview state shows the aggregate utilization gauge (e.g., 78% of 530 seats used) — the macro picture. A dropdown toggles to per-module view showing gauges for Disclosure Director, DXP, Responsio, Risk AI, each with their own utilization percentages and progress bars. Admins can see both the whole and the parts without leaving the surface.

02

Built a three-state visual language for capacity alerting

Status Visibility · Color Semantics

Three states, three colors, three messages:

  • Healthy (under 80%) — green progress bar, no alert
  • Approaching Limit (80–99%) — red gauge with warning icon + plain-language message: "You are at 95% of your seats capacity in Disclosure Director. Contact your account manager to increase the license seats limit for this module."
  • At Capacity (100%) — blocked state with hard error + account-manager CTA

Each state is readable at a glance. No interpretation required.
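The three-state logic is small enough to express as a pure function. A minimal sketch: the state names and the 80% / 100% thresholds come from the list above, while the function and type names are illustrative, not production code.

```typescript
// Illustrative sketch of the three-state capacity logic.
type CapacityState = "healthy" | "approaching" | "blocked";

function capacityState(used: number, total: number): CapacityState {
  const utilization = used / total;
  if (utilization >= 1) return "blocked";       // 100% — hard error + account-manager CTA
  if (utilization >= 0.8) return "approaching"; // 80–99% — warning + plain-language message
  return "healthy";                             // under 80% — green bar, no alert
}
```

Because the same function can drive the monitor gauges and the Add User warnings, the thresholds stay consistent across every surface that reports capacity.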

03

Moved seat-limit awareness into the Add User flow itself

Error Prevention · Just-in-Time Feedback

The biggest UX shift. The Add User modal now shows live seat awareness tied to the Application(s) selected. When an admin selects "Disclosure Director" and that module has 1 seat left, a yellow warning strip appears above Submit: "Limited seats available: Only 1 seat(s) left. Consider purchasing more seats soon. 199 / 200 seat(s) used."

When the module is at 100%, the warning escalates to a red error strip: "You’ve reached the seat limit for this module. To continue, release a seat or contact your account manager." — and Submit is disabled.

Engineering initially resisted the live-lookup on module selection because of API cost. I argued the support-ticket cost of failed adds was higher. We shipped the live lookup with a debounced call; the ticket volume drop justified it within the first quarter.
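The debounced call can be sketched generically. Everything here is an assumption for illustration: `debounce` and `lookupSeats` are invented names, the commented `fetchSeatAvailability` stands in for whatever endpoint the product actually calls, and 300 ms is a made-up wait time.

```typescript
// Minimal debounce sketch: rapid module (de)selections collapse into
// a single lookup after a quiet period, keeping API cost low.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // reset on every new call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: only the last selection inside the 300 ms window triggers a lookup.
const lookupSeats = debounce((moduleIds: string[]) => {
  console.log("looking up seats for", moduleIds);
  // e.g. fetchSeatAvailability(moduleIds).then(renderWarningStrip);
}, 300);
```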

04

Surfaced Users-By-Application as a donut — pattern before numbers

Data Visualization · Pattern Recognition

The Users By Applications donut chart sits next to the totals, breaking down distribution across Cullet Manager, Disclosure Director, DXP, Responsio, Risk AI Advisor, User Management. Admins see the shape of their user base instantly — which modules are saturated, which are underused, where licensing is being wasted. Pattern before numbers.

05

Added the 90-day trend strip — Total Users & Total Roles

Trend Awareness · Rate-of-Change Visibility

Two quiet but important cards: Total Users: 199 · 14 new added last 90 days and Total Roles Assigned: 46 last 90 days. These answer questions admins were asking anyway ("are we growing faster than we planned?") — now visible without running a report. Simple data. High operational value.

04
Outcome

From failed submissions to prevented ones.

Before → After
Single blended license count — module exhaustion invisible → Per-module gauges with live utilization percentages
Post-submit seat-limit errors — failed add-user actions → Live seat awareness surfaced inside Add User modal
Dead-end blocked state with no escalation → Account manager CTA embedded in both warning and blocked states
No visibility into user-base composition → Users By Application donut surfaces saturation patterns
No trend awareness for capacity planning → 90-day added-users and roles-assigned cards
05
Reflection

What this taught me about admin UX in B2B SaaS.

The operational UX insight

Admin tools are judged on error prevention, not feature count. Failing a submit is a broken promise.

"The most important line of copy I wrote on this project was 'Only 1 seat(s) left. Consider purchasing more seats soon.' It replaced a failed transaction with a plan."

This was a deliberate non-research project — a pure product design sprint based on support ticket patterns and my own usage audit. Sometimes the signal is already loud enough; research would have been deceleration, not insight. Senior design judgment includes knowing when to skip discovery.

What I’d push further

A capacity planning forecast layer — "Based on your growth rate (14 users in 90 days), you will hit Disclosure Director capacity in ~28 days. Consider expanding now to avoid disruption." Turning the alert from reactive to predictive.
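The forecast arithmetic is simple enough to sketch. The function and its parameters are illustrative; the 90-day window and 14-user growth figure echo the trend cards described earlier, and the seat numbers in the example are invented.

```typescript
// Sketch: project days until a module hits its seat limit
// from the observed growth rate over a trailing window.
function daysUntilCapacity(
  seatsUsed: number,
  seatLimit: number,
  usersAddedInWindow: number,
  windowDays: number
): number | null {
  const dailyGrowth = usersAddedInWindow / windowDays;
  if (dailyGrowth <= 0) return null; // flat or shrinking — no projected exhaustion
  return Math.ceil((seatLimit - seatsUsed) / dailyGrowth);
}

// e.g. 14 users added in 90 days with 4 seats remaining:
// daysUntilCapacity(196, 200, 14, 90) → 26 days
```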

Case Study · Agentic AI Design System · Benchmark Gensuite

Genny AI Agentic Hub

Led the 0→1 design of the Genny AI Agentic Hub — the command center for agentic AI across the Benchmark Gensuite platform. Established the complete AI design system — voice, chat, alert, file, history, and personalization patterns — shipped 4 launch agents, and handed the same framework to product teams to ship remaining agents with consistency, governance, and trust built-in from day one.

Company
Benchmark Gensuite
Role
Senior Product Designer · AI Experience Lead · Design System Owner
Industry
Enterprise SaaS · Agentic AI · EHS / Sustainability / Quality
4
Agents launched with the design system on day one
3 Tiers
Helpers → Assistants → Agentic Apps — full progression
1 System
Voice · Chat · Alerts · History · Files · Personalization
Market Leader
Verdantix Green Quadrant 2025 — AI Integration leadership
01
Context

Every team wanted to ship their own AI. The platform needed one voice.

The strategic problem

Without design system governance, every Genny AI agent would arrive looking, speaking, and behaving differently — fragmenting the user experience at the exact moment the platform needed to signal AI maturity.

Benchmark Gensuite’s Genny AI was scaling fast — from content-drafting Helpers, to decision-guiding Assistants, to fully autonomous Agentic Apps. Multiple product teams were racing to ship agents for Chemical Management, Permits, Compliance, Disclosure, and more. Each team had their own idea of what "the AI panel" should look like, how history should behave, where alerts should surface, how voice input should work, and what file references should display.

Without intervention, each agent would ship as a one-off. The platform would feel like ten different AIs built by ten different teams — exactly the anti-pattern for a product whose value proposition is unified, trustworthy, enterprise-grade automation.

The brief I set for myself: design once, scale infinitely. Establish the Agent Hub as the canonical surface for all agentic AI — then ship the design system, voice, and interaction patterns so every agent launched from day one (and every agent shipped after) would feel like a single coherent product.

02
The Problem

Three tiers of AI. Three different interaction contracts.

AI Helpers

Draft and summarize content, extract insights, auto-populate forms. Low autonomy, high frequency. Design implication: lightweight, embedded, fast — no pomp. Appears inline inside workflows.

AI Assistants

Analyze data, uncover hazards, analyze images and documents. Medium autonomy, high complexity. Design implication: conversational surface with quick-action chips, file reference display, decision rationale visible.

Agentic Apps / Process Agents

Fully automate multi-step workflows. High autonomy, high stakes. Design implication: governance surface with Pending Actions, approval gates, Work Products trail, full audit-ready history. The user must stay in control.

The system job

Design one language that expresses three entirely different levels of AI autonomy — without fragmenting the experience or overwhelming the user.

The Agent Hub had to handle an Operational user asking Genny "what’s in this SDS?" and an admin approving an autonomous permit-intake workflow — in the same design language. Same chat primitives, same file reference pattern, same voice, same alert grammar — but different governance surfaces per tier. The art was in the restraint: one system, flexing.

03
Design Decisions

Seven decisions. Each a piece of the AI design system.

01

Established the Agent Hub as the platform’s AI home

IA · Discoverability · Scalability

Before the Agent Hub, AI was scattered across modules. I designed the Hub as the canonical entry point — a 2-column grid of agent tiles (Permit Compliance · Chem Management · Compliance · Disclosure Management, with Emerging Agent Configurator below) with each tile carrying consistent anatomy: icon, name, description, chat CTA, and state indicator for "Coming Soon." The persistent right-rail Genny AI panel ensures the ambient assistant is always one message away — even when the user hasn’t picked a specific agent yet.

Stakeholders wanted agent pages nested inside each module (Permit agent inside Permit Compliance). I argued for a unified Hub with deep-linking back into modules. Research showed users think of AI as a capability, not a feature of a specific app. Hub shipped as the default surface.

02

Designed the conversation primitives — message, timestamp, audio, copy, rationale

Chat UX · Interaction Consistency

Every Genny message uses the same anatomy: agent avatar + name, message bubble, timestamp, micro-actions (TTS playback, copy, bookmark/save). User messages mirror the same grammar on the opposite side. When Genny produces a generated answer with a source, the rationale pattern appears — same design as Responsio’s. This is the core conversation primitive: one pattern, every agent, every tier, every language.

03

Built the input pattern — ask, attach, speak, send

Fitts’s Law · Multimodal UX · Voice Accessibility

The input bar pattern became one of the most-copied components across the platform: a single "Ask me anything..." field with three adjacent affordances — paperclip for file attach, microphone for voice input, arrow for send. Every agent uses it. Voice input is not an afterthought; it’s peer to text. Critical for field workers, and a differentiator against competitors who treat voice as a separate surface.

Engineering wanted voice as a separate "enterprise add-on." I argued voice must ship at parity with text for the platform to feel native-AI. Shipped together.

04

Designed the right-rail governance pattern — Pending Actions, Pinned Chats, Work Products, File References

Agentic AI Governance · Audit Trail UX

For tier-3 Agentic Apps, autonomy without oversight is a liability. I designed a standardized right-rail surface — Pending Actions · Pinned Chats · Work Products · File References — that every process agent uses. Pending Actions shows what the AI wants to do (and waits for approval). Work Products shows what the AI has done (and lets users audit). File References shows what the AI used (and lets users verify). Governance becomes part of the UX — not an afterthought.

05

Designed the history and context-persistence model

Continuity · Mental Model Preservation

The left-rail history surface lets users return to prior conversations without losing context. Messages persist. Pinned chats float to the top. Conversation threads carry across sessions — and across agents when handoff is needed. Users don’t restart from zero every time; the AI remembers what they were working on.

06

Established the alert, notification, and state lexicon

Status Visibility · Color Semantics · Plain Language

Defined the full state vocabulary every agent uses: idle · thinking · proposing · executing · awaiting approval · complete · error. Each state has one component, one color treatment, one animation rule, one plain-language microcopy template. When any team ships a new agent, they inherit the vocabulary — no invention required.
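As a sketch, that lexicon is essentially a string union plus one presentation rule per state. The colors and microcopy below are invented placeholders, not the shipped values; the seven state names come from the vocabulary above.

```typescript
// One state vocabulary shared by every agent, every tier.
type AgentState =
  | "idle" | "thinking" | "proposing" | "executing"
  | "awaiting-approval" | "complete" | "error";

// One component, one color treatment, one microcopy template per state.
// (Placeholder values — the real treatments live in the design system.)
const statePresentation: Record<AgentState, { color: string; copy: string }> = {
  idle:                { color: "neutral", copy: "Ready when you are." },
  thinking:            { color: "blue",    copy: "Genny is analyzing..." },
  proposing:           { color: "blue",    copy: "Genny has a suggestion." },
  executing:           { color: "amber",   copy: "Genny is working on it." },
  "awaiting-approval": { color: "amber",   copy: "Waiting for your approval." },
  complete:            { color: "green",   copy: "Done." },
  error:               { color: "red",     copy: "Something went wrong." },
};
```

Typing the vocabulary this way is what makes it inheritable: a team shipping a new agent cannot invent an eighth state without the compiler noticing.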

07

Shipped 4 launch agents using the system — then handed it to product teams

Design System Governance · Scaling Leadership

The four launch agents — Platform Agent, Chem Management Agent, Permit Agent, and Emerging Agent Configurator — shipped first and proved the system works across different autonomy levels and domains. I then documented every pattern, component, state, and voice guideline, and handed the system to product teams so they could ship the remaining agents without me in the room. This is the actual measure of a design system: it keeps producing consistent, high-quality work when its author steps away.

04
The Turn

When product teams started shipping on their own.

The proof

A design system is judged by what it produces without its author.

After the initial 4 agents launched, product teams began shipping new agents using the same patterns — chat, voice input, right-rail governance, input bar, history, alert states — without needing me to design each one. The Agent Hub became the template, and new agents simply plugged in.

That’s the quiet outcome of design system work: you stop being a bottleneck and become a force multiplier. One designer’s patterns became every agent’s baseline. And the platform got its industry recognition for it — Verdantix Green Quadrant 2025: Market Leading AI Integration.

05
Outcome

From scattered AI to unified agentic platform.

Before → After
AI scattered across modules, no discoverable home → Agent Hub — unified command center for all agentic AI
Every team inventing their own chat UI → One chat primitive — message, timestamp, audio, copy, rationale
Voice input proposed as an enterprise add-on → Voice at parity with text, native on every agent
Agentic autonomy without user oversight → Standardized governance rail — Pending Actions, Work Products, Files
No memory across sessions → History surface, pinned chats, context persistence
Inconsistent alert and state language → Complete state lexicon: idle · thinking · executing · awaiting · complete
Every agent designed from scratch → 4 launch agents + scalable system for teams to ship independently

Recognized in Verdantix Green Quadrant 2025 as market-leading AI integration — a platform-level signal that the unified design approach translates to industry perception of product maturity.

06
Reflection

What designing an AI system taught me about leverage.

The design leadership insight

In platforms with many AI surfaces, the single most valuable design is the one that stops other designers from designing the wrong thing.

"The strongest AI design work doesn’t ship as one feature. It ships as a vocabulary — voice, chat, alert, file, history, approval — that every other team adopts because it’s cleaner than what they would have invented."

Designing four agents was the visible work. Designing the system that produces every agent after was the actual work. The four launch agents were the proof-of-concept. The design system was the leverage.

What I’d push further

A canonical "agentic approval" interaction — a universal pattern every Process Agent uses when asking for human approval of an autonomous multi-step workflow. Today each agent solves it slightly differently. Standardizing this would be the next piece of system work — because as agents get more autonomous, the approval surface becomes the trust surface.

What I’d validate further

Cross-agent conversation handoff. Today, each agent’s context is siloed. Letting users move a conversation from Chem Agent → Compliance Agent without losing state would be the next frontier in agentic UX — and would reveal whether the platform truly feels like one AI or many.

07
Design Leadership & Ownership

System-level ownership — from primitive to platform.

Area · Ownership
Agent Hub IA & 0→1 product definition: Led end-to-end
AI design system — voice, chat, alerts, history, files, personalization: Established & owned
Multimodal input pattern (text, voice, attachment): Owned end-to-end
Right-rail governance surface (Pending Actions, Work Products, Files): Owned end-to-end
State & alert lexicon across all three AI tiers: Defined & led
4 launch agent designs (Platform, Chem, Permit, Configurator): Led design
Design system handoff & governance to product teams: Led throughout
Cross-functional alignment with Product, AI, Engineering, Security: Led throughout
Case Study · Supplier Portal · Product Stewardship & Supplier Risk

Supplier Portal — from supplier surveys to award-winning engagement.

Redesigned the external-facing Supplier Portal for Benchmark Gensuite — the platform where thousands of global suppliers complete Conflict Minerals, ESG, Anti-Human Trafficking, and product compliance questionnaires on behalf of enterprise customers. Shipped an outcome-first Overview dashboard, consolidated action-items queue, and multilingual portal experience that earned the Top Supply Chain Projects Award 2024.

Company
Benchmark Gensuite
Role
Senior Product Designer · UX/UI end-to-end
Industry
Enterprise SaaS · Supplier Compliance · ESG · Supply Chain
Award
Top Supply Chain Projects 2024 — Supplier Manager
hrs→mins
Compile time reduced from "hours and weeks" to "a couple of hours"
Multilingual
Portal ships in 10+ languages for global supplier base
4 Surveys
CMRT · ESG (Ceres) · Anti-Human Trafficking · Supplier Self-Assessment
01
Context

The portal is an external face. It has to earn trust in 30 seconds.

The stakes

This isn’t an internal tool. It’s the interface a global supplier sees the first time an enterprise customer asks them for disclosure data.

The Supplier Portal is how thousands of global suppliers — from Fortune 500 manufacturers to small specialty vendors — respond to their enterprise customers’ compliance asks: Conflict Minerals Reports (CMRT v6.22 / RMI-aligned), ESG surveys (Ceres-developed), Anti-Human Trafficking questionnaires, Supplier Self-Assessments, and product stewardship data requests.

The portal carries a unique design weight: it represents the enterprise customer’s brand to their supply chain. A confusing portal makes the customer look disorganized. A slow portal creates missed deadlines. A broken portal creates compliance risk. Every UX decision here has trust consequences outside the platform.

The design brief: make this the clearest, fastest, most multilingual-ready supplier experience in the market — and keep it scalable across survey types and regulatory categories.

02
The Problem

Three user frustrations. All pointing at the same structural issue.

"I don’t know what’s waiting on me"

Suppliers respond to multiple enterprise customers, each with multiple questionnaires, each with multi-part actions. The legacy portal buried action items inside individual questionnaires. Design implication: needs a unified cross-questionnaire action surface.

"I don’t know if I’m on track"

Status was ambiguous — is "Pending" good or bad? Is "On Going" different from "In Progress"? Due dates weren’t visible at a glance. Design implication: needs a tight status vocabulary + clear due-date signals (colored dates for past-due).

"I can’t tell what’s changed"

Announcements from enterprise customers were missed. Portal-wide alerts had no home. Design implication: needs a persistent announcement surface that supports multiple alerts with See All pattern and attached guidance docs.

03
Design Decisions

Five decisions. Each tuned for clarity at supplier-scale.

01

Led with a 4-metric Overview — Open, Actions, In Progress, Past Due

Information Scent · Primary-first IA

The first thing a supplier sees: 86 Open Questionnaires · 186 Open Actions · 186 In Progress · 286 Past Due — with red treatment on Past Due so triage happens before scrolling. Each card is a jump-point into a filtered view. This replaces the old "scroll through every survey to find what’s urgent" pattern with "I see the urgency in 2 seconds."

Product wanted richer analytics widgets (charts, trends, historical performance). I argued suppliers don’t come here for insight — they come here to finish work. Shipped the 4-metric primary dashboard. Analytics moved to a secondary view for enterprise users.

02

Consolidated all open actions across questionnaires into one surface

Cross-Entity UX · Task Consolidation

The Action Items view pulls every pending action from every questionnaire — CMRT, ESG, Anti-Human Trafficking, Supplier Self-Assessment — into a single queue with Action ID, Action Name, Associated Item, Parts Pending Response, Parts Completed, Due Date, Status, Survey Owner. One place to see what’s waiting. Massive reduction in context-switching for suppliers juggling 20+ simultaneous requests.

03

Designed a tight status + due-date visual lexicon

Status Visibility · Color Semantics · Plain Language

Two status chips, color-coded: Pending (blue, neutral) and On Going (amber, in-flight). Due dates shown in red when past-due, default text otherwise. "Parts Pending Response" surfaces as a blue deep-link when multi-part, "Not applicable" in muted grey when single-part. The vocabulary is small on purpose — suppliers don’t need nuance, they need certainty.

04

Built the announcement banner — multi-item with "See All" & attached guidance

Broadcast UX · Progressive Disclosure

Portal-wide communications used to be an afterthought. I designed a dedicated announcement surface on the Overview — high-level alert messages visible portal-wide, with the ability to attach import documentation and guidance documents, displaying the latest three with See-All expansion. The design enabled enterprise customers to send supplier-base communications through the portal instead of email.

05

Designed for multilingual-first — 10+ supported languages

I18n UX · Global Supply Chain Reality

Global suppliers don’t default to English. The language selector sits in the top-right with a flag, instantly discoverable. Every string is externalized. Layouts were stress-tested against character-length expansion (German, Japanese) and RTL support contingency. The portal ships in 10+ languages on launch — not a v2 afterthought. This is what "enterprise-ready" actually means at the supplier edge.
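"Every string is externalized" can be sketched in a few lines. The catalog structure, keys, translations, and `t` helper below are invented for illustration; the real portal presumably uses a production i18n library.

```typescript
// Toy sketch of string externalization: components reference keys,
// never literals, so any supported locale can be swapped in.
type Catalog = Record<string, string>;

const catalogs: Record<string, Catalog> = {
  en: { "overview.pastDue": "Past Due" },
  de: { "overview.pastDue": "Überfällig" },   // longer strings stress-test layout
  ja: { "overview.pastDue": "期限超過" },
};

// Resolve a key in the requested locale, falling back to English,
// then to the raw key so a missing translation never renders blank.
function t(key: string, locale: string, fallback = "en"): string {
  return catalogs[locale]?.[key] ?? catalogs[fallback][key] ?? key;
}
```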

04
The Turn

A customer named the outcome in plain language.

Post-launch signal

Supplier Manager (the enterprise-side complement to the Supplier Portal) earned industry recognition in the first year.

Kimberly Gillen, Strategic Accounts Manager at The Anderson’s, describing her team’s experience:

"What was taking hours and weeks to compile [supplier] information is now taking us less than a couple of hours to complete."
— Kimberly Gillen, Strategic Accounts Manager, The Anderson’s

The platform went on to earn the Top Supply Chain Projects Award 2024 for Supplier Manager — and Verdantix Green Quadrant 2025 recognition for Market Leading AI Integration across the product stewardship and supplier risk suite. Industry validation of the entire supplier experience.

05
Outcome

The shift — from survey maze to action clarity.

Before → After
Action items buried inside individual questionnaires → Unified cross-questionnaire Action Items queue
Overview with ambiguous status across open work → 4-metric primary dashboard (Open / Actions / In Progress / Past Due)
Ambiguous statuses — "what does this mean?" → Tight status vocabulary + red past-due date treatment
No portal-wide communication channel for announcements → Persistent announcement banner with multi-item See-All & attachments
English-first with partial translation → Multilingual-first — 10+ languages on launch, discoverable selector
Supplier compile times measured in "hours and weeks" → Measured in "less than a couple of hours" (customer testimonial)

Recognition: Top Supply Chain Projects Award 2024 · Verdantix Green Quadrant 2025 Market Leading AI Integration

06
Reflection

What designing an external portal taught me about enterprise trust.

The external-surface insight

An external-facing portal is where B2B design judgment shows most — because every friction in the supplier UX becomes a business problem for the enterprise customer.

"When you design an internal tool, failure means your users complain. When you design a supplier portal, failure means your customer’s supply chain complains. The stakes are different — the design discipline has to be tighter."

This project reinforced a principle I apply to all external-facing enterprise UX: reduce invention, amplify certainty. Don’t introduce novel patterns where boring patterns will finish the work faster. Don’t hide actions in dropdowns where a flat list would show them. Don’t use clever status copy when a single word would do. External users pay the cost of designer ego more than internal users do.

What I’d push further

AI-assisted questionnaire response. Today suppliers manually fill each field. The next step — applying the Responsio agentic pattern to the supplier side — would let a supplier say "use my last Conflict Minerals response from Q2 and update only the changed fields" and have Genny draft the entire questionnaire. This moves supplier response time from "couple of hours" to "couple of minutes."

07
Design Leadership & Ownership

External-facing UX — owned end-to-end.

Area · Ownership
Supplier Portal Overview & IA · Owned end-to-end
Cross-questionnaire Action Items consolidation · Owned end-to-end
Status & due-date visual lexicon · Owned end-to-end
Multi-announcement broadcast surface · Owned end-to-end
Multilingual-first portal architecture · Owned end-to-end
Survey-specific upload flows (CMRT, ESG, Anti-Trafficking) · Led design
Stakeholder alignment with Product Stewardship & Supplier Risk teams · Led throughout
Case Study · Brand Identity · Package Design · Web · Graphic

Wild Tiger Rum — from a clean slate to a global premium brand.

Led the end-to-end brand build for Wild Tiger Rum — India’s first super-premium rum and a tiger-conservation cause-brand. Started not on a screen, but in a workshop: physical bottle and packaging design first, logo and visual identity second, website and digital playground last. An unconventional founder-led process, a C-suite that demanded the bottle speak before the brand did, and a final product that now sells across 16+ countries with 10% of profits funding tiger conservation.

Wild Tiger Rum — premium tiger-print bottle with claw and tag
Client
Wild Tiger Beverages — Founded by Gautom Menon
Role
End-to-end Designer · Package · Brand Identity · Web
Industry
Premium Spirits · Conservation Cause-Brand · Global Travel Retail
16+
Countries — UK, US, France, Belgium, Cyprus, Czechia, UAE, Maldives, Thailand, more
10%
Of profits donated to the Wild Tiger Foundation for tiger conservation
100k+
700ml premium bottles sold globally — including international travel retail
1st
Indian premium rum to launch internationally — London Rumfest debut
01
Context

A founder with a wild idea.

The brief

India is the second-largest spirits market in the world. It had no recognizable global premium-rum brand. The founder wanted to build one — and tie it to the cause closest to him: tiger conservation.

Gautom Menon, a Kerala-born entrepreneur, had spent eight years tasting more than 500 rums and walking the world’s rum festivals before he was ready to build his own. He came to me with the full ambition: he wanted India’s answer to Jack Daniel’s and Guinness. A premium rum that could pour with pride at duty-free in Heathrow, on Tiger Airways flights, in cocktail bars in New York and Copenhagen.

And the rum had to do good. 10% of every bottle’s profit would fund the Wild Tiger Foundation — Gautom’s NGO working with conservationists and forest authorities across South India to protect the Royal Bengal Tiger. Only ~2,000 wild tigers remained in India, down from 40,000 at Independence.

This wasn’t a typical brand brief. The product, the cause, and the visual story all had to be the same story.

02
The Approach

Bottle first. Brand second. Pixels last.

An unconventional process

Most brand projects begin with a logo, move to packaging, end on a website. Wild Tiger did the opposite — and that order shaped everything.

The founder’s instruction at the C-suite kickoff was direct: "Don’t draw me a logo. Show me the bottle a person picks up at duty-free and won’t put down." That set the working order — and it’s a sequence I’ve since trusted in every brand-led project where the physical product carries the story:

01

Physical package & bottle design

Form, weight, sleeve material, claw ornament, tag system. The object had to feel like a tiger sighting before it had a single character of typography on it. Every stripe pattern hand-painted — no two bottles alike, mirroring real tiger genetics.

02

Logo & brand identity

Once the bottle was locked, the wordmark, monogram, conservation seal, color palette (deep black, tiger orange, cream, gold), and typographic hierarchy all flowed from the physical object — not the other way around.

03

Website, graphics, digital playground

Once the bottle and brand were locked, the digital surfaces — website, conservation campaign pages, retail collateral, neck tags, brochures, recycled-paper cartons — all extended the same physical language into pixels.

The founder’s "bottle first" approach felt backwards at the time — most agencies open with logo concepts. Holding the line on this sequence let every later decision (logo, web, retail) inherit a fully formed physical identity, and spared us months of brand drift.
03
The Bottle

A package that argues for the cause before the rum is even tasted.

Design as cause

Every material choice was a decision in two languages — premium spirits and conservation.

The bottle is wrapped in a velvet sleeve printed with a tiger-stripe pattern — but the deeper design choice is that every sleeve is uniquely patterned, so no two bottles in the world look identical. The same way no two tigers in the wild share the same stripes.

Fastened to the neck is a replica tiger claw — symbolizing "No Fear" in ancient Indian mythology — paired with a hand-tied conservation tag. The glass is recycled. All paper — labels, neck tags, cartons, brochures — is recycled. The packing line is staffed more than 80% by women, an intentional hiring choice that extends the brand’s commitment to empowerment beyond the cause statement on the box.

The package is the argument. Pick it up, and the bottle has already told you what the brand believes in — before you read a word.

04
Brand & Web

Once the bottle locked, everything else fell into place.

From object to identity system

The brand identity didn’t need to invent a visual language — it inherited one from the bottle.

The wordmark uses bold, slightly distressed display type that feels carved rather than drawn — paying homage to the hand-finished feel of the bottle. The "WTF" sub-mark for Wild Tiger Foundation doubles as both an irreverent acronym ("WTF? Only 2,000 tigers left") and a serious conservation seal. The color system is restricted: deep black, tiger orange, cream, and a single accent of gold for premium signaling.

The website (wildtiger.in) extends the same restraint — a near-monochrome palette, full-bleed bottle photography, conservation as a primary navigation item (not buried in a footer), and an age-gate that opens with the same hand-painted stripe pattern as the bottle. Every digital surface is downstream of the physical one.

Collateral — recycled-paper cartons, neck tags, in-flight cards for Tiger Airways, retail point-of-sale, the Wild Tiger Foundation campaign assets — all snap into the same identity grid without negotiation.

05
Outcome

India’s first premium rum, on the world stage.

Where it landed

Wild Tiger launched at the London Rum Festival, then expanded across the world’s premium spirits market.

Wild Tiger Rum debuted at UK Rumfest, London (October 2015), followed by USATT New York (March 2016). Within a year it was retailing in the US, UK, France, Belgium, Cyprus, Czechia, Hungary, Poland, Denmark, the UAE, Maldives, and Thailand — more than 16 countries in all. It became the only Indian liquor product available in international duty-free, and the official rum onboard Tiger Airways — the first Indian spirits brand carried for onboard airline sales.

The founder was named one of GQ India’s 50 Most Influential Young Indian Innovators of 2017. The Wild Tiger Foundation has since adopted the Wayanad Tiger Reserve in Kerala and works directly with tiger conservationists, local authorities, and NGOs on the ground.

What started as a founder’s wild idea — a premium rum from India that could fund tiger conservation — became a globally distributed product whose every bottle pours back into the cause it was named for.

06
Reflection

What an unconventional sequence taught me about brand work.

The cross-discipline insight

Working with a C-suite that demanded the bottle precede the brand was a forcing function — and the discipline transferred everywhere I’ve worked since.

"When the physical artifact carries the story, the digital work doesn’t have to invent — it has to extend. That’s how a small team ships a coherent brand. Lock the most expensive object first, and the cheapest objects fall in line."

Every brand project I’ve led since has borrowed this discipline. In enterprise SaaS at Benchmark Gensuite, the equivalent of "design the bottle first" is design the canonical surface first — the dashboard, the agent hub, the home — and let the rest of the product’s pixels inherit the visual contract from there. Same pattern, different industry.

What I’d push further now

Tighter integration between the conservation story and the buying moment. The bottle whispers conservation; the website shouts it; the in-store retail moment is the gap. A QR-led "this bottle helped fund X reserve" experience — bottle scan → real-time impact dashboard — would close the loop between purchase and cause.

07
Design Leadership & Ownership

End-to-end ownership across every brand surface.

Area · Ownership
Bottle & primary package design (sleeve, claw, tag, structure) · Led end-to-end
Logo system, monogram, WTF conservation seal · Owned end-to-end
Brand identity guidelines (color, typography, stripe library) · Owned end-to-end
Website design (wildtiger.in) · Led design end-to-end
Collateral — neck tags, brochures, cartons, retail POS · Owned end-to-end
Wild Tiger Foundation campaign extension · Led design
C-suite alignment with founder & conservation partners · Led throughout
Case Study · Brand · Package · Web · Graphic

Mum’s Sana Vita — from market shelf invisibility to a UK premium spice brand.

Led the end-to-end brand build for Mum’s Sana Vita — a BRC Grade AA-certified Indian premium spice brand selling in the UK across Whole Spices, Ground Spices, and Spice Blends. Owned the package design, brand identity, web design, and retail-ready collateral — building a system that translates an Indian-spice heritage into a clean, trust-led UK premium-grocery aesthetic.

Mum's Sana Vita — premium spice packaging on UK retail shelf
Client
Mum’s Sana Vita — UK Premium Spice Brand
Role
End-to-end Designer · Package · Brand · Web · Graphic
Industry
Premium Food & Grocery · UK Retail · Amazon Marketplace
3
Product ranges — Whole Spices, Ground Spices, Spice Blends
BRC·AA
UK-grade food-safety certification — the highest manufacturing trust signal
100%
Natural, handpicked, sustainably sourced — every SKU
UK
Distributed nationally via Amazon UK and direct-to-consumer web
01
Context

An Indian spice brand walking onto a UK shelf.

The category problem

Indian spices in UK retail compete on two extremes — bargain-bin ethnic-aisle SKUs, or hyper-premium artisanal jars. Mum’s needed to land cleanly in the middle: premium-feeling, accessibly priced, trustworthy on first glance.

The challenge wasn’t the spice — the spice was excellent. The challenge was that UK shoppers don’t evaluate Indian spice brands the way Indian shoppers do. A British home cook walking down a Sainsbury’s aisle or browsing Amazon UK is scanning for: clean typography, a recognizable trust seal, ingredient clarity, and a brand that signals "this belongs in my pantry next to Ottolenghi’s spice tin."

Most Indian spice brands import their domestic packaging directly to the UK — heavy graphics, multi-color clutter, dense product copy. The result is shelf-invisibility. Mum’s brief was the opposite: build a brand that speaks UK premium-grocery first, while staying authentically Indian in story.

The product line spanned three ranges (Whole Spices, Ground Spices, Spice Blends) across multiple SKUs and weights — each pouch needed to feel like part of one family, while staying distinctive enough that a shopper picking the cardamom doesn’t accidentally pick the coriander.

02
The Approach

Pouch first. Logo second. Web last.

The brand sequence

Same physical-first sequence I’ve trusted in every brand-led project: lock the most expensive, most-touched object first, then let everything else inherit.

The pouch is where the brand earns its first second of attention — on the shelf, on Amazon’s grid, in the customer’s hand. Every later surface (logo extensions, web, recipes, Amazon A+ content) had to extend what the pouch already established. Not invent a parallel identity.

01

Package & pouch design

Pouch structure, material, the red-banner masthead, ingredient illustration band at the bottom, ingredient name in 4 languages (UK retail = multilingual market). Per-SKU color coding for Whole vs Ground vs Blends.

02

Logo & brand identity

Mum’s wordmark in red banner shape (the same masthead silhouette as the pouch), Sana Vita endorsement in serif italic, leaf-and-chili seal as a recognition mark. Restricted palette: red, cream, leaf-green accent, charcoal text.

03

Website, recipes, Amazon A+

mumsfood.co.uk — clean editorial homepage with the same "It’s all about good spices" voice. Recipes section as a content extension. Amazon UK listings with A+ content using the same pouch language.

03
The Package

A pouch that earns trust before the spice is ever opened.

Designing the trust signal

Every element of the pouch was chosen to telegraph "premium UK-grocery" while still letting the heritage breathe through.

The cream-white pouch background is intentional — most Indian spice brands lead with saturated color blocks. Cream reads as premium UK food (think Cook With M&S, Daylesford, Ottolenghi) and lets the product photography pop. The red banner masthead at the top carries the wordmark and creates an instant family signature across every SKU.

The ingredient illustration band at the bottom of every pouch is the same recognition system that lets a shopper identify the SKU from across the aisle — turmeric powder shows turmeric root and leaves, cardamom shows pods, black pepper shows whole peppercorns. The illustrative style is hand-drawn, warm, food-honest — not corporate-clean.

Product names appear in four languages on the pouch front (English, German, French, Spanish phonetic) — a small but critical UK/EU retail signal. Net weight is large and confident. The BRC Grade AA seal is on the back panel — visible to the shopper who flips the pouch, which is the moment of trust-decision in premium grocery.

Initial direction from the founder leaned toward saturated traditional Indian visuals — gold borders, paisley, rich color blocks. I argued the UK shopper wouldn’t reach for that on the shelf, and held the line on cream-white and minimal type. The cleaner direction shipped.
04
Brand & Web

The pouch’s grammar, extended into every digital surface.

From pouch to ecosystem

Once the pouch system was locked, the brand extended into web, recipes, retail-ready collateral, and Amazon A+ content without re-inventing.

The website (mumsfood.co.uk) opens with the same cream background, the same red banner navigation, the same product-first photography. The hero copy — "It’s all about good spices · Buy and Experience the Best of Indian Spices" — uses the brand’s warm-but-confident voice in the same italic-serif accent type the pouch uses on "Sana Vita".

Three primary navigation pillars match the three product ranges: Whole Spices · Ground Spices · Spice Blends. The Recipes section gives the brand a content surface — a place where Mum’s isn’t just selling pouches, it’s teaching a UK home cook how to use them. That’s how a spice brand earns repeat purchases — by being useful, not just available.

Amazon UK listings extend the same system — A+ content modules use the same red-banner masthead, the same illustrative ingredient style, the same "100% natural · handpicked · sustainably sourced" trust line that anchors the back of every pouch. One brand, every surface, the shopper never needs to relearn the visual contract.

05
Outcome

A spice brand that belongs on a British pantry shelf.

Where it landed

Mum’s Sana Vita ships nationally across the UK via direct-to-consumer web (mumsfood.co.uk) and Amazon UK marketplace, with a coherent identity from pouch to homepage to product detail page.

The brand now retails 3 product ranges — Whole Spices, Ground Spices, Spice Blends — across multiple SKUs and weight tiers, each carrying the same masthead, illustration band, and trust seals. The BRC Grade AA certification (the highest UK food-safety grade for manufacturing) is communicated visibly on every pouch back panel and prominently across the website and Amazon listings.

The result: a small Indian spice brand that doesn’t look like a small Indian spice brand on a UK shelf. It looks like it belongs next to Cook With M&S, Belazu, and Bart’s — and it’s priced to be the curious home cook’s authentic upgrade.

06
Reflection

What translating heritage into a new market taught me about restraint.

The cross-market design insight

When a brand crosses cultures, the temptation is to amplify the heritage. The discipline is to edit it.

"A brand that travels well doesn’t shout where it’s from. It earns the right to be heard in the new room first — and then lets the heritage breathe through quietly. That’s the difference between import and translation."

Both Wild Tiger and Mum’s taught me the same lesson from opposite directions. Wild Tiger leaned into heritage with bold, sensory packaging. Mum’s leaned away from heritage clichés to land in UK premium grocery. The throughline: let the physical object do the cultural work first, and the digital extends quietly.

What I’d push further now

A stronger e-commerce conversion engine. The current website is brand-first, transaction-second. A v2 would integrate Shopify or BigCommerce directly, adding subscribe-and-save, a recipe-to-cart flow ("cook this curry → add the 4 spices in one click"), and a UGC recipe gallery from real customers. The brand has the visual authority to support a richer commerce surface — it just hasn’t been built yet.

07
Design Leadership & Ownership

End-to-end ownership across every brand surface.

Area · Ownership
Pouch & primary package design (3 ranges, multi-SKU, multi-weight) · Led end-to-end
Logo system & brand identity (red banner mark, Sana Vita endorsement, seal) · Owned end-to-end
Per-SKU illustration system (ingredient band, language ladder) · Owned end-to-end
Website design (mumsfood.co.uk) · Led design end-to-end
Recipes & content surface architecture · Led design
Amazon UK A+ content & listing imagery · Owned end-to-end
Retail collateral, mailing-list assets, social · Owned end-to-end
Case Study · Naming · Logo · Mascot · Custom Type · Identity

Eskimo’s — where every drip is an idea.

Brand identity for Eskimo’s Artisan Ice Cream — a Coimbatore-based ice cream parlour and fun-food café crafting frozen desserts with only fruits, dry fruits, chocolates, milk, and sugar. Owned naming, logo, mascot, custom type, invitation, and corporate identity end-to-end. The most playful brand project I’ve worked on — and the one where every visual decision had to taste like ice cream.

Eskimo's logo — polar bear mascot inside igloo, dripping custom-type wordmark
Client
Eskimo’s Artisan Ice Cream — Coimbatore, India
Role
End-to-end Designer · Naming · Brand · Mascot · Type
Industry
Artisan F&B · Café Hospitality · Family Casual Dining
What I delivered
Naming
Logo Design
Mascot Design
Custom Font
Branding System
Invitation Design
Corporate Identity
01
Naming

One word that tastes like cold.

Verbal identity

Before any visual work, the brand needed a name a five-year-old could pronounce and a thirty-year-old would post about.

The naming brief had three asks: (1) instantly suggests cold, (2) sounds joyful — café-friendly, not clinical, and (3) lands easily across English and Indian-English speakers. After exploring categories — geography (Arctic, Glacier), substance (Frost, Chill), character (Yeti, Pingu) — the winning territory was character with built-in story.

"Eskimo’s" earned the brief in one word: it telegraphs cold, it implies a person/character (the apostrophe-S = belonging to), and it opens the door to a mascot system. The name became the foundation that every other decision — mascot, type, color, illustration — could anchor to.

02
Logo & Mascot

A polar bear in an igloo. The brand in one frame.

Mascot-led identity

An ice-cream café for families needed warmth at the center, not corporate restraint. The mascot does the work the wordmark alone couldn’t.

The polar bear mascot is illustrated inside an abstract igloo — the dome rising behind him forms a frame for the entire mark. His expression — eyes closed, tongue out — is the brand’s mood in one face: indulgent, joyful, slightly cheeky. Not generic-cute. Specific-cheeky.

I drew an entire expression library beyond the primary mark: the wink for menu boards, the surprise face for new flavour launches, the satisfied closed-eyes face for testimonials, and a hungry, curious one for kids’ menus. The mascot became a system, not just a logo, so the brand could speak without ever repeating itself.

Color: deep purple ground with white-and-violet bear, accented by red tongue. Purple was the deliberate departure from category — everyone in ice cream uses pastel pink, mint, or sky blue. Purple owned the space, made the mark unmistakable on Coimbatore’s café-row signage, and gave the brand an instant Instagram identity.

03
Custom Type

A wordmark that melts as you read it.

The drip detail

Off-the-shelf typography would have wasted the brief. A custom letterform turned the wordmark itself into the product.

The Eskimo’s wordmark is hand-drawn custom type — a soft script that swells into rounded curves, with intentional drip details hanging from the bottom of select letters. The drips do double duty: they reference melting ice cream, and they signal frozen drips from an icicle’s edge. One illustration, two readings, both correct.

The white double-outline lifts the wordmark off the deep purple ground and gives it the candy-shop sticker quality the café environment needed. The wordmark looks edible. That was the whole brief.

There was real pressure to use an existing playful display font and skip the custom type — faster, cheaper, "good enough." I held the line that an artisan ice cream brand using a stock font would contradict its own positioning. The custom letterforms shipped, and they became the most-recognized element of the visual identity.
04
Brand System

One mascot. One drip. An entire brand world.

Identity in motion

Once the mascot, type, and color locked, the brand extended into every corner of the café experience.

The brand system covered: menu boards, packaging cups and tubs, takeaway bags, napkins, the staff uniform, signage, social-media templates, and the launch invitation. Each surface re-used the same components — the mascot, the dripping wordmark, the purple-and-white palette — but composed differently for each context.

The launch invitation was where I had the most fun: a die-cut card shaped like the igloo silhouette, with the polar bear’s face revealing through a circular cut-out when the card opened. Tactile, surprising, kept on fridges — exactly the kind of object an artisan ice cream brand should produce.

The corporate identity layer (letterhead, business cards, formal stationery) deliberately turned down the mascot, keeping the wordmark and a single bear paw print as the formal signature — proving the system flexes from kids’-menu energetic to vendor-correspondence professional without losing its voice.

05
Reflection

The most fun project I’ve worked on — and the one that taught me restraint.

The visual-design insight

Playful brand work is the discipline most likely to look unprofessional — and the discipline where restraint matters most.

"The art in playful identity work isn’t adding. It’s removing. One mascot, one type voice, one color anchor — held with conviction — beats five clever ideas held loosely. Eskimo’s taught me where to stop."

Across the SaaS, agentic AI, and enterprise UX work that fills most of this portfolio, this project is the outlier — and the proof that the same design instinct (find the one big idea, then commit to it everywhere) holds across categories. A polar bear in an igloo is not a different discipline from an agent hub right rail; both are systems where one anchor decision has to carry every downstream surface.

06
Design Leadership & Ownership

End-to-end ownership on every brand surface.

Area · Ownership
Naming & verbal identity · Led end-to-end
Logo design — primary, secondary, signature variants · Owned end-to-end
Mascot design & expression library · Owned end-to-end
Custom type — Eskimo’s wordmark with drip details · Owned end-to-end
Color system & brand guidelines · Owned end-to-end
Launch invitation design (die-cut igloo card) · Owned end-to-end
Corporate identity — stationery, business cards, signage · Owned end-to-end
Case Study · Benchmark Gensuite · Analytics & Dashboards

Executive Dashboards — the answer in one glance.

Designed Tableau-powered EHS executive dashboards that compress thousands of incident, action, and rate data points into a single decision surface — scope-aware, time-aware, and trend-aware.

Company
Benchmark Gensuite
Role
Senior Product Designer · Data Visualization Lead
Industry
Enterprise SaaS · EHS · Analytics
8
Pinned executive KPIs surfaced above the fold
3
Cognitive layers — Number, Trend, Cause — in one screen
9
Global scope filters cascade across every chart
End-to-end
Owned design from discovery and persona work through Tableau implementation review
01
Context

Executives were drowning in their own data.

The problem in short

Decisions made on stale PDF reports. Real data sat unread in BI tools.

Benchmark Gensuite's enterprise customers ran EHS programs across hundreds of sites and dozens of business units. Every week, analysts compiled custom PDF and Excel reports for executive leadership. By the time the report landed, the data was 5–10 days old.

Meanwhile, the underlying Tableau data layer was already live. Executives didn't lack data — they lacked a surface that answered their questions in their language, on their schedule.

02
The Problem

Three questions. One screen.

Root cause

Existing dashboards forced executives through chart libraries to answer simple questions.

Across stakeholder interviews, every executive walked the same cognitive path:

"What happened?" — How many cases, how many hours, what's the rate?

"How are we trending?" — Is this number better or worse than last month, last quarter, last year?

"Where do we focus?" — Which accident type, incident category, region is dominating the count?

Existing analytics surfaces forced executives to assemble that answer themselves across multiple tabs and filter combinations. The dashboard had to be the answer, not the path to it.

03
Research

What I learned before opening Tableau.

Methods

Executive stakeholder interviews · Persona development · Cognitive walkthroughs · Tableau heuristic audit · Competitive review

Interviewed three executive archetypes: Chief Sustainability Officer (board-level reporting), EHS Director (cross-region operational visibility), and Regional Operations Lead (site-level performance). Different scope, same cognitive ritual.

Audited the existing Tableau implementation against data-viz heuristics — Tufte's data-ink ratio, Few's dashboard design principles. Mapped where the surface was over-decorating data and where it was under-explaining it.

Applied frameworks

Few's Dashboard Design · Tufte's Data-Ink Ratio · Pre-attentive attributes · Information Hierarchy · Mental Model Mapping

Key insight

"Executives don't need more data. They need the same data in the order their brain asks for it: number first, trend second, cause third. Anything else is noise."

04
Design Decisions

Five decisions. All argued for.

01

KPI strip pinned above the fold

Information Hierarchy · Pre-attentive Attributes

Eight executive KPIs (I&I Cases, Recordable Cases, Hours Worked, TRIR, Incidents, Concerns, Actions, LTIR) sit in a fixed strip on the left. Numbers in large weight, labels in small caps. The first question — "what happened?" — is answered before the executive scrolls.

Stakeholders proposed adding period-over-period deltas to each card. I held the line: deltas belong with the trend chart, not on the count tile. Mixing two cognitive layers on one tile defeats the strip's purpose.
02

Scope filters as a single global control bar

Tesler's Law · Direct Manipulation

Organization, Sub-Organization, Site, Department, World Region, Country, Custom Group, Lookback — pinned as one bar at the top. Every chart on the page respects the same scope at the same time. One change cascades everywhere; no per-chart filtering.

An early prototype put filters above each chart. Research showed executives lost track of which scope was applied to which chart. Unifying the scope removed the cognitive overhead and made the page truthful as a snapshot.
03

Trend chart in the center — number meets time

Mental Model Mapping · "Trends, Not Snapshots"

The I&I Rates line chart (LTIR and TRIR over time) anchors the center of the dashboard. It bridges the KPI strip and the breakdown donuts — answering "is this getting better or worse?" without leaving the page. Sparse axes, two clearly-distinguished lines, period-aligned labels.

04

Top-5 donuts answer "where do we focus?"

Hick's Law · Recognition Over Recall

Two donut charts at the bottom — Top 5 Accident Types and Top 5 Incident Types. Five categories, not all twenty. The point isn't completeness, it's salience. Counts on each segment, color reserved for the dominant slice. Executives leave the page knowing where to ask the next question.

Engineering offered a "show all categories" toggle. We dropped it. Toggles add cognitive load to a surface designed for instant comprehension.
05

Welcome line, scope toggle, lookback — context above all charts

Visibility of System Status · User Identity

"Welcome, [Name]" plus a scope toggle (Choose Organizational Scope · My Scopes of Responsibility) and a lookback control sit above every chart. The dashboard tells the executive who they are, what they're seeing, and how far back the lens goes before they read a single number.

05
Outcome

What shifted — and how we measured it.

Area · Result
Decision speed: Live dashboard replaced 5–10 day stale PDF cycle
Self-service rate: Executives running their own scope and time analyses without analyst help
Cognitive layering: Three layers (Number · Trend · Cause) preserved on every page render
Scope discipline: One global filter bar — no per-chart scope drift, no truth-fragmentation
Information density: 8 KPIs + 1 trend + 2 breakdowns delivered without scrolling on standard executive monitors
Pattern reuse: Template adopted across additional Benchmark Gensuite dashboards (Sustainability, Quality, Compliance)
06
Reflection

What this taught me about data-dense design.

Takeaway

Data density isn't the enemy of clarity. Disorder is.

"The hardest thing about executive dashboards isn't fitting the data on the screen. It's deciding which data the executive is allowed to see first."

Tableau gives you infinite chart types. The discipline is choosing the three that map to how an executive thinks — and then defending that order against every stakeholder who wants their pet metric pinned to the top. Hierarchy is the design. Everything else is decoration.

What I'd do differently

Mobile and tablet experiences arrived after launch. Next time I'd design the surface mobile-first, then expand to desktop — the constraint sharpens the hierarchy faster than any user research session.

Case Study · Benchmark Gensuite · Marketing Operations

Marketing Incubator — Lead & Booking Accelerator.

Led end-to-end UX/UI design to streamline lead prioritization and accelerate conversion workflows across marketing operations — turning a fragmented multi-tool process into a single action-first dashboard.

Company
Benchmark Gensuite
Role
Senior Product Designer · Lead UX/UI
Industry
Enterprise SaaS · Marketing Operations
25%
Faster lead-to-booking conversion
30%
Reduction in manual workflow steps
Faster decision-making via priority-based triage
End-to-end
Owned UX/UI across discovery, design, validation, and implementation review
01
Context

The cost of fragmented lead management.

The problem in short

Disconnected lead and booking systems were costing campaign velocity.

Marketing teams at Benchmark Gensuite were managing leads across disconnected systems — leads in one tool, bookings in another, prioritization in a spreadsheet, ownership in nobody's head. The result: delayed campaign execution, increased manual coordination, reduced efficiency at high lead volumes.

The challenge wasn't a missing tool. It was the gap between tools — the multi-step conversion path where leads went cold while everyone waited on the next handoff.

02
The Problem

Five gaps. One workflow.

Business & system challenges

Five failure modes compounding into a single workflow problem.

Disconnected lead and booking systems → fragmented workflows where the same lead lived in three places.

No prioritization model → inconsistent decision-making about which leads to chase first.

Dense tables → high cognitive load and slow scanning even when the data was right.

Multi-step conversion → delays and drop-offs at every handoff.

Limited visibility into pipeline ownership and status → leads stalling because nobody knew who owned the next move.

Together, these gaps cut directly into conversion efficiency and campaign execution timelines. The design problem was a revenue problem.

03
Research

Three lenses. One pattern.

Methods

Dogfooding · Data analytics · Stakeholder interviews · Pain-point mapping

Dogfooding. Validated workflows with internal marketing teams using real lead data. Observed friction across triage, prioritization, and booking actions. Surfaced three key gaps: hard to identify high-priority leads, context switching between tools, no clear next-step actions.

Data insights. Analyzed lead distribution across priority tiers; identified drop-offs in the lead → booking funnel; measured time-to-action across workflows. Outcome: need for prioritization visibility and faster execution.

Interviews. Spoke with marketing managers, lead owners, and stakeholders. Pain points clustered around three lines: difficult to scan and prioritize, inefficient navigation, limited pipeline visibility.

Findings

Five research findings that defined the design constraints.

What the research said

  • Users need action-first workflows, not passive data tables.
  • Priority must be visually scannable.
  • Workflows must enable fast triage → execution.
  • Personalization ("My Leads") improves ownership.
  • Data density requires progressive disclosure.

04
Design Decisions

Four decisions. All argued for.

01

Data structure and hierarchy redesigned around action

Information Hierarchy · Pre-attentive Attributes

Redesigned the lead table around a clear column priority sequence: Lead → Priority → Context → Actions. Grouped business, contact, and task data so users could scan a row at a glance. Enabled flexible column configuration so different roles could shape the table to their work.

02

Action-driven workflow — booking lives inside the row

Direct Manipulation · Tesler's Law

Embedded a "Push to Booking" CTA directly within every lead row. Multi-step conversion compressed into a single interaction. Introduced clear state feedback (Booked / Linked) so users knew the lead had moved without refreshing or re-querying.

Stakeholders proposed routing to a separate booking screen for confirmation. Research showed every screen jump added drop-off; we kept the action inline and put the confirmation in a state badge instead.
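The inline action plus state badge can be sketched as a tiny state transition. This is a hypothetical illustration only: the state names come from the case study, but the function, its parameters, and the Booked-versus-Linked rule shown here are assumptions, not the platform's real API.

```typescript
// Hypothetical sketch of the inline conversion with explicit state
// feedback. "pushToBooking" and its linking rule are illustrative.
type LeadState = "Open" | "Booked" | "Linked";

interface BookableLead {
  id: number;
  state: LeadState;
}

// One row-level interaction replaces the old multi-step path. The
// returned state drives the badge the user sees — no refresh, no
// re-query. Re-invoking on a converted lead is a safe no-op.
function pushToBooking(
  lead: BookableLead,
  existingBookingId?: number // assumption: attach to a booking if one exists
): BookableLead {
  if (lead.state !== "Open") return lead; // already converted
  return {
    ...lead,
    state: existingBookingId === undefined ? "Booked" : "Linked",
  };
}
```

Returning a new object rather than mutating keeps the row render predictable: the badge changes exactly when the state value changes.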
03

Filtering and segmentation that respect role and intent

Recognition Over Recall · Hick's Law

Multi-level filters across Recent · High · Medium · Low · Archived let users triage by signal. A "My Leads" toggle delivers the personalized view marketing owners asked for. Integrated search ties it together for fast retrieval — three lenses on one dataset, no context switch.
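The "three lenses on one dataset" framing is really predicate composition. A minimal sketch under stated assumptions — the field names and the simplified archived handling below are illustrative, not the product's data model:

```typescript
// Hypothetical sketch: tier filter, "My Leads" toggle, and search as
// composed predicates over a single lead list. Archived leads are
// simply excluded here; the real product exposes them as their own view.
type Lead = {
  id: number;
  name: string;
  priority: "High" | "Medium" | "Low";
  archived: boolean;
  ownerId: number;
};

function filterLeads(
  leads: Lead[],
  opts: { tier?: Lead["priority"]; myLeadsFor?: number; search?: string }
): Lead[] {
  return leads.filter(
    (l) =>
      !l.archived &&
      (opts.tier === undefined || l.priority === opts.tier) &&
      (opts.myLeadsFor === undefined || l.ownerId === opts.myLeadsFor) &&
      (opts.search === undefined ||
        l.name.toLowerCase().includes(opts.search.toLowerCase()))
  );
}
```

Because each lens is an independent condition over the same list, the three controls combine freely without any context switch between tools.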

04

Cognitive load reduction through progressive disclosure

Miller's Law · Tufte's Data-Ink Ratio

Applied progressive disclosure ("See more") to collapse detail by default. Reduced visual noise through structured layout and improved scannability via consistent hierarchy across all data types. Density without overload — the table can carry 200+ leads and still be parseable at a glance.

05
Outcome

What the redesign delivered.

Area · Result
Lead-to-booking conversion: ~25–30% faster across active campaigns
Manual workflow steps: ~30% reduction across triage and booking flows
Decision-making speed: ↑ Adoption driven by simplified, action-first UX
Workflow errors: ↓ Errors through structured data and clear state feedback
Pipeline visibility: Centralized dashboard combining status visibility and actionable workflows
Prioritization clarity: Real-time prioritization with clear visual distribution by priority tier
Scalability: Supports high-volume, multi-role environments without re-architecture
06
Reflection

What this taught me about marketing-ops UX.

Takeaway

Marketing tools fail when they make users move data instead of making decisions.

"The win wasn't a prettier table. It was making the next action visible from the row, and trusting the user to take it."

The instinct in enterprise marketing tooling is to show more — more columns, more filters, more counts. The discipline is to show what gets the next action done and hide everything else behind progressive disclosure. Lead → Priority → Context → Actions is a hierarchy, not a column order. Once that hierarchy was right, the velocity numbers followed.

About

I design products people actually use — and teams actually ship.

I'm Dolly Kapadia. A multidisciplinary designer working where UX, product strategy, and brand intersect.

My work lives in the messy middle — where research meets interface, systems meet stories, and software actually gets shipped. I've led design on enterprise platforms used by thousands, launched consumer products that moved metrics, and helped teams move faster by aligning on what matters.

Off the clock: too many essays on systems thinking, terrible sourdough, and a stubborn belief that good design is mostly good judgment.

Good principles involve trade-offs. They prioritize certain benefits over others, build constraints into the workflow, and act as a guide toward better performance.

01

Design is 90% communication.

The most important skillset in product design isn't pushing pixels. It's facilitation, persuasive storytelling, and stakeholder management. A beautifully rendered design that doesn't ship is worth nothing.

02

Infinite mindset.

A purpose-driven approach and playing the long game are more effective than short-term tactics. Great products compound over years, not sprints.

03

Outcomes over output.

Customer behavior is the clearest metric of business success. Solving problems starts with prioritizing outcomes, not features. Shipping more isn't the same as shipping right.

04

80% is good enough.

Reaching 80% of a goal is usually good enough. The time to reach 100% is better spent making progress elsewhere. Perfectionism is a tax the user never asked you to pay.

Where product design is actually going.

The role is changing. The old lines between UX, UI, product, research, and strategy are dissolving. What used to require a team of five specialists often now requires one hybrid designer who can navigate all of them — and a clear point of view about what's worth building in the first place.

AI is reshaping the craft faster than any tool shift I've lived through. The best designers I know are using it to compress research cycles, generate and evaluate variations, and spend more time on the 20% of work that actually matters: framing the problem, understanding people, and making decisions nobody else can make.

I think the next decade of product design belongs to generalists with taste — people who can zoom from business strategy to pixel detail, who understand systems as well as stories, and who know the right question is usually more valuable than the prettiest answer.

What I work with, and what I work on.

Skills
  • UX Strategy
  • Product Thinking
  • Creative Design Thinking
  • Systems Thinking
  • UX Problem Solving
  • UX Laws & Principles
  • User-Centered Design (UCD)
  • Human-Centered Design (HCD)
  • Dual-Track Agile
  • User Research
  • Usability Testing
  • Information Architecture
  • UX/UI Design
  • Interaction Design
  • Visual Design
  • Wireframing & Prototyping
  • Design Systems
  • Accessibility
  • AI Prototyping & Research
  • Brand Strategy
Tools
  • Figma
  • Adobe Creative Suite
  • Sketch
  • Framer
  • Miro
  • UserTesting
  • Hotjar
  • Litmus
  • Aha!
  • Jira
  • SharePoint
  • AI Research Tools
  • AI Design Tools
Recognition
  • Leadership Circle Award — Benchmark Gensuite 2025
  • Best Branding Logo — UN Director Research Policy 2019
  • Emerging Creative Director — Rotary AAKRUTHI 2018

Let's build something that matters.

Get in touch for opportunities or just to say hi! 👋