Design is 90% communication.
The most important skillset in product design isn't pushing pixels. It's facilitation, persuasive storytelling, and stakeholder management. A beautifully rendered design that doesn't ship is worth nothing.
Redesigning a global compliance platform around user confidence, not system logic.
Enterprise Forms landing — Sites and Open Access Forms split into two clear intent zones. Scope navigation lives on the left, public-facing form access on the right. Administrators land in the right mode instantly, no ambiguity about where to start.
Form Management — every form's lifecycle visible at a glance. Form names, publisher, submission type, created and updated dates, paired with a real-time KPI Status donut (Active · Pending Review · Achieved · Draft · Rejected). My Assignments sits below the dashboard so individual work and global view share one surface.
Form Builder — drag-and-drop canvas with live field preview. Controls grouped into named categories (Basic, Static, Date-Time, System, Entity) keep each chunk within cognitive grasp. Properties sidebar updates the moment a field is selected.
Add Sub-Question modal — focused field configuration without losing builder context. Label, placeholder, tooltip, max length, and visibility flags (Required, Include in search field, Email Display) chunked into one task surface.
New Form — Templates tab with curated Survey Form starting points. Each card previews the template with a one-click Use Template action. Turns 'where do I begin?' into 'pick the closest match and edit.'
New Form — empty state for unmatched activity templates. Friendly illustration, clear voice, and a 'Click here to create now' onramp turn a dead end into the next action. Empty states designed as entry points, not roadblocks.
500+ compliance forms. A user base that avoided opening them.
Benchmark Gensuite's Enterprise Forms platform housed 500+ compliance forms used daily across global manufacturing, energy, and logistics operations — but it was hemorrhaging time at every touchpoint.
The numbers told a clear story. Administrators were spending 45–60 minutes building forms that should take 15. UAT cycles were running 5 rounds deep. EHS managers were running compliance programs on spreadsheets — not inside the platform.
Jakob's Law — the interface was organized around system logic, not user cognition.
Administrators spent 45–60 minutes building forms that should take 15. Templates missing or unusable. Every form started from scratch.
EHS Managers had zero visibility into form lifecycle. Active, rejected, pending — all buried in a separate reporting module nobody opened.
Frontline Users were completing the wrong form versions. Scope assignment unclear. System architecture exposed to users who never needed to see it.
Contextual inquiry · Usability testing · Task analysis · Mental model mapping · Competitive audit
Structured discovery across three user types — observed in real workflows, not conference rooms. I ran contextual inquiry with EHS managers and system admins on-site, moderated usability testing across all three user groups, and mapped each journey against the interface.
Double Diamond · Jobs-to-be-Done · Mental Model Mapping · Heuristic Evaluation · Affinity Mapping
"Users weren't creating forms — they needed confidence that what they built would work the first time, without a developer, without rework."
Split the home screen into two clear intent zones: scope navigation (operational) and Open Access Forms (public-facing). Administrators land in the right mode instantly — no ambiguity about where to start.
Real-time status dashboard — Active · Pending · Achieved · Draft · Rejected — as a permanent fixture inside Form Management. Not a widget. Not collapsible. Always on.
Redesigned template library zero-results as an active onramp — warm illustration, clear voice, direct CTA. From "nothing here" → "this is where you begin."
Live field preview. Real-time Properties sidebar. Control panel chunked into named categories — Basic, Static, Date-Time, System, Entity — keeping each group within cognitive grasp.
Every decision applied one consistent principle: move complexity from the user into the system. Someone always carries the complexity. We chose the system every time.
Previous cycles averaged 5 UAT revision rounds. This one completed in 2.
During UAT, an EHS manager building a 10-field form from a template said — unprompted:
Not "it's faster." Not "it looks better." Certainty. That was the design north star.
| Area | Result |
|---|---|
| Form creation time | 45–60 min → under 15 min · ~70% reduction |
| UAT revision cycles | 5 rounds → 2 rounds |
| Compliance visibility | First real-time status dashboard embedded in workflow |
| Open Access Forms | New capability — public data collection outside registered users |
| External workarounds | Spreadsheet dependency eliminated |
| Design system | 12+ reusable components contributed |
Senior design is about knowing which questions to protect.
The hardest part wasn't the interface — it was holding the line on cognitive simplicity when every stakeholder had a legitimate reason to add something. Tesler's Law in practice: someone always carries the complexity. We chose the system every time.
Frontline field workers were underrepresented in our research. The mobile submission experience deserved its own dedicated sprint.
Redesigning compliance workflows at scale — from fragmented document handling to an AI-enabled, governed operational layer.
Before — the legacy Doc Manager. Cluttered tree sidebar, six parallel tables (My Recent, My Files, Showcase, Favorites, Drafts, Checked Out) competing for attention, dense rows with no visual hierarchy. Documents had to be hunted for; status was buried in icons; the surface taught the user the system, not the work.
After — Doc Manager homepage redesigned. One title, one collection selector, four task-first tabs (Recent · Showcase · Favorites · Checked Out), one search field. Documents render as scannable rows with status badges (Published · In Draft · Under Review) and inline actions. Genny AI lives in the right rail with alert counts (Total Published 999 · Pending Approvals 99 · Under Review 9 · Drafts 6) and a persistent 'Ask me anything about Doc Manager' chat — AI surfaces in the rail without hijacking the workflow.
Collection navigation — nested hierarchy without losing the user. Selecting 'Quality' opens a tree that maps to how compliance teams already think (Governance & QMS Framework → QMS Manual · Quality Policy & Objectives · Risk & Opportunity Register → New 1.0 Process / New Process · Operational Processes · Document Control · Nonconformance & CAPA · Audit & Compliance). The tree replaces six legacy tables with one shape that scales with the customer's framework.
New document workflow — the '+ New' menu compresses upload, folder creation, and link-add into one chunk-friendly list (Folder · Upload File · Upload Folder · Add Link). Breadcrumb above the table preserves location memory; documents below stay readable. Adding a file no longer means leaving the page.
File context actions — every action a document needs, grouped by job. More File Actions (Preview · Download · Browse to File), Links (Link to View · Link to Download · QR Code · Email Link), File Info (Folder Permissions toggle), Manage File (Update File · Initiate Review · Publish File · Archive File). Sixteen actions chunked into four named groups so the menu reads like a checklist, not a cliff.
Audit readiness, compliance outcomes, and operational speed were being throttled by documents users couldn't find.
Enterprise compliance teams were executing high-stakes workflows against fragmented document storage scattered across multiple systems. Retrieval during audits was slow. Teams leaned heavily on manual workarounds sitting outside the product.
Direct impact showed up everywhere: audit readiness slipped, compliance risk rose with missing traceability, and operational execution slowed across every team that touched a document. The scaling challenge was existential — the platform wasn't designed for the volume enterprise teams were generating.
Running compliance programs across large, unstructured repositories. Slow retrieval during audits. Heavy reliance on memory over system navigation.
Inconsistent metadata reducing search accuracy. Limited version traceability introducing significant audit and compliance risk.
Approval workflows executed outside the system, causing delays and lack of visibility across review cycles.
Legacy system architecture. Regulatory requirements. High volume of structured + unstructured data.
Root cause. The system was file-based, not metadata-driven. Users had to know where a document lived before they could use it. At enterprise scale — thousands of active documents, dozens of workflows, multiple regulatory bodies — that broke down fast.
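The file-based vs metadata-driven distinction can be sketched in a few lines. This is an illustrative toy, not the product's actual schema — the document names, fields, and status values here are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    path: str                                     # legacy handle: location
    metadata: dict = field(default_factory=dict)  # many handles: attributes

docs = [
    Document("QMS Manual", "/quality/governance/qms_manual.pdf",
             {"type": "manual", "status": "Published"}),
    Document("CAPA Log", "/quality/capa/log.xlsx",
             {"type": "register", "status": "Under Review"}),
]

def find_by_path(docs, path):
    # File-based: the user must already know where the document lives.
    return next((d for d in docs if d.path == path), None)

def find_by_metadata(docs, **filters):
    # Metadata-driven: the user describes the document instead.
    return [d for d in docs
            if all(d.metadata.get(k) == v for k, v in filters.items())]

published = find_by_metadata(docs, status="Published")
```

The path lookup fails the moment the user forgets the folder; the metadata query keeps working as the repository grows, which is why the redesign made attributes, not location, the primary handle.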
1:1 interviews · Workflow shadowing · Legacy usability testing · Behavioral analysis of search patterns.
Structured discovery across EHS leaders, auditors, and operational users. I ran 1:1 interviews, shadowed real compliance and audit processes, and conducted usability testing on the legacy retrieval and approval workflows to see where the product broke down under real pressure.
"The issue wasn't lack of functionality — it was lack of system structure, visibility, and workflow coherence. Users were forced to rely on manual workarounds instead of the product itself."
Shifted from file-based organization to a scalable, metadata-driven architecture — enabling consistent classification, faster retrieval, audit traceability, and governed access across enterprise-scale datasets.
AI was positioned at critical interaction points — tagging on upload, predictive search, summarization before review — rather than bolted on as a standalone feature. Intelligence amplifies existing workflows instead of creating new ones.
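The "intelligence at interaction points" pattern amounts to attaching model calls to existing workflow events rather than building a separate AI surface. A minimal sketch, with a hypothetical hook registry and a trivial keyword heuristic standing in for the real tagging model:

```python
# Hypothetical event-hook registry: intelligence attaches to existing
# workflow events (upload, review, search) instead of a standalone surface.
hooks = {}

def on(event):
    def register(fn):
        hooks.setdefault(event, []).append(fn)
        return fn
    return register

@on("upload")
def suggest_tags(doc):
    # Stand-in for a model call: a trivial keyword heuristic.
    doc["tags"] = sorted(w for w in ("audit", "capa")
                         if w in doc["text"].lower())
    return doc

def run(event, payload):
    # Each registered hook transforms the payload in turn.
    for fn in hooks.get(event, []):
        payload = fn(payload)
    return payload

doc = run("upload", {"name": "CAPA-17.pdf",
                     "text": "CAPA closure evidence after the Q3 audit"})
```

Because the hook runs inside the upload flow, tags appear as a side effect of work the user was already doing — the design stance described above.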
Status visibility across every stage — Draft · Under Review · Published · Archived · Expired — surfaced inline so users stop context-switching to reporting modules to confirm where a document sits in its lifecycle.
Enterprise teams scan hundreds of documents a day. Dense, structured layouts with inline actions let users scan, act, and navigate without leaving flow. Every click saved compounds.
Approval workflows — previously executed in email, chat, or spreadsheets — were rebuilt natively with version control and role-based access. Every approval is now auditable, versioned, and governed. Compliance stopped being a report people ran and became a state the product enforced.
The behavior shift users reported was the one I had been designing for.
During UAT, one auditor summed up the behavior change:
Not "it's faster." Not "it looks better." Trust in the system. That was the north star the research had pointed to from the start.
| Area | Result |
|---|---|
| Workflow efficiency | 35–45% improvement — reduced operational delays across compliance teams |
| User confidence | 25–30% increase in system reliability trust scores |
| Version errors | ~30% reduction in version-related compliance errors |
| AI-assisted tagging | Standardized metadata on upload — reduced manual tagging dependency |
| AI summarization | Rapid document evaluation without full-file read — faster review cycles |
| Contextual search | Predictive, role-aware filtering — reduced reliance on exact metadata |
| Workflow coverage | 30+ compliance workflows re-architected end-to-end |
The best AI in enterprise isn't a feature. It's a quieter version of the product.
The hardest work was resisting the temptation to surface AI as a feature. Every AI capability we shipped was invisible until it had work to do — tagging on upload, summarizing on review, predicting on search. Users rarely named it as "the AI." They just noticed the system felt sharper.
I'd invest earlier in a lightweight governance dashboard for compliance leads. The system produced better data than any previous version — but we shipped without a first-class way to see the health of it. That's the next chapter.
Designing the disclosure platform sustainability teams stop apologizing for. 400+ KPIs, six global frameworks, one governed surface where the audit trail is the product.
Disclosure Director home — three framework cards (CDP shown) with KPI Finalization Status donuts, My Tasks queue, and Recent Activity feed unifying the disclosure workflow into a single command surface.
Data Source Mapping — ESRS framework expanded into per-KPI rows. Each row offers Data Source (Manual Input, Sustainability Reporting, Incidents & Measurements) and Transformation Function (Addition, Custom) selectors so analysts wire disclosure responses without leaving the surface.
Parameter aggregation — selecting reporting groups and parameters (Scope 1 Emissions, Reuse, Irrigation, Abrasives, Graphite & Wax) to roll up into a single KPI value. Unit type, roll-up unit, and reporting year confirmed inline before save.
Disclosure setup — Status, Period, Materiality Due Date, KPI Finalization Due Date, and Final Report Date governance fields paired with Disclosing Entity management (Benchmark Germany, Global Legal, Tracer Mexico, White & Case). Program owners scope an entire reporting cycle in one screen.
Modern Expression Builder — visual formula composition with named parameters, operators, and constants. Live validation replaces opaque syntax with structured, scannable logic that non-analysts can author and audit.
CSRD passed. CDP scoring tightened. Investors started reading footnotes. Sustainability went from narrative to ledger overnight.
Disclosure Director is the platform Benchmark Gensuite customers run their ESG program on — 400+ KPIs across CDP, GRI, CSRD, SASB, and TCFD, owned by people in five different functions, audited by a sixth, and signed by a seventh. The old product was built when ESG reports were brochures. By 2024 they were filings.
My design brief, in one line: make this feel less like a survey tool and more like a system of record.
Sustainability lead accountable for the whole filing. Lives in portfolio view: which material topics are red, which KPIs are stale, which framework deadline is coming first.
Operations, HR, finance — the people who actually have the number. Needs a scoped task, a clear input field, and zero exposure to framework taxonomy they didn't sign up to learn.
External or internal sign-off. Needs the chain: who entered the value, what evidence backs it, what changed since last quarter, who approved.
Killed the old "modules" structure. Rebuilt navigation around material topics — because that's how regulators ask, and how owners think.
The original IA was a list of features: KPI Library, Frameworks, Evidence, Reports. Users had to assemble the journey themselves. I rebuilt it around material topics — Climate, Water, Workforce, Governance — so a Program Owner can drop into "Climate" and see every KPI, owner, framework tag, and open task without crossing four screens.
Mental Model Mapping made this obvious. Tesler's Law made it non-negotiable. Someone has to carry the cross-framework complexity. The system carries it now, not the user.
Stakeholder interviews · persona modeling · journey mapping · workflow shadowing · heuristic review of the existing product · competitive teardown of Workiva, Persefoni, Watershed.
I sat with sustainability leads at three customers during a real CDP submission cycle. The pattern was identical at every site: people weren't drowning in data, they were drowning in questions about the data — Is this number current? Whose number is it? Which framework needs it? Has anyone signed off?
That ruled out the obvious move (simplify the UI). Enterprise users were going to look at thousands of KPIs no matter what we did. The job was structured density — give every cell a clear owner, status, framework tag, and evidence link, then let users filter their way to the question they actually had.
"They didn't need fewer KPIs. They needed every KPI to answer four questions on sight: who owns it, what state is it in, which framework cares, and where's the proof."
Old nav: KPI Library, Frameworks, Evidence, Reports. New nav: Climate, Water, Workforce, Governance, Supply Chain — with framework views as a lens, not a destination. Owners think in topics. Regulators ask in topics. The IA finally matched.
Every KPI row shows its workflow state and its risk state, side by side, always. Draft / In Review / Final next to Stale / At-Risk / Verified. No drilling. No "open the record to find out." The table became the dashboard.
Derived KPIs (Scope 1 emissions, water intensity ratios, gender pay gap calculations) used to live in a developer-style formula box that no sustainability lead could read. I designed a token-based builder: drag in variables, pick operators from a typed menu, see the formula render in plain English underneath, see the live preview value on the right.
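The token-based approach can be sketched as a typed token stream that renders to plain English and evaluates to a live preview. Variable names and values here are illustrative assumptions, not the product's real data model:

```python
# Hypothetical token stream from the builder's typed menus: each token is
# (kind, value), so only valid variables and operators can appear.
tokens = [("var", "scope1_stationary"), ("op", "+"), ("var", "scope1_mobile")]

values = {"scope1_stationary": 120.5, "scope1_mobile": 30.25}  # illustrative tCO2e

def render_plain_english(tokens):
    # The readable sentence shown under the formula.
    words = {"+": "plus", "-": "minus", "*": "times", "/": "divided by"}
    return " ".join(words[v] if kind == "op" else str(v) for kind, v in tokens)

def preview(tokens, values):
    # Live preview value: substitute variables, then evaluate.
    expr = "".join(str(values[v]) if kind == "var" else v for kind, v in tokens)
    return eval(expr)  # acceptable here: tokens come from typed menus, not free text
```

The key property is that the typed menu makes malformed formulas unrepresentable — the same guarantee the visual builder gives a non-technical sustainability lead.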
Data Owners had three separate inboxes — KPI inputs, evidence requests, review comments — each with its own list and notification. I collapsed them into one My Tasks panel with type filters. One place to start the day, one queue to clear.
Audit trail used to be a separate report. I moved it into a persistent right-side panel beside every KPI and disclosure record — every change, comment, approval, evidence upload, in time order. Reviewers stop asking "what changed?" because the answer is right there.
| Area | Result |
|---|---|
| Information architecture | Pivoted from feature-led nav to material-topic-led IA — owners and regulators now share a model |
| KPI Finalization Status | Workflow state and risk state visible at row-level — no drill-in required |
| Expression Builder | 0→1 token-based pattern shipped — non-technical users built derived KPIs in first UAT session |
| My Tasks panel | Three separate inboxes collapsed into one cross-module queue |
| Audit feed | Always-visible side panel — reviewers stopped requesting "what changed" reports |
| Time saved | ~4+ weeks of annual manual collection effort reclaimed per program team |
| Design system | 15+ governed components: framework badges, KPI cards, owner chips, evidence drawers, audit timelines, expression tokens |
In regulated software, the audit trail isn't a feature. It's the product.
The hardest argument was the simplest one: that an audit trail belongs in the same view as the work, not in a quarterly export. Tesler's Law again — someone always carries the complexity. Putting it on the system meant fighting for screen real estate every sprint. Worth it.
I'd have run the framework-mapping research with the auditors earlier. We learned which tags actually mattered for assurance opinions in week ten. That should have been week one.
An internal lead-collaboration tool for Marketing Incubator teams. Built from zero to replace the spreadsheet, the Slack thread, and the "wait, who was talking to her?" — all at once.
Champions Builder mobile — primary flows. Four screens (home dashboard, open tickers list, new contact form, recent conversations log) show the mobile rebuild from a sales rep's pocket. Open Tickers KPI cards (Past Due · In Progress · Need Action) sit above quick-action chips (+ New Contact · My Contacts · Recent Conversations · Log Conversations). The New Contact form chunks contact, company, file upload, LinkedIn, and attributes (Archived · Untracked · Ineligible) into one task surface so reps capture leads without bouncing between apps.
Champions Builder mobile — search, filter, and ticker management. Three screens (advanced search with Contact / Conversations toggle, date-filtered conversation log, View / Edit Ticker detail). The ticker detail surfaces every signal a sales rep needs in the field — Ticker Nr, Logged By, Status, Recipient, Last Conversation, Follow-Up Action, Close Ticker / Notify Changes — without requiring a return to desktop. The mobile app is the desktop's full peer, not a stripped-down companion.
Marketing Incubator teams ran lead pursuits across conferences, calls, intros, and warm reach-outs. The work was fast. The handoffs were not.
Champions Builder is the internal product Marketing Incubator teams use to collaborate on leads — capture new contacts in the moment, find existing contacts before someone else cold-emails them again, log conversations so the next person knows what was said, and manage targeted distribution lists without rebuilding them every campaign.
The brief: a mobile-first workspace that's faster to update than a Slack message and more reliable than anyone's memory.
Notes lived in DMs. Contact details in spreadsheets. Status in someone's head. Three teammates would each cold-email the same person before realizing two others had already met her.
When someone went on leave or rotated off a pursuit, the relationship reset to zero. No record of what was promised, by whom, or when the next nudge was due.
Building a targeted distribution list meant a fresh export, a fresh filter, a fresh "wait, is this list current?" — every single time a campaign went out.
A lightweight CRM surface — without becoming Salesforce. Fast to update, ambient to read, status-first by default.
The trap with internal tools like this is the slow drift toward becoming a real CRM — fields beget fields, and within six months nobody updates it. The product had to stay aggressively narrow: capture, search, log, affiliate, list. Five verbs. Status visible without a click.
Stakeholder interviews · workflow shadowing · persona modeling · entity mapping · task-flow analysis.
The journey turned out to be a loop, not a funnel: meet someone → check if they're already in the system → add or update → log what was said → tag the affiliation → drop them on a list → follow up before someone else does. Every stage failed in the same way — context loss between people.
The bigger finding wasn't about screens. It was about workflow architecture: which actions deserved the home screen, which fields could wait behind a "more" tap, and how status had to read at a glance without anyone opening a record.
"The contact record wasn't the product. The product was the relationship history — and it had to travel with the contact across every person who touched it."
The first thing you see on launch isn't a search bar or a feed — it's your open tickers, sorted by what's overdue. Past-due and in-progress states are scan-level signals, not metadata you have to drill in to discover.
The "Add Contact" flow opens with search, not a blank form. Type the name, see if they're already there, then either reuse or create new. Eliminates the duplicate-contact problem at the entry point — where it actually starts.
Every contact has a chronological log: what was said, who said it, what was promised, what's next. Quick-add at the bottom, expandable detail above. The handoff that used to take a 15-minute call now takes a scroll.
Affiliations (contact ↔ company ↔ program) and distribution lists are two views of the same underlying graph. Build a list once, the affiliations keep it current. No more "is this list still good?" exports.
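"Lists as a view of the affiliation graph" reduces to computing membership by query instead of storing it. A minimal sketch with hypothetical names and programs:

```python
# Hypothetical edge set: each affiliation links contact, company, program.
affiliations = {
    ("Ana Ruiz", "Acme Corp", "Incubator-2024"),
    ("Ben Cole", "Acme Corp", "Incubator-2024"),
    ("Ana Ruiz", "Globex", "Pilot"),
}

def distribution_list(affiliations, program):
    # A list is a live query over the graph, not a frozen export.
    return sorted({contact for contact, _, prog in affiliations
                   if prog == program})

current = distribution_list(affiliations, "Incubator-2024")
# A new affiliation shows up on the next read, with no re-export:
affiliations.add(("Cam Diaz", "Initech", "Incubator-2024"))
updated = distribution_list(affiliations, "Incubator-2024")
```

Because membership is derived at read time, the "is this list still good?" question disappears by construction.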
Ticker cards, status pills, search bars, tabbed accordions, inline CTAs — defined once, reused everywhere. Kept the interface compact and predictable as scope grew. Adding a new module became a configuration job, not a redesign.
| Area | Design result |
|---|---|
| Open work | Tickers and overdue states became the home screen — no more digging to find what you owe |
| Contact capture | Search-first add flow killed the duplicate-contact problem at the source |
| Collaboration | Conversation log replaced individual recall — handoffs survived rotations and PTO |
| Distribution lists | Lists became a query over the affiliation graph, not a static export — always current |
| Mobile-first IA | Tabs, bottom nav, accordions, contextual CTAs — usable in an elevator |
| Design system | 10+ reusable patterns for status, search, cards, forms, lists, and workflow actions |
Internal tools fail by becoming too useful — every "small" addition is the start of a CRM. Saying no is the design.
Champions Builder is end-to-end 0→1: discovery, personas, journey, IA, workflow modeling, mobile UI density, component scale. The decision I'm proudest of is the one that doesn't show on screen — the features I argued against. Stages, custom fields, opportunity values, forecast dashboards. All requested. All declined. The product stayed a shared brain instead of becoming Salesforce-Lite.
The natural v2 is ambient intelligence — follow-up nudges based on log timestamps, owner-based queues, and a prompt when a conversation says "I'll send it next week" but next week came and went.
End-to-end UX/UI design for a unified AI workspace — streamlining agent discovery, interaction, and management across enterprise teams.
Team GPT landing — agent marketplace organized by department (Product Engineering shown). Agents like Code Evaluator and Sample Agent are surfaced as cards with descriptions, action menu (Open in new window, Favorite), and a persistent Ask General AI input. Knowledge Base search anchors the top so users can pull the right context before launching an agent.
Recents — the same marketplace filtered to agents the user has touched. Reduces re-discovery cost on a daily workflow; familiar agents stay one click away while the wider catalog stays browsable.
Favorites — agents pinned for power-user workflows. Remove from Favorites lives inline in the action menu so curation is as fast as discovery.
Chat interface — the agent in action. User intent (Statement of Work prompt) sits in a styled bubble; agent response is structured (To proceed, please provide: Executive Summary, Scope of Work, Service Offerings) so users know what's missing before they retry. Bookmark and copy actions sit inline on the response itself.
Filters panel — model and parameter governance built into the chat surface. Model selector (Claude-4-Sonnet), Intelligence toggles (Long Context, Extended Thinking), Interaction controls (Remember Chats, Send on Enter), Parameters (Temperature, Max Words), and Style (Tone, Format) — enterprise-level configuration without leaving the conversation.
Bookmark — prompt and response library with folder organization. My Prompts and Saved Responses tabs, New Folder action, and a working tree (UX/UI Design folder expanded showing Ultra Modal and sidebar, Genny AI Agentic Hub UX, Platform Login UX/UI, REACT NEXTJS Tutorial). Reuse-by-default replaces re-typing-by-default.
New Prompt Folder modal — focused folder creation without losing the bookmark context. Name, Folder dropdown for nesting, optional Prompt body, New Folder shortcut for chained creation. Modal stays minimal so capture doesn't break flow.
History — every conversation captured with prompt, response preview, timestamp, and a one-click Resume action. Today and Last 30 days tabs filter the list. Conversations stop being ephemeral; workflows become resumable.
AI usage was fragmented, ungoverned, and invisible — scaling risk faster than scaling value.
Enterprise teams were using AI tools across disconnected surfaces — GPT instances, Copilot apps, custom agents — with no centralized ecosystem. Governance didn't exist, so outputs were inconsistent and risk exposure grew every month.
Agents were hard to discover, capabilities were unclear, and users had no structured way to reuse prompts or continue prior conversations. Every session started from zero.
Reduced efficiency in AI adoption because users couldn't find the right agent for the task, and had no trust that the outputs were governed.
Increased operational friction from context-switching between tools and re-creating prompts that had already been written 20 times across the team.
Missed opportunity for a scalable, customer-facing AI platform — internal infrastructure could have been the product, but wasn't packaged for external subscribers.
No continuity across conversations. No control over AI responses. No reuse of prompts or outputs.
At its core: users had power without structure. They could generate anything, but couldn't reliably retrieve, reuse, or govern what they had generated. The product was an engine without a workspace around it.
Internal teams used agents for real workflows: project planning, code evaluation, task execution — not synthetic scenarios.
I shadowed engineers, product teams, and internal enterprise users running real AI workflows — then interviewed them about friction points. I mapped actual interaction patterns across prompts, responses, and reuse behavior to ground every design decision in observed reality.
"Users weren't asking for more AI. They were asking for somewhere to put the AI they already had — with memory, structure, and guardrails."
Designed a centralized agent marketplace (All, Recents, Favorites) with a card-based layout for quick scanning and selection. Inline actions (open, favorite, expand) let users act without losing context. Agent capability was surfaced on the card, not hidden behind a click.
Structured conversational interface for prompt → response workflows. Clear separation of user input vs AI output. Enabled contextual responses with "Remember Conversations" so continuity carried across sessions — not lost every time a tab closed.
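The continuity mechanism can be sketched as a per-user store whose prior exchanges are replayed into each new turn's context. The function names and reply format are illustrative assumptions, not the product's API:

```python
# Hypothetical continuity store: with "remember" on, prior exchanges are
# fed into the next turn's context instead of starting from zero.
sessions = {}

def send(user, message, remember=True):
    history = sessions.setdefault(user, [])
    context = list(history) if remember else []   # what the model would see
    reply = f"[{len(context)} prior exchanges in context] ack: {message}"
    history.append((message, reply))              # persist this exchange
    return reply

first = send("pm@example.com", "Draft the SOW outline")
second = send("pm@example.com", "Add a pricing section")
```

Turning `remember` off reproduces the legacy behavior — every session starts at zero — which is exactly the failure mode the design removed.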
Designed a bookmark system for saved prompts and reusable knowledge. Folder-based organization for scalability. Quick access to frequently used workflows. Users stopped rewriting the same prompts across teams — reuse became the default, not the exception.
Introduced model selection, parameters, and response controls (temperature, token limits, memory). Provided transparency and control over AI behavior. Enabled enterprise-level configuration across tools — turning AI from a black box into a governed layer.
Designed conversation history with resume capability. Users could continue workflows without restarting context. Simplified workflows into structured, repeatable patterns. Reduced ambiguity with clear system feedback and states. Balanced flexibility with guided interactions — power users got depth, new users got the rails.
The behavior shift users named was the one the research pointed to from day one.
During internal rollout, a product manager summed up the shift:
Not "the AI is smarter." Not "the interface is cleaner." Compounding trust. Prompts had become assets — reusable, governed, and stacked across workflows.
| Area | Result |
|---|---|
| AI workspace | Unified platform integrating agent discovery, interaction, and management |
| Workflow continuity | Seamless flow: prompt → response → reuse → continuation |
| Governance | Centralized control — model selection, parameters, response controls embedded into configuration layer |
| Scalability | Supports internal enterprise use and external subscribers |
| Adoption | Increased adoption of AI tools across enterprise teams |
| Multi-agent friction | Reduced through prompt reuse and workflow continuity |
| Platform readiness | Established the foundation for a scalable, customer-facing AI product |
AI design isn't about the model. It's about the workspace around it.
The hardest decisions were restraint decisions — holding the line against AI feature creep. Every new capability risked adding cognitive load without adding value. Choosing memory over novelty, choosing governance over velocity, choosing reuse over generation. Each one required trading perceived "AI magic" for real workspace utility.
I'd ship a team-level knowledge graph earlier. Prompt reuse worked at the individual level — but the collective intelligence of a team should compound in a visible surface. That's the next chapter.
End-to-end UX redesign of a global enterprise EHS/ESG platform — trusted by 4M+ users across 35+ industries. Transformed a site-first, cognitively overloaded legacy system into a personalized, AI-augmented Digital Home experience across web, tablet, and mobile.
B2B Platform Digital Home — the unified landing surface for Benchmark Gensuite enterprise customers. Welcome banner anchors identity; a tab system (Start Record · Look Up · Quick Access) collapses dozens of legacy entry points into three intent-first paths. Six primary action tiles (Concern · Workflow · Action · Injury · Audit · Observation) cover the EHS workday. Take Action stack queues live audit completions inline. Genny AI right rail makes intelligence accessible without breaking the workflow.
B2C white-labeled customer experience — Adani OHS shown. The same Digital Home surface re-skinned with the customer's brand identity (logo, color palette, hero photography of their operational sites). Same task-first IA, same Genny AI assist, same Take Action queue — but the surface feels like the customer's own platform. Multi-tenant brand layer that scales without forking the product.
Active user landing — when someone has work in flight. Start Menu (Log New · Resources · Bulletin Board) brings frequently used apps to the top. My To-do panel (3 items) shows audit completions assigned to the user; Watch List and My Activity tabs let them switch lens without leaving the page. My Apps panel below organizes the user's full toolkit (Concern · Mapper · Inspection · Action) for quick relaunch. AI-assisted workflows are everywhere AI helps and invisible everywhere it doesn't.
Personalized user home with insights — My Personal Home. My To-do donut chart visualizes work distribution (Open vs Past Due vs Closed); Start Menu chunks discovery into Log New · Resources · Bulletin Board with named app tiles (Med Care · Industrial · Mapper · Action · Equipment · Concern · Incident · Contractor); My Apps panel below adds personal saved apps (Permit to Work · Sustainability Prospector · Contractor Manager · Safety Dialogue · Compliance Calendar · Doc Manager · Maintenance Manager · Water Watch · Reg Auditor · Waste Tracker · Sustainability Projects · Air Log Tracker). Insights chart visualizes work over time. The system grows with the user, not against them.
Organized around the system’s own architecture — not around how users think about their work.
Benchmark Gensuite had built a powerful EHS platform over 25 years — trusted by Fortune 500 clients for compliance-critical operations. But its interface had drifted into a site-first structure: users had to understand the system before they could complete their work.
For frontline workers operating in high-stakes environments — safety inspectors, field teams, infrequent users — this created dangerous friction where there should have been clarity. The business signal was equally sharp: client churn risk, low frontline adoption, a mobile experience broken for field use, and a competitive gap against modern EHS platforms.
Design opportunity: If users had to understand the system before they could complete their work, the system was organized for the product — not the person. The redesign had one north star: make the work come to the user, not the other way around.
Daily platform user. Needs cross-program visibility, reporting, and team action management. High digital literacy but time-pressured.
Design implication: Needs a command center, not an app launcher. Dashboard-first, real-time status, no hunting.
Infrequent or task-specific user. Accesses platform primarily on mobile to log concerns, complete audits, or action assigned tasks. Low tolerance for friction.
Design implication: Every extra tap is a failure. Tasks must be reachable in one action. AI assistance essential for complex record completion.
Manages platform setup, user permissions, and client branding. Infrequent deep-use. Needs control without complexity.
Design implication: Personalization must be powerful enough for admins to configure but invisible enough that end users never feel it.
Designing only for the EHS Manager meant failing the frontline worker and the compliance contributor.
All three personas had to succeed on the same platform — but each had fundamentally different relationships to it. The legacy experience treated them identically. The redesign had to treat them as three different products with one shared foundation.
Every design decision in this project traces back to a research finding. Methods were chosen to surface the gap between what users said, what they did, and what the system provided.
I owned research end-to-end — from method selection through synthesis through stakeholder communication. The highest-leverage finding didn’t come from interviews. It came from contextual inquiry in field conditions — watching a safety inspector try to log a concern on mobile while wearing gloves in variable lighting. Mobile-first wasn’t optional; it was a compliance risk if we didn’t deliver it.
These four themes became the four solution pillars of Digital Home.
"I need to access the Incident Management app" became "I need to log a concern before I forget the details." Different statement. Entirely different design response.
Task-first · Role-aware · AI-augmented · Brand-coherent · Frontline-ready · Clear signal over noise.
The biggest leverage in enterprise UX wasn’t the interface — it was establishing shared design principles before a single screen was drawn. Principles created alignment speed, gave the team a basis for saying no to scope creep, and kept a 25+ module platform coherent across years of iteration.
Designed Start Record · Look Up · Take Action · Quick Access as the four primary entry points. Replaced app-by-app access with one routing action. "Start Record" takes the user into Concern, Injury, Action, Audit, or Observation without requiring system knowledge. "Take Action" surfaces a prioritized, status-aware task list (Past Due, Open) so users triage at a glance.
Designed a client-level layer (brand colors, logo, background imagery per enterprise client) and a user-level layer (pinned apps, Quick Access shortcuts, role-based defaults). The platform feels like a native product of the client’s organization — not generic SaaS dropped into their environment. Powerful enough for admins to configure, invisible enough that end users never feel it.
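The two-layer model above can be sketched as a simple precedence merge — platform defaults, overridden by the client brand layer, with user-level personalization layered on top. This is an illustrative sketch only; the function and field names (`resolve_home_config`, `pinned_apps`, the example values) are assumptions, not the product's actual API.

```python
# Hypothetical sketch of the two-layer personalization model:
# platform defaults < client-level brand layer < user-level preferences.
PLATFORM_DEFAULTS = {"logo": "benchmark.svg", "primary": "#005587", "hero": "default.jpg"}

def resolve_home_config(client_theme: dict, user_prefs: dict) -> dict:
    """Merge layers in precedence order; later layers win."""
    config = dict(PLATFORM_DEFAULTS)
    config.update(client_theme)                                  # client brand: logo, palette, imagery
    config["pinned_apps"] = user_prefs.get("pinned_apps", [])    # user layer: pins, shortcuts
    config["quick_access"] = user_prefs.get("quick_access", [])
    return config

# e.g. a white-labeled tenant (values invented for illustration):
tenant = resolve_home_config(
    {"logo": "adani-ohs.svg", "primary": "#0B74B0", "hero": "adani-sites.jpg"},
    {"pinned_apps": ["Concern", "Audit"]},
)
assert tenant["logo"] == "adani-ohs.svg"             # brand layer overrides the default
assert tenant["pinned_apps"] == ["Concern", "Audit"] # user layer survives the merge
```

The design point the sketch makes: multi-tenant branding is data, not a fork — one merge function scales to every client.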
I initiated and designed how Genny AI exists within Digital Home — persistent right-rail positioning, default-on, with proactive alerts, action summaries, generative AI features, and natural-language search from a single consistent entry point. AI assistance that requires users to seek it out gets ignored. Genny is always in peripheral view — available when needed, never intrusive when not.
The legacy mobile app was a fragmented, leaky experience. I rebuilt it end-to-end as a unified, persona-aware mobile platform with the same Digital Home logic — task-first, role-aware, client-branded, with Genny AI accessible on every screen. Critical flows rebuilt with clear progress, error recovery, and offline capability for field use. The mobile app is not a reduced version of the platform — it is the platform, optimized for the device.
System-centric jargon was a primary source of cognitive load. I rewrote the platform’s core navigation and action vocabulary from scratch — replacing system language with human, outcome-oriented phrasing. Every label written to meet users where they are, not where the system expects them to be. "Start Record" instead of "Incident Management App." "Look Up" instead of "Global Search." UX writing was scoped as a core workstream, with ownership.
The redesign shipped as "My Home" to the full subscriber base after a successful beta. Community response was immediate.
From real EHS leaders using the product post-launch:
Users named the two design principles the redesign was built on — personalization and task-first visibility — in their own words, unprompted. That's validation: the research predicted it, and the design delivered it.
| Before | After |
|---|---|
| Site-first navigation requiring system knowledge | Task-first homepage — work surfaces immediately |
| Uniform experience for all roles | Role-aware, client-personalized Digital Home |
| No AI — manual, multi-step workflows | Genny AI persistent across all screens |
| Fragmented legacy mobile app | Unified persona-driven mobile redesign |
| No brand identity — 25 years of visual drift | Cohesive design system at enterprise scale |
| System-centric jargon throughout | Human UX writing — owned end-to-end |
UX metrics instrumented for ongoing measurement: task completion rate, time-on-task reduction, navigation error rate, feature adoption by role, mobile session depth, Genny AI engagement rate per user group.
The biggest leverage in enterprise UX is shared design principles — established before a single screen is drawn.
Senior design leadership means knowing what you'd push further — not just celebrating what shipped. Two areas I'd take deeper:
Deeper usability testing with infrequent users — particularly those accessing the platform monthly for compliance tasks — to measure whether Digital Home truly eliminates re-learning friction at scale across diverse industries and geographies.
The Genny AI onboarding flow — specifically designing role-aware first-run experiences where Genny proactively introduces itself based on the user’s persona, reducing time-to-value for new platform subscribers.
| Deliverable | Ownership |
|---|---|
| UX research & synthesis | Led end-to-end |
| UX writing & content strategy | Owned end-to-end |
| Personalization architecture | Owned end-to-end |
| Genny AI placement & interaction model | Initiated & led |
| Mobile app redesign | Owned end-to-end |
| Visual design & brand system | Owned end-to-end |
| Design system & component library | Owned end-to-end |
| Stakeholder alignment & design leadership | Led throughout |
Defined and designed Ultra — an AI-first experience layer that transforms enterprise software from navigation-heavy interfaces into intelligent work execution surfaces. Ultra sits on top of existing applications and answers one question exceptionally well: "What do I need to do right now?"
Ultra EHS Home — task-first landing with My Live AI Feed surfacing investigations, verifications, and resolved concerns. My Actions chunks workflow into Open, Past Due, Active, and Overdue counts. Genny AI lives in the right rail, always one click from context.
Action Tracking with Data Mining — switch from list to AI-assisted analysis in one click. Status tiles (Total, Open, Past Due, Closed, Closed Past Due) anchor the surface; Genny picks up the dataset, runs record and field analyses, and returns recommendations inline.
Log Request — Standard Form. Three-step progress (Request Details → Response Plan → Response Details) captures full requester context. Genny offers contextual writing help on every field without breaking the flow.
Log Request — Compact Form. Pre-filled rows let teams batch-log up to 10 actions at a time. Action Date, Origin, and Need / Issue Description condense into one row. Writing Help powered by Genny accelerates ambiguous fields.
Users don’t operate in systems. They operate in jobs-to-be-done. Software still asks them to learn the system first.
Enterprise EHS work lived across fragmented systems — Incident Management, Action Tracking, Concern Reports, Calendar, Inspections. Every one of them asked users to first understand where their work lives before they could act on it.
For frontline operators executing under time pressure, functional leaders tracking accountability across programs, and executives needing risk visibility at a glance — navigation had become the bottleneck. Not the features. Not the data. The interface itself.
Ultra was a 0→1 product definition — a new experience layer sitting on top of existing applications. The strategic restraint mattered: we weren’t rebuilding 25+ modules. We were replacing the question users ask when they open the product.
Technicians, operators, field workers executing tasks safely. Don’t care which system work lives in — they care about getting it done. Low tolerance for navigation. Mobile-first. Context switching kills productivity.
Site leaders, program owners accountable for outcomes. Need cross-program visibility with clear ownership and status. Move between oversight and action throughout the day. Need drill-down without friction.
Senior leaders owning enterprise risk and narrative. Need clear, credible visibility into risk posture. Care about trends, not transactions. Need assurance, not operations. Mobile-accessible, presentation-ready.
High cognitive load. Manual review overhead. No prioritization. Missed or delayed actions.
The same platform had to serve all three — but at completely different abstraction levels. Designing one default homepage and assuming it would work for everyone would fail all three differently. Ultra had to adapt to the user, not the other way around.
Contextual interviews · Workflow decomposition · Task analysis · Heuristic audit · JTBD framing.
I ran contextual interviews across Operational, Functional, and Executive users. Decomposed actual workflows across ATS, IM, CR, and CC. Measured time-to-action and mapped cross-system dependencies. Audited the legacy IA, feedback loops, and prioritization gaps.
The synthesis surfaced one reframe that reset the entire problem space:
"From navigation system to execution system. From 'Where do I go?' to 'What should I do?'"
Ultra became the starting point of work — surfacing Tasks · Insights · Records directly. Users no longer need to know which app holds which record. Ultra understands role, permissions, and ownership, then routes intent into the right place. The app grid still exists — but users rarely need it.
The Live Feed surfaces a dynamic stream of critical actions, risks, and deadlines powered by role context, behavioral signals, and AI ranking logic. Every card has a priority state (In Progress · High Priority · Completed) and — crucially — an inline action. Users triage and execute from the same surface.
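The ranking logic itself isn't exposed to users, but its shape can be sketched. A toy scorer, assuming overdue pressure, card status, and ownership as the signals — every weight here is invented for illustration, not the shipped ranking model:

```python
from datetime import date

def feed_score(card: dict, today: date) -> float:
    """Toy Live Feed ranking: overdue, high-priority, user-owned work floats up."""
    days_overdue = (today - card["due"]).days
    score = max(days_overdue, 0) * 2.0                                        # past-due pressure
    score += {"High Priority": 5, "In Progress": 2, "Completed": -10}[card["status"]]
    score += 3 if card["owned_by_user"] else 0                                # ownership signal
    return score

cards = [
    {"id": "verify-b11", "due": date(2025, 1, 10), "status": "In Progress",   "owned_by_user": True},
    {"id": "resolved",   "due": date(2025, 1, 20), "status": "Completed",     "owned_by_user": False},
    {"id": "escalate",   "due": date(2025, 1, 1),  "status": "High Priority", "owned_by_user": True},
]
ranked = sorted(cards, key=lambda c: feed_score(c, date(2025, 1, 15)), reverse=True)
assert ranked[0]["id"] == "escalate"   # most overdue + high priority ranks first
```

The design requirement the sketch encodes: completed work sinks, and the top of the feed is always actionable.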
Every Live Feed card has a clear execution path — Verify · Resolve · Log · Escalate — directly from the surface. No searching, no switching systems, no multi-step flows. Paired with a dashboard of workflow counts ("8 Open · 2 Past Due") that are themselves entry points into prioritized views.
Genny AI is contextually embedded, not hidden behind an icon. She initiates — reaching out first ("I see your lighting concern in B11 was resolved. Can you also let me know if that Zone C rail repair is still solid?") — not waiting for users to ask. Interaction is conversational, context-aware, and embedded within tasks.
Ultra loads differently for each persona. Operational users see task-first, mobile-optimized layouts. Functional leaders get cross-program visibility with bottleneck detection. Executives see AI summaries, risk signals, and drill-down only when needed — never required. Same platform, three entirely different first impressions. Layout presets are determined by role APIs; the content within them is AI-determined.
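The preset idea reduces to a small mapping from role to layout configuration. A minimal sketch — preset names and fields are assumptions for illustration, not the actual role API contract:

```python
# Hypothetical persona → layout preset mapping behind the "role APIs" idea.
LAYOUT_PRESETS = {
    "operational": {"default_view": "my_tasks",      "mobile_optimized": True,  "ai_summary": False},
    "functional":  {"default_view": "cross_program", "mobile_optimized": False, "ai_summary": False},
    "executive":   {"default_view": "risk_summary",  "mobile_optimized": True,  "ai_summary": True},
}

def home_layout(role: str) -> dict:
    # Unknown roles fall back to the safest, most task-first preset.
    return LAYOUT_PRESETS.get(role, LAYOUT_PRESETS["operational"])

assert home_layout("executive")["ai_summary"] is True
assert home_layout("contractor")["default_view"] == "my_tasks"   # fallback path
```

Keeping layout as data (not role-specific screens) is what lets one platform present three first impressions without forking.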
Genny AI persists across lifecycle stages — Incident → Investigation → Closure → Summary. She tracks state, surfaces next steps, and maintains continuity across the personas handing work off to each other. When an Operational User provides proof of closure, Genny proactively notifies the Functional Leader. When the Leader rejects it, Genny brings the feedback back to the originator. The AI becomes the connective tissue between handoffs.
Users stopped asking "where do I go?" and started asking "what’s next?"
The goal of Ultra was never features. It was this exact moment:
Three personas. Three different relationships to the same platform. One experience layer designed to meet each of them where they are.
| Dimension | Shift |
|---|---|
| User impact | Reduced cognitive load across workflows · Faster task execution · Increased confidence in decisions |
| Product impact | Navigation system → Execution system · Increased cross-application engagement · Reduced workflow fragmentation |
| AI impact | Tool → Embedded workflow intelligence · Real-time assistance · Predictive task surfacing |
| Entry point | App launcher → Ultra Home as the default start-of-day surface |
| Prioritization | Manual interpretation → AI-ranked Live Feed with transparency on why |
| Execution | Multi-step, multi-app flows → Inline actions from the surface itself |
| Continuity | Disconnected handoffs → AI-threaded workflows across the incident lifecycle |
Designing for AI is designing for a new relationship between user and system — not a new feature.
Ultra is a 0→1 product definition — not a UI refresh. The work that moved it from concept to shipped platform happened in three layers: IA transformation (from site-first to job-first), interaction architecture (inline execution replacing navigation), and AI orchestration (Genny as a persistent, proactive co-pilot across the lifecycle).
The senior design leverage in AI products is resisting the temptation to bolt AI on. The best AI experiences are the interface. Users shouldn’t need to find the AI — the AI should find the work.
Onboarding for agentic AI is still an unsolved problem. First-run experiences where Genny proactively introduces herself based on persona — and earns trust by demonstrating value within the first minute — are the next frontier. Without that, even the best AI co-pilot risks being dismissed as a chatbot.
Measuring behavior change at scale: tracking the specific moment a user shifts from occasionally navigating to Ultra to starting their day in Ultra. That behavioral shift is the real success metric — not feature adoption, not click-through, not time-on-task. It's whether users reach for Ultra first.
| Area | Ownership |
|---|---|
| 0→1 product definition & vision | Led end-to-end |
| UX research across Operational, Functional, Executive personas | Led end-to-end |
| IA transformation (site-first → job-first) | Owned end-to-end |
| AI interaction architecture (Genny placement & behavior) | Initiated & led |
| Role-based experience layer system | Owned end-to-end |
| AI-threaded workflow design (lifecycle continuity) | Owned end-to-end |
| Cross-functional alignment (Product, Engineering, AI, Stakeholders) | Led throughout |
| HX maturity framework & principles | Defined & led |
Led ground-up research, product design, and AI interaction architecture for a new AI-powered disclosure management platform. Built from the ground up with AI at the core — evolved from a recommendation-based "Suggestions" model into a fully agentic "Automated Intelligence" system that drafts, applies, and files responses at scale. Winner of three industry awards including TITAN Gold & E+E Leader.
Responsio Admin Dashboard — the command center for the disclosure response lifecycle. Four KPI tiles (30 New Requests, 245 In Progress, 15 Past Due, 400 Delivered) anchor the page; My Tasks queue surfaces drafts and reviews assigned by role; My Responses in Progress table shows priority, requester, section lead, and progress; Activity Feed captures every engagement with requests and responses.
Index Repository — searchable history of every disclosure response the organization has ever submitted. Filters across Section, Organization, Scope, and Section Lead let teams stop rewriting answers from scratch. Active filter chips, Expand All Data, and Export turn the repository into a real reuse engine.
Add Past Request / Response — a focused side panel for ingesting historical Word, Excel, or PDF responses into the repository. Requesting Organization, Date Finalized, Scope, drag-and-drop file zone with upload progress and Upload-from-URL alternative — designed so legacy backlogs become AI-ready in minutes, not weeks.
Response Details — three-step authoring flow (Request Details → Response Plan → Response Details) with section-level governance. Sections show Section Lead, Due Date, and Priority inline; Manage Section and Add Questions actions sit at the row level. Submit for Review at the bottom is the only path forward — governance built into the workflow.
AI-assisted attachment intake — Genny AI offered as the easier path on the Add Batch Attachments panel. The right-side request shows the response in progress (12% Completed) so users can see Genny's contribution land directly inside the active workflow. Trust grows from visibility before autonomy.
Attachment workflow — uploaded source document opens inline in the left rail (GRI Standards Glossary 2022.pdf) while the response sections stay live on the right. Reviewers verify the source, the answer, and the section assignment without ever leaving the request — three jobs, one surface.
Genny AI Automated Intelligence — when the user authors a response, Genny surfaces four suggested past responses pulled from the Index Repository, each with confidence, source citation, and a one-click Apply Response action. Disclosure response becomes retrieval and reuse, not regeneration.
Spreadsheets. Lost threads. Missed deadlines. Repeated answers across 40+ stakeholder types.
ESG, Sustainability, and EHS teams at enterprise scale are under relentless pressure. Customer questionnaires, investor ESG surveys, regulatory filings, supplier audits, DEI disclosures — requests come from every direction, in every format (Word, Excel, PDF), demanding consistent, high-accuracy responses under tight deadlines.
The pre-Responsio workflow was broken: teams built disclosure answers from scratch every time, hunted through email threads for prior responses, and manually copy-pasted across spreadsheets. The same factual answer was being rewritten four, five, six times a year across different templates. Lean teams. Rising demand. No system.
Responsio was a new product definition — conceived, researched, designed, and shipped as a ground-up solution. Built with AI at the core from day one, not bolted on after.
Owns the disclosure queue. Logs incoming requests, assigns ownership, tracks deadlines. Today: spreadsheets + email + status pings. Design implication: Needs a dashboard that shows request state across the lifecycle at a glance, with clear ownership and priority signals.
Answers specific domain questions — emissions, governance, supplier practices. Today: asked the same questions across requests. Design implication: Needs AI-powered recall of prior approved answers and the ability to customize them, not re-write from scratch.
Final approver accountable for accuracy and brand voice. Today: reviews drafts across email with zero traceability. Design implication: Needs a consistent review surface with version history, approval workflows, and automated notifications.
Inaccurate or inconsistent disclosures create business and reputational risk — the stakes aren’t just efficiency.
Sustainability disclosures are investor-grade data. An inconsistent answer across two customer questionnaires isn’t just embarrassing — it’s a potential compliance and reputational risk. The existing workflow optimized for throughput. Responsio had to optimize for throughput and consistency, simultaneously.
I led end-to-end research — stakeholder interviews, workflow decomposition, document audit, competitive scan, and jobs-to-be-done framing — before a single pixel was designed.
This wasn’t a feature refresh. It was a new product, which meant research had to earn the shape of the solution from scratch. The research had two jobs: (1) define the product surface, and (2) define the AI interaction model — specifically, how much to let AI initiate in a domain where accuracy is legally and reputationally consequential.
"Disclosure response isn’t a writing problem. It’s a retrieval & re-use problem with an accuracy gate. Which means AI’s job is to surface and adapt prior approved answers — not generate new prose from scratch."
The dashboard surfaces New Requests · In Progress · Past Due · Delivered as the primary information scent — matching how coordinators actually think about their work. Paired with "My Tasks" by stage (Draft Pending Completion, Sign-Off, Closure, Delivery, Review) so users see exactly what’s waiting on them, not the whole queue.
Every approved response becomes a repository asset — searchable, reusable, taggable by topic and source. The repository isn’t a feature hidden in a menu; it’s the spine of the product. AI retrieval, response suggestions, and drafts all trace back to this single library, which means users trust what AI surfaces because they authored it originally.
The first AI surface in Responsio was intentionally conservative: "Genny AI Suggestions" with a "Use Response" button. Each suggestion came with a visible rationale — "GennyAI recommends this AI-generated response based on a holistic analysis of all responses in the Response Repository." Users chose. AI didn’t act.
Post-launch, with trust established, I shipped the agentic evolution. "Suggestions" became "Automated Intelligence" with Apply Response — the AI now surfaces "4 Response suggestions found" and users can apply directly. The rationale layer stayed, but the interaction shifted: from "AI proposes, user composes" to "AI drafts, user approves." Same repository, same transparency, different autonomy level. This is the agentic pivot.
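The v1 → v2 shift can be expressed as a small presentation gate: same retrieval, same rationale layer, different default action. A sketch under that framing — all names (`present_ai_response`, the action labels' exact set) are illustrative, not the shipped implementation:

```python
def present_ai_response(suggestion: dict, autonomy: str) -> dict:
    """v1 'suggest': AI proposes, user composes. v2 'apply': AI drafts, user approves.
    The transparency layer (rationale + source) persists across both."""
    card = {
        "text": suggestion["text"],
        "rationale": suggestion["rationale"],   # never removed between versions
        "source": suggestion["source"],
    }
    if autonomy == "suggest":                   # v1: propose only
        card["actions"] = ["Use Response"]
    elif autonomy == "apply":                   # v2: draft lands, pending approval
        card["actions"] = ["Apply Response", "Edit", "Reject"]
        card["draft_applied"] = True
    return card

match = {"text": "…", "rationale": "repository match", "source": "prior approved response"}
v1 = present_ai_response(match, "suggest")
v2 = present_ai_response(match, "apply")
assert "rationale" in v1 and "rationale" in v2   # transparency survives the autonomy shift
assert v2.get("draft_applied") is True
```

The point the gate makes concrete: autonomy is a configuration of the same system, which is why trust earned in v1 transferred to v2.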
Research surfaced that 30-60 minutes was wasted just formatting incoming Word / Excel / PDF questionnaires into actionable question lists. I designed the ingestion flow: upload → AI parses structure → user confirms question boundaries → ready to respond. The first 30 minutes of every disclosure became 3 minutes.
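The ingestion flow's shape — upload, parse into question candidates, confirm, respond — can be sketched with a stand-in parser. The regex split below is a toy substitute for the AI parsing step, not the real model:

```python
import re

def parse_questions(raw_text: str) -> list[str]:
    """Stand-in for the AI parsing step: pull numbered questionnaire lines."""
    return [m.group(1).strip() for m in re.finditer(r"^\d+[.)]\s*(.+)$", raw_text, re.M)]

def ingest(raw_text: str) -> list[dict]:
    # Proposed question boundaries start unconfirmed; the user confirms before responding.
    return [{"question": q, "confirmed": False, "response": None} for q in parse_questions(raw_text)]

doc = """1. Describe your Scope 1 emissions methodology.
2) List supplier audit frequency.
Some boilerplate the parser ignores.
3. Attach your DEI policy."""
queue = ingest(doc)
assert [q["question"] for q in queue] == [
    "Describe your Scope 1 emissions methodology.",
    "List supplier audit frequency.",
    "Attach your DEI policy.",
]
```

The confirm step is the design-critical part: the AI proposes structure, but the user owns the boundaries before any response work begins.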
Responsio doesn’t live in isolation. I architected the integration surface so responses pull from (and push to) Disclosure Director, Sustainability Reporting, and Dashboards & Analytics. One shared data layer. A disclosure answer drafted in Responsio updates the enterprise’s central ESG data library — closing the loop between request-response and platform-wide reporting.
Responsio shipped and earned three awards in its first year. But the validation that mattered came from real users.
From a Benchmark Gensuite subscriber, unprompted, describing the shift:
And from internal leadership, naming the design intent exactly:
| Dimension | Measured outcome |
|---|---|
| Productivity per request | 4 hours saved per disclosure request (documented average) |
| Annual productivity savings | Up to $80,000 per organization using Responsio |
| Corporate-level savings | 4+ weeks of productivity saved annually at the corporate level |
| Industry recognition | TITAN Innovation Awards — Gold Winner (Emerging Technology) · E+E Leader Awards — Winner (Top Product: Software + Cloud) · BiBA 2025 — International Silver |
| Product evolution | v1 Suggestions → v2 Automated Intelligence (agentic) shipped |
| Integration footprint | Unified with Disclosure Director, Sustainability Reporting, Dashboards & Analytics |
In compliance-adjacent domains, AI autonomy is earned — not granted. Ship the trust layer before the automation layer.
Responsio is the clearest case I’ve shipped of progressive AI autonomy as a design strategy. Starting conservative (recommend + rationale), proving value, then evolving to agentic (apply + autonomous draft) once the trust data was in. This sequencing mattered more than any individual feature.
Agentic question clustering — surfacing, unprompted, "You’ve answered this question 14 times across 8 questionnaires. Would you like me to draft all future occurrences automatically?" This would move Responsio from a reactive tool to a proactive platform agent.
Cross-subscriber learning. Every Benchmark Gensuite subscriber has a unique Response Repository today. The next step — with consent and privacy boundaries — is sector-level pattern learning so a first-time CDP responder can benefit from anonymized industry patterns. The research and governance work here is as much of the design problem as the UI.
| Area | Ownership |
|---|---|
| UX research across ESG, Sustainability, EHS personas | Led end-to-end |
| Product definition & workflow architecture | Led end-to-end |
| Response Repository as single source of truth | Owned end-to-end |
| Genny AI interaction design (v1 Suggestions) | Owned end-to-end |
| Agentic evolution (v2 Automated Intelligence) | Initiated & led |
| Questionnaire ingestion flow | Owned end-to-end |
| Cross-product integration architecture | Owned end-to-end |
| Stakeholder alignment & design leadership | Led throughout |
Pure product design — no research phase. Redesigned the User Administration portal and license seats monitor for Benchmark Gensuite admins, introducing progressive capacity alerts, module-specific gauges, and in-flow seat-limit messaging that prevents failed add-user actions before they happen.
User Administrator Portal — landing surface unifies user roster, role coverage, application distribution, and seat utilization on one page. Total Users (199) and Total Roles Assigned (46) anchor the left rail; Users By Applications donut breaks down 800 seats across Cullet Manager, Disclosure Director, DXP, Responsio, and Risk AI Advisor; License Seats Monitor surfaces aggregate gauge (78% utilized) plus per-module bars where Disclosure Director and Responsio sit at 95% with red warning treatment.
Application Users — quick search and filter compress an enterprise roster into actionable rows. Active filter chips (Application: Risk AI Advisor · Assigned to: Saudi Sciences Studio · User Status: Active) make scope visible. Each row exposes user status, role assignments (Ultimate System Administrator · General User · Enterprise System Manager · Business Leader), and inline Entity Scope(s) Assigned actions so admins finish work without modal hops.
Add User — Limited Seats early warning. The modal shows live seat awareness tied to the selected application: 'Limited seats available — Only 1 seat(s) left. 199/200 seat(s) used.' The yellow strip lands above the Submit button so admins decide before they fail. Approaching-limit visibility prevents the support tickets that 'your add-user action failed' modals create.
Add User — Seat limit hard stop. When 200/200 seats are used, the system blocks the action with a red error and a clear next step: 'release a seat or contact your account manager to purchase additional seats.' Plain-language guidance replaces a generic error code; the workflow stays inside the same modal so the admin knows exactly what to do next.
Not a research project. A pure UX/UI redesign — solving a concrete, recurring admin frustration.
Benchmark Gensuite admins manage user access across multiple modules — Disclosure Director, Responsio, DXP, Risk AI, and more. Each module has its own seat license. The legacy interface showed a single blended number that obscured module-level exhaustion. Admins would only discover they’d hit a seat limit after trying to add a user — creating frustration, failed actions, and support tickets.
The design job: make capacity visible before it blocks work. Show admins exactly where they stand per module, warn them progressively, and communicate seat availability in every place a user might be added.
A single blended license count across all modules. An admin at 78% overall might be at 95% in Disclosure Director without knowing it — until an add-user action failed unexpectedly.
The Add User modal happily collected First Name, Last Name, Email, Application selections — then threw a seat-limit error on submit. Classic failed-transaction UX pattern that destroys trust.
When an admin did see they were out of seats, there was no clear next step. Just a blocked button and a dead end. No CTA to contact an account manager or request additional seats.
The Overview state shows the aggregate utilization gauge (e.g., 78% of 530 seats used) — the macro picture. A dropdown toggles to per-module view showing gauges for Disclosure Director, DXP, Responsio, Risk AI, each with their own utilization percentages and progress bars. Admins can see both the whole and the parts without leaving the surface.
Three states, three colors, three messages: green when capacity is comfortable, yellow when a module is approaching its limit ("Limited seats available — consider purchasing more seats soon"), red when the limit is reached and the action is blocked with a clear escalation path. Each state is readable at a glance. No interpretation required.
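The state logic is a small threshold function. A sketch — the 95% cutoff is borrowed from the red-treatment example in the gauges, but the shipped thresholds and exact copy are assumptions here:

```python
def seat_alert(used: int, total: int) -> tuple[str, str]:
    """Three-state capacity signal. Thresholds illustrative, not the shipped values."""
    if used >= total:
        return ("red", "Seat limit reached — release a seat or contact your account manager.")
    if used / total >= 0.95:                     # approaching-limit band (assumed)
        return ("yellow", f"Limited seats available: only {total - used} seat(s) left.")
    return ("green", f"{used}/{total} seat(s) used.")

assert seat_alert(199, 200)[0] == "yellow"   # the '1 seat left' modal state
assert seat_alert(200, 200)[0] == "red"      # hard stop
assert seat_alert(156, 200)[0] == "green"    # comfortable capacity
```

Because the same function drives the gauge, the modal strip, and the blocked state, the three surfaces can never disagree about capacity.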
The biggest UX shift. The Add User modal now shows live seat awareness tied to the Application(s) selected. When an admin selects "Disclosure Director" and that module has 1 seat left, a yellow warning strip appears above Submit: "Limited seats available: Only 1 seat(s) left. Consider purchasing more seats soon. 199 / 200 seat(s) used."
When the module is at 100%, the warning escalates to a red error strip: "You’ve reached the seat limit for this module. To continue, release a seat or contact your account manager." — and Submit is disabled.
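The two strips above boil down to a small tiering rule. A minimal sketch of that logic, assuming an illustrative "5 seats remaining" threshold for the yellow state (the shipped threshold isn't documented here):

```typescript
// Sketch of the Add User modal's seat-awareness logic.
// Threshold values and function names are assumptions, not the shipped code.
type SeatState = "ok" | "limited" | "blocked";

function seatState(used: number, total: number): SeatState {
  const remaining = total - used;
  if (remaining <= 0) return "blocked"; // red strip, Submit disabled
  if (remaining <= 5) return "limited"; // yellow warning strip
  return "ok";                          // no strip
}

// Copy templates mirroring the warning-strip strings described above.
function seatMessage(used: number, total: number): string {
  switch (seatState(used, total)) {
    case "blocked":
      return "You've reached the seat limit for this module. " +
             "To continue, release a seat or contact your account manager.";
    case "limited":
      return `Limited seats available: Only ${total - used} seat(s) left. ` +
             `${used} / ${total} seat(s) used.`;
    default:
      return "";
  }
}
```

The key design decision this encodes: the check runs on module selection, not on submit — the state is computed before the user has invested any effort.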
The Users By Application donut chart sits next to the totals, breaking down distribution across Cullet Manager, Disclosure Director, DXP, Responsio, Risk AI Advisor, User Management. Admins see the shape of their user base instantly — which modules are saturated, which are underused, where licensing is being wasted. Pattern before numbers.
Two quiet but important cards: Total Users: 199 · 14 new added last 90 days and Total Roles Assigned: 46 last 90 days. These answer questions admins were asking anyway ("are we growing faster than we planned?") — now visible without running a report. Simple data. High operational value.
| Before | After |
|---|---|
| Single blended license count — module exhaustion invisible | Per-module gauges with live utilization percentages |
| Post-submit seat-limit errors — failed add-user actions | Live seat awareness surfaced inside Add User modal |
| Dead-end blocked state with no escalation | Account manager CTA embedded in both warning and blocked states |
| No visibility into user-base composition | Users By Application donut surfaces saturation patterns |
| No trend awareness for capacity planning | 90-day added-users and roles-assigned cards |
Admin tools are judged on error prevention, not feature count. Failing a submit is a broken promise.
This was a deliberate non-research project — a pure product design sprint based on support ticket patterns and my own usage audit. Sometimes the signal is already loud enough; research would have been deceleration, not insight. Senior design judgment includes knowing when to skip discovery.
A capacity planning forecast layer — "Based on your growth rate (14 users in 90 days), you will hit Disclosure Director capacity in ~28 days. Consider expanding now to avoid disruption." Turning the alert from reactive to predictive.
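The forecast itself is simple linear extrapolation. A sketch of the arithmetic, assuming a naive constant growth rate (a real version would want smoothing or a confidence band):

```typescript
// Back-of-envelope days-to-capacity forecast behind the proposed predictive alert.
// Inputs are illustrative, not shipped telemetry.
function daysToCapacity(
  seatsLeft: number,
  usersAdded: number,  // e.g. 14 users added...
  windowDays: number,  // ...over the last 90 days
): number {
  // rate = usersAdded / windowDays; days until full = seatsLeft / rate.
  // Integer-friendly form avoids float rounding on the division.
  return Math.ceil((seatsLeft * windowDays) / usersAdded);
}
```

At the quoted growth rate of 14 users per 90 days (~0.16 users/day), a module with only a handful of seats left crosses into the warning window within a month — which is what makes the alert worth surfacing proactively.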
Led the 0→1 design of the Genny AI Agentic Hub — the command center for agentic AI across the Benchmark Gensuite platform. Established the complete AI design system — voice, chat, alert, file, history, and personalization patterns — shipped 4 launch agents, and handed the same framework to product teams to ship remaining agents with consistency, governance, and trust built-in from day one.
Genny AI Agentic Hub — landing surface. The Hub answers one question — 'what do you need done?' — with a single Ask me anything entry point flanked by quick-launch actions (Add New Record · Summarize Site Report · Generate Site Report · Summarize Open Items). Tabs (Home · Favorites · All Agents) keep discovery one click away. Note the New badge on All Agents — onboarding signal embedded in the navigation.
All Agents — every agent in the EHS portfolio surfaced as a card with its scope and behavior. Category filters (Core EHS · Environmental · Operational Safety) chunk the catalog. Twelve agents shown — Platform, Support, Agent Configuration, Datamining, Chem Management, Permit, Corrective Actions, Inspections, Incident, Environmental, Operational Safety, Reporting — unified by one card pattern, one launch interaction, one mental model.
Chat interface — Chemical Management Agent in action. User asked for GHS pictograms; agent returned a structured table (Pictogram · Name · Hazards Represented) inside the chat thread. Right rail Genny AI Insights makes the agent's 'memory' visible — Pending Actions, Pinned Chats, Work Products, File References. Quick actions at the bottom (Key Constituents · Summarize Site Report · Precautionary Measures · Initiate Approval Request) are the agent's next-best moves, not a static menu.
Without design system governance, every Genny AI agent would arrive looking, speaking, and behaving differently — fragmenting the user experience at the exact moment the platform needed to signal AI maturity.
Benchmark Gensuite’s Genny AI was scaling fast — from content-drafting Helpers, to decision-guiding Assistants, to fully autonomous Agentic Apps. Multiple product teams were racing to ship agents for Chemical Management, Permits, Compliance, Disclosure, and more. Each team had their own idea of what "the AI panel" should look like, how history should behave, where alerts should surface, how voice input should work, and what file references should display.
Without intervention, each agent would ship as a one-off. The platform would feel like ten different AIs built by ten different teams — exactly the anti-pattern for a product whose value proposition is unified, trustworthy, enterprise-grade automation.
The brief I set for myself: design once, scale infinitely. Establish the Agent Hub as the canonical surface for all agentic AI — then ship the design system, voice, and interaction patterns so every agent launched from day one (and every agent shipped after) would feel like a single coherent product.
Draft and summarize content, extract insights, auto-populate forms. Low autonomy, high frequency. Design implication: lightweight, embedded, fast — no pomp. Appears inline inside workflows.
Analyze data, uncover hazards, analyze images and documents. Medium autonomy, high complexity. Design implication: conversational surface with quick-action chips, file reference display, decision rationale visible.
Fully automate multi-step workflows. High autonomy, high stakes. Design implication: governance surface with Pending Actions, approval gates, Work Products trail, full audit-ready history. The user must stay in control.
Design one language that expresses three entirely different levels of AI autonomy — without fragmenting the experience or overwhelming the user.
The Agent Hub had to handle an Operational user asking Genny "what’s in this SDS?" and an admin approving an autonomous permit-intake workflow — in the same design language. Same chat primitives, same file reference pattern, same voice, same alert grammar — but different governance surfaces per tier. The art was in the restraint: one system, flexing.
Before the Agent Hub, AI was scattered across modules. I designed the Hub as the canonical entry point — a 2-column grid of agent tiles (Permit Compliance · Chem Management · Compliance · Disclosure Management, with Emerging Agent Configurator below) with each tile carrying consistent anatomy: icon, name, description, chat CTA, and state indicator for "Coming Soon." The persistent right-rail Genny AI panel ensures the ambient assistant is always one message away — even when the user hasn’t picked a specific agent yet.
Every Genny message uses the same anatomy: agent avatar + name, message bubble, timestamp, micro-actions (TTS playback, copy, bookmark/save). User messages mirror the same grammar on the opposite side. When Genny produces a generated answer with a source, the rationale pattern appears — same design as Responsio’s. This is the core conversation primitive: one pattern, every agent, every tier, every language.
The input bar pattern became one of the most-copied components across the platform: a single "Ask me anything..." field with three adjacent affordances — paperclip for file attach, microphone for voice input, arrow for send. Every agent uses it. Voice input is not an afterthought; it’s peer to text. Critical for field workers, and a differentiator against competitors who treat voice as a separate surface.
For tier-3 Agentic Apps, autonomy without oversight is a liability. I designed a standardized right-rail surface — Pending Actions · Pinned Chats · Work Products · File References — that every process agent uses. Pending Actions shows what the AI wants to do (and waits for approval). Work Products shows what the AI has done (and lets users audit). File References shows what the AI used (and lets users verify). Governance becomes part of the UX — not an afterthought.
The left-rail history surface lets users return to prior conversations without losing context. Messages persist. Pinned chats float to the top. Conversation threads carry across sessions — and across agents when handoff is needed. Users don’t restart from zero every time; the AI remembers what they were working on.
Defined the full state vocabulary every agent uses: idle · thinking · proposing · executing · awaiting approval · complete · error. Each state has one component, one color treatment, one animation rule, one plain-language microcopy template. When any team ships a new agent, they inherit the vocabulary — no invention required.
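The lexicon is essentially a shared state machine. A minimal sketch under the assumption that states chain in the obvious order (the transition map here is my inference, not the documented spec):

```typescript
// The shared agent-state lexicon every Genny agent inherits.
// State names come from the system; the transition map is an illustrative assumption.
type AgentState =
  | "idle" | "thinking" | "proposing" | "executing"
  | "awaiting-approval" | "complete" | "error";

const transitions: Record<AgentState, AgentState[]> = {
  idle:                ["thinking"],
  thinking:            ["proposing", "complete", "error"],
  proposing:           ["awaiting-approval", "executing", "error"],
  "awaiting-approval": ["executing", "idle"],  // approved, or dismissed back to idle
  executing:           ["complete", "error"],
  complete:            ["idle"],
  error:               ["idle"],
};

function canTransition(from: AgentState, to: AgentState): boolean {
  return transitions[from].includes(to);
}
```

Encoding the vocabulary as a closed union is the governance lever: a team can't ship a "processing…" state that isn't in the type, so the color, animation, and microcopy rules attached to each state hold platform-wide.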
The four agents shipped at launch — Platform Agent, Chem Management Agent, Permit Agent, and Emerging Agent Configurator — proved the system works across different autonomy levels and domains. I then documented every pattern, every component, every state, every voice guideline, and handed the system to product teams so they could ship the remaining agents without me in the room. This is the actual measure of a design system: it keeps producing consistent, high-quality work when its author steps away.
A design system is judged by what it produces without its author.
After the initial 4 agents launched, product teams began shipping new agents using the same patterns — chat, voice input, right-rail governance, input bar, history, alert states — without needing me to design each one. The Agent Hub became the template, and new agents simply plugged in.
That’s the quiet outcome of design system work: you stop being a bottleneck and become a force multiplier. One designer’s patterns became every agent’s baseline. And the platform got its industry recognition for it — Verdantix Green Quadrant 2025: Market Leading AI Integration.
| Before | After |
|---|---|
| AI scattered across modules, no discoverable home | Agent Hub — unified command center for all agentic AI |
| Every team inventing their own chat UI | One chat primitive — message, timestamp, audio, copy, rationale |
| Voice input proposed as an enterprise add-on | Voice at parity with text, native on every agent |
| Agentic autonomy without user oversight | Standardized governance rail — Pending Actions, Work Products, Files |
| No memory across sessions | History surface, pinned chats, context persistence |
| Inconsistent alert and state language | Complete state lexicon: idle · thinking · executing · awaiting · complete |
| Every agent designed from scratch | 4 launch agents + scalable system for teams to ship independently |
Recognized in Verdantix Green Quadrant 2025 as market-leading AI integration — a platform-level signal that the unified design approach translates to industry perception of product maturity.
In platforms with many AI surfaces, the single most valuable design is the one that stops other designers from designing the wrong thing.
Designing four agents was the visible work. Designing the system that produces every agent after was the actual work. The four launch agents were the proof-of-concept. The design system was the leverage.
A canonical "agentic approval" interaction — a universal pattern every Process Agent uses when asking for human approval of an autonomous multi-step workflow. Today each agent solves it slightly differently. Standardizing this would be the next piece of system work — because as agents get more autonomous, the approval surface becomes the trust surface.
Cross-agent conversation handoff. Today, each agent’s context is siloed. Letting users move a conversation from Chem Agent → Compliance Agent without losing state would be the next frontier in agentic UX — and would reveal whether the platform truly feels like one AI or many.
| Area | Ownership |
|---|---|
| Agent Hub IA & 0→1 product definition | Led end-to-end |
| AI design system — voice, chat, alerts, history, files, personalization | Established & owned |
| Multimodal input pattern (text, voice, attachment) | Owned end-to-end |
| Right-rail governance surface (Pending Actions, Work Products, Files) | Owned end-to-end |
| State & alert lexicon across all three AI tiers | Defined & led |
| 4 launch agent designs (Platform, Chem, Permit, Configurator) | Led design |
| Design system handoff & governance to product teams | Led throughout |
| Cross-functional alignment with Product, AI, Engineering, Security | Led throughout |
Redesigned the external-facing Supplier Portal for Benchmark Gensuite — the platform where thousands of global suppliers complete Conflict Minerals, ESG, Anti-Human Trafficking, and product compliance questionnaires on behalf of enterprise customers. Shipped an outcome-first Overview dashboard, consolidated action-items queue, and multilingual portal experience that earned the Top Supply Chain Projects Award 2024.
Supplier Portal Overview — the front door for global suppliers responding to enterprise sustainability programs. Announcement banner attached to the portal (not buried in email), Survey Questionnaires card with at-a-glance counts (Open · On Going · Due), Overview Channel Breakdown by source, and a unified table for Open Questionnaires, Open Actions, Completed Questionnaires, and Completed Actions.
CMRT Upload — focused workspace for a single questionnaire. Persistent left rail (CMRT Upload · Summary · Supplemental Documentation) keeps the supplier oriented through long compliance flows. Inline instructions, version awareness, required-field signaling, and a single Save & Continue path eliminate ambiguity about what to do next.
GHG Emissions response — structured form authoring for Scope 1, 2, and 3 emissions paired with Company Information (reporting period, baseline year, calculation methodology, net zero targets, SBTi verification). Helper text and inline guidance reduce supplier back-and-forth; Save and Save & Continue let suppliers commit progress without losing work.
This isn’t an internal tool. It’s the interface a global supplier sees the first time an enterprise customer asks them for disclosure data.
The Supplier Portal is how thousands of global suppliers — from Fortune 500 manufacturers to small specialty vendors — respond to their enterprise customers’ compliance asks: Conflict Minerals Reports (CMRT v6.22 / RMI-aligned), ESG surveys (Ceres-developed), Anti-Human Trafficking questionnaires, Supplier Self-Assessments, and product stewardship data requests.
The portal carries a unique design weight: it represents the enterprise customer’s brand to their supply chain. A confusing portal makes the customer look disorganized. A slow portal creates missed deadlines. A broken portal creates compliance risk. Every UX decision here has trust consequences outside the platform.
The design brief: make this the clearest, fastest, most multilingual-ready supplier experience in the market — and keep it scalable across survey types and regulatory categories.
Suppliers respond to multiple enterprise customers, each with multiple questionnaires, each with multi-part actions. The legacy portal buried action items inside individual questionnaires. Design implication: needs a unified cross-questionnaire action surface.
Status was ambiguous — is "Pending" good or bad? Is "On Going" different from "In Progress"? Due dates weren’t visible at a glance. Design implication: needs a tight status vocabulary + clear due-date signals (colored dates for past-due).
Announcements from enterprise customers were missed. Portal-wide alerts had no home. Design implication: needs a persistent announcement surface that supports multiple alerts with See All pattern and attached guidance docs.
The first thing a supplier sees: 86 Open Questionnaires · 186 Open Actions · 186 In Progress · 286 Past Due — with red treatment on Past Due so triage happens before scrolling. Each card is a jump-point into a filtered view. This replaces the old "scroll through every survey to find what’s urgent" pattern with "I see the urgency in 2 seconds."
The Action Items view pulls every pending action from every questionnaire — CMRT, ESG, Anti-Human Trafficking, Supplier Self-Assessment — into a single queue with Action ID, Action Name, Associated Item, Parts Pending Response, Parts Completed, Due Date, Status, Survey Owner. One place to see what’s waiting. Massive reduction in context-switching for suppliers juggling 20+ simultaneous requests.
Two status chips, color-coded: Pending (blue, neutral) and On Going (amber, in-flight). Due dates shown in red when past-due, default text otherwise. "Parts Pending Response" surfaces as a blue deep-link when multi-part, "Not applicable" in muted grey when single-part. The vocabulary is small on purpose — suppliers don’t need nuance, they need certainty.
Portal-wide communications used to be an afterthought. I designed a dedicated announcement surface on the Overview — high-level alert messages visible portal-wide, with the ability to attach important documentation and guidance documents, displaying the latest three with See-All expansion. The design unblocked enterprise customers from sending supplier-base communications through the portal instead of email.
Global suppliers don’t default to English. The language selector sits in the top-right with a flag, instantly discoverable. Every string is externalized. Layouts were stress-tested against character-length expansion (German, Japanese) and RTL support contingency. The portal ships in 10+ languages on launch — not a v2 afterthought. This is what "enterprise-ready" actually means at the supplier edge.
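"Every string is externalized" reduces, at minimum, to a catalog lookup with an English fallback for partial translations. A minimal sketch, assuming a flat key/catalog shape (the keys and catalog structure here are illustrative, not the portal's actual i18n layer):

```typescript
// Minimal string-externalization sketch with language fallback.
// Keys, catalog shape, and the German sample string are illustrative assumptions.
type Catalog = Record<string, string>;

const catalogs: Record<string, Catalog> = {
  en: { "overview.pastDue": "Past Due", "actions.queue": "Action Items" },
  de: { "overview.pastDue": "Überfällig" }, // partial catalog → falls back to en
};

function t(lang: string, key: string): string {
  // Requested language first, then English, then the raw key as a last resort.
  return catalogs[lang]?.[key] ?? catalogs.en[key] ?? key;
}
```

The fallback chain is what makes "10+ languages on launch" operationally survivable: a missing translation degrades to English instead of breaking the layout, and the character-length stress testing described above happens against the catalogs, not hardcoded strings.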
Supplier Manager (the enterprise-side complement to the Supplier Portal) earned industry recognition in the first year.
Kimberly Gillen, Strategic Accounts Manager at The Anderson’s, describing her team’s experience:
The platform went on to earn the Top Supply Chain Projects Award 2024 for Supplier Manager — and Verdantix Green Quadrant 2025 recognition for Market Leading AI Integration across the product stewardship and supplier risk suite. Industry validation of the entire supplier experience.
| Before | After |
|---|---|
| Action items buried inside individual questionnaires | Unified cross-questionnaire Action Items queue |
| Overview with ambiguous status across open work | 4-metric primary dashboard (Open / Actions / In Progress / Past Due) |
| Ambiguous statuses — "what does this mean?" | Tight status vocabulary + red past-due date treatment |
| No portal-wide communication channel for announcements | Persistent announcement banner with multi-item See-All & attachments |
| English-first with partial translation | Multilingual-first — 10+ languages on launch, discoverable selector |
| Supplier compile times measured in "hours and weeks" | Measured in "less than a couple of hours" (customer testimonial) |
Recognition: Top Supply Chain Projects Award 2024 · Verdantix Green Quadrant 2025 Market Leading AI Integration
An external-facing portal is where B2B design judgment shows most — because every friction in the supplier UX becomes a business problem for the enterprise customer.
This project reinforced a principle I apply to all external-facing enterprise UX: reduce invention, amplify certainty. Don’t introduce novel patterns where boring patterns will finish the work faster. Don’t hide actions in dropdowns where a flat list would show them. Don’t use clever status copy when a single word would do. External users pay the cost of designer ego more than internal users do.
AI-assisted questionnaire response. Today suppliers manually fill each field. The next step — applying the Responsio agentic pattern to the supplier side — would let a supplier say "use my last Conflict Minerals response from Q2 and update only the changed fields" and have Genny draft the entire questionnaire. This moves supplier response time from "couple of hours" to "couple of minutes."
| Area | Ownership |
|---|---|
| Supplier Portal Overview & IA | Owned end-to-end |
| Cross-questionnaire Action Items consolidation | Owned end-to-end |
| Status & due-date visual lexicon | Owned end-to-end |
| Multi-announcement broadcast surface | Owned end-to-end |
| Multilingual-first portal architecture | Owned end-to-end |
| Survey-specific upload flows (CMRT, ESG, Anti-Trafficking) | Led design |
| Stakeholder alignment with Product Stewardship & Supplier Risk teams | Led throughout |
Led the end-to-end brand build for Wild Tiger Rum — India’s first super-premium rum and a tiger-conservation cause-brand. Started not on a screen, but in a workshop: physical bottle and packaging design first, logo and visual identity second, website and digital playground last. An unconventional founder-led process, a C-suite that demanded the bottle speak before the brand did, and a final product that now sells across 16+ countries with 10% of profits funding tiger conservation.
Wild Tiger Rum — logo and packaging system. Hand-drawn 'WILD TIGER' wordmark with tiger-stripe pattern fill, finished with a 'Rum' script underline. The tiger-stripe wrap continues across the bottle as both a brand mark and a shelf signal — the bottle itself becomes the brand on a crowded back-bar.
Age confirmation gate — entry to a category-restricted experience designed without breaking the brand voice. 'Are you old enough to *Open up to WILD*' uses the same hand-script the rest of the site does; the legal drinking age confirmation, Remember Me, and 'Now Let Me In' CTA all live inside a bottle silhouette so the gate feels like an extension of the bottle, not a tax on the visitor.
Landing page — bottle hero anchored center, content modules ringed around it. Left rail tells the story: A Tiger Is Born, Signature Cocktails, The Exotic Blend. Right rail tells the mission: WTF Wild Tiger Foundation, Exquisite Packaging, Become a Rumaican newsletter signup. Bottom row routes Beverages, About Rum, Relish Responsibly. Discovery, education, and conservation each get equal weight.
Wild Tiger Foundation (WTF) — the conservation page that earns the brand its name. Bold headline 'A Social Innovative Initiative to Save Tigers' frames the data: only 2,000 Bengal tigers remain in Indian forests, down from 40,000 at independence. A portion of every bottle sale funds the Wayanad Tiger Reserve. WTF circular mark with 'Our Tigers, Our Pride' lockup, donate-here CTA, and a closing quote — 'Let's Roar for our Tigers' — turn the page into both message and call to action.
India is the second-largest spirits market in the world. It had no recognizable global premium-rum brand. The founder wanted to build one — and tie it to the cause closest to him: tiger conservation.
Gautom Menon, a Kerala-born entrepreneur, had spent eight years tasting more than 500 rums and walking the world’s rum festivals before he was ready to build his own. He came to me with the full ambition: he wanted India’s answer to Jack Daniel’s and Guinness. A premium rum that could pour with pride at duty-free in Heathrow, on Tiger Airways flights, in cocktail bars in New York and Copenhagen.
And the rum had to do good. 10% of every bottle’s profit would fund the Wild Tiger Foundation — Gautom’s NGO working with conservationists and forest authorities across South India to protect the Royal Bengal Tiger. Only ~2,000 wild tigers remained in India, down from 40,000 at Independence.
This wasn’t a typical brand brief. The product, the cause, and the visual story all had to be the same story.
Most brand projects begin with a logo, move to packaging, end on a website. Wild Tiger did the opposite — and that order shaped everything.
The founder’s instruction at the C-suite kickoff was direct: "Don’t draw me a logo. Show me the bottle a person picks up at duty-free and won’t put down." That set the working order — and it’s a sequence I’ve since trusted in every brand-led project where the physical product carries the story:
Form, weight, sleeve material, claw ornament, tag system. The object had to feel like a tiger sighting before it had a single character of typography on it. Every stripe pattern hand-painted — no two bottles alike, mirroring real tiger genetics.
Once the bottle struck, the wordmark, monogram, conservation seal, color palette (deep black, tiger orange, cream, gold), and typographic hierarchy all flowed from the physical object — not the other way around.
Once the bottle and brand were locked, the digital surfaces — website, conservation campaign pages, retail collateral, neck tags, brochures, recycled-paper cartons — all extended the same physical language into pixels.
Every material choice was a decision in two languages — premium spirits and conservation.
The bottle is wrapped in a velvet sleeve printed with a tiger-stripe pattern — but the deeper design choice is that every sleeve is uniquely patterned, so no two bottles in the world look identical. The same way no two tigers in the wild share the same stripes.
Fastened to the neck is a replica tiger claw — symbolizing "No Fear" in ancient Indian mythology — paired with a hand-tied conservation tag. Glass is recycled. All paper — labels, neck tags, cartons, brochures — is recycled. The packing line itself is over 80% women, intentionally hired to extend the brand’s commitment to empowerment beyond the cause statement on the box.
The package is the argument. Pick it up, and the bottle has already told you what the brand believes in — before you read a word.
The brand identity didn’t need to invent a visual language — it inherited one from the bottle.
The wordmark uses bold, slightly distressed display type that feels carved rather than drawn — paying homage to the hand-finished feel of the bottle. The "WTF" sub-mark for Wild Tiger Foundation doubles as both an irreverent acronym ("WTF? Only 2,000 tigers left") and a serious conservation seal. The color system is restricted: deep black, tiger orange, cream, and a single accent of gold for premium signaling.
The website (wildtiger.in) extends the same restraint — a near-monochrome palette, full-bleed bottle photography, conservation as a primary navigation item (not buried in a footer), and an age-gate that opens with the same hand-painted stripe pattern as the bottle. Every digital surface is downstream of the physical one.
Collateral — recycled-paper cartons, neck tags, in-flight cards for Tiger Airways, retail point-of-sale, the Wild Tiger Foundation campaign assets — all snap into the same identity grid without negotiation.
Wild Tiger launched at the London Rum Festival, then expanded across the world’s premium spirits market.
Wild Tiger Rum debuted at UK Rumfest, London (October 2015), followed by USATT New York (March 2016). Within a year it was retailing in the US, UK, France, Belgium, Cyprus, Czechia, Hungary, Poland, Denmark, the UAE, Maldives, and Thailand — across more than 16 countries. It became the only Indian liquor product available at international duty-free, and the official rum onboard Tiger Airways — the first Indian spirits brand sold onboard an airline.
The founder was named one of GQ India’s 50 Most Influential Young Indian Innovators of 2017. The Wild Tiger Foundation has since adopted the Wayanad Tiger Reserve in Kerala and works directly with tiger conservationists, local authorities, and NGOs on the ground.
What started as a founder’s wild idea — a premium rum from India that could fund tiger conservation — became a globally distributed product whose every bottle pours back into the cause it was named for.
Working with a C-suite that demanded the bottle precede the brand was a forcing function — and the discipline transferred everywhere I’ve worked since.
Every brand project I’ve led since has borrowed this discipline. In enterprise SaaS at Benchmark Gensuite, the equivalent of "design the bottle first" is design the canonical surface first — the dashboard, the agent hub, the home — and let the rest of the product’s pixels inherit the visual contract from there. Same pattern, different industry.
Tighter integration between the conservation story and the buying moment. The bottle whispers conservation; the website shouts it; the in-store retail moment is the gap. A QR-led "this bottle helped fund X reserve" experience — bottle scan → real-time impact dashboard — would close the loop between purchase and cause.
| Area | Ownership |
|---|---|
| Bottle & primary package design (sleeve, claw, tag, structure) | Led end-to-end |
| Logo system, monogram, WTF conservation seal | Owned end-to-end |
| Brand identity guidelines (color, typography, stripe library) | Owned end-to-end |
| Website design (wildtiger.in) | Led design end-to-end |
| Collateral — neck tags, brochures, cartons, retail POS | Owned end-to-end |
| Wild Tiger Foundation campaign extension | Led design |
| C-suite alignment with founder & conservation partners | Led throughout |
Led the end-to-end brand build for Mum’s Sana Vita — a BRC Grade AA-certified Indian premium spice brand selling in the UK across Whole Spices, Ground Spices, and Spice Blends. Owned the package design, brand identity, web design, and retail-ready collateral — building a system that translates an Indian-spice heritage into a clean, trust-led UK premium-grocery aesthetic.
Mum's Sana Vita brand system — package design, logo lockup, photography, and retail-ready collateral. The wordmark pairs with a Sana Vita endorsement and a leaf seal that signals natural origin without overplaying it. Typography drives shelf hierarchy first, ingredient illustration anchors repeat-purchase recognition, and the BRC Grade AA-certified badge does the trust work — translating Indian-spice heritage into a clean UK premium-grocery aesthetic that earns its place at Whole Foods, Waitrose, and Amazon Marketplace.
Indian spices in UK retail compete on two extremes — bargain-bin ethnic-aisle SKUs, or hyper-premium artisanal jars. Mum’s needed to land cleanly in the middle: premium-feeling, accessibly priced, trustworthy on first glance.
The challenge wasn’t the spice — the spice was excellent. The challenge was that UK shoppers don’t evaluate Indian spice brands the way Indian shoppers do. A British home cook walking down a Sainsbury’s or browsing Amazon UK is scanning for: clean typography, a recognizable trust seal, ingredient clarity, and a brand that signals "this belongs in my pantry next to Ottolenghi’s spice tin."
Most Indian spice brands import their domestic packaging directly to the UK — heavy graphics, multi-color clutter, dense product copy. The result is shelf-invisibility. Mum’s brief was the opposite: build a brand that speaks UK premium-grocery first, while staying authentically Indian in story.
The product line spanned three ranges (Whole Spices, Ground Spices, Spice Blends) across multiple SKUs and weights — each pouch needed to feel like part of one family, while staying distinctive enough that a shopper picking the cardamom doesn’t accidentally pick the coriander.
Same physical-first sequence I’ve trusted in every brand-led project: lock the most expensive, most-touched object first, then let everything else inherit.
The pouch is where the brand earns its first second of attention — on the shelf, on Amazon’s grid, in the customer’s hand. Every later surface (logo extensions, web, recipes, Amazon A+ content) had to extend what the pouch already established. Not invent a parallel identity.
Pouch structure, material, the red-banner masthead, ingredient illustration band at the bottom, ingredient name in 4 languages (UK retail = multilingual market). Per-SKU color coding for Whole vs Ground vs Blends.
Mum’s wordmark in red banner shape (the same masthead silhouette as the pouch), Sana Vita endorsement in serif italic, leaf-and-chili seal as a recognition mark. Restricted palette: red, cream, leaf-green accent, charcoal text.
mumsfood.co.uk — clean editorial homepage with the same "It’s all about good spices" voice. Recipes section as a content extension. Amazon UK listings with A+ content using the same pouch language.
Every element of the pouch was chosen to telegraph "premium UK-grocery" while still letting the heritage breathe through.
The cream-white pouch background is intentional — most Indian spice brands lead with saturated color blocks. Cream reads as premium UK food (think Cook With M&S, Daylesford, Ottolenghi) and lets the product photography pop. The red banner masthead at the top carries the wordmark and creates an instant family signature across every SKU.
The ingredient illustration band at the bottom of every pouch is the same recognition system that lets a shopper identify the SKU from across the aisle — turmeric powder shows turmeric root and leaves, cardamom shows pods, black pepper shows whole peppercorns. The illustrative style is hand-drawn, warm, food-honest — not corporate-clean.
Product names appear in four languages on the pouch front (English, German, French, Spanish phonetic) — a small but critical UK/EU retail signal. Net weight is large and confident. The BRC Grade AA seal is on the back panel — visible to the shopper who flips the pouch, which is the moment of trust-decision in premium grocery.
Once the pouch system was locked, the brand extended into web, recipes, retail-ready collateral, and Amazon A+ content without re-inventing.
The website (mumsfood.co.uk) opens with the same cream background, the same red banner navigation, the same product-first photography. The hero copy — "It’s all about good spices · Buy and Experience the Best of Indian Spices" — uses the brand’s warm-but-confident voice in the same italic-serif accent type the pouch uses on "Sana Vita".
Three primary navigation pillars match the three product ranges: Whole Spices · Ground Spices · Spice Blends. The Recipes section gives the brand a content surface — a place where Mum’s isn’t just selling pouches, it’s teaching a UK home cook how to use them. That’s how a spice brand earns repeat purchases — by being useful, not just available.
Amazon UK listings extend the same system — A+ content modules use the same red-banner masthead, the same illustrative ingredient style, the same "100% natural · handpicked · sustainably sourced" trust line that anchors the back of every pouch. One brand, every surface, the shopper never needs to relearn the visual contract.
Mum’s Sana Vita ships nationally across the UK via direct-to-consumer web (mumsfood.co.uk) and Amazon UK marketplace, with a coherent identity from pouch to homepage to product detail page.
The brand now retails 3 product ranges — Whole Spices, Ground Spices, Spice Blends — across multiple SKUs and weight tiers, each carrying the same masthead, illustration band, and trust seals. The BRC Grade AA certification (the highest UK food-safety grade for manufacturing) is communicated visibly on every pouch back panel and prominently across the website and Amazon listings.
The result: a small Indian spice brand that doesn’t look like a small Indian spice brand on a UK shelf. It looks like it belongs next to Cook With M&S, Belazu, and Bart’s — and it’s priced to be the curious home cook’s authentic upgrade.
When a brand crosses cultures, the temptation is to amplify the heritage. The discipline is to edit it.
Both Wild Tiger and Mum’s taught me the same lesson from opposite directions. Wild Tiger leaned into heritage with bold, sensory packaging. Mum’s leaned away from heritage clichés to land in UK premium grocery. The throughline: let the physical object do the cultural work first, and the digital extends quietly.
A stronger e-commerce conversion engine. The current website is brand-first, transaction-second. A v2 would integrate Shopify or BigCommerce directly, adding subscribe-and-save, a recipe-to-cart flow ("cook this curry → add the 4 spices in one click"), and a UGC recipe gallery from real customers. The brand has the visual authority to support a richer commerce surface — it just hasn’t been built yet.
| Area | Ownership |
|---|---|
| Pouch & primary package design (3 ranges, multi-SKU, multi-weight) | Led end-to-end |
| Logo system & brand identity (red banner mark, Sana Vita endorsement, seal) | Owned end-to-end |
| Per-SKU illustration system (ingredient band, language ladder) | Owned end-to-end |
| Website design (mumsfood.co.uk) | Led design end-to-end |
| Recipes & content surface architecture | Led design |
| Amazon UK A+ content & listing imagery | Owned end-to-end |
| Retail collateral, mailing-list assets, social | Owned end-to-end |
Brand identity for Eskimo’s Artisan Ice Cream — a Coimbatore-based ice cream parlour and fun-food café crafting frozen desserts with only fruits, dry fruits, chocolates, milk, and sugar. Owned naming, logo, mascot, custom type, invitation, and corporate identity end-to-end. The most playful brand project I’ve worked on — and the one where every visual decision had to taste like ice cream.
Before any visual work, the brand needed a name a five-year-old could pronounce and a thirty-year-old would post about.
The naming brief had three asks: (1) instantly suggests cold, (2) sounds joyful — café-friendly, not clinical, and (3) lands easily across English and Indian-English speakers. After exploring categories — geography (Arctic, Glacier), substance (Frost, Chill), character (Yeti, Pingu) — the winning territory was character with built-in story.
"Eskimo’s" earned the brief in one word: it telegraphs cold, it implies a person/character (the apostrophe-S = belonging to), and it opens the door to a mascot system. The name became the foundation that every other decision — mascot, type, color, illustration — could anchor to.
An ice-cream café for families needed warmth at the center, not corporate restraint. The mascot does the work the wordmark alone couldn’t.
The polar bear mascot is illustrated inside an abstract igloo — the dome rising behind him forms a frame for the entire mark. His expression — eyes closed, tongue out — is the brand’s mood in one face: indulgent, joyful, slightly cheeky. Not generic-cute. Specific-cheeky.
I drew an entire expression library beyond the primary mark: the wink for menu boards, the surprise face for new flavour launches, the satisfied closed-eyes face for testimonials, and a hungry/curious one for kids' menus. The mascot became a system, not just a logo, so the brand could speak without ever repeating itself.
Color: deep purple ground with white-and-violet bear, accented by red tongue. Purple was the deliberate departure from category — everyone in ice cream uses pastel pink, mint, or sky blue. Purple owned the space, made the mark unmistakable on Coimbatore’s café-row signage, and gave the brand an instant Instagram identity.
Off-the-shelf typography would have wasted the brief. A custom letterform turned the wordmark itself into the product.
The Eskimo’s wordmark is hand-drawn custom type — a soft script that swells into rounded curves, with intentional drip details hanging from the bottom of select letters. The drips do double duty: they reference melting ice cream, and they signal frozen drips from an icicle’s edge. One illustration, two readings, both correct.
The white double-outline lifts the wordmark off the deep purple ground and gives it the candy-shop sticker quality the café environment needed. The wordmark looks edible. That was the whole brief.
Once the mascot, type, and color locked, the brand extended into every corner of the café experience.
The brand system covered: menu boards, packaging cups and tubs, takeaway bags, napkins, the staff uniform, signage, social-media templates, and the launch invitation. Each surface re-used the same components — the mascot, the dripping wordmark, the purple-and-white palette — but composed differently for each context.
The launch invitation was where I had the most fun: a die-cut card shaped like the igloo silhouette, with the polar bear’s face revealing through a circular cut-out when the card opened. Tactile, surprising, kept on fridges — exactly the kind of object an artisan ice cream brand should produce.
The corporate identity layer (letterhead, business cards, formal stationery) deliberately turned down the mascot, keeping the wordmark and a single bear paw print as the formal signature — proving the system flexes from kids'-menu energetic to vendor-correspondence professional without losing voice.
Playful brand work is the discipline most likely to look unprofessional — and the discipline where restraint matters most.
Across the SaaS, agentic AI, and enterprise UX work that fills most of this portfolio, this project is the outlier — and the proof that the same design instinct (find the one big idea, then commit to it everywhere) holds across categories. A polar bear in an igloo is not a different discipline from an agent hub right rail; both are systems where one anchor decision has to carry every downstream surface.
| Area | Ownership |
|---|---|
| Naming & verbal identity | Led end-to-end |
| Logo design — primary, secondary, signature variants | Owned end-to-end |
| Mascot design & expression library | Owned end-to-end |
| Custom type — Eskimo’s wordmark with drip details | Owned end-to-end |
| Color system & brand guidelines | Owned end-to-end |
| Launch invitation design (die-cut igloo card) | Owned end-to-end |
| Corporate identity — stationery, business cards, signage | Owned end-to-end |
Designed Tableau-powered EHS executive dashboards that compress thousands of incident, action, and rate data points into a single decision surface — scope-aware, time-aware, and trend-aware.
EHS Executive Dashboard — eight pinned KPI cards (I&I Cases, Recordable Cases, Hours Worked, TRIR, Incidents, Concerns, Actions, LTIR) anchor the surface. Scope filters at the top cascade to every chart. The center I&I Rates trend connects 'now' to 'over time.' Bottom donuts answer 'where do we focus?' with Top 5 by Accident Type and Top 5 by Incident Type.
Decisions made on stale PDF reports. Real data sat unread in BI tools.
Benchmark Gensuite's enterprise customers ran EHS programs across hundreds of sites and dozens of business units. Every week, analysts compiled custom PDF and Excel reports for executive leadership. By the time the report landed, the data was 5–10 days old.
Meanwhile, the underlying Tableau data layer was already live. Executives didn't lack data — they lacked a surface that answered their questions in their language, on their schedule.
Existing dashboards forced executives through chart libraries to answer simple questions.
Across stakeholder interviews, every executive walked the same cognitive path:
"What happened?" — How many cases, how many hours, what's the rate?
"How are we trending?" — Is this number better or worse than last month, last quarter, last year?
"Where do we focus?" — Which accident type, incident category, region is dominating the count?
Existing analytics surfaces forced executives to assemble that answer themselves across multiple tabs and filter combinations. The dashboard had to be the answer, not the path to it.
Executive stakeholder interviews · Persona development · Cognitive walkthroughs · Tableau heuristic audit · Competitive review
Interviewed three executive archetypes: Chief Sustainability Officer (board-level reporting), EHS Director (cross-region operational visibility), and Regional Operations Lead (site-level performance). Different scope, same cognitive ritual.
Audited the existing Tableau implementation against data-viz heuristics — Tufte's data-ink ratio, Few's dashboard design principles. Mapped where the surface was over-decorating data and where it was under-explaining it.
Few's Dashboard Design · Tufte's Data-Ink Ratio · Pre-attentive attributes · Information Hierarchy · Mental Model Mapping
"Executives don't need more data. They need the same data in the order their brain asks for it: number first, trend second, cause third. Anything else is noise."
Eight executive KPIs (I&I Cases, Recordable Cases, Hours Worked, TRIR, Incidents, Concerns, Actions, LTIR) sit in a fixed strip on the left. Numbers in large weight, labels in small caps. The first question — "what happened?" — is answered before the executive scrolls.
Organization, Sub-Organization, Site, Department, World Region, Country, Custom Group, Lookback — pinned as one bar at the top. Every chart on the page respects the same scope at the same time. One change cascades everywhere; no per-chart filtering.
The I&I Rates line chart (LTIR and TRIR over time) anchors the center of the dashboard. It bridges the KPI strip and the breakdown donuts — answering "is this getting better or worse?" without leaving the page. Sparse axes, two clearly-distinguished lines, period-aligned labels.
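For readers unfamiliar with the two rate KPIs: TRIR and LTIR are standard OSHA-style incidence rates, normalizing case counts per 200,000 hours worked (roughly 100 full-time employees over a year). A minimal sketch of that calculation — the numbers below are illustrative, not taken from the dashboard:

```python
def incident_rate(case_count: int, hours_worked: float, basis: float = 200_000) -> float:
    """OSHA-style incidence rate: cases per `basis` hours worked.

    The 200,000-hour basis approximates 100 full-time employees
    (100 workers x 40 hrs/week x 50 weeks).
    """
    if hours_worked <= 0:
        raise ValueError("hours_worked must be positive")
    return case_count * basis / hours_worked

# Illustrative figures only (not from the case study):
trir = incident_rate(case_count=12, hours_worked=1_500_000)  # recordable cases
ltir = incident_rate(case_count=4, hours_worked=1_500_000)   # lost-time cases
print(round(trir, 2), round(ltir, 2))  # 1.6 0.53
```

Plotting these two series period-over-period is exactly what the center line chart does — same formula, applied per month or quarter within the selected scope.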
Two donut charts at the bottom — Top 5 Accident Types and Top 5 Incident Types. Five categories, not all twenty. The point isn't completeness, it's salience. Counts on each segment, color reserved for the dominant slice. Executives leave the page knowing where to ask the next question.
"Welcome, [Name]" plus a scope toggle (Choose Organizational Scope · My Scopes of Responsibility) and a lookback control sit above every chart. The dashboard tells the executive who they are, what they're seeing, and how far back the lens goes before they read a single number.
| Area | Result |
|---|---|
| Decision speed | Live dashboard replaced 5–10 day stale PDF cycle |
| Self-service rate | Executives running their own scope and time analyses without analyst help |
| Cognitive layering | Three layers (Number · Trend · Cause) preserved on every page render |
| Scope discipline | One global filter bar — no per-chart scope drift, no truth-fragmentation |
| Information density | 8 KPIs + 1 trend + 2 breakdowns delivered without scrolling on standard executive monitors |
| Pattern reuse | Template adopted across additional Benchmark Gensuite dashboards (Sustainability, Quality, Compliance) |
Data density isn't the enemy of clarity. Disorder is.
Tableau gives you infinite chart types. The discipline is choosing the three that map to how an executive thinks — and then defending that order against every stakeholder who wants their pet metric pinned to the top. Hierarchy is the design. Everything else is decoration.
Mobile and tablet experiences arrived after launch. Next time I'd design the surface mobile-first, then expand to desktop — the constraint sharpens the hierarchy faster than any user research session.
Led end-to-end UX/UI design to streamline lead prioritization and accelerate conversion workflows across marketing operations — turning a fragmented multi-tool process into a single action-first dashboard.
Marketing Incubator dashboard — Status Summary donut shows lead distribution by priority (Total 206 across All / High / Medium / Low). Lead table surfaces every active thread with Lead ID, Priority chips, Company & Business, Application(s), Subscriber Contact, Point of Contact, Title, and Description. Filters across Recent, High, Medium, Low, and Archived plus a personal "My Leads" toggle let marketing managers triage by scope without losing the macro view.
Disconnected lead and booking systems were costing campaign velocity.
Marketing teams at Benchmark Gensuite were managing leads across disconnected systems — leads in one tool, bookings in another, prioritization in a spreadsheet, ownership in nobody's head. The result: delayed campaign execution, increased manual coordination, reduced efficiency at high lead volumes.
The challenge wasn't a missing tool. It was the gap between tools — the multi-step conversion path where leads went cold while everyone waited on the next handoff.
Six failure modes compounding into a single workflow problem.
Disconnected lead and booking systems → fragmented workflows where the same lead lived in three places.
No prioritization model → inconsistent decision-making about which leads to chase first.
Dense tables → high cognitive load and slow scanning even when the data was right.
Multi-step conversion → delays and drop-offs at every handoff.
Limited visibility into pipeline ownership and status → leads stalling because nobody knew who owned the next move.
Direct impact on conversion efficiency and campaign execution timelines — the design problem was a revenue problem.
Dogfooding · Data analytics · Stakeholder interviews · Pain-point mapping
Dogfooding. Validated workflows with internal marketing teams using real lead data. Observed friction across triage, prioritization, and booking actions. Surfaced three key gaps: hard to identify high-priority leads, context switching between tools, no clear next-step actions.
Data insights. Analyzed lead distribution across priority tiers; identified drop-offs in the lead → booking funnel; measured time-to-action across workflows. Outcome: need for prioritization visibility and faster execution.
Interviews. Spoke with marketing managers, lead owners, and stakeholders. Pain points clustered around three lines: difficult to scan and prioritize, inefficient navigation, limited pipeline visibility.
Five research findings that defined the design constraints.
Users need action-first workflows, not passive data tables. Priority must be visually scannable. Workflows must enable fast triage → execution. Personalization ("My Leads") improves ownership. Data density requires progressive disclosure.
Redesigned the lead table around a clear column priority sequence: Lead → Priority → Context → Actions. Grouped business, contact, and task data so users could scan a row at a glance. Enabled flexible column configuration so different roles could shape the table to their work.
Embedded a "Push to Booking" CTA directly within every lead row. Multi-step conversion compressed into a single interaction. Introduced clear state feedback (Booked / Linked) so users knew the lead had moved without refreshing or re-querying.
Multi-level filters across Recent · High · Medium · Low · Archived let users triage by signal. A "My Leads" toggle delivers the personalized view marketing owners asked for. Integrated search ties it together for fast retrieval — three lenses on one dataset, no context switch.
Applied progressive disclosure ("See more") to collapse detail by default. Reduced visual noise through structured layout and improved scannability via consistent hierarchy across all data types. Density without overload — the table can carry 200+ leads and still be parseable on first glance.
| Area | Result |
|---|---|
| Lead-to-booking conversion | ~25–30% faster across active campaigns |
| Manual workflow steps | ~30% reduction across triage and booking flows |
| Decision-making speed | ↑ Faster triage decisions; simplified, action-first UX drove adoption |
| Workflow errors | ↓ Errors through structured data and clear state feedback |
| Pipeline visibility | Centralized dashboard combining status visibility and actionable workflows |
| Prioritization clarity | Real-time prioritization with clear visual distribution by priority tier |
| Scalability | Supports high-volume, multi-role environments without re-architecture |
Marketing tools fail when they make users move data instead of making decisions.
The instinct in enterprise marketing tooling is to show more — more columns, more filters, more counts. The discipline is to show what gets the next action done and hide everything else behind progressive disclosure. Lead → Priority → Context → Actions is a hierarchy, not a column order. Once that hierarchy was right, the velocity numbers followed.
I'm Dolly Kapadia. A multidisciplinary designer working where UX, product strategy, and brand intersect.
My work lives in the messy middle — where research meets interface, systems meet stories, and software actually gets shipped. I've led design on enterprise platforms used by thousands, launched consumer products that moved metrics, and helped teams move faster by aligning on what matters.
Off the clock: too many essays on systems thinking, terrible sourdough, and a stubborn belief that good design is mostly good judgment.
The most important skillset in product design isn't pushing pixels. It's facilitation, persuasive storytelling, and stakeholder management. A beautifully rendered design that doesn't ship is worth nothing.
A purpose-driven approach and playing the long game is more effective than short-term tactics. Great products compound over years, not sprints.
Customer behavior is the clearest metric of business success. Solving problems starts with prioritizing outcomes, not features. Shipping more isn't the same as shipping right.
Reaching 80% of a goal is usually good enough. The time to reach 100% is better spent making progress elsewhere. Perfectionism is a tax the user never asked you to pay.
The role is changing. The old lines between UX, UI, product, research, and strategy are dissolving. What used to require a team of five specialists often now requires one hybrid designer who can navigate all of them — and a clear point of view about what's worth building in the first place.
AI is reshaping the craft faster than any tool shift I've lived through. The best designers I know are using it to compress research cycles, generate and evaluate variations, and spend more time on the 20% of work that actually matters: framing the problem, understanding people, and making decisions nobody else can make.
I think the next decade of product design belongs to generalists with taste — people who can zoom from business strategy to pixel detail, who understand systems as well as stories, and who know the right question is usually more valuable than the prettiest answer.
Get in touch for opportunities or just to say hi! 👋