Hello! I am

Prabhagaran Rakkiappan

(ProbG)

A metal sculptor turned UX designer and researcher, then product manager, and now a Design Manager specializing in building software platforms that unify Product, Design & Strategy across Conversational AI, Enterprise SaaS, Fintech, IT Monitoring Services, and Dealership Management Systems.

My journey runs from artist, to UI/UX designer and researcher at Zoho, to product manager at PayPal, to design manager at Tekion and Uniphore.

SG Interactive, Zoho, PayPal, Tekion, Uniphore
Enterprise Platforms & Product Design

Product Vision & Strategy

Platform design, product strategy, and design leadership across enterprise-scale products.

Case Study
Tesseract Developer Platform & Nexus API
Took a twice-failed project, aligned a fractured cross-functional team, and shipped two interconnected low-code platforms in 11 months.
Role
Design Manager
Duration
11 months
Team
1 → 3 designers
Scope
Strategy + Design + Product
60+
Dealerships onboarded
200+
Apps built
40%
Cost reduction
Case Study
From Chaos to Clarity: Redesigning Risk & Compliance at PayPal
Led design and product strategy for three interconnected platforms (SCM, HCP, CCI) that transformed how PayPal handled risk investigations globally — consolidating 10+ departments into 3 persona-based teams.
Role
Product Designer & PM
Duration
18 months
Team
Solo → Led 2 designers
Products
SCM · HCP · CCI
78%
AHT reduction
44%
CSAT improvement
10→3
Dept consolidation
Case Study
Transforming Contact Center Intelligence: From Fragmented Tools to Unified AI-Powered Agent Experience
Spearheaded an 8-month initiative to redesign our conversational AI platform, consolidating three disconnected products — Agent Assist, Chat/WhatsApp, and Email — into a unified, LLM-powered agent experience.
Role
Design Manager & Research Lead
Duration
8 months
Team
4 designers
Scope
Agent Assist + Unified Studio
46%
AHT reduction
38%
Agent satisfaction
68%
Self-service resolution
Case Study
Unifying 7 Products Into One Platform
A strategic initiative to consolidate fragmented enterprise products into a cohesive, unified platform experience — reducing login friction, support costs, and onboarding time.
Role
UX Lead
Duration
6+ months
Scope
Enterprise SaaS
Products
7 → 1 Platform
1
Unified login
85%
Faster navigation
60%
Fewer support tickets
Design Leadership Case Study

From Chaos to Clarity:
Redesigning Risk &
Compliance at PayPal

When I joined PayPal's Risk & Compliance team in 2018, investigators were managing fraud detection, money laundering prevention, and regulatory compliance using Excel spreadsheets and juggling 6+ different applications per case. Over 18 months, I led the design and product strategy for three interconnected platforms that transformed how PayPal handled risk investigations globally.

Role
Product Designer and Product Manager
Duration
August 2018 – Early 2020 (18 months)
Location
PayPal Global (Bangalore, San Jose, Austin, Omaha)
Team
Solo designer/PM → Led 2 designers
78%
Reduction in
Handle Time
~50→10
Clicks per case
reduction
44%
Improvement in
Customer CSAT
10→3
Departments consolidated
into personas

Recognition: $1,000 Spot Award from Leadership · Promoted to Conversational AI Team

Overview

The Challenge

PayPal processes millions of transactions daily. Every transaction flows through SINE (Scan Engine), a real-time risk screening system that flags potential money laundering, terrorist financing, sanctions violations, PEP matches, fraud, and policy violations. Flagged transactions become cases routed to one of 10+ specialized departments: PEP, SAR, AML, DD, EDD, CDD, ODD, BRM, Fraud, Underwriting, and Global Investigations.

The situation when I arrived:

  • 10+ siloed departments — no shared case visibility, no communication protocols, redundant investigations on the same customer
  • ~50 clicks per case across 6 apps — Email, Excel, Attack, Admin, Norkom, World-Check, each with separate logins and manual copy-paste
  • Excel-based case management — a compliance rule kept data out of internal systems, so investigators ran financial crime cases in spreadsheets passed via email
  • CSAT collapse — redundant verification requests, no centralized customer intelligence, high investigator burnout

The solution emerged as three interconnected platforms: SCM (Simplified Case Management) to unify investigation tools, HCP (Holistic Customer Profile) as a single source of truth for customer data, and CCI (Customer Centric Insights) to reorganize departments into persona-based teams with a modular widget platform.

Part 1: Discovery & Problem Space

Learning a New Domain

August 2018. I joined as a Product Manager, though my background was in product design. Leadership specifically wanted someone who could bridge both disciplines. My first challenge: I knew nothing about payments, compliance, or risk operations. I spent my first two weeks just learning acronyms — chargebacks, OFAC, sanctions lists, false positive detection rates, KYC vs. CDD vs. EDD.

I could have faked expertise. Instead, I embraced being a beginner. I asked “stupid questions.” I took notes obsessively. I built a glossary. This beginner’s mindset became my superpower — I wasn’t constrained by “this is how we’ve always done it.”

Research methodology. I conducted extensive ethnographic research over 3 months: 30-40 investigator interviews across departments and seniority levels, multiple office locations, supervisor interviews, and full-shift shadowing sessions. Methods included contextual inquiry, task analysis, journey mapping, pain point identification, and cognitive load assessment — documented in Miro, Google Sheets, and screen recordings.

I created detailed journey maps for each department covering six phases (Case Assignment, Information Gathering, Analysis, Documentation, Approval/Escalation, Case Closure) — mapping actions, tools, time spent, pain points, emotional state, and failure points for each.

Key Research Findings

The Excel Nightmare. Every department managed cases in Excel spreadsheets passed around via email — not a shared database, not a ticketing system. A typical workflow: receive case email, open personal Excel tracker, log into Attack (copy transaction details), log into Admin (copy personal data), open Norkom (search risk flags), open World-Check (check against lists), manually compare, document in Excel, email supervisor, wait for response. For. Every. Single. Case.

Tool Fragmentation. Minimum 6 applications per case (Email, Excel, Attack, Admin, Norkom, World-Check) — some departments added LexisNexis, Dow Jones, and Google Maps for manual address verification. Each required separate login and different UI patterns. Average: ~50 clicks per case.

The Aha Moment. A senior investigator told me: “PayPal is one company, but we use sooooo many tools for investigation.” World-class engineering, billions in payments — yet investigators were copying data manually between 6 systems, spending more time on tooling than actual investigation.

Departmental Silos. No shared case visibility (PEP couldn’t see SAR’s flags). No communication protocols. Same user under review by 3 departments simultaneously without anyone knowing. Duplicated effort, inconsistent decisions, frustrated investigators — and the real victims were customers.

Part 2: Strategy & Approach

Risk as a Service

Leadership’s original vision was ambitious: build a case management platform so good that PayPal could offer it as a product to other companies. For that to work, we needed world-class UX, scalability, configurability, and intelligence. My strategy: start with one department (SAR) to prove the concept, earn trust through execution, expand iteratively, build modular reusable components, and think platform — not point solutions.

Design Principles

Reduce Cognitive Load Single-page case views. Progressive disclosure. Visual hierarchy guiding attention.
Eliminate Redundancy Don’t make users find data that exists elsewhere. Automate what machines can do. Pre-fill with known info.
Contextual Intelligence Surface relevant data by case type. Highlight anomalies. Decision support, not just information display.
Seamless Workflows Approvals and escalations built in. No email back-and-forth. Clear next actions at every step.
Audit & Compliance Every action logged. Full audit trail for regulators. Documentation templates for consistency.
Human-Centered Designed for 8-hour shifts. Keyboard shortcuts. Customizable dashboards. WCAG compliant.

Making UX Part of PDLC

One of my biggest battles. At PayPal, the traditional flow was: PM gathers requirements, engineering builds, designer comes in at the end to “make it pretty.” I pushed for a new model — Discovery (PM + Designer together), Definition (requirements WITH design input), Design (solutions validated with business), Validation (user testing BEFORE engineering builds), Development (designer stays involved), Launch (measure, learn, iterate).

Resistance was fierce. I won by leading with results — prototypes that demonstrated ROI, data that proved design impact, and a collaborative approach that made PMs and engineers co-owners. By the time we shipped PEP, UX was embedded in PDLC. Design reviews at every sprint. User testing non-negotiable. This cultural shift was as important as the products themselves.

Part 3: Execution — SCM (Simplified Case Management)

Phase 1: SAR

We chose SAR as the pilot because it had the most complex workflow (if we could crack this, we could crack anything), high volume (hundreds of cases daily), critical compliance function (reports go to federal regulators), and engaged stakeholders.

I mapped every data point investigators needed and organized them into a hierarchical widget structure: a Primary Panel (always visible) with case header, transaction summary, and quick actions; and Secondary Panels (collapsible) for user profile, transaction details, risk indicators, related cases, external intelligence, investigation notes, evidence, and audit trail. Low-fi wireframes in Sketch, then moderated usability testing with 5 SAR investigators revealed key needs: keyboard shortcuts, color-coded priority, inline notes, drag-and-drop evidence attachment.

Key features shipped: single-page case view (no more tabbing between 6 apps), contextual collapsible widgets with drag-and-drop reorder, embedded Norkom/World-Check intelligence (auto-pulled on case open), inline rich-text notes with auto-save, smart actions (pre-filled dismiss templates, auto-populated info requests, one-click escalation), full audit trail, and dashboard with personal/team queues and filters. Built with React, Redux, RESTful APIs, SSO, encrypted data with no local storage.

↓45%
Average Handle Time
From ~60 min to ~33 min per case.
↑60%
Cases Closed Per Day
Per investigator per day.
↓30%
Error Rate
Fewer missed steps, better documentation.
4.6/5
Satisfaction
“This is a game-changer.”

Phase 2: PEP & The Google Maps Innovation

During shadowing, I noticed a tedious pattern: investigators would open a PEP case, pull the user’s address from Admin, pull the PEP hit address from World-Check, open Google Maps in a new tab, paste both, check the distance. If >500km, it’s a different person — case dismissed. This happened hundreds of times per day.

I proposed automating this with Google Maps API. Engineering pushed back (“nice-to-have,” “API costs money”). Instead of arguing, I built proof: a clickable InVision prototype with real anonymized data and functional Maps integration. A/B test with 20 PEP investigators, 50 cases each, 2 weeks. Results: 67% AHT reduction for distance-based dismissals, 22% of cases auto-dismissed, test group closed 40% more cases per day.

ROI: Investment: ~$15,000 (eng time) + $500/month (API). Return: ~500 cases/day × 3.5 min saved = 29 hours/day = $530,000/year savings. Payback period: 1 month. Approved immediately. Built in 2 weeks.
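The shipped feature used the Google Maps API, but the core dismissal heuristic can be approximated in a few lines. The sketch below is illustrative only: function names, coordinate inputs, and the use of a plain haversine calculation (instead of the Maps API) are assumptions.

```typescript
// Illustrative sketch, not the shipped integration: the production feature
// called the Google Maps API; here a haversine great-circle distance stands in.

interface GeoPoint {
  lat: number; // degrees
  lng: number; // degrees
}

// Great-circle distance in kilometres between two coordinates.
function haversineKm(a: GeoPoint, b: GeoPoint): number {
  const R = 6371; // mean Earth radius, km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// The rule from the case study: if the user's address and the PEP hit's
// address are more than 500 km apart, the match is likely a different person.
function suggestDismissal(userAddr: GeoPoint, pepHitAddr: GeoPoint): boolean {
  return haversineKm(userAddr, pepHitAddr) > 500;
}
```

Automating this one comparison is what removed the open-Maps-paste-compare loop that investigators repeated hundreds of times a day.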

Phase 3: Scaling to Other Departments

With SAR and PEP proven, we rolled out SCM to DD, EDD, CDD, ODD, and BRM. Instead of building from scratch, I designed modular, reusable components — universal Case Header, configurable User Profile Widget, department-specific Transaction and Risk Indicators widgets, universal Notes and configurable Actions panels. Reused 70% of code across departments.

By end of 2019, SCM was live across 6 departments: applications per workflow dropped from 6+ to 1, clicks per case from ~50 to 10-15, investigator productivity up 2-3x, training time down 40%. SCM was a success. But I noticed something troubling.

Part 4: The Problem Nobody Asked Me to Solve

The Customer Satisfaction Crisis

Mid-2019. SCM was live. Investigators were happy. Leadership was celebrating. But I noticed in a weekly ops review: CSAT scores for Risk & Compliance interactions were plummeting. Complaint emails spiking. Escalations to support up 40%. Nobody asked me to look into this — my job was internal tools, not customer-facing experience. But if we made investigators more efficient while customers were miserable, what were we actually accomplishing?

I reached out to Marketing, negotiated access to low-CSAT users, sent a survey to 300 who rated us 1-2 stars (42% response rate), and got 85 opt-ins for follow-up interviews. Finding: 35% of complaints were about chat support (not my problem — but would become my next role). 65% were about redundant verification requests.

“It looks like I have to send some ID proof or the other to PayPal every day.”

“I got three emails in one day asking for the same documents. Don’t you people talk to each other?”

What was happening: a user’s transaction gets flagged for multiple reasons — AML, EDD, and PEP simultaneously. Three separate cases, three investigators, three independent emails requesting overlapping information. From the user’s perspective: disorganized, inefficient, intrusive. We had optimized investigators at the expense of customers. Departments didn’t talk to each other.

I built a presentation for the VP of Risk & Compliance: declining CSAT, root cause analysis, real customer quotes, business impact (attrition risk, 40% more support tickets), and the proposed solution — HCP, a single source of truth with widget architecture. Their reaction: “This is brilliant. Why didn’t we think of this before?” Approved on the spot.

Part 5: HCP (Holistic Customer Profile)

Single Source of Truth

HCP needed to be comprehensive, contextual, accessible across departments with permissions, auditable, and secure. The core concept: a persistent customer profile that investigators could reference during any investigation. Instead of starting from scratch every time, they’d see previous cases, verification status, risk history, and transaction patterns.

I designed it as a modular widget system with 10 core widgets:

Profile Name, email, phone, address, account status
Linked Accounts Family, business, associated emails
Financial Payment methods, balances, limits
Primary Info Core KYC data, verified identity, nationality
Lifetime Highlights Account age, transaction volume, lifetime value
Session Assets Recent logins, IPs, devices used
Documents Uploaded IDs, proofs of address, bank statements with status and permissions
Audit Trail Every action — who, what, when — filterable and exportable
Account Events Creation, limits applied, restrictions, password resets
Red Flags Active risk indicators, fraud alerts, sanctions matches, color-coded severity

Three iterations to get it right. V1 (full-page dashboard) was overwhelming — “too much info for every case.” V2 (tabbed interface) forced clicking through tabs to find things. V3 (collapsible widgets) nailed it: core widgets expanded by default, secondary collapsed but visible, user-customizable saved layouts. Built on React with GraphQL for efficient widget-level data fetching, Redis caching, lazy-loading, and role-based access control.

↓82%
Redundant Requests
Documents stored once, used everywhere.
↑44%
Customer CSAT
Risk & Compliance interactions.
↓37%
Complaint Emails
Customer complaint volume.
↓56%
Context Gathering
Time to understand customer history.

“HCP changed everything. I can see a user’s full history before I even start investigating.” — Investigator

“You identified a problem we didn’t know existed and solved it.” — Leadership

Featured in internal PayPal blog. Invited to quarterly leadership summit. $1,000 spot award.

Part 6: CCI (Customer Centric Insights) — The Grand Vision

Breaking Down Silos

Late 2019. HCP was live. But a PM and I kept having the same conversation: departments still worked in silos. A PEP investigator could see the customer profile, but not SAR’s investigation notes. What if we went further?

The hypothesis: instead of 10+ departments with unique tools, workflows, and training — organize around 3 core personas. One modular platform that adapts to each. Investigators work across personas. Shared intelligence. Massive OPEX reduction. Build tools for job functions, not departments.

The Global Research Tour

August 2019. I traveled to 5 PayPal offices in 11 days (San Jose, Austin, Phoenix, Omaha, Chicago). 60+ interviews, group workshops with card sorting and workflow mapping, shadowing sessions, and supervisor interviews. Despite 10+ department names, the underlying job functions fell into three clear categories:

Persona 1: Identity Specialist
PEP, KYC, CDD, EDD, ODD
Verify identity, assess customer risk. “Who is this person? Are they who they claim?” Needs: identity documents, address history, watchlist screening, adverse media.
Persona 2: Investigative Analyst
SAR, AML, Fraud, Global Investigations
Investigate suspicious transactions, detect criminal activity. “Is this part of a larger pattern?” Needs: transaction details, patterns, counterparties, payment flows.
Persona 3: Transactional Specialist
BRM, Underwriting
Assess business activity, ensure policy compliance. “Is this business legitimate?” Needs: merchant profile, transaction categories, disputes, chargebacks.

The insight: all three personas needed the same underlying customer data (HCP), viewed through different lenses. Same data, different priorities. Instead of 10 separate tools, one modular platform with configurable widgets per persona.

HCP’s 10 widgets expanded to 21 for CCI (adding Merchant Profile, Limitations, Business Info, CIP, Device, Transaction Activity, Alias, Disputes/Claims/Chargebacks/Withdrawals, Payment Flow Breakdown, Related Case Summary, Counterparty Highlights). Each persona got a pre-configured template with default, secondary, and hidden widgets. 12-column grid layout with drag-and-drop, resize handles, persistent saved layouts. Adding a new persona = new widget configuration, no engineering required.
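The "new persona = new configuration" claim can be made concrete with a sketch. The shape below is an assumption for illustration; widget and persona names follow the case study, but the actual template schema is not documented here.

```typescript
// Hypothetical sketch of a CCI persona template: each persona is pure
// configuration data, so adding one requires no engineering work.

type Visibility = "default" | "secondary" | "hidden";

interface PersonaTemplate {
  name: string;
  widgets: Record<string, Visibility>;
}

const identitySpecialist: PersonaTemplate = {
  name: "Identity Specialist",
  widgets: {
    profile: "default",
    documents: "default",
    redFlags: "default",
    transactionActivity: "secondary",
    merchantProfile: "hidden",
  },
};

// Widgets expanded on first load for a given persona.
function defaultWidgets(t: PersonaTemplate): string[] {
  return Object.entries(t.widgets)
    .filter(([, v]) => v === "default")
    .map(([id]) => id);
}
```

A fourth persona would be one more `PersonaTemplate` object, which is exactly what makes the architecture scale without engineering involvement.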

Rollout

Phase 1 pilot (Oct-Nov 2019): 30 investigators, 10 per persona. Phase 2 (Dec 2019): full SAR, PEP, AML, Fraud (300 investigators). Phase 3 global (Jan-Feb 2020): all departments, coordinated with EMEA, APAC, LATAM, translated into 5 languages. By February 2020: 1,200+ investigators using CCI daily.

10 → 3
Team Consolidation
3 training programs, 3 persona leads, investigators flexible across personas.
↓63%
Training Time
Teach widgets once, apply everywhere.
↓78%
AHT Reduction
Across departments.
1,200+
Global Users
Live across EMEA, APAC, LATAM.

“This is more than a product. You’ve fundamentally rethought how we approach risk investigations. This will be the model for years to come.” — Leadership

Lessons Learned

What Made This Work

Immersion Over Assumptions
I shadowed 30-40 investigators, traveled to 5 offices, spent hundreds of hours understanding workflows. You can’t design solutions for users you don’t understand.
Data Beats Opinions
When engineering pushed back on Google Maps, I didn’t argue. I built a prototype, ran an A/B test, presented results (67% AHT reduction). Opinions are subjective. Data is objective.
Solve Problems Nobody Asked You to Solve
HCP wasn’t in my mandate. I noticed CSAT declining, ran rogue research, built a business case, pitched a solution. Great designers don’t wait for permission.
Think Modular, Think Future-Proof
CCI’s modular architecture means adding a 4th persona is just a new widget configuration — no engineering required. Modularity is scalability.
Design is Strategic, Not Cosmetic
This wasn’t about making things pretty. It was about cognitive load, workflows, redundancy, and measurable business outcomes. That’s what earned respect.
Collaboration is Currency
Started alone. Ended leading 2 designers, partnering with PMs, engineers, and compliance officers, influencing VPs. Best products are built by teams.
Advocate for UX in the Lifecycle
Won by proving value, leading by example, and making engineers partners. Culture change takes time, but results speak louder than words.
Understand the Domain Deeply
I asked “stupid questions,” built a glossary, studied regulations. By the end I could speak the language of compliance officers.
Metrics Tell the Story
Every presentation had numbers: AHT reduction (67%, 78%), CSAT (+44%), productivity (2-3x). Quantify impact. Tell the story with data.
Design for Humans, Not Systems
Investigators get tired, make mistakes, have bad days. Reduced cognitive load, decision support, recoverable errors, optimized for 8-hour shifts.

Reflection

When I joined PayPal in August 2018, I was new to payments, new to compliance, a designer proving myself in a PM role. By early 2020, I’d shipped 3 major platforms, reduced investigation time by up to 78%, improved customer satisfaction by 44%, consolidated 10+ departments into 3 personas, and transformed an entire organizational workflow.

Great design isn’t about interfaces. It’s about understanding systems, empathizing with humans, and having the courage to challenge the status quo. SCM, HCP, and CCI didn’t just make processes faster — they made investigators’ lives better, made customers happier, transformed organizational structure, and proved that design drives business transformation.

Components
35+
Reusable React components
Widgets
21
Investigation widgets
Templates
12
Layout templates
Accessibility
AA
WCAG 2.1, <2s load
Design Leadership Case Study

From Fragmented Tools to
Unified AI-Powered
Agent Experience

As Product Design Manager and Design & Research Lead, I spearheaded an 8-month initiative to redesign our conversational AI platform, consolidating three disconnected products — Agent Assist (voice), Chat/WhatsApp support, and Email self-service — into a unified, LLM-powered agent experience.

Role
Product Design Manager & Research Lead
Duration
8 months
Team
4 designers (2 Product, 1 Design System, 1 Manager)
Scope
Agent Assist + Unified Agent Studio
46%
Reduction in average
call handling time
38%
Improvement in agent
satisfaction scores
68%
Increase in self-service
call resolution
100%
Alert visibility through
redesigned patterns
Part 1: Agent Assist Redesign

The Challenge

Our contact center solution suffered from critical fragmentation. Three separate products — each with distinct configuration systems — created operational inefficiencies and poor user experiences.

Product Ecosystem Issues

Agent Assist Used intent-based NLP detection
Chat Support Relied on keyword-based systems
Email Self-Service Operated independently
Cross-Product No communication or data sharing between products
Configuration Inconsistent design-time configuration across platforms
Runtime Limited capabilities (only Agent Assist and Chat had active assistance)

Critical UX Problems Identified

1
Cognitive Overload
Agents managed a 70-30 split-screen interface (70% client CRM, 30% our tool) while simultaneously navigating legacy, non-responsive systems.
2
Hidden Alerts
Critical notifications were buried in tabs, causing agents to miss time-sensitive information.
3
Context Switching
Constant toggling between applications, tabs, and systems disrupted call flow.
4
Fragmented Workflows
Each product operated in isolation, forcing agents to learn and manage multiple systems.
Discovery & Research Methodology

Building a Complete Picture

As Design & Research Lead, I architected a comprehensive discovery phase combining multiple research methodologies to build a complete picture of user needs and market opportunities.

1. Feature Consolidation
Auditing Three Products
Working closely with Product Management, I facilitated workshops to audit and consolidate capabilities across all three products, identifying overlaps, gaps, and opportunities for integration.
2. UX Audit
Heuristic Evaluation
I conducted a heuristic evaluation of all existing products, documenting interface inconsistencies, interaction pattern conflicts, information architecture problems, accessibility gaps, and visual design inconsistencies.
3. Field Research
Multi-Site Journey Mapping
I led on-site research across multiple contact centers, varying in size and industry vertical. This immersive research revealed real-world workflows, pain points, and workarounds that quantitative data couldn’t capture.

Field Research Approach

Methods Contextual inquiries with call center agents, shadowing during live calls
Mapping End-to-end journeys for different call types, environmental factors
Key Insights Agents processed information non-linearly. Legacy CRM forced rigid workflows. Alert fatigue from poorly timed notifications. High cognitive load from managing multiple mental models.
4. Competitive Analysis

I analyzed 10-15 competitors to understand market positioning and identify feature gaps.

Key Insights:

  • Industry trend toward unified workspaces over split-screen interfaces
  • Proactive assistance gaining adoption over reactive lookup tools
  • Real-time transcription and sentiment analysis becoming table stakes
  • Multi-modal support (voice, chat, email) expected in single interface
5. Strella AI User Research

Leveraged Strella AI for advanced user research and feedback analysis with quantitative rigor.

Methodology:

  • Prototype testing with real contact center agents
  • Task-based usability studies
  • Sentiment analysis of user feedback
  • Behavioral pattern identification

Critical Findings from Strella AI Research

Real-time Data Sync Agents required conversation RAG (Retrieval-Augmented Generation) updates at call completion, not the current 4-hour delay. This directly impacted our technical architecture decisions.
Resolution Tracking Surfacing resolution status and related case numbers as prominent entities proved essential for agent efficiency.
Info Architecture Historic conversations needed restructuring with a mini-timeline view for better readability and accessibility, rather than dense text blocks.
Caller Profile This label didn’t resonate with users. Agents needed journey-specific context or caller communication preferences (e.g., “speaks slowly, requires repetition”) rather than generic profile data.
Contextual Integration Opening scripts felt disconnected. Agents wanted integrated, contextual information rather than separate information blocks requiring cognitive assembly.
Visual Hierarchy Color usage in high-stress contact center environments demanded exceptional care — it drove attention more powerfully than in typical applications.
Interaction Efficiency Expanding/collapsing call history needed smoother, more accessible controls for rapid information access during live calls.
Entity Recognition Opportunity to use LLM capability to extract case IDs generated during calls and use them as journey identifiers, reducing manual data entry.
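The entity-recognition idea above can be illustrated with a deterministic stand-in. The production approach relied on LLM extraction; the pattern match below, and the `CASE-nnnnn` identifier format, are assumptions made purely for illustration.

```typescript
// Simplified stand-in for the LLM-based entity extraction described in the
// research findings: pull case IDs out of a live-call transcript so they
// can be used as journey identifiers without manual data entry.
// The CASE-nnnnn format is an assumed convention, not the real one.

function extractCaseIds(transcript: string): string[] {
  const matches = transcript.match(/\bCASE-\d{4,8}\b/g) ?? [];
  return Array.from(new Set(matches)); // de-duplicate repeated mentions
}
```

In the shipped design, extracted IDs would be surfaced as prominent entities alongside resolution status, per the Strella AI findings.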
Design Exploration & Iteration

Redesigning Agent Assist

With research insights in hand, I led my team through an intensive ideation and prototyping phase, emphasizing quantity over initial quality to explore the full solution space.

Exploration Philosophy

Rather than converging prematurely on a single approach, we deliberately explored multiple directions:

  • Rapid low-fidelity prototyping sessions
  • Parallel exploration of competing concepts
  • Cross-team critique and feedback loops
  • User validation at key decision points

Key Design Pivots

1. Abandoning the 70-30 Split. Research revealed agents actually preferred full-screen tab switching over split views. The 70-30 ratio prevented focus during calls and didn’t accommodate non-responsive legacy systems. We redesigned around tab-based navigation with intelligent context preservation.

2. Alert Visibility Redesign. Instead of hiding alerts in tabs, we introduced a persistent alert panel with visual hierarchy, context-aware alert prioritization, mandatory acknowledgment patterns for critical notifications, and smart alert dismiss interaction that required agent confirmation. This resulted in 100% alert visibility and zero missed critical notifications post-launch.
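The mandatory-acknowledgment pattern can be sketched as a small state rule. Names, priority levels, and the API shape here are illustrative assumptions, not the shipped implementation.

```typescript
// Hedged sketch of the mandatory-acknowledgment pattern: a critical alert
// cannot be dismissed until the agent has explicitly acknowledged it, which
// is what guarantees 100% visibility for time-sensitive notifications.

type Priority = "info" | "warning" | "critical";

interface Alert {
  id: string;
  priority: Priority;
  acknowledged: boolean;
  dismissed: boolean;
}

function acknowledge(alert: Alert): Alert {
  return { ...alert, acknowledged: true };
}

// Dismissal is refused for unacknowledged critical alerts.
function dismiss(alert: Alert): Alert {
  if (alert.priority === "critical" && !alert.acknowledged) {
    return alert; // no-op: explicit confirmation required first
  }
  return { ...alert, dismissed: true };
}
```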

3. Contextual Information Architecture. Rather than forcing agents to hunt for information across tabs, we surfaced relevant information based on call context, introduced collapsible sections with intelligent defaults, designed a mini-timeline for customer journey visualization, and integrated opening context directly into the main call interface.

Final Design Solution

The redesigned Agent Assist interface transformed the agent experience through strategic information architecture, intelligent assistance, and seamless workflow integration.

1. Unified Call Interface
Single-pane focus during active calls with context-aware information surfacing, integrated customer journey timeline, and real-time conversation RAG updates.
2. Intelligent Alert System
Persistent, non-intrusive alert panel with color-coded priority system informed by Strella AI research on color psychology in high-stress environments. Mandatory acknowledgment for critical alerts with contextual alert grouping.
3. Customer Context Panel
Journey-specific insights replacing generic “Caller Profile,” historical interaction timeline with expand/collapse functionality, related case surfacing with resolution status, and communication preference indicators.
4. Task Guidance Integration
Real-time suggestions based on conversation analysis, relevant answer system powered by KaaS (Knowledge as a Service), proactive next-step recommendations, and resolution pathway suggestions.

Impact & Outcomes

Qualitative Improvements

  • Reduced cognitive load through streamlined information architecture
  • Increased agent confidence with proactive guidance
  • Improved first-call resolution through better context availability
  • Enhanced training efficiency for new agents

Business Impact

  • Reduced operational costs through improved efficiency
  • Increased customer satisfaction from faster resolution times
  • Decreased agent turnover from improved work experience
  • Competitive differentiation in enterprise sales
Part 2: Unified Agent Studio

From NLP to Multi-Agent LLM Architecture

While redesigning the agent-facing experience, we recognized a fundamental opportunity to transform our backend architecture. The emergence of Large Language Models presented a chance to eliminate the complexity of managing separate intent-based and keyword-based systems. This wasn’t just a technical migration — it was a fundamental rethinking of how we build, configure, and deploy conversational AI solutions.

The Architectural Challenge
01
Manual intent configuration and training
02
Separate keyword libraries for different channels
03
Complex cross-product orchestration logic
04
Limited adaptability to new use cases
Deployment: Weeks
The Agentic Solution
01
Single LLM-powered orchestration layer replacing multiple detection systems
02
Multi-agent architecture for specialized task handling
03
Dynamic intent detection without manual configuration
04
Unified knowledge base across all channels
Deployment: Days

Design Challenge: Unified Configuration Experience

Creating the Unified Agent Studio meant solving a complex design problem: how do we give non-technical users the power to create, configure, and orchestrate multiple AI agents without requiring engineering expertise?

Key Design Requirements

01 Single configuration interface for all channels (voice, chat, email, WhatsApp)
02 Visual flow builder for agent orchestration
03 Knowledge base integration and management
04 Multi-agent coordination and handoff design
05 Testing and simulation capabilities
06 Version control and deployment management

Research: Flow Builder Competitive Analysis

As Design Lead, I conducted extensive research into flow builder interfaces across the market, analyzing more than a dozen competitors to understand successful patterns and identify opportunities for innovation.

Competitors Analyzed

Enterprise Automation Salesforce Flow, Microsoft Power Automate
Conversational AI Dialogflow, Amazon Lex, Rasa
No-code / Low-code Zapier, Make, n8n
Workflow Orchestration Airflow, Prefect

Key Research Insights: Node-based interfaces provided the best balance of power and usability. Contextual property panels reduced cognitive load. Inline validation prevented downstream errors. Visual feedback during flow execution aided debugging. Template libraries accelerated common use cases.

Design Exploration: Flow Builder Iterations

Understanding that the flow builder would be the heart of the Unified Agent Studio, I led multiple rounds of exploration to find the optimal balance between simplicity and power.

Approaches Explored

Linear Timeline Good for simple flows, broke down with complexity
Swimlane Model Excellent for showing parallel processes, but steep learning curve
Node-based Graph Best balance of flexibility and comprehension
Decision Tree Clear logic flow, limited for non-linear conversations
State Machine Powerful but too technical for target users

We ultimately converged on a hybrid node-based approach that combined the intuitiveness of visual flows with the power of state management.

Final Solution: Core Capabilities

The Unified Agent Studio provides a comprehensive design-time environment for creating, configuring, and managing AI agents across all channels.

01
Visual Flow Builder
Proprietary node-based interface for designing conversational flows, configuring multi-agent orchestration and handoffs, defining conditional logic, integrating with external APIs, and testing flows in real-time with simulation mode.
02
Knowledge as a Service (KaaS)
The intelligence layer powering our agents — vector database creation from client knowledge bases, real-time retrieval for answer generation, dynamic knowledge updates without flow reconfiguration, and multi-source knowledge synthesis.
03
Task Guide Configuration
Informed by Strella AI research, Task Guide provides agents with intelligent, context-aware assistance — defining task sequences, configuring trigger conditions, customizing suggestion templates, and delivering real-time coaching interventions.
04
Unified Configuration Dashboard
A single interface for managing all channels (voice, chat, email, WhatsApp), agent creation, knowledge base connections, deployment versioning, and analytics and performance monitoring.
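The KaaS retrieval step described above can be sketched in miniature. This is an illustrative example only — the chunk texts, class name, and bag-of-words "embedding" are stand-ins for the platform's actual vector database and embedding model:

```python
# Minimal sketch of KaaS-style retrieval: embed knowledge chunks once, then
# fetch the closest chunks for each live utterance so answers stay grounded
# without any flow reconfiguration. Toy embeddings, hypothetical names.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeIndex:
    def __init__(self, chunks: list[str]):
        # Built once per knowledge-base sync; updates need no flow changes.
        self.chunks = [(c, embed(c)) for c in chunks]

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.chunks, key=lambda cv: cosine(qv, cv[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

kb = KnowledgeIndex([
    "Refunds are processed within 5 business days.",
    "Warranty claims require a proof of purchase.",
    "Agents can escalate billing disputes to tier 2.",
])
print(kb.retrieve("How long do refunds take?", k=1))
```

Because retrieval ranks against whatever chunks are currently indexed, swapping or adding knowledge sources changes answers immediately — the property that lets KaaS update "without flow reconfiguration."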

Flow Builder Key Differentiators

Context Preservation Visual indicators show data flow between nodes
Agent Specialization Clear distinction between single-agent and multi-agent nodes
Inline Validation Real-time error detection and suggestions
Collaborative Editing Multi-user support with conflict resolution

Technical Innovation: LLM-Powered Intent Detection

The shift from manual intent configuration to LLM-powered detection represented a fundamental change in how our system understands and responds to user needs.

Legacy Approach
Manual intent creation and labeling
Training data collection and annotation
Model training and tuning
Ongoing maintenance and updates
Limited adaptability to edge cases
LLM-Powered Approach
Automatic intent understanding from conversation
Zero-shot learning for new scenarios
Natural language orchestration logic
Self-improving through interaction
Seamless handling of ambiguity

Design Implications: This architectural shift allowed us to simplify configuration interfaces, reduce time-to-deployment from weeks to days, enable non-technical users to create sophisticated agents, support more natural and flexible conversations, and eliminate manual intent maintenance.
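The shift from trained classifiers to prompt-driven intent detection can be illustrated with a short sketch. Everything here is hypothetical — the intent labels, prompt wording, and the `fake_llm` stub stand in for the real model call — but it shows why adding an intent becomes an edit to a list rather than a retraining cycle:

```python
# Illustrative sketch: zero-shot intent detection via a single LLM prompt,
# replacing per-intent training data and keyword libraries. All names here
# are hypothetical examples, not the product's actual intents or prompts.
INTENTS = ["billing_dispute", "order_status", "cancel_subscription", "other"]

def build_intent_prompt(utterance: str, intents: list[str]) -> str:
    """New intents are supported by editing the list — no annotation step."""
    options = ", ".join(intents)
    return (
        "Classify the customer's message into exactly one intent.\n"
        f"Intents: {options}\n"
        f"Message: {utterance!r}\n"
        "Answer with the intent name only."
    )

def detect_intent(utterance: str, llm, intents=INTENTS) -> str:
    label = llm(build_intent_prompt(utterance, intents)).strip()
    return label if label in intents else "other"  # guard against model drift

# Stub standing in for a real model call, just to show the control flow.
def fake_llm(prompt: str) -> str:
    msg = [line for line in prompt.splitlines() if line.startswith("Message:")][0]
    return "order_status" if "order" in msg.lower() else "other"

print(detect_intent("Where is my order #4412?", fake_llm))  # order_status
```

The guard on the returned label matters in practice: constraining the model's free-text answer back to a known set is what keeps downstream orchestration deterministic.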

Design System Integration

As our team included a dedicated design system designer, we ensured consistency and scalability across the entire Unified Agent Studio.

  • Standardized component library for flow builder nodes
  • Consistent interaction patterns across configuration interfaces
  • Accessible color system (critical given Strella AI findings on color usage)
  • Responsive layouts for different screen sizes
  • Documentation for engineering handoff

This systematic approach accelerated development velocity and ensured quality across the expanding product surface area.

Leadership & Team Collaboration

Design Leadership in Practice

Team Structure & Allocation

As Product Design Manager, I structured a lean, high-performing team of 4:

2 Product Designers Focused on Agent Assist interface and Unified Agent Studio flows
1 Design System Designer Maintained consistency and built reusable components
Myself (Design Manager) UX Research, Design Strategy, Stakeholder Management, and hands-on design leadership
Research-Driven Decision Making
  • Established a culture of evidence-based design across the team
  • Leveraged multiple research methodologies to validate decisions and reduce risk
  • Created conviction to make bold decisions and defend them to stakeholders
Cross-Functional Collaboration
  • Partnered closely with Product Management on feature prioritization and roadmap
  • Worked with Engineering to ensure technical feasibility of LLM-powered features
  • Collaborated with Sales and Customer Success to understand market requirements
  • Engaged with end users throughout the design process
Team Development
  • Mentored designers in enterprise UX patterns and complex system design
  • Facilitated design critiques and knowledge sharing sessions
  • Encouraged exploration and experimentation during ideation phases
  • Created clear decision-making frameworks to move from exploration to execution
Stakeholder Management
  • Presented design direction to executive leadership
  • Aligned multiple business units around unified vision
  • Managed expectations during architectural transition to LLM
  • Communicated design rationale with compelling storytelling
What’s Next

The Game Plan

Building on the success of the Agent Assist redesign and Unified Agent Studio launch, we’ve mapped a clear path forward for continued innovation and value delivery.

Phase 1: Pre-call Intelligence

Objective: Empower agents with comprehensive context before calls begin

Key Initiatives:

  • Customer journey analysis and synthesis
  • Predictive issue identification
  • Recommended opening strategies
  • Proactive knowledge surfacing

Design Focus:

  • Information architecture for pre-call briefing
  • Integration with calendar and call routing systems
  • Personalization based on agent expertise
  • Mobile-optimized pre-call views for remote agents
Phase 2: Enhanced Task Guide with New UI

Objective: Evolve Task Guide from reactive suggestions to proactive coaching

Key Initiatives:

  • Real-time sentiment analysis and intervention
  • Adaptive guidance based on agent performance
  • Personalized coaching recommendations
  • Integration with quality assurance workflows

Design Focus:

  • Non-intrusive coaching delivery during active calls
  • Progressive disclosure of complex guidance
  • Agent preference and learning style adaptation
  • Supervisor visibility and override capabilities

Strategic Priorities

Timeline Initiatives
Near-term (6 months) Iterate on Unified Agent Studio based on early adopter feedback. Expand KaaS capabilities with multi-modal knowledge sources. Enhance flow builder with advanced debugging tools. Build template library for common use cases.
Long-term (12-18 months) AI-powered flow optimization suggestions. Autonomous agent creation from business requirements. Multi-language and localization support. Advanced analytics and insight generation.
Reflection & Key Learnings

Key Learnings

This 8-month journey from fragmented products to a unified, AI-powered experience taught valuable lessons about design leadership in enterprise AI:

1. Research Depth Creates Design Confidence
Investing in comprehensive research — field studies, competitive analysis, and Strella AI validation — gave us conviction to make bold decisions and defend them to stakeholders.
2. Architectural Shifts Require Design Thinking
The transition to LLM-powered multi-agent systems wasn’t just technical — it fundamentally changed how users would interact with the system. Design thinking was essential to translate architectural capability into user value.
3. Simplification Is the Hardest Design Challenge
Consolidating three complex products into one coherent experience required ruthless prioritization and willingness to eliminate features that didn’t serve the core user workflow.
4. Measure What Matters
Defining clear success metrics (46% AHT reduction, 38% satisfaction improvement) created alignment and made the business impact undeniable.
5. Team Structure Drives Success
A small, senior team with clear ownership areas moved faster than a large team with distributed responsibility. The dedicated design system resource paid dividends in consistency and velocity.

Conclusion

The Unified Agent Studio project demonstrates how strategic design leadership can transform enterprise AI products. By combining deep user research, systematic competitive analysis, and bold architectural vision, we created a platform that not only solved immediate user pain points but positioned the company for the future of agentic AI.

The measurable impact — 46% reduction in handling time, 38% improvement in satisfaction, 68% increase in self-service resolution — validates the power of human-centered design in complex enterprise systems. More importantly, we created a foundation for continuous innovation that will serve customers and agents for years to come.

This case study represents not just a successful product redesign, but a model for how design leadership can drive business transformation in the age of AI.

Duration 8 months
Role Product Design Manager, Design & Research Lead
Team 4 designers (2 Product Designers, 1 Design System Designer, 1 Design Manager)
Methodologies Field research, competitive analysis, user journey mapping, Strella AI validation, iterative prototyping
Technologies LLM-powered multi-agent architecture, KaaS vector databases, real-time conversation RAG
Design Leadership Case Study

Tesseract Developer
Platform
& Nexus API

How I took a twice-failed project, aligned a fractured cross-functional team, and shipped two interconnected platforms in 11 months — building the design team from scratch along the way.

Role
Design Manager — IC & Leadership
Duration
11 months to MVP launch
Team
Solo → 3 designers (hired & built)
Scope
Strategy, Design, Product, Team
60+
Dealerships onboarded
within 6 months
200+
Custom applications
built by dealership teams
40%
Reduction in custom
development costs
85%
Apps built by non-developers
(validating no-code UX)
Starting Point

A project that had already
failed twice

I was brought into Tesseract specifically for this project. The VP of Design had seen my platform-building experience at PayPal and reached out because two previous design teams had already attempted this project and failed. The challenge was clear: Tesseract needed someone who understood the complexity of building developer-facing platforms — not just designing screens, but thinking through the entire ecosystem of tools, workflows, and abstractions that make a platform work.

When I joined, I had no idea how low-code platforms worked. I didn't know what an "entity" was. I didn't know what a "field" was in the context of application building. But I'd built complex platforms before, and I knew the design challenges weren't about understanding every technical detail on day one. They were about understanding the problem space, aligning the team, and making deliberate design decisions that serve real users.

Tesseract's core product, the Prism platform, was used by hundreds of dealerships. But every time a dealership had a unique use case, they had to go back to Tesseract for custom development — creating a costly bottleneck. The Developer Platform would let dealerships build their own applications.

The root cause of previous failures wasn't design skill — it was organizational alignment. The development team had fundamental open questions about scope, users, and direction that had never been resolved. Engineering and product weren't talking to each other. People were building toward different visions. No amount of wireframes would fix that.

Design Leadership Intervention

Cross-functional stakeholder
alignment workshop

My first move was to design and facilitate a structured stakeholder alignment workshop — rooted in collaborative discovery and participatory design principles. This wasn't a standard kickoff. It was a deliberate intervention designed to surface hidden assumptions, resolve conflicting mental models, and establish a shared product vision across engineering, product, and design.

Session 1
Problem Framing & Assumption Mapping
Each function independently documented their understanding of the problem, users, and product vision. When we compared these maps, the misalignment was immediately visible — engineering was building a developer-centric tool while product envisioned a no-code solution for business users.
Session 2
User Segmentation & Prioritization
Using a modified Jobs-to-Be-Done framework, we mapped all potential user types, their contexts, and desired outcomes. A prioritization matrix (user urgency × business impact) determined which personas to serve first. The team collectively agreed — creating genuine buy-in.
Session 3
Goal Setting & Success Criteria
Using an OKR-style framework, we defined success for the first release. Each function contributed objectives and key results, ensuring engineering feasibility, business goals, and user outcomes were all represented. We left with a shared definition of done.

This workshop fundamentally changed the trajectory of the project. It was the highest-leverage activity of the entire engagement. The two teams that failed before me likely had talented designers. What they lacked was organizational alignment on what they were building and why.

Working Model

Deep collaboration with engineering

The workshop was the beginning of a collaboration model I maintained throughout the project. In platform design — especially low-code platforms — the boundary between design decisions and engineering architecture decisions is blurry. You can't design a good entity creation flow without understanding the data model. You can't design a workflow builder without understanding execution constraints.

Technical Deep-Dives
Recurring sessions with lead engineers walking through architecture — entity storage, workflow execution, permission cascading. I built domain knowledge; they gained design context. These were learning sessions, not status meetings.
Shared Problem-Solving
For challenges with technical implications — real-time App Builder preview, branching logic visualization — I brought engineers into the design process early. We whiteboarded and prototyped together, eliminating the wasteful design → rejection cycle.
Open Questions as Shared Artifacts
A living document visible to engineering, product, and design. Any unresolved question went here with an assigned owner. Weekly reviews ensured nothing fell through the cracks — directly addressing the root cause of previous failures.
Co-Created Component Library
Engineers contributed technical constraints on reusable components; I contributed interaction patterns and visual standards. Co-creation meant zero ambiguity at build time — no surprises, no rework.
User Research

Four personas identified,
two prioritized

Through the alignment workshop and contextual inquiry with dealership staff and internal developers, we identified four distinct personas. The critical design decision was narrowing our MVP scope to only two — the users with the most immediate pain whose needs would validate the platform's core value proposition.

Persona 01
Dealership Employees
Non-technical staff — service advisors, sales managers, operations — who encounter business problems daily but have zero coding skills. They need the simplest possible experience. No jargon, no complexity.
Persona 02
Dealership Developers
In-house IT teams at larger dealership groups. They understand technical concepts and want sophisticated tools, but they're not building from scratch — they want to move fast with powerful abstractions.
Persona 03
External Developers
Third-party developers or consultants. Technically proficient but lacking dealership domain knowledge. Deferred to avoid scope creep in the MVP.
Persona 04
Non-Dealership Business Users
Business users from adjacent roles who know what they want to build but can't code. Parked for future phases to maintain focus.

This narrowing gave us focus. Instead of trying to be everything to everyone — a trap that low-code platforms frequently fall into — we could design intentionally for two clear user types with the most immediate need.

Research

Competitive analysis &
internal product audit

Externally, we evaluated Salesforce, Microsoft Power Apps, Pega, ServiceNow, Zoho Creator, Airtable, Kissflow, Quickbase, AppSheet, Nintex and others across target audience, pricing, support, strengths, and weaknesses — benchmarking with quantitative scoring across ease of use, data control, workflow management, and platform compatibility.
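The quantitative benchmarking arithmetic is simple to sketch. The criteria weights and ratings below are illustrative placeholders, not the study's actual numbers:

```python
# Sketch of the competitive scoring: rate each platform 1-5 per criterion,
# then take a weighted average to rank them. Weights here are hypothetical.
WEIGHTS = {"ease_of_use": 0.35, "data_control": 0.25,
           "workflow_management": 0.25, "platform_compatibility": 0.15}

def weighted_score(ratings: dict) -> float:
    """Weighted average of a platform's 1-5 ratings across all criteria."""
    return round(sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS), 2)

print(weighted_score({"ease_of_use": 4, "data_control": 3,
                      "workflow_management": 5, "platform_compatibility": 2}))
```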

The market had bifurcated: simple tools that couldn't handle complex use cases, or powerful tools with steep learning curves. Our design opportunity was in the middle — genuinely simple for non-technical users while powerful enough for developers.

Internally, I led a comprehensive audit of every existing Prism application — every screen, workflow, data relationship, and edge case. This was a step the previous teams had not taken. We catalogued the most complex screens as stress tests: if the platform couldn't reproduce these, it wasn't powerful enough. The audit surfaced recurring UI patterns that became the foundation of our component library, and it gave engineering concrete specificity about what "flexible enough" actually meant.

Architecture Decision

Two-layer architecture:
Experience vs. Studio

The combined insights from competitive analysis, internal audit, and user research led to the most important structural design decision: splitting the platform into two distinct environments, each optimized for its primary user without compromising the other.

Experience Layer
Where apps are used
Clean, simple, focused on task completion. No technical jargon, no exposed complexity. Applications built in Studio appear here as polished tools that feel native to the Prism ecosystem.
→ Primary: Dealership employees (Persona 1)
Studio Layer
Where apps are built
Entity management, workflow builders, permission sets, governance tools. The full creation environment for developers and technically-inclined users. This is where the power lives.
→ Primary: Dealership developers (Persona 2)
Signature Contribution

The App Builder:
grid system & drag-and-drop

The App Builder needed to let users create anything from a simple data entry form to a full NOC dashboard with real-time monitoring widgets. Our internal audit confirmed the high bar of complexity it needed to match.

I designed a modular grid system that was flexible enough for any layout while maintaining visual consistency. It wasn't just a layout tool — it was a design system constraint ensuring every application built on the platform would feel cohesive and professional, regardless of who built it.

Snap-to-grid behavior prevented sloppy layouts without restricting creativity. A pre-built component library lowered the floor for beginners. WYSIWYG direct manipulation eliminated the cognitive gap between building and using. Responsive breakpoints were baked into the grid so users never had to think about responsive design.
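The snap-to-grid rule at the core of this behavior is a small amount of arithmetic. The cell sizes below are illustrative assumptions, not the platform's documented values:

```python
# Hedged sketch of snap-to-grid placement (illustrative numbers: a grid with
# 96 px columns and 8 px rows; the real cell sizes are assumptions here).
def snap(x: float, y: float, col_width: float = 96.0, row_height: float = 8.0):
    """Round a dragged widget's position to the nearest grid cell, so every
    layout stays aligned no matter who builds the app."""
    col = round(x / col_width)
    row = round(y / row_height)
    return col * col_width, row * row_height

# Dropping a widget at (130, 21) lands it on the nearest cell boundary.
print(snap(130, 21))  # (96.0, 24.0)
```

Rounding to cells rather than clamping to fixed slots is what lets the constraint enforce alignment without restricting where users can place things.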

The grid system was validated against every complex screen from our internal audit — inventory management, service scheduling, customer relationships, multi-panel reporting, and NOC-style monitoring. A single grid system supporting all of these proved the architecture was sound.

Complex UX Challenge

Flow Builder:
making logic tangible

The Flow Builder — if-then logic, conditional branching, data transformations, API calls through a visual interface — was where low-code platforms live or die. Competitive analysis showed this was a universal pain point: Pega alienated beginners, simpler tools couldn't handle complex logic.

We phased delivery in close collaboration with engineering. Phase 1 focused on linear workflows covering ~70% of actual use cases. Phase 2 added branching, loops, and error handling. The engineering team's input on execution constraints — synchronous vs. asynchronous operations, possible error states, runtime evaluation — directly shaped the interaction model. We introduced explicit "wait" nodes because engineers helped us understand certain operations couldn't be instantaneous.

The visual language used node-based representations with color coding for triggers, conditions, actions, and endpoints. We tested multiple metaphors before landing on the one that performed best with non-technical dealership staff.
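The node-and-branch model behind that visual language can be sketched as a small graph walk. The node names, kinds, and execution loop below are illustrative assumptions, not the platform's actual runtime:

```python
# Illustrative sketch of the node-based flow model: each node has a kind
# (trigger, condition, action, wait, endpoint) and its edges carry the branch
# labels a condition can take. Hypothetical names throughout.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str                                 # trigger | condition | action | wait | endpoint
    next: dict = field(default_factory=dict)  # branch label -> next node id

def run(flow: dict, start: str, ctx: dict) -> list:
    """Walk the flow from the start node, choosing a branch at each condition
    from ctx; returns the ordered trace of visited node ids."""
    trace, node = [], flow[start]
    while node:
        trace.append(node.id)
        if node.kind == "endpoint":
            break
        branch = ctx.get(node.id, "next")     # conditions read their answer from ctx
        node = flow.get(node.next.get(branch))
    return trace

flow = {
    "t1": Node("t1", "trigger",   {"next": "c1"}),
    "c1": Node("c1", "condition", {"yes": "a1", "no": "e1"}),
    "a1": Node("a1", "action",    {"next": "e1"}),
    "e1": Node("e1", "endpoint"),
}
print(run(flow, "t1", {"c1": "yes"}))  # ['t1', 'c1', 'a1', 'e1']
```

A "wait" node fits naturally into this shape: it is simply a kind whose transition the runtime defers until an asynchronous operation completes, which is why engineering input on execution constraints shaped the interaction model directly.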

Product Strategy

Phasing as a design decision

I made a counterintuitive delivery decision: build Studio first, not Experience. The engineering load for Studio was heaviest — entity management, workflow engines, permission systems. Starting here gave engineering maximum runway. It also meant internal developers could start building on the platform immediately, giving us real usage data before the Experience layer was complete.

Before
01
Dealership identifies a new use case
02
Approaches Tesseract for custom development
03
Tesseract scopes and builds the application
04
Application shipped to the dealership
Timeline: 2–3 months
After — Developer Platform
01
Dealership identifies a new use case
02
Opens Developer Platform
03
Builds or customizes their own application
04
Deploys immediately. Iterates on real usage.
Timeline: 2–3 weeks
Team Building

From solo IC to design team

For the first three months, I was the only designer — also acting as de facto product strategist, defining the roadmap, prioritizing features, and doing hands-on design. This was possible because of my prior product management experience, but it wasn't sustainable.

When we transitioned to Experience layer work, I hired two designers. I looked for people who could operate with high autonomy in a complex domain and were comfortable with ambiguity. I maintained ownership of system design, interaction patterns, and design principles. Designer 1 focused on the Experience layer. Designer 2 focused on Studio features. Twice-weekly syncs ensured consistency. Both designers participated in the engineering collaboration model — attending deep-dives, joining whiteboarding sessions, contributing to the shared open questions document.

Part Two

Nexus API: opening
the ecosystem

With the Developer Platform running and the team operating independently, I took on the next challenge. The Developer Platform empowered dealerships to build within Tesseract's ecosystem. The Nexus API opened that ecosystem to the outside world — vendors, partners, and external developers integrating with and building on top of Tesseract.

01
Vendor Standard User
Third-party vendors connecting products to Tesseract. Need API access, documentation, simple authentication.
02
Vendor Admin
Vendor-side administrators managing integrations. Need governance, monitoring, access management, compliance tools.
03
Tesseract Support
Internal staff troubleshooting issues. Need diagnostics, logging, vendor access management. Reactive workflow.
04
Dealership
End beneficiaries discovering and enabling integrations. Mental model: app store, not admin panel.

I led a comprehensive workflow mapping exercise in FigJam — color-coded swim lanes per persona with explicit handoff points. The map revealed vendor onboarding friction, support visibility gaps, and that dealerships needed a marketplace experience. We phased delivery following the same strategy as the Developer Platform.

Phase 1

Core API Access & Documentation

API portal, authentication flows, documentation, and sandbox environments. Get vendors connected and building.

Phase 2

Governance & Monitoring

Usage dashboards, access controls, rate limiting, audit logs. Tools for admins and support to manage the ecosystem.

Phase 3

Marketplace Experience

Curated integration discovery with reviews, ratings, and one-click enablement for dealerships.

25+
Vendor integrations
in first quarter
70%
Active dealerships enabled
integrations within 3 months
35%
Reduction in integration
support tickets
Days
Vendor onboarding time
(down from weeks)
What I Actually Did

Design leadership in practice

"Design Manager" means different things at different companies. Here's what it meant on this project — a hybrid of strategic leadership, hands-on design, team building, and cross-functional influence.

Strategic Leadership
  • Defined product strategy and influenced roadmap for both platforms
  • Made critical scoping decisions — which personas, features, and problems to prioritize
  • Designed and facilitated the alignment workshop that unblocked the project
  • Authored the design principles guiding all platform decisions
Hands-On Design
  • Personally designed the App Builder grid system and drag-and-drop model
  • Created the Flow Builder visual language and interaction patterns
  • Designed information architecture for both platforms
  • Led the internal product audit establishing the complexity benchmark
Team Building
  • Hired and onboarded 2 designers, scaling team from 1 → 3
  • Established design processes, critique cadences, documentation standards
  • Structured parallel workstreams while maintaining design coherence
  • Mentored designers through complex system design challenges
Cross-Functional Influence
  • Bridged the gap between engineering and product when communication broke down
  • Built deep ongoing collaboration model with engineering
  • Leveraged PM experience to advocate design decisions with business justification
  • Influenced sprint planning and release sequencing based on user impact
Reflections

What I learned

Start with alignment, not artifacts
Two talented design teams failed before me because they jumped into interface design without organizational alignment. The stakeholder workshop I facilitated was the highest-leverage activity of the entire project. It's a methodology I now bring to every complex initiative.
Domain ignorance can be an asset
Not knowing what an "entity" was forced me to design for clarity. If I couldn't understand something, a non-technical dealership employee certainly couldn't. My ignorance became a design tool that kept the team honest about unnecessary complexity.
Design and engineering must think together
Platform design has too many technical interdependencies for a traditional design-then-handoff workflow. The best decisions came from shared whiteboarding sessions where designers and engineers reasoned through problems together.
Audit your own product, not just competitors
Reviewing every existing Prism screen to establish our complexity benchmark was one of the most valuable research activities. It grounded platform design in real-world complexity and gave engineering the concrete specificity to build with confidence.
Phasing is a design decision
Building Studio first gave us data. Narrowing to two personas gave us focus. Phasing the Flow Builder gave us speed. These weren't project management decisions — they were design decisions that shaped the product.
Platform design is systems design
It's not about individual screens. It's about creating a coherent system where the grid, navigation, entity model, and permission structure all work together. The grid system wasn't a layout tool — it was a constraint ensuring quality across every application built on the platform.
UX Case Study

Unifying 7 Products
Into One Platform

A strategic initiative to consolidate fragmented enterprise products into a cohesive, unified platform experience.

Role
UX Lead
Duration
6+ months
Scope
Enterprise SaaS
Method
Research · IA · Strategy
7→1
Products unified
into one platform
85%
Faster feature
navigation
60%
Reduction in
support tickets
92%
Task success rate
in tree testing
Overview

The Background

Over the past decade, the company had grown through a combination of organic product development and strategic acquisitions. What started as a single conversational AI solution had evolved into a comprehensive enterprise suite spanning conversation capture, analytics, agent assistance, virtual assistants, knowledge management, and AI development tools.

However, this growth came at a cost. Each product was built by different teams, at different times, with different technology stacks and design philosophies. Some products were acquired from other companies and maintained their original branding and user experience. The result was a fragmented ecosystem where customers had to navigate between completely separate applications to accomplish their goals.

I recognized this as both a significant business problem and a design opportunity. If we could unify these products into a cohesive platform, we could dramatically improve customer experience, reduce support costs, increase cross-sell opportunities, and position the company as a true platform leader.

The Context
Customers struggled with context switching and multiple credentials. Each product had different navigation patterns, terminology, and mental models. Enterprise customers were impacted by separate user provisioning, documentation for 7 interfaces, and 7 different APIs. Competitors were consolidating into unified platforms.
My Role & Responsibilities
Research & Discovery: stakeholder interviews, user research, competitive analysis. Information Architecture: unified navigation structure, taxonomy, content organization. Stakeholder Alignment: building consensus across product teams. Executive Communication: pitch deck and documentation for leadership buy-in.

Hypothesis: If we could reorganize our products around user intent rather than product boundaries, we could create a unified experience that felt intuitive regardless of which features a customer used. Users don’t think in terms of ‘products’—they think in terms of tasks they want to accomplish.

Product Landscape

The 7 Products

VoiceLog
Conversation Capture
Recording and capturing voice conversations across channels.
InsightIQ
Analytics & Insights
Dashboards, reports, and performance metrics.
AgentHub
Agent Assistance
Real-time support tools for contact center agents.
BotStudio
Virtual Assistants
Building and managing conversational AI bots.
DataFlow
Knowledge Management
Organizing data pipelines and knowledge bases.
AIForge
AI Builder
Custom AI model development and deployment.
CoreAdmin
Core Infrastructure
User management, integrations, and system configuration.
Challenge

Understanding the Fragmentation

Before proposing any solutions, I needed to fully understand the scope and impact of the fragmentation. I spent three weeks conducting discovery research, which included analyzing support tickets, interviewing customers, shadowing users, and auditing each product’s information architecture.

What I discovered was worse than expected. The fragmentation wasn’t just a UX inconvenience—it was actively preventing customers from getting value from our products. Many customers were only using 2–3 of our 7 products, not because they didn’t need the others, but because the effort required to learn and manage additional systems was too high.

Support ticket analysis revealed that 23% of all tickets were related to navigation confusion, permission issues across products, or questions about how features in different products related to each other.

7 Separate Logins
Fragmented Access
Users managed 7 different credentials. Password resets accounted for 8% of all support tickets.
5× Context Switches
Workflow Disruption
The average user switched contexts 5 times per day, losing 15–20 minutes to re-authentication and context rebuilding.
40+ Duplicate Terms
Terminology Chaos
Same features named differently across products. “Dashboards” vs “Reports” vs “Analytics” meant different things.
What User Research Revealed
“I feel like I’m using 7 different companies’ products.” — Contact Center Manager

“My team refuses to use [Product X] because it looks completely different.” — Operations Director

“I’ve given up trying to explain how all these tools fit together to new hires.” — Training Manager
The Business Impact
Lower product adoption: customers using 3+ products had 40% higher retention, but only 31% of customers reached that threshold.

Higher support costs: Navigation/access issues consumed 23% of support resources.

Longer onboarding: a 6-week time-to-value, with 2 of those weeks spent just learning the separate interfaces.
Before: Fragmented Product Landscape
voicelog.app
VoiceLog
insightiq.app
InsightIQ
agenthub.app
AgentHub
botstudio.app
BotStudio
dataflow.app
DataFlow
aiforge.app
AIForge
admin.app
CoreAdmin
Process

Research & Discovery

Given the complexity of unifying seven products built over a decade, I knew this project required a rigorous research foundation. I structured my research in three phases: Discovery (understanding the current state), Exploration (identifying possible solutions), and Validation (testing proposed structures).

The entire research phase took approximately 4 months, involving 35+ stakeholder interviews, 18 customer interviews, competitive analysis of 12 platforms, and validation testing with 24 users.

Step 01

Stakeholder Interviews & Internal Discovery

I began by mapping the internal landscape. I conducted 35 interviews with product managers, engineers, customer success managers, and sales teams across all seven products.

Key Discovery: I created a comprehensive feature matrix that revealed 47 instances of duplicate or overlapping functionality across products. Four products had their own ‘dashboard builder,’ three had separate ‘user management’ systems, and all seven had different approaches to ‘reporting.’

35 stakeholder interviews · 47 feature overlaps identified · Complete feature matrix created
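The overlap audit behind that feature matrix can be sketched as a simple inversion: map each product to its (normalized) feature labels, then flag any feature that more than one product implements. A minimal sketch; the product names echo the case study, but the feature inventory here is an illustrative placeholder, not the actual matrix.

```python
from collections import defaultdict

# Illustrative inventory: each product's feature labels, normalized to a
# shared vocabulary before comparison (features here are placeholders).
product_features = {
    "VoiceLog": {"recording", "reporting", "user management"},
    "InsightIQ": {"dashboard builder", "reporting"},
    "AgentHub": {"dashboard builder", "reporting", "user management"},
    "BotStudio": {"dashboard builder", "reporting"},
}

# Invert the mapping: feature -> products that implement it.
feature_owners = defaultdict(list)
for product, features in product_features.items():
    for feature in sorted(features):
        feature_owners[feature].append(product)

# Any feature owned by 2+ products is a duplication candidate.
overlaps = {f: owners for f, owners in feature_owners.items() if len(owners) > 1}
for feature, owners in sorted(overlaps.items()):
    print(f"{feature}: {', '.join(owners)}")
```

In the real audit the hard work was the normalization step (deciding that four differently named "dashboard builders" were the same capability); the counting itself is mechanical.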

Step 02

Competitive Analysis & Industry Benchmarking

I conducted deep-dive analysis of 12 enterprise platforms: Salesforce, HubSpot, ServiceNow, Zendesk, Adobe Experience Cloud, Microsoft Dynamics, SAP, Oracle, Workday, Atlassian, Pega, and Genesys.

Key Patterns: The most successful platforms organized navigation around user intent (what you want to do) rather than product boundaries (which tool you’re using). They used consistent patterns: primary navigation for functional areas, secondary navigation for modules, consistent settings placement.

12 platforms analyzed · 8 IA patterns documented · Best practices synthesized

Step 03

Card Sorting & Mental Model Research

I ran both open and closed card sorting exercises with 24 participants.

Open Card Sort: Participants grouped 87 feature cards into categories. This revealed users naturally thought in terms of ‘viewing/analyzing,’ ‘building/creating,’ ‘managing/configuring,’ and ‘connecting/integrating’—closely aligned with my proposed four pillars.

Tree Testing: Initial testing showed 78% task success; after two iterations, this improved to 92%.

24 participants tested · 92% final task success · 3 iterations completed
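Open card sorts like this are typically analyzed with a co-occurrence count: for every participant, every pair of cards placed in the same pile gets a tally, and the highest-scoring pairs reveal the shared mental model. A minimal sketch with made-up cards and piles, not the study's actual data.

```python
from collections import Counter
from itertools import combinations

# Illustrative open-card-sort results: each participant's grouping of
# feature cards into named piles (cards and pile names are placeholders).
sorts = [
    {"analyze": ["dashboards", "reports"], "build": ["bot flows", "prompts"]},
    {"viewing": ["dashboards", "reports", "scorecards"], "creating": ["bot flows"]},
    {"insights": ["dashboards", "scorecards"], "making": ["bot flows", "prompts"]},
]

# Count how often each pair of cards lands in the same pile.
co_occurrence = Counter()
for participant in sorts:
    for pile in participant.values():
        for a, b in combinations(sorted(pile), 2):
            co_occurrence[(a, b)] += 1

# Pairs grouped together by the most participants suggest a shared mental model.
for pair, count in co_occurrence.most_common(3):
    print(pair, count)
```

Clusters in this matrix are what surfaced the 'viewing/analyzing', 'building/creating', 'managing/configuring', and 'connecting/integrating' groupings described above.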

Step 04

Strategic Thinking: The Four-Pillar Framework

Based on all research, I developed a framework organizing our product suite around four primary pillars:

Insights: “I want to understand what’s happening” — Analytics, dashboards, reports, measurement tools.
Applications: “I want to use tools to do my job” — Operational products for daily use.
Services: “I want to access platform capabilities” — Knowledge bases, AI models, data management.
Administration: “I want to configure and manage” — Settings, user management, integrations.

Intent-based organization · Scalable framework · Research-validated

Step 05

Leadership Pitch & Stakeholder Alignment

With a validated framework in hand, the next challenge was getting buy-in from leadership and seven product teams.

The Pitch Strategy: I led with the business problem (backed by data), demonstrated user pain (research quotes, journey maps), showed competitive pressure (benchmarking), and presented the solution as evolution—not replacement—of existing products.

I created detailed Figma mockups showing how each product would appear in the unified structure. The pitch was successful—leadership approved and product teams shifted from resistance to enthusiasm.

Executive approval secured · 7 product teams aligned · Implementation roadmap defined

Solution

The Unified Architecture

The final architecture wasn’t arbitrary—every decision was grounded in research:

Why four pillars? Users naturally grouped features into 3–5 categories. Four provided enough separation while remaining few enough to be memorable.

Why organize by intent? Users think in tasks, not products. “See how my team is performing” not “open InsightIQ.”

Why maintain product identity? Complete dissolution would cause too much disruption. We preserved familiar modules within the new structure.

Why progressive disclosure? 200+ features would overwhelm users. A three-level hierarchy—Pillars → Modules → Features—kept the interface clean.

Navigation Philosophy
Predictability: Always know where you are and how to get back.
Efficiency: Common tasks reachable in 2–3 clicks.
Flexibility: Multiple paths—search, recent items, favorites, direct navigation.
Handling Complexity
Role-based views: Admins see configuration, operators see tools.
Contextual features: Settings appear where they’re needed.
Graceful scaling: New products slot into existing pillars without restructuring.
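The role-based-views idea above amounts to filtering the Pillars → Modules tree per role before rendering: hide modules the role cannot use, and hide a pillar entirely if none of its modules survive. A minimal sketch; the pillar and module names follow the case study, but the role assignments are illustrative assumptions.

```python
# Pillars -> modules -> roles allowed to see each module
# (role assignments here are illustrative, not the shipped configuration).
NAV = {
    "Insights": {
        "Analytics": {"operator", "admin"},
        "Reports": {"operator", "admin"},
    },
    "Administration": {
        "Users & Accounts": {"admin"},
        "API Console": {"admin"},
    },
}

def visible_nav(role: str) -> dict:
    """Return only the pillars and modules the given role may see."""
    nav = {}
    for pillar, modules in NAV.items():
        allowed = [m for m, roles in modules.items() if role in roles]
        if allowed:  # hide a pillar entirely when no module survives
            nav[pillar] = allowed
    return nav

print(visible_nav("operator"))  # operators see no Administration pillar
```

Because new products simply register their modules under an existing pillar, this structure also gives the 'graceful scaling' property: adding a product never restructures the top-level navigation.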
The Four Pillars
Insights
“I want to understand what’s happening”
Analytics: Agent Performance, Policy View, User Journeys, Custom Dashboards
Reports: My Exports, Shared Exports
Measurements: Outcomes, Facts, Scorecards
Customer Service Suite
“I want to use tools to do my job”
Work Supervisor: Live Monitor, Team Performance, Evaluations
Agent Studio: Agents, Tasks, SOPs/Flows, Guidelines
Co-Pilot: Applications, Tasks, Summarization
Self Serve: Channels, Personality, Testing
Answers: Knowledge Bases, Search, Q&A
Platform Services
“I want to access platform capabilities”
Knowledge: Projects, Pipelines, Datasets, Knowledge Bases
Models: Model Catalog, Tools Registry, Prompt Library
Conversations: Voice, Video, Chat
Data: Catalog, Sources, Entities
Administration
“I want to configure and manage”
Integrations: Available, Deployed
Configuration: Applications, Services, Customizations
Users & Accounts: Partners, Accounts, Users, Roles
API Console: Documentation, Playground
Progressive Disclosure
Reveal complexity only when needed
Keeping the interface clean by exposing features progressively through the three-level hierarchy.
Consistent Patterns
Unified interactions
Same navigation, terminology, and interaction patterns across every module in the platform.
Role-Based Access
See only what’s relevant
Users see a tailored interface based on their role, reducing cognitive load and improving focus.
Impact

Results & Outcomes

1
Unified Login
Single sign-on replacing 7 separate credentials.
85%
Faster Navigation
Time to find features across the platform.
60%
Fewer Support Tickets
Reduction in confusion-related support tickets.
4
Clear Pillars
Intuitive mental model validated by user research.
Before
7
Separate applications
7
Different logins
7
Navigation paradigms
40+
Duplicate terms
After — Unified Platform
1
Cohesive platform
1
Single sign-on
4
Intuitive pillars
1
Unified taxonomy
Learnings

Key Takeaways

1
Stakeholder Alignment is Critical
Unifying products means unifying teams. Each product had owners deeply invested in their work. Early buy-in and consistent communication were essential. What worked: I involved product teams from day one as collaborators, not reviewers. Their fingerprints on the solution made them advocates.
2
Terminology Matters More Than Expected
Users had strong attachments to existing terms. “Knowledge Base” meant something different in each product. What worked: We let user research drive terminology decisions. Card sorting revealed which terms resonated most naturally.
3
Research Investment Pays Dividends
The four months of research felt long, but it was invaluable. Every time someone questioned a decision, I could point to specific research findings. The research also helped identify potential pitfalls before we committed to solutions.
4
Executive Communication is a Design Skill
The best solution means nothing if you can’t get it approved. Pitching to executives requires strategic narrative, business impact, and confidence. I focused on business outcomes (retention, support costs) rather than UX best practices.

What I’d Do Differently: Start user research earlier—stakeholder and user interviews should have run in parallel. Create a change management plan—users needed help transitioning mental models. Document trade-offs more explicitly for future team members inheriting the architecture.

From Fragmentation to Unity

This project demonstrated that the hardest design problems aren’t about pixels—they’re about people, systems, and strategy. By grounding every decision in research and bringing stakeholders along the journey, we turned seven competing products into one coherent platform.

Talks, Panels & Research

Speaking & Publications

Talks, panel discussions, and academic research exploring design thinking, product development, and user experience.

Academic Research

Research Papers

Research Paper
Architectural Frameworks for AI-Driven Adaptive UIs in SaaS Applications
Prabhagaran Rakkiappan, Balasubramanian Panneerselvan
Adaptive UI Enterprise SaaS UX Architecture
The increasing complexity of user expectations, device heterogeneity, and accessibility considerations in SaaS applications has revealed the limitations of traditional static user interfaces. This paper explores a multi-layered architectural framework that facilitates AI-driven UI adaptation, detailing key components such as the data collection layer, AI processing layer, and UI rendering layer.
View Paper
Research Paper
Multi-Layer AI Generation for Image, Audio, and Video
Prabhagaran Rakkiappan, Balasubramanian Panneerselvan
AI Generation Multimedia Content Creation Audio Processing
This paper introduces a novel multi-layer AI generation framework for creating images, audio, and video content. Unlike traditional monolithic AI outputs, this approach enables users to prompt and edit content layer by layer—granting greater creative control, modularity, and alignment across media types.
View Paper
Featured Talks

Talks & Discussions

TEDx IIT Guwahati
TEDx Talk
Product Design through times: Where does it go from here?
TEDx IIT Guwahati
This talk explores the evolution of product design over the years, tracing the shift from visually striking, physically pronounced forms to more subtle, digitally driven design experiences.
AI Marketing Panel
Panel Discussion
AI marketing unplugged: Beyond buzz to real results
Marketing Panel Discussion
Does AI have real marketing use cases in content, design, digital, and product marketing, or is it just buzz? A panel of marketing experts uncovers the truth.
Product Conversation
Podcast / Interview
The Product Conversation with Prabhagaran
Product Conversation Series
An insightful conversation about product management, design thinking, and building successful products in today’s market.
Tamil Typography

Modern Tamil Lettering

A collection of Tamil typography experiments blending traditional script with modern design aesthetics. Each piece is inspired by Tamil culture, music, cinema, and everyday life.

3D Printing

Printed Creations

Exploring the world of 3D printing — from architectural models to pop-culture collectibles. Each project is designed, sliced, printed, and hand-finished.

Career Journey

Work Experience

Download Resume
Jan 2024 — Present
Manager, UX Design & Research
Uniphore
  • Managing and building the India UX Design & Research team
  • Leading AI-native product strategy and UX for enterprise-grade platforms powering automation, analytics, and intelligent decisioning at scale
  • Transforming complex AI capabilities (LLMs, speech AI, analytics, automation) into scalable, usable, and business-critical enterprise products
Apr 2022 — Jan 2024
Manager, Product Design
Tekion
  • Led design across Automotive Retail Clouds (Services), Partner Cloud, and Developer Platform (low-code/no-code)
  • Built, scaled, and maintained internal design systems, standards, and best practices
  • Managed, coached, and mentored designers within the team
Dec 2017 — Mar 2022
Sr. UX Product Manager, Conversational AI
PayPal
  • Built the design system for Simplified Case Management for compliance across multiple disciplines
  • Reduced average handling time by 30% and headcount by 11% through UX improvements
  • Redefined design handover to dev team, reducing change requests by 80%
Mar 2015 — Dec 2017
Sr. Product Designer & UX Researcher
Zoho
  • Worked on Site24x7 — a cloud infrastructure monitoring platform by Zoho
  • Spearheaded the complete redesign of the B2B SaaS platform
  • Designed Cloud Spend, the cloud monitoring product
Oct 2011 — Jun 2014
Game Designer & Typographer
Scientific Games
  • Developed artwork, backgrounds, UI elements, symbols, and animation for games
  • Collaborated with multidisciplinary teams across the world
Aug 2008 — Sept 2011
Graphic Designer
Tinderbox Events
Education

Academic Background

Summer 2018
People-Centred Research
Copenhagen Institute of Interaction Design
  • Summer school programme focused on human-centred research methodologies
2004 — 2007
B.Sc Visual Communication
University of Madras
Expertise

Skills & Tools

Skills

UX Management Design Systems Product Design Wireframing Visual Design User Research Usability Testing Art Direction Illustration

Tools

Figma Figma Make FigJam Lovable Antigravity Claude Photoshop Illustrator After Effects Premiere Pro
Recognition

Awards & Features

Mar 2021
TEDx Speaker @ TEDxIITGuwahati — “Product Design through the times: where does it go from here?”
Feb 2015
Creative GAGA Magazine — Featured “Illamai”, a Tamil typeface
Jul 2014
Pantone Gallery — Featured work “Manja Pai”
Jun 2009
OpenHouse — Sold an artwork to art director Sabu Cyril