
Sprinklr Agent Console

Empowering customer support agents to provide faster resolution to customer queries across social channels


Sprinklr Agent Console is a one-stop tool used by leading global organizations to provide customer support on 35+ social & digital platforms. Since its last major release in 2016, the social landscape of customer support had evolved dramatically & the existing experience couldn’t keep pace with the increasing volume of customer queries & rising customer expectations for quicker resolutions.

In 2020, we set out to redesign the Agent Console with an overall aim to optimize the replying experience of support agents for faster resolution of customer queries.


Project Lead - Design

User research, usability analysis, product design, prototyping, testing, QA, project planning


2 designers, 1 design mentor,
1 researcher, 1 analyst,
2 product managers,
14 developers


4 months

Oct - Jan 2021


Figma, FigJam, Airtable, JIRA


The project was originally slotted for the following quarter, but an upcoming potential deal with a tech giant moved it up, & an ad-hoc SWAT team was established to take it on in a shorter timeline.

💡 New problem space

It was my first Care project, so I had to quickly grasp the new problem space, users, interface, & team

🔁 Modifying habituated experience

Agents had been using the console day in & day out for years; redesigning it required careful consideration

🔭 Clear goal, unclear scope & journey

We knew what we had to achieve, but what had to be done & how was unclear & up to us to figure out

👜 Baggage from the past

Over the years, tons of features had accumulated, affecting the console's usability & making it overwhelming


We completed the project in two phases - first, fixing the existing UX gaps to optimize the replying experience, & then introducing a new feature, Smart Assist, to facilitate the solution-finding experience.


Redesigned Agent Console

Agent response time reduced by 31%.

Resolution time reduced by 26%, SLA breaches went down by 37%, & CSAT increased by 13% within 4 months.
We also won the deal with a tech giant who eventually became one of the largest users of the Care suite.

Understanding the Context


As this was my first project in the Customer Care suite, it was important to spend some initial time getting acquainted with the product ecosystem, users, relevant terminologies, KPIs, and the team. I relied majorly on the following for this foundational understanding:

📚 Product Docs

Care product ecosystem, where & how Console is positioned; KPIs that matter; terminologies

🖥  User Screen Recordings

How users use Agent Console; where & how do they spend their time in the process

🔬 User Research Repository

Users; their tasks at hand; goals; metrics that are important to them

🗓 Meetings with triad and researchers

Doubts about legacy product; project scope, timelines, & expectations


1. What happens in Agent Console?

High-level journey of customer query resolution, where Agent Console fits within it, and what happens before & after.


Case Resolution Journey

2. Who are the users?

Agent Console is used by customer support teams at more than 250 leading organizations in the world.


Some prominent users of Sprinklr Agent Console

Customer support agents, somewhere between 60 and 500+ in each of these teams, manage millions of customer queries coming from 30-120+ social accounts of each of these prestigious brands across 35+ social media channels. They are the primary users of Agent Console.


Persona of customer support agent

3. How do they use it?

Going through the user research repository, screen recordings, & discussions with the triad helped in developing a fair understanding of the interface & how agents used it in different scenarios. Following is a granular breakdown of the interface & the role each part plays:


Legacy Agent Console

In the scope discussions, we concluded not to redesign the entire console in one go. We wanted to avoid big changes in an already habituated product and accommodate the scope in the short deadline we had. We chose to focus on the middle conversation pane & the reply box as they formed the most critical part of the process.


Investigating Problems


Some high-level problems started coming to light from the initial discovery phase itself. That provided a good starting point as I dug a level deeper to investigate those initial hunches and more. I looked at three main avenues to inform the investigation - user interviews (supported by user research team), ux audit, and usage data reports (supported by product analyst team).

1. User Interviews

I worked closely with our in-house User Research team for this process. I revisited the existing user research repository with one of the researchers to identify potential problems. I also documented a set of questions that required further research and collaboratively conducted 12 new interviews & observational studies with support agents from different industries to answer them. Following are the major insights from the process:

⏳ Inefficient micro-level time optimization

Agents work in a time-critical setting where every second matters, yet the interface wasn't optimized for that

🔍 Siloed solution-finding experience

Agents had to scavenge through 4 different sources, disconnected from each other & from the console

💬 Obstructive & daunting replying experience

Agents found replying overwhelming & at times it even hindered their tasks instead of helping

⚙️ Time-consuming & complex housekeeping

Agents had to remember & find the relevant properties & then fill in each before they could send a reply

2. User Experience Audit

I did an in-depth usability analysis of the agent console analyzing every step in the journey, each interface element, and the interactions involved. Following were the major insights:

↕️ Space Distribution

Lack of context while replying, as the reply box obscured the conversation, covering >50% of the screen height

🔀 Inconsistent Experiences

Inconsistent reply boxes and editor options reduced learnability & attracted misclicks

🤔 Non-intuitive actions

Certain flows, like closing a case, required memorizing & recalling due to disconnected & unintuitive steps

💬 Reply Box viewport & scroll overlaps

The actual viewport of the reply box fit fewer than 50 words at a time & led to 3 scroll areas within the single middle pane

⛔️️️ Inefficient Error Handling

Error validation happened only after the reply was sent, forcing a complete redo. The error copy was also not always helpful

🧠 Cognitive load

Unnecessary information & lack of information hierarchy competed for the user's attention with more important bits

3. Usage Data

In collaboration with a product analyst, I looked at the usage metrics for different entities and flows. A large part of this process overlapped with conceptualization & design phase where we had to make major design decisions.

🔍 Solution-finding is most time consuming

Researching the solution takes the most time in the case-resolution process, averaging around 4.5 minutes per case

📊 Lack of hierarchy for important actions

The 3 actions used >93% of the time were randomly positioned in an endless list of 17 message actions

🖱 Fast typing, slow clicking

Agents are super fast with the keyboard, but the high number of mouse clicks slows them down; even the shortest flow takes 8 clicks to reply, consuming >50% of the time

📐️ Unbalanced space distribution & usage

Properties take >70% of the space in the reply box (>90% with the scrolled area) & get used <10% of the time

🔁 Redundant selection for account & type

99.6% of the time agents choose the same social account; 99.3% of the time the message type matches the customer's last reply

⏱ Micro-time scale

Response time goals vary across industries but average <1 min for most. Within such a 60-second timeframe, saving even 1 second becomes critical


Conceptualizing the Solution


I reviewed the investigated problems with the triad to discuss the potential directions and scope ahead. There was a general consensus about the validity of the discovered problems & their impact, and we had some good initial ideas for tackling them. But there was ambiguity around which ones to start with & focus on within the given short timeline.

1. Where to start?

To help us move forward, I laid out the micro step-by-step user journey of case resolution and mapped opportunity areas & relative time based on the pain points discovered from the research. Seeing these sequentially from the lens of an agent and on a temporal scale put things in perspective and helped us to have grounded discussions and get on the same page.


Micro-level analysis of replying flow

The existing case resolution journey had 3 main parts -

a. Finding Solution: Time spent researching solutions that agents don't already know. The most time-consuming part of the journey, & it mostly happens outside the agent console. Critically impacts case resolution time.

b. Replying: Time spent typing the reply, and all the interactions involved before and after it. Time consumption for each individual action is very small, but together they account for a big share. Mostly impacts the response time.

c. Housekeeping: Time spent on all the official work of storing the right case details for future reference. Time spent is less, but the current flow is one of the most complained-about parts of the journey.

Action Plan

We weighed the opportunity areas in each of the above parts based on their effort, impact, and ease of adoption and came up with the following two-phased Action Plan for the project:

⭐️ Phase I - Quick Wins

Fix existing replying & housekeeping experience
Low effort - high impact

A new design system implementation was already in the pipeline for the next launch; we planned to integrate our small UX fixes with it

🌟 Phase II - AI Assistance for solution finding

Introduce new solution finding experience
High effort - high impact

Solution finding tools already existed in Sprinklr, we planned to integrate those siloed tools within Agent Console along with making them proactive with AI

We also had another interesting direction for gamification & analytics to drive agents' motivation further. But we reserved that for later since it required heavy research studies that couldn't be included in the scope.

2. Guiding Principles

Based on the problems investigated & project constraints, we laid down the following principles to guide the designs ahead:

🎯 Focus on what’s important

Drive agents' focus towards important things through data-driven stripping down & reorganization of information

⚡ Optimize for Speed

Optimize flows for micro-level time efficiency, even saving a single second or a click makes a difference

⚙️️️ Good defaults & automation

Save agents' time & effort on things that can be automated or defaulted

🔸️ Versatile & Consistent

Adapt to the differences between channels while keeping the basic skeleton the same

✨ Small & Intuitive Changes

Make it easy for agents to adapt to the redesign by ensuring that the changes are intuitive & not too drastic

3. Success Metrics

We also established the following success metrics for the project early-on to inform the decisions ahead:

↩️ Lower Agent Response Time

The time an agent takes to send a reply after opening the case in Agent Console (a subset of Response Time)

🚫️Lower Service Level Agreement (SLA) Breach

Number of cases that take more time to resolve than the pre-established benchmark

✅️️️ Lower Case Resolution Time

The time taken by an agent to resolve the complete case since the customer's first message

🙂 Higher Customer Satisfaction (CSAT) Score

Score out of 5, submitted by the customer or analyzed via AI, for their satisfaction with the case resolution
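As a rough illustration, the time-based metrics above map directly onto case timestamps. The sketch below is hypothetical - field names & schema are illustrative, not Sprinklr's actual data model:

```python
# Rough illustration of how the success metrics map to case timestamps
# (epoch seconds); field names are hypothetical, not Sprinklr's schema.

def case_metrics(case, sla_benchmark_secs):
    opened = case["agent_opened_at"]
    first_msg = case["customer_first_message_at"]
    return {
        # Agent response time: opening the case to sending the first reply
        # (a subset of the overall response time).
        "agent_response_time": case["agent_first_reply_at"] - opened,
        # Case resolution time: customer's first message to resolution.
        "resolution_time": case["resolved_at"] - first_msg,
        # SLA breach: resolution took longer than the pre-set benchmark.
        "sla_breached": (case["resolved_at"] - first_msg) > sla_benchmark_secs,
    }

# Example: case opened 30s after the first message, replied at 75s, resolved at 300s
metrics = case_metrics(
    {
        "customer_first_message_at": 0,
        "agent_opened_at": 30,
        "agent_first_reply_at": 75,
        "resolved_at": 300,
    },
    sla_benchmark_secs=240,
)
```

Framing the metrics this way made it easy to reason about which part of the journey each design change would move.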

4. Design Approach

For design decisions, we relied heavily on prototyping, internal reviews, & testing with users.


Based on the guidelines and metrics above, we created multiple explorations for each step of the journey. To decide which exploration to move ahead with, we again partnered with a researcher to conduct comparative testing with agents. We created quick mockups & defined a scenario-based task for each step, which were then tested with agents.


Micro-level analysis of replying flow

Testing iterations early in the process helped in making confident decisions & incorporating feedback early on. Finalized directions from this exercise were then detailed out in close sync with the developers.


Final Solution


Taking the finalized directions from the last phase ahead, I identified all the use-cases (& many edge-cases) in collaboration with PMs & developers and got into designing detailed flows. After a series of design reviews with the triad & one user-testing round for each phase, this is how our redesigned Agent Console looked.

Redesigned Agent Console


Phase I - Quick Wins

Speeding up Replying & Housekeeping


Side-by-side comparison of the agent response time in legacy & redesigned version for a similar scenario:

Legacy Conversation Pane

Redesigned Conversation Pane

I created multiple such prototypes for different scenarios & we tested them with 43 agents from 7 different industries to compare their response time between the legacy & redesigned versions and to check how they adapted to the changes. The response was better than we anticipated - on average, agents saved more than 25% of their response time even on first-time usage. This stood as our final green signal to start production for go-live.

1. Restructuring reply box


An agent's most important task is writing the reply, so we restructured the reply box to focus on that.


Redesigned Reply Box


🔴 Problem

Lots of distracting entities made replying intimidating in an already tense environment. The box also covered a large part of the feed, blocking the context while replying


✅ Solution

Informed by the usage metrics, we reorganized the entities & cleaned up the space to direct agents' attention where it's due without obstructing the context

2. Automated pre-replying setup


🔴 Problem

Agents had to do 3 tasks every time before they replied - open the reply box, select the social account, & select the message type.


Brands maintained a dedicated support account, & 99.6% of the time replies were sent through it. 99.3% of the time the message type matched the customer's most recent reply.

✅ Solution

The reply box now opens automatically with preset defaults for social account & message type, using a logic that meets more than 99% of use-cases without any input.

This cuts the pre-reply setup time down to 0.
Agents can now start typing the reply without wasting a single second or click.
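The defaulting logic described above can be sketched roughly as follows - a minimal illustration with hypothetical field names & structures, not Sprinklr's actual implementation:

```python
# Sketch of the pre-reply defaulting logic (illustrative only; field
# names & structures are hypothetical, not Sprinklr's implementation).

def preset_reply_defaults(case):
    """Preset the social account & message type an agent would pick >99% of the time."""
    # Default account: the brand's dedicated support account for the
    # case's channel, which handled ~99% of observed replies.
    account = case["brand"]["support_accounts"].get(case["channel"])
    # Default message type: mirror the customer's most recent message
    # (e.g. public mention vs. direct message), matching ~99% of replies.
    message_type = case["messages"][-1]["type"]
    return {"account": account, "message_type": message_type}

# Example: a Twitter case whose last customer message was a DM
case = {
    "channel": "twitter",
    "brand": {"support_accounts": {"twitter": "@AcmeSupport"}},
    "messages": [{"type": "public"}, {"type": "dm"}],
}
defaults = preset_reply_defaults(case)
```

Because the defaults follow simple, observable rules, agents can still override them in the rare <1% of cases where they don't apply.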

3. Uniform replying experience


🔴 Problem

Highly inconsistent reply box structures, positioning of editor actions, & corresponding modal designs hampered learnability & efficiency.

✅ Solution

Basic skeleton that adapts consistently to 35+ channels with industry-standard structure & usage-based positioning of entities.


Consistent adaptations of Reply Box for 35+ different channels & message types


4. Efficient housekeeping


🔴 Problem

20+ uncategorized properties, each used less than 4% of the time. Even when used, the flow was disjointed, illogical, & delayed replying.

✅ Solution

Segregated properties by context of use, integrated into a cohesive, intuitive, & faster flow as secondary alternatives to the primary Send action.

5. Easy error handling


🔴 Problem

Agents had to redo the entire flow whenever they hit an error. The error text was often too technical & didn’t include clear details to resolve the error.

✅ Solution

Error validation was integrated within the reply box (we initially aimed for error prevention, but that was too costly from a dev perspective). Laid out a framework for writing error text.

6. Other Considerations

Some other small fixes that didn’t form part of the main flow but indirectly impacted the experience & the time consumed.

a. Keyboard Shortcuts & Navigation

Introduced intuitive keyboard shortcuts & end-to-end navigation control, which not only sped up agents' workflows by avoiding frequent switching to the mouse but also fixed existing gaps in keyboard accessibility.


Key details of Shortcuts & Navigation implementation

b. Cleaner feed

Minor visual & UX fixes in the conversation feed to make it cleaner & better organized for better legibility & less cognitive load while scanning through it. Also mentored another designer in updating assets & templates across all supported channels.


Legacy Conversation Feed


Redesigned Conversation Feed


c. Optimized Search

The main use-case for searching the conversation feed is finding entities that the customer might have shared earlier in the conversation, like their number, product codes, images, etc. The legacy experience didn't facilitate that & instead included irrelevant fields. We updated the search experience to support entity search.


Legacy Search


Redesigned Search

Phase II - Smart Assist

Introducing new solution-finding experience


Smart Assist


How it works

As the design & Care development teams worked on shipping Phase I, the AI team developed the recommendation model that predicted similar cases, articles, & guided scripts by matching parameters detected by the existing message-interpretation model. We ran a few tests on real case messages & shortlisted the matching parameters that made the most sense. This is how it works at a high level:


Smart Assist Framework
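At its simplest, the parameter matching behind the recommendations could be sketched like this - a deliberately simplified illustration with hypothetical structures; the real recommendation model is an AI system, not a set overlap:

```python
# Simplified sketch of Smart Assist's parameter-based matching (illustrative
# only; the actual recommendation model is an AI system, not a set overlap).

def recommend(case_params, candidates, top_k=3):
    """Rank similar cases, articles, & guided scripts by shared detected parameters."""
    def overlap(entity):
        # Count parameters (e.g. intent, product, issue type) shared with the case
        return len(case_params & set(entity["params"]))
    ranked = sorted(candidates, key=overlap, reverse=True)
    # Surface only entities that share at least one parameter with the case
    return [e["id"] for e in ranked if overlap(e) > 0][:top_k]

# Example: a refund query about a damaged order
candidates = [
    {"id": "article-1", "params": ["refund", "order"]},
    {"id": "case-7", "params": ["refund", "order", "damaged"]},
    {"id": "script-2", "params": ["login"]},
]
picks = recommend({"refund", "order", "damaged"}, candidates)
```

The key product decision was which detected parameters to match on, which is what our tests on real case messages helped shortlist.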

Smart Assist Cards


We tried multiple explorations for the smart assist recommendation cards based on two different hypotheses:
a. agents would first find the most relevant problem & then view its solution.
b. agents would directly look for solutions that they can use as their reply.

We received mixed responses from user reviews as well as internal reviews. From repeated testing with the model, we observed that, a large number of times, all the options came with very closely matching problems. Hence, we decided to go with the latter option - the solution-based approach - with the option of making tweaks in the next version if it didn't work.


Smart Assist Card Explorations


Final Smart Assist Card

Solution-finding in seconds


🔴 Problem

Agents had to move out of the agent console & manually search for solutions in different tools one at a time

✅ Solution

Proactive recommendations of similar cases, articles, & guided scripts all at one place within agent console

Agents can directly insert the extracted solution from a recommended entity into the reply box.

Insert Response directly from the Smart Assist Card

Or they can go a step deeper, view the entire context, & then reply accordingly.

Research the recommendations in detail & choose parts of it accordingly

Dev Handover & Implementation

Reviewed the final designs with the triad & then handed them over to developers. Once developed, we did 2 rounds of design QA & collaborated with our WalkMe team & the PMs to review the onboarding plan for agents & set up the metrics for evaluation.


What initially seemed like a small redesign project turned out to be a big stepping stone for the product as well as for me personally.

At Product level

Helped 40k+ agents at leading brands provide quality support to millions of their customers worldwide.

🎯 Agent response time reduced by 31%

Resolution time reduced by 26%, SLA breaches went down by 37% & CSAT increased by 13% in 4 months

🏆️️️ Won the deal with the tech giant

Led to multiple cross-sell opportunities, eventually establishing them as one of the biggest clients in Care

💰️️️ 1.4x increase in revenue

Product grew by 1.4x in revenue with multiple new & up-sell deals within a quarter of the launch

✨Conception of new impactful products

Insights discovered in the project led to the birth of new products like Supervisor Console & Care Console

At Personal level

The project was also a big milestone for me as it led me onto an exponential growth trajectory at Sprinklr.

💫 Got the opportunity to move to the Care team permanently where I got to lead multiple high-impact projects working & learning alongside some of the best minds across the triad.

Care Console

Next gen modular Agent Console that brands can create themselves based on their specific needs

Conversational AI Applications

A platform to discover, create, test, deploy, & analyze brand’s own Conversational Bots

Website Live Chat & Video Calling

Fully customizable customer support ecosystem that can be deployed on websites.

🚗 It was my first project being in the driver’s seat & it came packed with lots of challenges & learnings, some of which are documented in the subsequent section.

🏆 Got recognized by the C-suite for the project and received the Sprinklr Performer of the Quarter award.

Learnings & Takeaways

I am grateful to the wonderful design team at Sprinklr who trusted me with the project & helped me grow through it. I picked up many important lessons along the way, some of which are documented below:

🍎 Redesigning a habituated product

Agents were habituated to the console. Modifying it, even for good, could backfire. I learned what might & might not work, and how to approach such redesigns.

🙏️️️ When in doubt, seek users &/or data

Resolve doubts in design decisions with user testing, data, or research instead of anyone's intuition - yours included.

🗓 Project planning comes with the package

There will always be more things & less time. Plan well, keep buffers, communicate well, stay in sync, & try to stick to the plan.

💙 Good team relationships do wonders

Team dynamics can make or break a project. Invest time in knowing your team & actively collaborate throughout to reap good returns.

💻 Screen Recordings as research tools

Binge-watching users use your product can bring insights that interviews might not. Use it effectively.

🔁 Understand flowchart, test prototype

Flowcharts are great tools for understanding & analyzing existing processes. Prototypes are great for testing & analyzing proposed processes.
