
Ressl AI - YC 26
Designing for Clarity and Trust in AI-Driven Workflows: revamping the ticketing experience
Background & Context:
Ressl is building AI agents that run on top of Salesforce. The idea is straightforward: instead of your revenue team manually combing through dashboards, triaging updates, and deciding what to act on, agents do that for them. They analyze queries, write execution plans, make changes to your Salesforce org, and wait for human approval before deploying.
I came on board as the sole designer to own the ticketing experience end-to-end. The ticketing system is essentially the interface between humans and these agents, where you see what the agent did, what it's planning to do, and where you decide to approve, reject, or intervene. It's the most critical surface in the product. And at the time I joined, it was a mess.
Business Goals
The aim was to reduce manual work for teams by using AI to highlight important tickets, explain what’s happening, and suggest next steps. It was also to help users quickly understand what’s going on, trust the system, and take the right action without constantly switching between views.
My Role
Product Designer
Timeline
3 weeks
Worked With
Co-Founders & 2 Engineers
The problem I was actually solving
Before I touched anything in Figma, I needed to be clear about what was really broken. The existing design had a list of surface-level issues: misleading UI, poor information hierarchy, and a vague content structure.

Solving for Core Users:
Think about who's using this. You're a RevOps lead or a Salesforce admin at a mid-sized company. An AI agent just analyzed a query from your team, created an execution plan, and is asking you to approve it before it makes changes to your production Salesforce org.
You have maybe two minutes to look at this ticket, understand what the agent is proposing, decide if it looks right, and either say go or no-go. That's a high-stakes, low-time decision. And the existing experience was doing nothing to help with it. The dashboard showed tickets that were confusing and poorly designed.
Synthesizing the Core Problem
Clarity → Users couldn’t quickly understand what was happening
Trust → Users couldn’t validate AI decisions
Control → Users couldn’t easily intervene or guide outcomes
How I picked & planned the opportunity areas

I asked myself: how might I redesign a ticketing experience on top of AI agents so that businesses can more easily consume ticket information, clearly track and prioritize what matters, and take confident, quick actions at a glance, all through a cleaner, more structured interface?
With a three-week timeline, I had to be decisive about what I was trying to achieve. I couldn't redesign everything on the platform. I had to prioritize and sequence the work based on the requirements of the stakeholders and cohort customers.
I used Aarron Walter's hierarchy of user needs as my mental model. Functional, Reliable, Usable, Delightful, in that order. Not as a rigid checklist, but as a way to force-prioritize. In a product this early, with users making approval decisions on AI generated plans, the bar for the first two layers is non-negotiable. The interface needs to work correctly and behave predictably before anything else.
Diving into solution space
The next question I asked myself was: what does a user actually need at each stage of their interaction with a ticket? I mapped this out around three mental states a user moves through:
Scanning —
they're on the dashboard, and they need to know at a glance which tickets need their attention right now. Status, type, team, and a clear action affordance.
Understanding —
now they've opened a ticket, and they need to quickly grasp what the agent did, what it's planning, and whether the plan looks right. This is where most users were failing with the existing experience. The information was there, but it wasn't structured for someone who needs to make a judgment call fast.
Acting —
finally, they need to approve, reject, or push back on the plan & they need to feel certain about that action.

Phase #1
Structuring the Core Experience
The first phase focused on the dashboard and the ticket detail side panel.
On the dashboard, the priority was signal over noise. Revenue teams carry a lot of tickets simultaneously across different teams. The interface needed to communicate clarity and status without making the user work for it.
The ticket table had to carry a lot of information: type, ID, status, team, timestamps, and actions, without feeling like a spreadsheet. The challenge here was a classic B2B design problem where you're serving multiple roles simultaneously. Different teams care about different types of information. The sorting and filtering layer was designed to handle that variance without cluttering the default view.
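To make that variance concrete, here's a minimal sketch of how a ticket row and a per-role view config could be modeled. All field names and status values here are illustrative assumptions, not Ressl's actual schema.

```ts
// Illustrative sketch only: field names and values are assumptions.
type TicketStatus =
  | "pending_approval"
  | "executing"
  | "deployed"
  | "failed"
  | "cancelled";

interface TicketRow {
  id: string;        // e.g. "TKT-1042" (hypothetical format)
  type: string;      // the kind of change the agent is proposing
  status: TicketStatus;
  team: string;      // owning team, the key to role-specific filtering
  createdAt: Date;
  updatedAt: Date;
}

// One filter/sort config per role keeps the default view uncluttered
// while still serving teams that care about different columns.
interface TableViewConfig {
  filterBy?: Partial<Pick<TicketRow, "status" | "team" | "type">>;
  sortBy: keyof TicketRow;
  direction: "asc" | "desc";
}

// Example: a RevOps lead's default view surfaces pending approvals first.
const revOpsView: TableViewConfig = {
  filterBy: { status: "pending_approval" },
  sortBy: "updatedAt",
  direction: "desc",
};
```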

The side panel allowed users to —
View ticket details
Track execution progress
Understand plan steps
This approach aimed to —
Keep users in context
Reduce navigation overhead
Surface key information progressively
The hardest call in Phase 1 was deciding how to show a ticket's details and execution history. The existing design had seven long steps tied to different stages of a ticket, but they were oddly named and sequenced. I landed on a four-step execution history.
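As a rough sketch of what that collapse looks like in code, here's one way the four stages could be modeled. The stage names are my assumption, mapped to the analyze, plan, approve, deploy flow described earlier, not the product's actual naming.

```ts
// Hypothetical four-stage model; names are assumptions mapped to the
// analyze -> plan -> approve -> deploy flow described earlier.
type ExecutionStage = "analysis" | "plan" | "approval" | "deployment";

const STAGE_ORDER: ExecutionStage[] = ["analysis", "plan", "approval", "deployment"];

interface StageEntry {
  stage: ExecutionStage;
  completedAt?: Date; // undefined while the stage is still in progress
}

// Derive how a stage should render: done, currently active, or upcoming.
function stageState(
  history: StageEntry[],
  stage: ExecutionStage
): "done" | "active" | "upcoming" {
  // The first entry without a completion time (or the next unstarted
  // stage) is active; everything before it is done.
  const firstOpen = history.findIndex((e) => e.completedAt === undefined);
  const activeIdx = firstOpen === -1 ? history.length : firstOpen;
  const idx = STAGE_ORDER.indexOf(stage);
  if (idx < activeIdx) return "done";
  if (idx === activeIdx) return "active";
  return "upcoming";
}
```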

Feedback from the cohort: post-V1 launch shenanigans
📌
One customer said it directly: "Is there a way I can prompt the agent to make changes in the plan before it starts executing?" — which told me something important.
The side panel hadn't just constrained visibility. It had limited the user's ability to feel like an active participant in the process. They felt like they were approving something they couldn't fully see or influence.
After shipping the V1 designs, the team gathered feedback from early customers. Three things came up consistently.
1 —
The side panel was too constrained. Users who were actually using the product to process complex Salesforce changes needed more visual real estate to properly read and evaluate an execution plan.
2 —
The second issue was collaboration. Businesses wanted to communicate with the agent while a task was in progress: to ask questions, add context, and flag concerns. And they wanted their team members to be able to do the same. The V1 side panel approach felt static and disconnected from how people actually work together on a task.
3 —
The third issue was error communication. In V1, error states and Ressl-side limitations surfaced as alert-style banners: red boxes, warning colors. Users found them alarming and unclear. When something goes wrong with an AI agent touching your production data, the last thing you want to see is a red banner with no context about what to do next.
Each of these problems pointed to the same underlying gap: the V1 design was built around displaying information, but users needed an interface that felt like a workspace, something they could act in, communicate through, and trust under uncertainty.
Phase #2
Evolving the Interaction Model
In the second phase of redesigning the ticketing experience, I made two significant structural changes. Based on the above insights, the experience shifted from a panel-based inspection model to a larger canvas and a more interactive system.
I replaced the side panel with a full-screen ticket detail view. This gave the execution plan, code output, and ticket history the space they needed to breathe. With a proper two-column layout (ticket details and history on the left, execution plan and analysis on the right), users could now read and evaluate simultaneously rather than toggling between tabs. The "Start Deployment" and "Cancel Ticket" actions moved to the top-right, persistently visible, so the path to action was never more than a glance away (I picked up this UI pattern from Attio's dashboard).
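As a structural sketch of that layout, assuming a React web app, with component and class names that are purely illustrative, the persistent action bar and the two columns look roughly like this:

```tsx
// Minimal layout sketch, assuming a React web app; component and class
// names are illustrative, not the actual implementation.
import React from "react";

function TicketDetailPage({ ticketId }: { ticketId: string }) {
  return (
    <div className="ticket-page">
      {/* Persistent action bar: the path to action is always one glance away. */}
      <header className="ticket-header">
        <h1>Ticket {ticketId}</h1>
        <div className="actions">
          <button>Start Deployment</button>
          <button>Cancel Ticket</button>
        </div>
      </header>
      {/* Two columns: read on the left, evaluate on the right, no tab toggling. */}
      <main className="two-column">
        <section>{/* ticket details + history */}</section>
        <section>{/* execution plan + analysis */}</section>
      </main>
    </div>
  );
}

export default TicketDetailPage;
```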

I then added an Agent Chat + Comments interface within the ticket view. This was a meaningful shift in how the product positioned itself: not just as a place to review what the agent did, but as a place to work alongside it. Users could now add comments, communicate with team members, and have a structured conversation with the agent without leaving the ticket context. This also solved the collaboration problem: team comments and the agent chat lived in the same place, under the same ticket, creating a complete record of the decision-making process.
For error and limitation states, I replaced the alarming banner approach with contextual inline cards, each paired with an "Ask chat" affordance. This was a new type of UI component in itself: I combined the inline tooltip component with a Cursor-style "Ask chat" nudge into a single interaction. If something failed or hit a capability limit, the user wasn't left staring at a red box; they were given a clear, low-pressure path to ask the agent what happened and what to do next.
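One way to picture that unified record is as a single timeline type where comments, agent messages, and inline error cards all live together. A minimal sketch, with all the shapes being my assumptions:

```ts
// Sketch of the unified ticket timeline; all shapes here are assumptions.
// Team comments, agent messages, and inline error cards share one record,
// so the ticket itself carries the full decision-making history.
type TimelineEntry =
  | { kind: "comment"; author: string; body: string; at: Date }
  | { kind: "agent_message"; body: string; at: Date }
  | { kind: "error_card"; message: string; askChatPrompt: string; at: Date };

function renderEntry(entry: TimelineEntry): string {
  switch (entry.kind) {
    case "comment":
      return `${entry.author}: ${entry.body}`;
    case "agent_message":
      return `Agent: ${entry.body}`;
    case "error_card":
      // Errors render inline with a low-pressure "Ask chat" affordance
      // rather than as an alarming banner.
      return `${entry.message} (Ask chat: "${entry.askChatPrompt}")`;
  }
}
```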
Separating error severity from alarm semantics was important here: not every error is a crisis, and the design shouldn't make it feel like one.
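A sketch of that separation: map severity to presentation independently, so only genuinely critical failures get alarm styling. The severity levels and presentation fields here are illustrative assumptions:

```ts
// Sketch: severity decoupled from alarm semantics. Severity values and
// presentation fields are illustrative assumptions.
type ErrorSeverity = "info" | "limitation" | "recoverable" | "critical";

interface ErrorPresentation {
  tone: "neutral" | "warning" | "alarm";
  showAskChat: boolean;
}

// Only genuinely critical failures get alarm styling; capability limits
// and recoverable errors render as calm, contextual inline cards.
const severityPresentation: Record<ErrorSeverity, ErrorPresentation> = {
  info:        { tone: "neutral", showAskChat: false },
  limitation:  { tone: "neutral", showAskChat: true },
  recoverable: { tone: "warning", showAskChat: true },
  critical:    { tone: "alarm", showAskChat: true },
};
```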


Wrapping Up: Learnings & Impact
The users I was designing for weren't worried about usability in the abstract. They were worried about consequences. That changed how I thought about clarity. In most products, clarity is about reducing confusion. Here, clarity was about enabling trust & confidence. Those are related, but they're not the same thing. A user can understand an interface perfectly and still not feel confident enough to act on it. That gap between comprehension and confidence is where most of my design work lived in this timeline.
I also learned something about designing under constraints of novelty. Ressl's users didn't have a mental model for "approving an AI agent's Salesforce execution plan" because this product category barely existed. Which meant I couldn't rely on pattern matching to familiar interfaces. Every structural choice had to carry its own explanation.
If I had more time, I would have invested earlier in understanding how different roles (admin, RevOps lead, engineer) processed ticket information differently, and built more explicit role-based views. The current design serves a generalist user well, but as the product grows, the variance in user intent across roles will demand more targeted solutions.
Getting into Y Combinator
The redesigned experience played a key role in shaping a clearer, more compelling product demo. This helped the team better communicate the value of AI-driven workflows, ultimately contributing to Ressl being accepted into Y Combinator’s Winter batch.
Fewer support calls
The improved experience made it easier for customers to understand tickets, actions, and system behavior on their own. As a result, founders saw fewer support calls, since customers could navigate and resolve most workflows independently.
Adoption across the platform
While customers had earlier relied more on Slack integrations for raising queries, the new design encouraged more usage of the web app. This improved adoption of key features, like the organization's built-in LLM chat, that were previously underused due to unclear design.



