Change Management

Prepare your team for AI adoption with clear communication, training, and a phased rollout.

AI tools change how support teams work. Without thoughtful change management, even the best technology fails. This page covers how to address concerns, build trust, train every role, and roll out in phases that set you up for success.

Addressing agent concerns

The biggest question on every agent's mind: "Will AI replace me?"

The answer is no, and you need to say it clearly and back it up with specifics. AI handles the repetitive, mechanical parts of support work. Agents handle the complex, empathetic, human parts.

Here is how each service reinforces this:

  • Triage routes tickets so agents spend less time in the wrong queue and more time on tickets that match their skills.
  • Reply drafts responses so agents spend less time typing boilerplate and more time reviewing, personalizing, and adding judgment.
  • QA scores every conversation so agents get consistent, timely feedback instead of waiting months for a random review.
  • Sentiment detects escalation risk so agents can intervene proactively -- something that requires human empathy and creativity.
  • Pulse automates reporting so team leads spend less time in spreadsheets and more time coaching.
  • KB identifies knowledge gaps so agents are not forced to reinvent answers that should already exist.

The key message: AI makes agents more effective, not redundant. Teams that adopt AI tools typically handle more volume at higher quality, which makes the team more valuable to the organization.

Building trust in AI drafts (Reply)

Trust is earned, not assumed. Follow this progression:

Start in suggestion mode

Deploy Reply so that drafts appear as suggestions, not auto-sends. Agents review every draft and decide whether to use it, edit it, or discard it. This gives agents full control.

Track and share acceptance rate

Use the Reply /feedback endpoint to track how often agents accept, edit, or reject drafts. Share the numbers with the team weekly. When acceptance hits 60-70%, celebrate it. When it is low, investigate specific failure patterns.
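
As a minimal sketch, here is what that weekly rollup could look like, assuming each /feedback event carries an action field with values such as accepted, edited, and rejected. The endpoint path comes from this guide; the event shape and field names are illustrative assumptions, not a documented schema.

    from collections import Counter

    def acceptance_report(events):
        """Summarize draft outcomes from Reply /feedback events."""
        counts = Counter(e["action"] for e in events)
        total = sum(counts.values()) or 1  # guard against an empty week
        return {action: counts[action] / total
                for action in ("accepted", "edited", "rejected")}

    # Hypothetical events exported from the /feedback endpoint
    events = [
        {"draft_id": "d1", "action": "accepted"},
        {"draft_id": "d2", "action": "edited", "note": "tone too formal"},
        {"draft_id": "d3", "action": "rejected", "note": "wrong product"},
    ]
    print(acceptance_report(events))

Segmenting the same counts by queue or ticket category usually points directly at the failure patterns worth investigating.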

Make feedback easy

Agents should be able to flag a bad draft with one click and optionally add a note. This feedback loop is what makes the system improve. If giving feedback is cumbersome, agents will stop doing it.

Make overrides frictionless

Editing or discarding a draft should never feel like going against the system. The UI should make it just as easy to override as to accept. Agents are the final authority.

Celebrate accuracy wins

When Reply nails a complex response, share it with the team. Concrete examples of AI doing something impressive build trust faster than metrics alone.

QA rubric buy-in

Automated QA only works if agents trust the rubric. Here is how to get buy-in:

Involve agents in rubric design. Before deploying QA, hold a calibration session where agents and QA analysts score the same conversations independently. Discuss disagreements. Refine the rubric together.
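
A simple way to run the numbers from that session: have each reviewer score the same conversations independently, then flag rubric criteria where the scores diverge. A sketch, assuming a 1-5 scale and an illustrative data shape:

    from statistics import mean

    # criterion -> one score per reviewer for the same conversation (1-5 scale)
    scores = {
        "accuracy":   [5, 5, 4],
        "empathy":    [2, 5, 3],   # wide spread: the criterion wording is ambiguous
        "resolution": [4, 4, 4],
    }

    for criterion, vals in scores.items():
        spread = max(vals) - min(vals)
        flag = "  <- discuss and refine" if spread >= 2 else ""
        print(f"{criterion}: mean {mean(vals):.1f}, spread {spread}{flag}")

Criteria with a wide spread are the ones to rewrite before the rubric goes live.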

Make scoring transparent. Agents should be able to see their own QA scores, the rubric criteria, and the specific evidence the system used. No black boxes.

Use coaching notes for development, not punishment. QA scores should feed into coaching conversations, not disciplinary actions. The moment scores become punitive, agents game the system instead of improving.

Share success stories. When an agent improves their scores over time, highlight the growth. When the team's average rises, celebrate it as a collective win.

Compare automated and manual scores. During the first few months, run automated QA alongside manual reviews. Show agents that the automated scores are consistent with human judgment. This builds confidence in the system.
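
A sketch of that comparison, assuming both reviews score on the same 0-100 scale; the score pairs and the 5-point tolerance are illustrative, not targets:

    from statistics import mean

    # (automated score, manual score) for the same conversations
    pairs = [(82, 85), (74, 70), (91, 93), (60, 72), (88, 86)]

    diffs = [abs(auto - manual) for auto, manual in pairs]
    within_tolerance = sum(d <= 5 for d in diffs) / len(diffs)
    print(f"mean absolute difference: {mean(diffs):.1f} points")
    print(f"scores within 5 points of manual: {within_tolerance:.0%}")

Large, consistent gaps on particular conversation types are a signal to recalibrate the rubric, not a reason to distrust the whole system.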

Phased rollout

Do not deploy everything at once. A phased approach reduces risk and builds momentum.

Phase 1: Pilot (2-4 weeks)

Select a single team for the pilot. The ideal pilot team:

  • Is willing and enthusiastic (volunteers, not conscripts)
  • Handles moderate ticket volume (enough data to be meaningful, not so much that problems are amplified)
  • Has good baseline data (you have been tracking their metrics already)
  • Has a supportive team lead who will champion the tools

Start with one or two services. Triage and Reply are usually the best starting pair because they have the most visible day-to-day impact.

Phase 2: Measure and iterate (2-4 weeks)

Collect feedback from the pilot team. Look at the metrics. Fix problems. Tune prompts. Adjust the QA rubric. This phase is where you earn the right to expand.

Phase 3: Expand (4-8 weeks)

Roll out to additional teams one at a time. Use the pilot team as advocates. They can answer questions and share their experience with new teams. Add additional services (QA, Sentiment, Pulse) as teams are ready.

Phase 4: Full deployment

Once all teams are onboarded and metrics are stable, move to full deployment. Continue monitoring and iterating -- this is not a one-time project.

Per-role training plan

Different roles need different training. Here is what each role needs to know.

Agents

  • How to review AI-generated drafts from Reply: what to look for, when to accept, when to edit, when to discard
  • How to override Triage routing when the AI gets it wrong
  • How to give feedback on drafts (the feedback mechanism and why it matters)
  • How Sentiment alerts work: what triggers them, what to do when one fires, how to de-escalate
  • How to read their own QA scorecards and use coaching notes

Team leads

  • Reading QA scorecards at the team level: trends, outliers, calibration
  • Coaching with AI data: using QA scores and Sentiment trends to guide 1:1 conversations
  • Using Pulse dashboards for daily operations: volume, SLA, staffing
  • Monitoring escalation alerts from Sentiment and knowing when to step in

QA analysts

  • Rubric design and calibration: how to define criteria, weight them, and test them
  • Interpreting QA trends: what a rising or falling score means, how to investigate
  • Comparing automated vs manual scores: running calibration checks and adjusting
  • Using QA data to identify training needs across the team

Executives

  • Reading ROI dashboards: cost vs savings, quality trends, productivity metrics
  • Understanding cost drivers: which services consume the most tokens, how costs scale with volume (see the sketch after this list)
  • Making scaling decisions: when to add services, when to invest in tuning, when to expand to new teams
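
For the cost-driver question, a per-service rollup of token usage is often enough. The token counts and the blended per-1K-token rate below are made up for illustration; substitute your provider's actual pricing and usage exports.

    PRICE_PER_1K_TOKENS = 0.002  # assumed blended rate in USD, not a quoted price

    # service -> tokens consumed this month (illustrative numbers)
    usage = {
        "Triage": 1_200_000,
        "Reply": 6_500_000,
        "QA": 3_800_000,
        "Sentiment": 900_000,
        "Pulse": 400_000,
    }

    for service, tokens in sorted(usage.items(), key=lambda kv: -kv[1]):
        cost = tokens / 1_000 * PRICE_PER_1K_TOKENS
        print(f"{service}: {tokens:>9,} tokens -> ${cost:,.2f}")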

Communication templates

Launch announcement (for agents)

Subject: Introducing AI-assisted support tools

Team,

Starting [date], we are rolling out new AI tools to help with [specific services]. These tools are designed to handle the repetitive parts of our work so you can focus on what you do best: helping customers with complex problems.

Here is what is changing:

  • [Triage] will automatically route incoming tickets to the right queue
  • [Reply] will suggest draft responses that you can accept, edit, or discard

What is NOT changing:

  • You are still in control of every response that goes to a customer
  • Your expertise and judgment are what make our support great
  • These tools are here to help you, not replace you

We are starting with [pilot team] and will expand based on results. Training sessions are scheduled for [dates].

Questions? Reach out to [contact].

Agent FAQ

Q: Will AI replace support agents?
A: No. AI handles routine work like drafting responses and routing tickets. You handle the complex, empathetic, human work that AI cannot do. Teams using these tools typically handle more volume at higher quality.

Q: What if the AI draft is wrong?
A: Discard it or edit it. You are always the final authority. Your feedback helps the system improve.

Q: Will I be penalized if my QA scores are low?
A: No. QA scores are for coaching and development. If your scores are low in an area, your team lead will work with you to improve. The goal is growth, not punishment.

Q: Can I opt out?
A: During the pilot phase, talk to your team lead about any concerns. We are committed to making these tools work for the team, and your feedback is essential.

Common failure modes

Deploying everything at once. Teams get overwhelmed. There is no time to learn one tool before the next arrives. Problems are harder to diagnose because too many variables changed simultaneously.

Not getting agent feedback. If you skip the feedback loop, you miss critical signal about what is working and what is not. Agents are your best source of truth about AI quality.

Treating AI as set-and-forget. Prompts need tuning. Rubrics need calibration. Knowledge bases need updating. AI tools require ongoing attention, just less attention than the manual processes they replace.

Using QA scores punitively. The fastest way to destroy trust in automated QA is to use it for discipline. Agents will game the system, morale will drop, and the data becomes meaningless.

Skipping the pilot phase. Pilots catch problems when the blast radius is small. Skipping them means problems surface at full scale, which is expensive and demoralizing.
