AI Support Guide

Executive Reporting

Build leadership dashboards and board-level reports from Simpli data.

Support leaders need data to make decisions, justify investment, and communicate value to the rest of the organization. This page shows you how to pull data from Simpli services and turn it into reports that directors, VPs, and board members can act on.

Data sources

Each Simpli service contributes a different piece of the executive reporting picture.

| Service | What it provides | Key endpoints |
| --- | --- | --- |
| Pulse | Operational metrics, volume forecasts, SLA compliance | /metrics, /sla, /forecast |
| QA | Quality scores, agent performance, coaching data | /scorecards, /trends |
| Sentiment | Customer health trends, escalation data, risk signals | /trends, /customers/{id}/sentiment |
| Reply | Draft acceptance rate (a proxy for agent trust in AI) | /feedback, /usage |
| KB | Knowledge base health, gap analysis | /gaps, /stale, /search |
| Triage | Routing accuracy, classification metrics | /metrics, /accuracy |

Every service also exposes a /usage endpoint that reports token consumption and API call counts, which feeds into cost tracking.
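As a quick illustration, an AI spend estimate can be assembled by summing the token counts from each service's /usage response. This is a sketch: the per-token rate below is an assumed placeholder, not a Simpli value, so substitute your provider's actual pricing.

```python
# Sketch: estimate AI spend from per-service token counts (from each /usage endpoint).
# RATE_PER_1K_TOKENS is a placeholder -- substitute your provider's actual pricing.
RATE_PER_1K_TOKENS = 0.002

def estimate_cost(usage_by_service: dict[str, int], rate_per_1k: float = RATE_PER_1K_TOKENS) -> float:
    """Map of service name -> tokens consumed; returns estimated dollars."""
    total_tokens = sum(usage_by_service.values())
    return total_tokens / 1000 * rate_per_1k

usage = {"pulse": 1_200_000, "qa": 800_000, "reply": 2_000_000}
print(estimate_cost(usage))  # 4M tokens at the assumed rate -> 8.0
```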

Building a weekly operations report

The weekly ops report is for team leads and directors. It should take less than 15 minutes to produce (and ideally be automated).

Report structure

# Weekly Support Operations Report
## Week of [date range]

### Highlights
- [2-3 key wins or concerns from the week]

### Volume and SLA
- Total tickets: [number] ([+/-]% vs last week)
- First response time (avg): [time] (target: [time])
- SLA compliance: [%] (target: [%])
- Forecast next week: [number] tickets

### Quality
- Average QA score: [number]/100 ([+/-] vs last week)
- Conversations scored: [number] ([%] coverage)
- Top coaching theme: [theme from QA trends]

### Customer Health
- Average sentiment: [score] ([+/-] vs last week)
- Escalation alerts triggered: [number]
- At-risk customers flagged: [number]

### AI Performance
- Reply draft acceptance rate: [%]
- Triage routing accuracy: [%]
- KB gaps identified: [number] (closed this week: [number])

### Action Items
- [ ] [Specific action with owner and deadline]
- [ ] [Specific action with owner and deadline]

Where to pull each number

  • Total tickets, FRT, SLA compliance: Pulse /metrics with the date range filter
  • Forecast next week: Pulse /forecast
  • Average QA score, conversations scored, coaching themes: QA /scorecards and /trends
  • Average sentiment, escalation alerts: Sentiment /trends
  • Draft acceptance rate: Reply /feedback aggregated over the period
  • Routing accuracy: Triage /metrics
  • KB gaps: KB /gaps

Board-level metrics mapping

Board members and C-suite executives care about a small set of metrics. Here is how Simpli data maps to what they want to see.

| Board Metric | Simpli Source | Endpoint |
| --- | --- | --- |
| CSAT | QA average scores | QA /scorecards |
| First Response Time | Pulse | /metrics |
| SLA Compliance | Pulse | /sla |
| Cost per Resolution | Pulse + usage data | /metrics + /usage |
| Customer Health | Sentiment | /customers/{id}/sentiment |
| Self-Service Deflection | KB + help center analytics | KB /search hit rates |
| Agent Productivity | Pulse + Reply | /metrics + /feedback |
| AI ROI | All services | /usage endpoints + productivity metrics |
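The cost-per-resolution row, for example, is simple division. This minimal sketch assumes you supply total spend aggregated from the /usage endpoints and a resolved-ticket count from Pulse /metrics; the numbers in the usage line are illustrative only.

```python
def cost_per_resolution(total_cost_usd: float, resolved_tickets: int) -> float:
    """Divide total spend by resolutions; returns 0.0 when nothing was resolved."""
    if resolved_tickets == 0:
        return 0.0
    return total_cost_usd / resolved_tickets

# Illustrative: $5,000 monthly spend over 12,500 resolved tickets
print(cost_per_resolution(5000.0, 12500))  # -> 0.4
```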

Translating support metrics to business language

Executives do not think in handle times and CSAT scores. Translate:

  • "AHT dropped 30%" becomes "Each agent now handles 30% more tickets per day at the same quality level, effectively adding 3 FTEs of capacity without hiring."
  • "CSAT increased 5 points" becomes "Customer satisfaction with support is at an all-time high, which correlates with a [X]% improvement in retention."
  • "SLA compliance hit 98%" becomes "We are meeting our contractual support commitments to enterprise customers nearly 100% of the time."
  • "Sentiment flagged 12 at-risk accounts" becomes "We proactively intervened with 12 accounts showing signs of dissatisfaction before they escalated or churned."

Forecasting for capacity planning

Pulse's /forecast endpoint predicts ticket volume for the coming days and weeks based on historical patterns. This is critical for staffing decisions.

How to use forecasts

  1. Pull the forecast: Call Pulse /forecast with your desired time horizon (1 week, 2 weeks, 1 month)
  2. Compare to current capacity: Calculate your current team's maximum throughput (agents * tickets per agent per day)
  3. Identify gaps: If forecasted volume exceeds capacity, you need to act: hire, shift schedules, or optimize workflows

Connecting to staffing models

A simple staffing model:

Required agents = Forecasted daily volume / Tickets per agent per day
Buffer          = Required agents * 1.15 (15% buffer for PTO, meetings, etc.)
Gap             = Buffer - Current headcount

Pull forecasted volume from Pulse /forecast and tickets-per-agent-per-day from Pulse /metrics. If Reply is deployed and improving throughput, factor in the higher per-agent capacity.
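The staffing model above translates directly into code. This is a minimal sketch with the 15% buffer kept as a parameter; the example inputs are illustrative.

```python
def staffing_gap(forecast_daily_volume: float,
                 tickets_per_agent_per_day: float,
                 current_headcount: int,
                 buffer_pct: float = 0.15) -> float:
    """Positive result = additional agents needed; negative = spare capacity."""
    required = forecast_daily_volume / tickets_per_agent_per_day
    buffered = required * (1 + buffer_pct)
    return buffered - current_headcount

# e.g. 300 tickets/day forecast, 25 tickets per agent per day, 12 agents on staff:
print(round(staffing_gap(300, 25, 12), 1))  # -> 1.8 (roughly two more agents)
```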

ROI dashboard

The ROI dashboard answers the question every executive asks: "Is the AI investment paying off?"

Data to gather

From each service's /usage endpoint:

  • Total API calls
  • Total tokens consumed
  • Estimated cost (tokens * rate)

From productivity metrics:

  • Handle time reduction (Pulse /metrics, before vs after)
  • Agent throughput increase (tickets per agent per day, before vs after)
  • Quality improvement (QA score trends)

The ROI story

Structure the ROI section of your report like this:

### AI Investment ROI - [Month]

**Costs**
- Total LLM spend across all services: $[amount]
- Infrastructure: $[amount]
- Total AI cost: $[amount]

**Savings**
- Agent hours saved (AHT reduction): [hours] = $[amount]
- QA analyst hours saved (automated scoring): [hours] = $[amount]
- Reporting hours saved (Pulse automation): [hours] = $[amount]
- Total savings: $[amount]

**Net ROI: $[savings - costs] ([X]x return)**

**Quality Impact (not dollarized)**
- CSAT: [trend]
- Escalation rate: [trend]
- QA coverage: [before]% -> [after]%
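The template's bottom line is straightforward arithmetic. The figures below are illustrative placeholders, not benchmarks; plug in your own costs and dollarized savings.

```python
# Illustrative numbers only -- substitute your own costs and dollarized savings.
costs = {"llm_spend": 4200.0, "infrastructure": 800.0}
savings = {"agent_hours": 9000.0, "qa_analyst_hours": 2500.0, "reporting_hours": 1200.0}

total_costs = sum(costs.values())
total_savings = sum(savings.values())
net_roi = total_savings - total_costs
multiple = total_savings / total_costs

print(f"Net ROI: ${net_roi:,.0f} ({multiple:.1f}x return)")  # Net ROI: $7,700 (2.5x return)
```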

Report automation

Instead of pulling numbers manually each week, automate the process. Here is an example Python script that gathers data from all services and generates a report.

"""Weekly report generator. Run via cron or scheduled task."""

import httpx
from datetime import datetime, timedelta

SERVICES = {
    "pulse": "http://localhost:8001",
    "qa": "http://localhost:8002",
    "sentiment": "http://localhost:8003",
    "reply": "http://localhost:8004",
    "triage": "http://localhost:8005",
    "kb": "http://localhost:8006",
}

def fetch_json(base_url: str, path: str, params: dict | None = None) -> dict:
    """Fetch JSON from a service endpoint."""
    resp = httpx.get(f"{base_url}{path}", params=params or {}, timeout=30.0)
    resp.raise_for_status()
    return resp.json()

def generate_weekly_report() -> str:
    end = datetime.utcnow()
    start = end - timedelta(days=7)
    date_params = {
        "start": start.isoformat(),
        "end": end.isoformat(),
    }

    # Gather data from each service
    metrics = fetch_json(SERVICES["pulse"], "/metrics", date_params)
    sla = fetch_json(SERVICES["pulse"], "/sla", date_params)
    forecast = fetch_json(SERVICES["pulse"], "/forecast")
    qa_scores = fetch_json(SERVICES["qa"], "/scorecards", date_params)
    sentiment = fetch_json(SERVICES["sentiment"], "/trends", date_params)
    reply_feedback = fetch_json(SERVICES["reply"], "/feedback", date_params)
    kb_gaps = fetch_json(SERVICES["kb"], "/gaps")

    # Build the report
    report = f"""# Weekly Support Operations Report
## Week of {start.strftime('%b %d')} - {end.strftime('%b %d, %Y')}

### Volume and SLA
- Total tickets: {metrics.get('total_tickets', 'N/A')}
- First response time (avg): {metrics.get('avg_first_response_time', 'N/A')}
- SLA compliance: {sla.get('compliance_rate', 'N/A')}%
- Forecast next week: {forecast.get('predicted_volume', 'N/A')} tickets

### Quality
- Average QA score: {qa_scores.get('average_score', 'N/A')}/100
- Conversations scored: {qa_scores.get('total_scored', 'N/A')}

### Customer Health
- Average sentiment: {sentiment.get('average_score', 'N/A')}
- Escalation alerts: {sentiment.get('escalation_count', 'N/A')}

### AI Performance
- Reply draft acceptance rate: {reply_feedback.get('acceptance_rate', 'N/A')}%
- KB gaps identified: {kb_gaps.get('total_gaps', 'N/A')}
"""
    return report

if __name__ == "__main__":
    report = generate_weekly_report()
    filename = f"report_{datetime.utcnow().strftime('%Y-%m-%d')}.md"
    with open(filename, "w") as f:
        f.write(report)
    print(f"Report written to {filename}")

Adapt the service URLs to match your deployment. For production, add error handling, authentication headers, and output to your preferred format (Markdown, JSON, PDF, or directly into a Slack message).
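One lightweight way to add that resilience without restructuring the script is a small retry wrapper around fetch_json. This is a sketch: the retry count and linear backoff are arbitrary defaults, and the usage line is hypothetical.

```python
import time

def fetch_with_retries(fetch, retries: int = 3, backoff_s: float = 1.0):
    """Call a zero-argument `fetch` callable, retrying on any exception
    with linear backoff; re-raises the last error if all attempts fail."""
    last_exc = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as exc:
            last_exc = exc
            if attempt < retries - 1:
                time.sleep(backoff_s * (attempt + 1))
    raise last_exc

# Hypothetical usage with the script above:
# metrics = fetch_with_retries(lambda: fetch_json(SERVICES["pulse"], "/metrics", date_params))
```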

Scheduling

Run the script weekly via cron, a CI/CD scheduled job, or any task scheduler:

# Every Monday at 8am UTC
0 8 * * 1 python3 /path/to/generate_report.py
