QA

Score every conversation. Coach with data.

Automated quality assurance that scores conversations against configurable rubrics. Track agent performance, identify coaching opportunities, and ensure consistency.

Everything you need

Built for production. Designed for simplicity.

Rubric-Based Scoring

Score conversations against configurable quality rubrics with per-dimension breakdowns.

Performance Tracking

Track quality trends over time by agent, team, and topic. Identify coaching opportunities early.

100% Coverage

Score every conversation automatically — not just a random sample. Find issues before customers complain.

Calibration Tools

Compare AI scores against human reviewers to continuously calibrate and improve scoring accuracy.

Multi-Dimension Analysis

Evaluate empathy, accuracy, completeness, tone, and resolution quality independently.

Provider Agnostic

Use any LiteLLM-supported model to power your quality evaluations.
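
Calibration in practice means measuring how closely automated scores track human reviews. A minimal sketch of that comparison (all data, names, and thresholds here are illustrative, not from the product):

```python
# Illustrative calibration check: compare AI rubric scores against human
# reviewer scores for the same conversations (scores on a 1-5 scale).
ai_scores = [4.0, 3.5, 5.0, 2.0, 4.5]
human_scores = [4.0, 3.0, 5.0, 3.0, 4.0]

# Mean absolute error: average gap between AI and human judgments.
mae = sum(abs(a - h) for a, h in zip(ai_scores, human_scores)) / len(ai_scores)

# Agreement rate: fraction of conversations where AI lands within
# half a point of the human reviewer.
agreement = sum(
    abs(a - h) <= 0.5 for a, h in zip(ai_scores, human_scores)
) / len(ai_scores)

print(f"MAE: {mae:.2f}, agreement: {agreement:.0%}")  # → MAE: 0.40, agreement: 80%
```

Tracking these two numbers over time is one simple way to decide when the scoring rubric or model needs re-tuning.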

Simple to use

Score a support conversation against quality rubrics.

Full API Reference
qa_example.py
import httpx

# Score a single conversation transcript against the configured rubric.
resp = httpx.post("http://localhost:8003/qa/score", json={
    "messages": [
        {"role": "customer", "content": "My order is late"},
        {"role": "agent", "content": "I apologize for the delay..."}
    ],
})
resp.raise_for_status()
print(resp.json())  # per-dimension quality scores for the conversation
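
The response shape below is a sketch of what a rubric-scoring call might return; the field names are assumptions for illustration, not the documented schema:

```python
# Hypothetical /qa/score response body (schema is an assumption).
example = {
    "overall": 4.2,
    "dimensions": {
        "empathy": 5,
        "accuracy": 4,
        "completeness": 4,
        "tone": 5,
        "resolution": 3,
    },
}

# Pull out the weakest dimension as a coaching signal.
weakest = min(example["dimensions"], key=example["dimensions"].get)
print(weakest)  # → resolution
```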

Endpoints

Clean REST APIs. No SDK required.

POST
/qa/score
Score a conversation
POST
/qa/batch
Score multiple conversations
GET
/health
Service health check
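
For batch scoring, a natural payload mirrors the single-conversation call, with a list of transcripts instead of one. A sketch, assuming a `conversations` field (the exact name is not confirmed by the docs above):

```python
import json

# Hypothetical payload for POST /qa/batch: a list of message transcripts,
# each in the same format as the single /qa/score request.
batch = {
    "conversations": [
        [
            {"role": "customer", "content": "My order is late"},
            {"role": "agent", "content": "I apologize for the delay..."},
        ],
        [
            {"role": "customer", "content": "I received the wrong item"},
            {"role": "agent", "content": "Let me get that corrected for you..."},
        ],
    ],
}

# Posting works exactly as in the single-conversation example:
# httpx.post("http://localhost:8003/qa/batch", json=batch)
print(json.dumps(batch)[:30])
```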

Learn more about QA

Explore how this AI capability can transform your support operations.