Metrics provides agent observability for Cloud Agents and automated Workflows. Agent observability answers a simple question: what are my AI agents doing, and is their work succeeding? Use Metrics to monitor agent activity, understand human intervention, measure success rates, and evaluate the cost and impact of AI-driven work across your repositories.

What Metrics Show About Your Cloud Agents

Continue’s Metrics give you operational observability for AI agents, similar to how traditional observability tools provide visibility into services, jobs, and pipelines. Instead of logs and latency, agent observability focuses on:
  • Runs and execution frequency
  • Success vs. human intervention
  • Pull request outcomes
  • Cost per run and per workflow
Understand when and how often your agents run (see the activity sketch after this list):
  • See which Cloud Agents are running most often
  • Spot spikes, trends, or recurring failures
  • Monitor automated Workflows in production
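If you export run data, via CSV or your own logging, you can reproduce this kind of activity view yourself. Below is a minimal TypeScript sketch, assuming a hypothetical AgentRun record shape; none of these field names come from Continue's API.

```typescript
// Hypothetical run record -- field names are illustrative, not Continue's API.
interface AgentRun {
  agentName: string;
  startedAt: Date;
  outcome: "success" | "failed" | "intervened";
}

// Count runs per agent per calendar day.
function runsPerDay(runs: AgentRun[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const run of runs) {
    const key = `${run.agentName}:${run.startedAt.toISOString().slice(0, 10)}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

// Flag agent-days that ran more than `factor` times the average volume --
// a rough stand-in for the spike detection described above.
function flagSpikes(counts: Map<string, number>, factor = 3): string[] {
  const avg = [...counts.values()].reduce((a, b) => a + b, 0) / counts.size;
  return [...counts].filter(([, n]) => n > factor * avg).map(([key]) => key);
}
```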
Measure whether agents produce usable results (these rates are computed in the sketch after this list):
  • Total runs
  • PR creation rate
  • PR status (open, merged, closed, failed)
  • Success vs. intervention rate
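These outcome metrics are simple ratios over run records. Here is a minimal sketch of how they could be derived, again under an assumed record shape (createdPr, prStatus, and humanIntervened are illustrative names, not Continue's schema):

```typescript
// Hypothetical run record with PR outcome -- illustrative only.
interface RunOutcome {
  createdPr: boolean;
  prStatus?: "open" | "merged" | "closed" | "failed";
  humanIntervened: boolean;
}

// Safe ratio: avoids NaN when the denominator is zero.
const ratio = (n: number, d: number) => (d === 0 ? 0 : n / d);

function summarize(runs: RunOutcome[]) {
  const withPr = runs.filter((r) => r.createdPr);
  const merged = withPr.filter((r) => r.prStatus === "merged");
  const intervened = runs.filter((r) => r.humanIntervened);
  return {
    totalRuns: runs.length,
    prCreationRate: ratio(withPr.length, runs.length),       // PRs opened per run
    mergeRate: ratio(merged.length, withPr.length),          // merged per opened PR
    interventionRate: ratio(intervened.length, runs.length), // runs needing a human
  };
}
```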
Evaluate automated agent Workflows in production (a guardrail sketch follows this list):
  • Which Workflows generate the most work
  • Completion and success rates
  • Signals that a Workflow needs refinement or guardrails
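One common refinement signal is a success rate that drops below a threshold once a Workflow has enough runs to judge. The sketch below shows one way to surface such Workflows; the WorkflowStats shape and the 70% / 10-run defaults are assumptions for illustration, not product behavior.

```typescript
// Hypothetical per-workflow rollup -- illustrative shape, not Continue's API.
interface WorkflowStats {
  name: string;
  runs: number;
  successes: number;
}

// Flag workflows whose success rate falls below a guardrail threshold,
// ignoring workflows with too few runs to judge fairly.
function needsReview(stats: WorkflowStats[], threshold = 0.7, minRuns = 10): string[] {
  return stats
    .filter((w) => w.runs >= minRuns && w.successes / w.runs < threshold)
    .map((w) => w.name);
}
```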

Why Metrics Matter

Improve Agent Reliability

Identify which Agents need better rules, tools, or prompts.

Measure Automation Value

See how much work your automated Workflows are completing across your repos.

Sharing Metrics

Share a read-only view of your metrics with stakeholders outside your organization. Click Share on the Metrics page to open the sharing dialog. Toggle Public link enabled to generate a shareable URL.
Public viewers can see aggregate metrics (total runs, merge rate, activity, workflow breakdown) but not cost data or user identities. Links expire after 30 days.
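Conceptually, the public view is the internal metrics payload with cost and identity fields stripped out before serving. The sketch below illustrates that filtering with hypothetical types; the actual payload shape is internal to Continue.

```typescript
// Hypothetical shapes -- the real payload is internal to Continue.
interface InternalMetrics {
  totalRuns: number;
  mergeRate: number;
  workflowBreakdown: Record<string, number>;
  costPerRunUsd: number;              // private: never exposed publicly
  runsByUser: Record<string, number>; // private: user identities stay hidden
}

type PublicMetrics = Pick<InternalMetrics, "totalRuns" | "mergeRate" | "workflowBreakdown">;

// Keep only the aggregate fields a public share link may expose.
function toPublicView(m: InternalMetrics): PublicMetrics {
  const { totalRuns, mergeRate, workflowBreakdown } = m;
  return { totalRuns, mergeRate, workflowBreakdown };
}
```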