Integration Guide
Go from a working WebRTC app to sessions analyzed on your rtcstats.com dashboard - four steps, under an hour.
You have a WebRTC app in production. Users complain about call quality, and right now your debugging workflow is "pull a webrtc-internals dump, open it, scroll through thousands of lines, hope you spot the problem."
This guide gets your sessions flowing into rtcStats so you stop doing that. Four steps: (1) install the client SDK, (2) stand up the collection server, (3) connect it to rtcstats.com, and (4) verify your first analyzed session. The whole thing takes under an hour.
Before you start
You need three things:
- A WebRTC app running in the browser (any framework)
- A server or container where you can run rtcstats-server
- An rtcstats.com account (the free tier works fine for setup verification)
Step 1: Install rtcstats-js in your app
npm install @rtcstats/rtcstats-js
Add this to your app's entry point, before any RTCPeerConnection is created:
import { wrapRTCStatsWithDefaultOptions } from '@rtcstats/rtcstats-js';
const trace = wrapRTCStatsWithDefaultOptions();
trace.connect('wss://your-rtcstats-server.example.com' + window.location.pathname);
That's it. wrapRTCStatsWithDefaultOptions() patches RTCPeerConnection on the window object. Every peer connection created after this call is automatically monitored. No changes to your existing WebRTC code.
We'll stand up the server behind wss://your-rtcstats-server.example.com in the next step.
Where to place it (by framework)
React / Next.js App Router: Inside a useEffect in a 'use client' component at the app root. Return a cleanup function that calls trace.close() (see the sketch after this list).
Next.js Pages Router: Inside useEffect in _app.tsx.
Vue 3: Inside onMounted() in App.vue. Call trace.close() in onUnmounted().
Vanilla JS: After DOMContentLoaded.
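As an illustration, here's a minimal sketch for the Next.js App Router case. The component name is made up, and it assumes the same trace API shown above:

'use client';
import { useEffect } from 'react';
import { wrapRTCStatsWithDefaultOptions } from '@rtcstats/rtcstats-js';

// Hypothetical provider component - mount it once at the app root.
export default function RtcStatsProvider() {
  useEffect(() => {
    // Patch RTCPeerConnection before any call UI renders.
    const trace = wrapRTCStatsWithDefaultOptions();
    trace.connect('wss://your-rtcstats-server.example.com' + window.location.pathname);
    // Cleanup on unmount so the trace connection doesn't leak.
    return () => trace.close();
  }, []);
  return null;
}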
Step 2: Stand up rtcstats-server
rtcstats-server is the open-source collection layer. It sits inside your infrastructure, receives data from rtcstats-js, and forwards the sessions you choose to rtcstats.com for analysis.
You own this server. Your data hits your infrastructure first.
Docker (recommended):
docker run -p 3000:3000 rtcstats/rtcstats-server
Point your rtcstats-js trace.connect() URL at this server:
trace.connect('wss://your-rtcstats-server.example.com' + window.location.pathname);
// Local development:
trace.connect('ws://localhost:3000' + window.location.pathname);
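If the same bundle serves several environments, one pattern is to pick the endpoint once. This is a sketch - the NODE_ENV check is illustrative, so use whatever flag your bundler exposes:

// Pick the collector endpoint per environment; env detection is illustrative.
const RTCSTATS_WS =
  process.env.NODE_ENV === 'production'
    ? 'wss://your-rtcstats-server.example.com'
    : 'ws://localhost:3000';
trace.connect(RTCSTATS_WS + window.location.pathname);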
See the rtcstats-server README for the full list of configuration options.
Step 3: Connect to rtcstats.com (Optional)
By default, rtcstats-server stores sessions locally. To get proper visualization along with Observations, Deductions, an Experience Score, and an AI Summary for each session, configure forwarding to rtcstats.com.
Grab an application token:
- Sign in at rtcstats.com
- Go to Settings > Applications
- Create an application and give it a name
Add the generated token to your rtcstats-server configuration. The server uses it to authenticate uploads to https://rtcstats.com/api/rtcstats/v1.0/upload.
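For example, if you run the server in Docker, you might pass the token in as an environment variable. The variable name below is an assumption - check the rtcstats-server README for the actual configuration key:

# RTCSTATS_APP_TOKEN is a hypothetical name - see the README for the real key
docker run -p 3000:3000 -e RTCSTATS_APP_TOKEN=your-application-token rtcstats/rtcstats-server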
Tokens for API use are available on the Developer plan and above. Without one, you can still upload files collected by rtcstats-server manually on an ad-hoc basis.
Step 4: Verify your first session
- Start a WebRTC call in your app
- Give it a minute or ten to run, and then close the app
- Check that the session file shows up in packages/rtcstats-server/upload (unless you modified the server's configuration)
- To view the analysis, upload the file at rtcstats.com/dashboard/upload. If you connected to rtcstats.com in Step 3, the session should already be there - no manual upload needed
If nothing shows up:
- Open Chrome DevTools on the page and check that the WebSocket connection to your rtcstats-server opens when you expect it to
- Confirm rtcstats-server is reachable at the WebSocket URL you passed to trace.connect()
- Confirm your forwarding config in rtcstats-server points to rtcstats.com with a valid application token
- Confirm wrapRTCStatsWithDefaultOptions() runs before new RTCPeerConnection() in your code (see the sketch after this list)
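On that last point, ordering matters because only connections constructed after the patch are visible. A minimal sketch:

import { wrapRTCStatsWithDefaultOptions } from '@rtcstats/rtcstats-js';

// Correct order: patch RTCPeerConnection first...
const trace = wrapRTCStatsWithDefaultOptions();
trace.connect('ws://localhost:3000' + window.location.pathname);

// ...then create peer connections. Anything constructed after the wrap
// is monitored; a connection created before it is invisible to rtcStats.
const pc = new RTCPeerConnection();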
What you get
Once a session is analyzed, you stop scrolling through raw getStats() output. Instead, you get:
- Complete visual analysis of the session, split into easy-to-navigate tabs and sections
- Observations - the specific moments that mattered. Not a haystack of metrics - the needles, already pulled out
- Deductions - what those Observations actually mean. Getting you closer to the root cause, not just symptoms
- Experience Score - how the call felt, in a single number (0-100). No more guessing whether a session was "bad enough" to investigate
- AI Summary - the whole story in plain English. What happened, why, and what to look at
Access your results three ways:
- The dashboard at rtcstats.com
- The REST API: GET https://rtcstats.com/api/rtcstats/v1.0/sessions/{sessionId} (see the API reference)
- The MCP server: the get_session tool at https://rtcstats.com/api/rtcstats/v1.0/mcp
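As a quick illustration, here's a hedged TypeScript sketch of the REST call. Bearer-token auth is an assumption, mirroring the MCP config shown below, and the token environment variable name is made up:

// Pull one analyzed session by id (run inside an async context, Node 18+).
const sessionId = 'your-session-id'; // from the dashboard or list_sessions
const res = await fetch(
  `https://rtcstats.com/api/rtcstats/v1.0/sessions/${sessionId}`,
  { headers: { Authorization: `Bearer ${process.env.RTCSTATS_TOKEN}` } },
);
if (!res.ok) throw new Error(`rtcstats API returned ${res.status}`);
const session = await res.json();
console.log(session); // Observations, Deductions, Experience Score, AI Summary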
Connect your AI coding assistant (MCP)
Once sessions are flowing, you can wire up Claude Code, Cursor, or any MCP-compatible tool directly to your rtcStats data. Stop context-switching between your IDE and the dashboard - ask your AI assistant about session quality while you debug.
Add this to your MCP config:
{
"mcpServers": {
"rtcstats": {
"url": "https://rtcstats.com/api/rtcstats/v1.0/mcp",
"headers": {
"Authorization": "Bearer YOUR_APPLICATION_TOKEN"
}
}
}
}
Available tools:
- get_quota - check your remaining analysis credits
- list_sessions - list recent sessions with metadata
- get_session - pull the full analysis for a session (Observations, Deductions, Experience Score, AI Summary)
MCP tools are read-only and don't consume analysis credits. Requires the Enterprise plan.
What's next
You're collecting sessions and getting analysis. From here:
Tweak and shape rtcstats-server collection
- Integration instructions for rtcstats-js and rtcstats-server on various WebRTC frameworks and IaaS environments
- Set up storage for the files rtcstats-server collects
- Handle session and user identity
- Configure rtcstats-server for geolocation
Do more with the rtcStats solution
- Embeddable Viewer - embed rtcStats analysis directly in your own dashboard or support tools
- Best Practices - move from ad-hoc debugging to active observability
- Sampling and Forwarding Rules - fine-tune what gets analyzed at scale