What Is Session Replay? A Complete Guide for Engineering Teams

12 min read

It's Monday morning. A customer emails support: "Your checkout button doesn't work." No screenshot. No error message. No steps to reproduce. Just a frustrated user and a vague complaint that could mean a dozen different things.

Your first instinct is to reproduce it. You open the checkout page, click the button — it works fine. You check the error logs — nothing. You ask the customer for more details, they reply two days later with "still broken," and somewhere in the gap a sale was lost.

This is the problem session replay was built to solve. Instead of triangulating from fragments, you watch exactly what the customer experienced: every click, every error, every failed network request — replayed in your browser, on your time.

What Session Replay Actually Is

Session replay is a technique for recording user interactions in a web application and replaying them as a near-exact re-rendering of what the user saw. Unlike a screen recording, it doesn't capture video pixels. Instead, it captures the state of the DOM and every change to it, which allows the replay engine to reconstruct the interface accurately at any point in time — often with higher fidelity than a video, and at a fraction of the file size.

Think of it less like a security camera and more like a flight data recorder. It captures structured data about every meaningful event that occurred, rather than raw footage. That distinction matters a great deal, both for privacy (more on that later) and for what you can do with the recording: search it, annotate it, correlate it with other signals.

The output is a timeline you can scrub through. You see the cursor move. You see the user type into a form, pause, retype, click submit. You see the network request go out, the 500 response come back, the JavaScript error fire — all synchronized on the same timeline. In thirty seconds you know more than you'd learn from a thirty-minute support call.

How Session Replay Works Under the Hood

Modern session replay tools — including those Clairvio builds on — work by taking a full DOM snapshot when recording begins, then capturing incremental mutations as the page changes. This is fundamentally different from frame-by-frame video encoding.

The initial snapshot

When recording starts, the library serialises the entire DOM tree into a structured JSON representation. This snapshot captures the HTML structure, applied styles, attribute values, and text content of every visible element. It's essentially a portable, self-contained copy of the page's initial render state.
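To make the idea concrete, here is a minimal sketch of that serialisation step. It uses a plain-object tree in place of real DOM nodes so it can run outside a browser; the node shape and the `serialiseNode` name are illustrative, not any particular library's API.

```javascript
// Serialise a (stand-in) DOM tree into a portable JSON structure.
// Each node gets a stable id so later mutation records can reference it.
let nextId = 1;

function serialiseNode(node) {
  return {
    id: nextId++,                          // stable id for delta references
    tag: node.tag,                         // e.g. "form", "button"
    attributes: { ...node.attributes },    // copy attribute values
    text: node.text ?? null,               // text content, if any
    children: (node.children ?? []).map(serialiseNode),
  };
}

// Example page fragment: a checkout form with an email field and a button.
const page = {
  tag: "form",
  attributes: { id: "checkout" },
  children: [
    { tag: "input", attributes: { type: "text", name: "email" } },
    { tag: "button", attributes: { type: "submit" }, text: "Pay now" },
  ],
};

const snapshot = serialiseNode(page);
```

`JSON.stringify(snapshot)` then yields the self-contained copy of the initial render state that the rest of the recording builds on.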

Incremental mutations

After the snapshot, the library attaches a MutationObserver to watch for DOM changes — elements added or removed, attributes changed, text content updated. Each change is recorded as a small, timestamped delta. This means the recording file doesn't grow linearly with the complexity of the page; it only grows with the amount of change that happens.
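A rough sketch of the delta model: each change is stored as a small timestamped record keyed by node id, and replay re-applies the records in order to reconstruct any later state. The record shapes are illustrative; a real recorder would be fed by a `MutationObserver` rather than explicit calls.

```javascript
// Record each DOM change as a small timestamped delta.
const mutations = [];

function recordMutation(nodeId, kind, payload, timestamp) {
  mutations.push({ nodeId, kind, payload, timestamp });
}

// Replay: apply the deltas, in timestamp order, to the snapshot state.
function replay(initialNodes, records) {
  const nodes = structuredClone(initialNodes);
  for (const m of [...records].sort((a, b) => a.timestamp - b.timestamp)) {
    if (m.kind === "attribute") nodes[m.nodeId].attributes[m.payload.name] = m.payload.value;
    if (m.kind === "text") nodes[m.nodeId].text = m.payload.value;
  }
  return nodes;
}

// Snapshot state for two nodes, then two changes during the session:
// the button becomes enabled, and an error message appears later.
const initial = {
  1: { tag: "button", attributes: { disabled: "true" }, text: "Pay now" },
  2: { tag: "span", attributes: {}, text: "" },
};
recordMutation(1, "attribute", { name: "disabled", value: "false" }, 120);
recordMutation(2, "text", { value: "Card declined" }, 3480);

const finalState = replay(initial, mutations);
```

Because only the two deltas are stored — not two more full snapshots — the recording grows with change, not with page complexity.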

User interaction events

Layered on top of the DOM mutations are synthetic user interaction events: mouse move coordinates (sampled at a configurable rate to limit volume), mouse clicks with target element information, scroll position, window resize events, keyboard input (with sensitive fields masked), and focus/blur transitions. These are recorded as timestamped events and overlaid on the DOM replay to show the cursor path and interaction points.
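The sampling mentioned above can be sketched as a simple rate limiter: mouse positions arriving faster than the sample interval are dropped, so event volume stays bounded no matter how fast the user moves. The 50 ms interval here is an illustrative default, not a specific tool's setting.

```javascript
// Keep at most one mouse-move sample per interval.
function createMouseSampler(sampleIntervalMs = 50) {
  const samples = [];
  let lastSampleAt = -Infinity;

  return {
    // Called for every raw mousemove event.
    onMouseMove(x, y, timestamp) {
      if (timestamp - lastSampleAt >= sampleIntervalMs) {
        samples.push({ x, y, timestamp });
        lastSampleAt = timestamp;
      }
    },
    getSamples: () => samples,
  };
}

// 100 raw events over one second collapse to 20 recorded samples.
const sampler = createMouseSampler(50);
for (let i = 0; i < 100; i++) sampler.onMouseMove(i, i * 2, i * 10);
```

The replay engine interpolates between the retained samples, so the cursor path still looks smooth despite the dropped events.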

Network and console capture

Beyond the DOM, session replay tools can intercept XMLHttpRequest and fetch calls, recording request URLs, methods, status codes, and optionally response payloads. Console output — console.log, console.warn, console.error — can be captured and timestamped on the same timeline. Unhandled JavaScript exceptions are recorded with their stack traces. This transforms session replay from "what did the user see" into "what was the full technical state of the page."
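The fetch-interception idea can be sketched as a transparent wrapper that records URL, method, status, and timing onto a shared log without changing the call's behaviour. It takes the fetch implementation as an argument (an assumption made here for testability), so it works the same against the real global `fetch` or a stub.

```javascript
// Wrap a fetch-like function so every call is logged with timing and status.
function instrumentFetch(fetchImpl, log) {
  return async function instrumented(url, options = {}) {
    const started = Date.now();
    const entry = { url, method: options.method ?? "GET", startedAt: started };
    try {
      const response = await fetchImpl(url, options);
      entry.status = response.status;
      return response;                     // pass the response through untouched
    } catch (err) {
      entry.error = String(err);           // network-level failures are logged too
      throw err;
    } finally {
      entry.durationMs = Date.now() - started;
      log.push(entry);
    }
  };
}

// Usage against a stub fetch simulating a failing checkout API.
const networkLog = [];
const fakeFetch = async () => ({ status: 500 });
const wrappedFetch = instrumentFetch(fakeFetch, networkLog);
```

Because each entry carries its own timestamps, the network log slots directly into the same timeline as the DOM mutations and console output.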

Session Replay vs. Screen Recording

When people first hear about session replay, they often ask: why not just ask the user to record their screen with Loom or a native screen-recording tool?

Screen recordings have real limitations for engineering debugging:

  • They require user action. Users need to know how to start a recording, remember to start it before the problem occurs, and actually do it — all of which introduces friction and means intermittent bugs rarely get captured.
  • They don't capture technical state. A screen recording shows what the user saw, but not why. You can't see the network requests, the console errors, or the JavaScript exceptions that caused the visible symptom.
  • They're large and slow to share. A five-minute Loom video is many megabytes. A five-minute session replay is tens of kilobytes of structured data.
  • They're not searchable or programmable. You can't write a query over a video to find every session where a specific API endpoint returned a 4xx response. You can over structured session data.
  • They capture sensitive information indiscriminately. Video captures everything visible — including passwords, account numbers, and other sensitive content the user might not want to share. DOM-based replay can mask sensitive fields precisely.

Session replay sidesteps all of these. It starts automatically, captures both visible and technical state, produces compact structured data, and gives you fine-grained control over what gets recorded.

What Gets Captured in a Session Replay

A complete session replay typically includes several layers of information, each valuable for different parts of the debugging process:

Visual replay

The reconstructed DOM replay is what most people think of when they hear "session replay." You watch the page render, see the cursor move, watch the user interact with elements, and observe how the UI responds. This is invaluable for reproducing bugs that only manifest under specific interaction sequences — the kind that's nearly impossible to trigger without knowing the exact steps.

Console output

Timestamped console messages give you the JavaScript logging context that's normally only visible in the user's DevTools. Console errors often contain the actual exception message that your production error tracker didn't capture, or log lines that a developer left in for debugging that illuminate exactly what state the application was in when the problem occurred.

Network activity

Request and response logs show you which API calls were made, in what order, and what the server returned. This is the layer where many production bugs live — a race condition in parallel requests, a backend error that wasn't surfaced to the user, an authentication token that silently expired. The visual replay alone might show the user clicking a button multiple times out of frustration; the network log shows that each click was firing a duplicate request and the third response had a 409 Conflict.

JavaScript errors

Unhandled exceptions and promise rejections captured in context — alongside the DOM state and network activity at the time they fired — are far more actionable than the same error captured in isolation by an error tracker. You know exactly what the user was doing and what the page looked like when the error occurred.

Environment snapshot

Browser type and version, operating system, screen resolution, viewport dimensions, device pixel ratio, timezone, language settings — all captured automatically at session start. This is the information that, in a support ticket, you'd need to ask three follow-up questions to get. In a session replay, it's there on the first frame.
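An environment snapshot amounts to reading a handful of standard properties once at session start. This sketch takes navigator/screen/window-like objects as parameters instead of touching browser globals, so it can run and be tested anywhere; the field names mirror the standard `Navigator`, `Screen`, and `Window` properties.

```javascript
// Capture a one-shot environment snapshot from browser-like objects.
function captureEnvironment(nav, screen, win) {
  return {
    userAgent: nav.userAgent,
    language: nav.language,
    screen: { width: screen.width, height: screen.height },
    viewport: { width: win.innerWidth, height: win.innerHeight },
    devicePixelRatio: win.devicePixelRatio,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    capturedAt: new Date().toISOString(),
  };
}

// Stub values standing in for the real browser globals.
const env = captureEnvironment(
  { userAgent: "TestBrowser/1.0", language: "en-GB" },
  { width: 1512, height: 982 },
  { innerWidth: 1512, innerHeight: 860, devicePixelRatio: 2 },
);
```

In a browser, the same function would be called with the real `navigator`, `screen`, and `window` objects on the session's first frame.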

Key Use Cases for Engineering Teams

The debugging application is obvious, but session replay is useful across a broader range of engineering workflows:

Reproducing intermittent bugs

The hardest bugs to fix are the ones you can't reproduce. "It happened once last Tuesday and I can't make it happen again" is a familiar nightmare. Session replay gives you the exact interaction sequence, browser state, and network conditions that preceded the failure. If you can watch it happen, you can figure out why it happened.

Support escalations

When a support ticket gets escalated to engineering, the handoff usually loses context. The support agent paraphrases the customer's description, which was already an imprecise description of a technical event. Session replay collapses that game of telephone — the engineer looks at the session directly and forms their own precise understanding of what occurred.

Customer-specific environments

Enterprise customers often have unusual environments: custom SSO configurations, restrictive network proxies, browser extensions that inject scripts, unusual screen configurations. Bugs in these environments can be genuinely impossible to reproduce in a standard development setup. Session replay gives you access to what happened in the customer's actual environment without needing to replicate it.

Post-incident analysis

After an incident, session replays from affected users provide ground truth about what the user-visible impact actually was. Rather than inferring from server-side metrics, you can watch the experience degrade in real time. This improves post-mortems and helps you build more accurate user impact statements for stakeholders.

QA edge cases

QA teams can use session replay to document exact reproduction steps for edge cases discovered in testing, making bug reports to engineering dramatically more precise. A video link is replaced by a structured replay with all the underlying technical context attached.

Always-On vs. On-Demand Recording

Session replay tools typically operate in one of two modes: always-on recording (every user session is captured by default) or on-demand recording (recording is triggered only when needed).

Always-on recording — used by tools like FullStory, LogRocket, and Hotjar — captures all sessions continuously. This gives you historical data before a problem is reported and enables aggregate analysis (funnel analysis, rage clicks, and so on). But it comes with significant trade-offs: higher cost at scale, a heavier privacy footprint, and the engineering overhead of integrating the SDK into your production build and ensuring it doesn't affect performance.

On-demand recording — Clairvio's approach — triggers recording only when a support case is opened. A support agent generates a magic link and sends it to the customer. When the customer opens the link in their browser, recording begins. This approach has a fundamentally smaller privacy footprint (you only record users who have specifically consented to help you debug their issue), zero ongoing cost for sessions that never need debugging, and no impact on the production application for the vast majority of users.

The trade-off is that you don't have historical recordings of sessions before the problem was reported. For live support scenarios — "I'm on the page right now and it's broken" — this isn't a limitation at all. For bugs that only happened once in the past, it is.

Privacy and Session Replay

The most common concern about session replay is privacy, and it's a legitimate one. Recording what users do in your application raises real questions about consent, data minimisation, and the handling of sensitive information.

A well-implemented session replay tool gives you control over what gets captured. Sensitive input fields — passwords, payment card numbers, social security numbers — should be masked before they leave the browser, either by CSS class configuration or by automatically masking all inputs of type password. Elements containing personal information can be blocked from the recording entirely. HTTP headers containing authentication tokens are redacted from network logs.
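To illustrate how precise DOM-based masking can be, here is a sketch that scrubs a serialised node tree before it leaves the browser: password inputs are masked automatically, and any element carrying an opt-out class is scrubbed wholesale. The `replay-mask` class name and node shape are illustrative, not a specific tool's configuration.

```javascript
// Scrub sensitive values from a serialised node tree before capture.
const MASK_CLASS = "replay-mask";

function maskSensitive(node) {
  const attrs = { ...node.attributes };
  const isPassword = node.tag === "input" && attrs.type === "password";
  const isOptedOut = (attrs.class ?? "").split(/\s+/).includes(MASK_CLASS);

  if (isPassword || isOptedOut) {
    if ("value" in attrs) attrs.value = "***";
    // Drop text and children so nothing sensitive survives in the recording.
    return { ...node, attributes: attrs, text: node.text ? "***" : node.text, children: [] };
  }
  return { ...node, attributes: attrs, children: (node.children ?? []).map(maskSensitive) };
}

const form = {
  tag: "form", attributes: {},
  children: [
    { tag: "input", attributes: { type: "password", value: "hunter2" } },
    { tag: "div", attributes: { class: "replay-mask" }, text: "4111 1111 1111 1111" },
    { tag: "input", attributes: { type: "text", value: "jane@example.com" } },
  ],
};
const masked = maskSensitive(form);
```

Note that masking happens on the serialised structure, field by field — the rest of the page is captured at full fidelity, which is exactly the precision a pixel-based video cannot offer.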

The on-demand model has a natural privacy advantage: you only record users who are actively seeking help, which implies a level of consent. The customer clicked the link knowing it would share their session with your team. That's a much cleaner consent model than a persistent tracking SDK embedded in every page.

Under GDPR, the legal basis for on-demand recording is typically legitimate interests or explicit consent (the user activated the magic link and agreed to share). Under CCPA, support-related data collection typically falls outside the scope of "sale" restrictions. That said, data handling practices should always be reviewed with legal counsel for your specific jurisdiction and use case.

How to Evaluate a Session Replay Tool

If you're evaluating session replay tools for your team, here are the questions worth asking:

  • What gets captured by default, and what requires configuration? Understand the default privacy posture before deployment.
  • How does it handle sensitive fields? Auto-masking of password inputs is table stakes; look for configurable masking for custom PII fields.
  • What's the performance overhead? An SDK that slows down your application creates its own problems. DOM serialisation and mutation observation are not free; understand the real-world impact.
  • How is data stored and for how long? Data retention limits should align with your privacy policy and relevant regulations.
  • Always-on or on-demand? The right answer depends on your use case. If you primarily need session replay for support debugging, on-demand is significantly more privacy-friendly and cost-effective. If you need it for product analytics and funnel optimisation, always-on may be worth the overhead.
  • What else is captured alongside the visual replay? Console logs, network requests, and JavaScript errors dramatically increase the debugging value of a replay. Make sure the tool captures them.
  • How does it integrate with your support workflow? A replay tool that lives entirely separate from your support stack creates friction. Look for tools that fit how your support and engineering teams actually collaborate.

Getting Started

Session replay is one of those tools that immediately changes how your team approaches customer-reported bugs. The first time you watch a three-minute replay and immediately understand a bug that had been open for two weeks, the value proposition becomes obvious.

Clairvio takes an on-demand approach: your support team generates magic links, customers activate them with a click, and you get a live view of the session — DOM replay, console logs, network activity, and environment snapshot — without any always-on SDK embedded in your production application.

If your team regularly fields support tickets with vague reproduction steps, spends time on screen-sharing calls trying to reproduce issues, or escalates bugs to engineering without enough context to act on them — session replay is worth evaluating. The gap between "the checkout button is broken" and "I can see exactly what happened" is exactly what it closes.

Ready to stop guessing and start seeing?

Clairvio gives your support and engineering teams full session context with a single shareable link — no installs, no screen sharing.

Try Clairvio free