Session Replay vs. Error Tracking: When You Need Both
Your error tracker fires an alert at 2 PM on a Thursday: TypeError: Cannot read
properties of undefined (reading 'id'), occurring in checkout.js at
line 147. Forty-three occurrences in the last hour. You open the stack trace. You look at
the source. The stack trace points to a line where you're accessing a property on a
response object that's somehow undefined — but you've been shipping this code for six
months without incident. What changed?
You check your deployment history. Nothing recent. You look at the error context your tracker captured — browser, OS, URL, user agent. A spread of different browsers, all on the same checkout page. You add a breadcrumb annotation and wait for the next occurrence. It comes in. Still no clarity.
What you need isn't more metadata about the error. What you need is to watch what the user was doing in the sixty seconds before the error fired.
This is the gap between error tracking and session replay — and understanding it helps you use both tools more effectively.
What Error Tracking Does
Error tracking tools — Sentry, Bugsnag, Rollbar, Datadog APM — monitor your application for unhandled exceptions, promise rejections, and logged errors. When something goes wrong, they capture the exception message, the stack trace, the runtime environment, and whatever breadcrumbs or contextual metadata you've configured.
They're exceptionally good at detection and triage. An error tracker answers:
- Did something break?
- How often is it happening?
- Is it getting worse or better?
- Which line of code is failing?
- Which users and environments are affected?
- Is this a regression from a recent deployment?
These are important questions, and error tracking answers them reliably at scale. For many bugs — especially backend errors, simple null reference exceptions, and network failures with obvious causes — the stack trace and context an error tracker provides are enough to find and fix the problem.
But error trackers have a structural limitation: they capture the moment of failure, not the path that led to it. The stack trace tells you where the code was executing when it broke. It doesn't tell you what sequence of user actions, application state transitions, and network responses created the conditions that made it break.
The Questions Error Tracking Can't Answer
Error tracking struggles with a category of bugs where the failure is a consequence of state — where the code is technically correct but the sequence of events that reached it was not anticipated. These include:
Race conditions and timing bugs
A user double-clicks a submit button. Two requests go out simultaneously. The second response arrives before the first and overwrites state the first response was expecting to write. An exception fires in the event handler. The stack trace points to a state mutation operation. Without seeing the two concurrent requests in the network log, the error looks inexplicable.
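One common defence against this class of bug is a "latest request wins" guard: tag each request with a token and discard any response whose token is no longer current. A minimal sketch, with a hypothetical `submitOrder` wrapper standing in for real submit logic:

```javascript
// Sketch of a "latest request wins" guard against the double-submit race.
// `submitOrder` and its `send` callback are hypothetical.
let latestRequestId = 0;

async function submitOrder(send) {
  const requestId = ++latestRequestId; // tag this request
  const response = await send(); // a fetch() call in real code
  if (requestId !== latestRequestId) {
    // A newer request went out while we were waiting: drop the stale response
    // instead of letting it overwrite newer state.
    return null;
  }
  return response;
}
```

Disabling the button on first click is the simpler fix for the double-click itself, but the token guard also covers races you didn't anticipate, such as retries or keyboard-triggered submits.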
Multi-step form bugs
A user fills out a five-step checkout flow, goes back to step two to change their shipping address, proceeds forward again, and hits an error on step four. The state management code didn't account for the back-and-forward navigation pattern. The error fires in a generic state update function. Nothing in the stack trace indicates that the back navigation was the precondition.
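One way to harden a flow against this is to revalidate each step's prerequisites on every forward transition, so that a back-and-forward path can never reach a later step with stale or missing state. A sketch, with invented field names:

```javascript
// Sketch: revalidate earlier-step data on every forward transition, so a
// back-then-forward navigation can't reach step four in a bad state.
// The field names (email, shippingAddress, paymentMethod) are illustrative.
function canEnterStep(step, state) {
  const prerequisites = {
    2: (s) => Boolean(s.email),
    3: (s) => Boolean(s.shippingAddress),
    4: (s) => Boolean(s.shippingAddress && s.paymentMethod),
  };
  const check = prerequisites[step];
  return check ? check(state) : true; // steps with no prerequisites always pass
}
```

The point is that the check runs on entry to each step, not once at the start of the flow, so edits made during back navigation are re-checked.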
Conditional UI interaction bugs
A feature flag rolls out to 10% of users. Those users see a new UI element that, under a specific interaction pattern, causes a conflict with an existing component's event handlers. The error fires in a shared utility function. Your error tracker shows a sudden spike in that function, but the connection to the feature flag isn't visible in the metadata.
Third-party interference
A browser extension injects a script that modifies the DOM in a way that conflicts with your application's assumptions. Or a corporate network proxy strips certain response headers. Or an ad blocker intercepts a request to an analytics endpoint that your application incorrectly assumed would succeed. These environmental factors don't appear in stack traces.
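The ad-blocker case in particular has a cheap defence: treat third-party calls as best-effort so a blocked request can't take the main flow down with it. A sketch, where `sendAnalytics` is a hypothetical wrapper around the real call:

```javascript
// Sketch: don't let a blocked third-party request (ad blocker, corporate
// proxy) break the checkout flow. `sendAnalytics` is a hypothetical wrapper.
async function sendAnalytics(send) {
  try {
    await send(); // a fetch() to the analytics endpoint in real code
    return true;
  } catch (err) {
    // Blocked or failed analytics calls are routine in the wild;
    // swallow the error rather than letting it propagate into app code.
    return false;
  }
}
```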
In all of these cases, the error tracker gives you an accurate signal that something is wrong and where in the code it's failing. What it can't give you is the "because" — the sequence of events that created the conditions for failure.
What Session Replay Adds
Session replay fills the space between the error and its cause. It gives you the user's journey: what they clicked, what they typed, what the page showed them, what network requests went out and what came back — all synchronised on a timeline that ends at the moment the error fired.
For the TypeError scenario at the top of this article: with session replay, you can watch the forty-three occurrences and immediately see a pattern. Each one was preceded by the user navigating away from the checkout page and returning via the browser's back button — a navigation path your state management didn't account for. The error tracker gave you the what and the where. Session replay gave you the why.
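Once the replay reveals the back-button precondition, the fix often involves the back/forward cache: a page restored from bfcache fires a `pageshow` event with `persisted` set to true, and state can be re-synchronised there. A sketch, where `resyncCheckoutState` is a hypothetical function:

```javascript
// Sketch: detect restoration from the back/forward cache so checkout state
// can be re-synchronised. `resyncCheckoutState` is hypothetical.
function shouldResync(event) {
  // `persisted` is true when the page is restored from the bfcache
  // rather than loaded fresh.
  return Boolean(event && event.persisted);
}

// In the browser, this would be wired up as:
// window.addEventListener("pageshow", (e) => {
//   if (shouldResync(e)) resyncCheckoutState();
// });
```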
Session replay is particularly powerful for:
- Reproducing bugs with specific interaction sequences. Watch the exact steps and reproduce them yourself with confidence.
- Understanding the network state at failure time. See every request and response in the sixty seconds before the error fired.
- Identifying environmental preconditions. The session replay captures the browser, extensions visible in the DOM, viewport size, and other environmental factors that may be contributing.
- Communicating bugs to the rest of the team. A replay link is worth a thousand words in a bug ticket. Product managers, designers, and QA engineers can all watch what happened without needing a technical explanation.
But session replay has its own structural limitations. It tells you about individual sessions, not aggregate patterns. It doesn't give you deployment-correlated error rates, alerting on error spikes, or the statistical view of how many users are affected. You can't build an on-call rotation around session replay the way you can around error tracking alerts.
The Complementary Stack
The most effective engineering teams use error tracking and session replay for what each does best. A mature workflow looks something like this:
Detection and alerting → error tracking
Your error tracker monitors production continuously, fires alerts when error rates cross thresholds, and gives you the first signal that something has broken. This is where error tracking is unmatched — high-volume, real-time, aggregate.
Triage → error tracking
Once alerted, you use your error tracker to understand the scope and severity: how many users affected, which browser and OS combinations, is it a regression from the latest deploy, is it getting worse. This shapes how urgently you respond and what you suspect.
Diagnosis → session replay
Once you know what broke and roughly who's affected, you need to understand why. This is where session replay earns its value. Pull up a session from an affected user and watch what happened in the moments before the error. In many cases, the cause is immediately obvious — a network request you didn't expect to fail, an interaction sequence you hadn't considered, a state combination that's clearly wrong in the replay.
Communication → session replay
Once you've diagnosed the bug, sharing a session replay in the ticket or incident post-mortem communicates far more than a stack trace alone. Non-technical stakeholders can watch what users experienced. Designers can see the UI state at the time of failure. QA can use the replay as a precise reproduction guide.
When You Don't Need Both
Not every team needs both tools, and it's worth being honest about that.
If your primary bugs are backend errors — database failures, timeout exceptions, infrastructure issues — session replay adds limited value. The problem isn't in the browser interaction; it's in your infrastructure. Error tracking, APM, and log aggregation are the right tools.
If your application is simple — a few pages, straightforward interactions, a small user base — you may find that detailed support tickets or a quick screen share are sufficient. The value of session replay scales with complexity.
If you're in very early stage development and your user base is small enough that you can do synchronous debugging with every affected user, the overhead of a session replay tool may not be worth it yet. That changes quickly as you scale.
If you already have error tracking and your primary pain point is the gap between error detection and diagnosis — customer-reported bugs that can't be reproduced, intermittent failures with no clear trigger, escalations from support that arrive without context — session replay directly addresses that gap.
Practical Integration
The most friction-free integration of the two tools is a direct link between an error occurrence and the session replay for that user at that time. Some teams build this manually by tagging error events with a session identifier and storing that alongside the replay. Clairvio's magic link model takes a different approach: the session replay is triggered on-demand when a support case is opened, so it's naturally attached to the support ticket rather than the error event.
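The manual tagging approach amounts to attaching the replay session identifier to every error event so the tracker's UI can deep-link to the replay. A sketch of the glue, where the base URL and payload shape are assumptions:

```javascript
// Sketch of the manual linking approach: tag each error event with the
// replay session id so the error can deep-link to the replay.
// `REPLAY_BASE_URL` and the event shape are assumptions for illustration.
const REPLAY_BASE_URL = "https://replay.example.com/sessions";

function tagErrorWithSession(errorEvent, sessionId) {
  return {
    ...errorEvent,
    tags: { ...(errorEvent.tags || {}), replaySessionId: sessionId },
    replayUrl: `${REPLAY_BASE_URL}/${sessionId}`,
  };
}
```

With real error trackers, the same idea is usually expressed through their tagging or context APIs rather than by mutating the event payload directly.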
Either way, the goal is the same: when a developer sits down to investigate a bug, they should be able to get from the error report to the session context in as few steps as possible. The fewer mental context switches between your error tracker and your replay tool, the faster you diagnose and fix.
The Bottom Line
Error tracking and session replay are not competitors. They operate at different points in the debugging lifecycle and answer fundamentally different questions. Error tracking tells you what broke. Session replay tells you why.
If you have error tracking and are still spending significant time on hard-to-reproduce bugs and slow support escalations — session replay is likely the missing half of your debugging stack. It won't replace your error tracker. It'll make it dramatically more useful.