Session Replay with Console Logs and Network Request Capture (2026)
A user files a support ticket: "The submit button doesn't work." You ask for details. They send a screenshot of the form, filled out correctly, with the button in view. Nothing looks wrong. You ask if they can reproduce it. They try again. It works this time. Ticket closed — except it'll be back next week with a different user.
Session replay helps here. But a recording of mouse movements and clicks alone won't tell you what actually happened. What you need is the full picture: what the browser logged, what network requests went out, and what came back — all synchronized on the same timeline as the user's actions.
Most session replay tools capture the DOM. The better ones also capture console output and network activity. The difference between those two categories is the difference between watching a silent film and watching a film with audio and subtitles.
Why Console Log Capture Matters
Your application is already telling you what went wrong. It logs warnings when configuration is missing, errors when state mutations fail, and debug messages when conditional branches are hit. The problem is that most of this information lives in the browser's developer tools and disappears the moment the user closes the tab.
Console log capture in session replay preserves that output and ties it to the exact moment on
the replay timeline. When you're watching a session, you can see a console.warn fire
two clicks before the crash — which is often all you need to understand the root cause.
It's not just errors
Tools that only capture console.error are missing most of the story. In practice,
many failures are preceded by lower-severity log output:
- A console.warn that says a feature flag returned an unexpected value, silently falling back to a path the developer didn't intend
- A console.log showing a response object where a required key is undefined — logged during a speculative data fetch, not where the crash eventually fires
- A console.info from a third-party SDK indicating it failed to initialize, which your application code assumed had succeeded
Useful console capture covers all levels — log, warn, error, info, debug — not just the ones that look alarming.
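To make that concrete, here's a minimal sketch of how an in-page recorder can intercept all five levels. The buffer and its flush mechanism are illustrative assumptions, not any particular tool's implementation:

```ts
// Minimal sketch: wrap every console level, keep normal devtools behavior,
// and buffer each entry with a timestamp for the replay timeline.
// `logBuffer` and its eventual flush are hypothetical.
type ConsoleLevel = "log" | "warn" | "error" | "info" | "debug";

interface CapturedLog {
  level: ConsoleLevel;
  timestamp: number; // correlated to the replay timeline
  args: unknown[];   // raw arguments, serialized later
}

const logBuffer: CapturedLog[] = [];
const levels: ConsoleLevel[] = ["log", "warn", "error", "info", "debug"];

for (const level of levels) {
  const original = console[level].bind(console);
  console[level] = (...args: unknown[]) => {
    logBuffer.push({ level, timestamp: performance.now(), args });
    original(...args); // devtools output is unchanged
  };
}
```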
Structured data vs. stringified output
There's a meaningful difference between a tool that records [object Object] and one that preserves the actual structure of the logged value. When your code logs a response
body, a state snapshot, or a config object, you want to be able to inspect its properties — not
read a string representation. Look for tools that capture log arguments in structured form.
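One way to preserve that structure, sketched here under the assumption that most logged values are JSON-safe, is a per-argument structural copy with a string fallback for the hard cases:

```ts
// Hedged sketch of structured argument capture. Primitives pass through,
// plain objects keep their shape, and hard cases (circular references,
// values JSON can't represent) degrade to a string fallback.
function captureArg(arg: unknown): unknown {
  if (typeof arg === "function") return arg.toString();
  if (arg === null || typeof arg !== "object") return arg;
  try {
    return JSON.parse(JSON.stringify(arg)); // preserves keys and nesting
  } catch {
    return String(arg); // the "[object Object]" fallback, last resort only
  }
}
```

A production recorder would handle more types (Errors, Maps, DOM nodes), but the principle is the same: degrade to a string only when structure genuinely can't be kept.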
Silent failures
Not every bug throws an exception. A significant share of frontend bugs are silent failures:
the code ran without error, but the outcome was wrong. A form submitted to the wrong endpoint. A
calculated value that produced NaN instead of a number. A conditional that evaluated
the wrong branch. These often leave traces in console output that appear unimportant at
the time but explain exactly what happened in retrospect.
Why Network Request Capture Matters
A large share of frontend bugs aren't JavaScript errors at all. They're failed or unexpected API responses that the UI silently swallowed.
Your submit button doesn't work. No JavaScript exception fires. The user sees nothing happen. But
in the network panel: POST /api/checkout → 422 Unprocessable Entity.
The form validation on the server rejected the request. The frontend received the error response
and quietly did nothing, because the error handler was incomplete. Without network request
capture, the session replay shows a user clicking a button and nothing happening — mysterious.
With network request capture, the cause is immediately visible.
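The pattern that produces this is easy to write by accident. The hypothetical handler below treats a rejected promise as the only failure mode, so an HTTP 422, which resolves normally, falls straight through:

```ts
// Hypothetical handler illustrating the failure mode described above.
// fetch() only rejects on network errors, so an HTTP 422 resolves
// normally and falls through the missing else branch.
function showConfirmation(): void {
  /* success UI omitted in this sketch */
}
function showError(message: string): void {
  /* error UI omitted in this sketch */
}

async function submitCheckout(form: FormData): Promise<void> {
  try {
    const res = await fetch("/api/checkout", { method: "POST", body: form });
    if (res.ok) {
      showConfirmation();
    }
    // No else branch: a 422 response lands here and the UI stays silent.
  } catch {
    showError("Network error"); // only reached when the request itself fails
  }
}
```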
The requests you didn't expect to fail
API failures that look inexplicable in isolation often make sense in context. A 403 Forbidden on a request that should have been authenticated tells you the session
token expired or wasn't sent. A 504 Gateway Timeout mid-session tells you the
failure was infrastructure, not user error. A successful 200 response whose body
contains an error flag — common in older APIs — explains why the UI updated in a way the
developer didn't intend.
Timing as diagnostic data
Response timing captured alongside network requests helps distinguish between two very different problems that can look identical in a replay. A user waiting on a loading spinner for 12 seconds before giving up is a different bug from a user seeing a loading spinner flash briefly before an error appears. The first is a performance problem; the second is a failure-handling problem. Network request timing makes that distinction clear.
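Capturing that timing is nearly free once requests are being intercepted anyway. A rough sketch, with an in-memory event list standing in for a real recorder's transport:

```ts
// Sketch: wrap fetch once, recording URL, method, status, and elapsed time.
// The networkEvents array stands in for a real recorder's buffer.
interface NetworkEvent {
  url: string;
  method: string;
  status: number;
  durationMs: number;
}

const networkEvents: NetworkEvent[] = [];
const originalFetch = window.fetch.bind(window);

window.fetch = async (input, init) => {
  const start = performance.now();
  const response = await originalFetch(input, init);
  networkEvents.push({
    url: input instanceof Request ? input.url : String(input),
    method: init?.method ?? (input instanceof Request ? input.method : "GET"),
    status: response.status,
    durationMs: Math.round(performance.now() - start),
  });
  return response;
};
```

A real wrapper would also record requests that reject outright (connection failures), which is exactly the timing data that separates the 12-second spinner from the instant error.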
What to capture — and what to redact
Useful network capture includes: request URL, HTTP method, status code, response time, and
optionally request and response bodies. The bodies are where the diagnostic value lives — a
validation error message in a 422 response body tells you exactly what went wrong.
But they're also where sensitive data lives.
Any tool you consider should automatically strip authentication headers — Authorization, Cookie, X-Auth-Token — from captured network data. Storing these in
session recordings is a security risk. Tools that don't redact them by default leave you trusting that a manual configuration step is never forgotten. Treat automatic redaction as a non-negotiable default.
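The redaction itself is a small operation; what matters is that it runs unconditionally before anything is stored. A sketch, with the scrub list (the three headers named above) as an assumption:

```ts
// Sketch of unconditional header redaction before anything is stored.
// The scrub list is an assumption; real tools ship their own defaults.
const SENSITIVE_HEADERS = new Set(["authorization", "cookie", "x-auth-token"]);

function redactHeaders(headers: Headers): Record<string, string> {
  const safe: Record<string, string> = {};
  headers.forEach((value, name) => {
    safe[name] = SENSITIVE_HEADERS.has(name.toLowerCase())
      ? "[REDACTED]"
      : value;
  });
  return safe;
}
```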
How the Three Work Together
The real value of combining DOM replay, console capture, and network capture isn't any one of them in isolation — it's the synchronized timeline that lets you move between them as you watch a session.
Walk through the checkout bug:
- The replay shows the user filling out the form, all fields populated correctly, and clicking Submit.
- The network panel shows POST /api/checkout → 422, response time 340ms.
- The console panel shows console.error: Validation failed: card_expiry invalid format, logged by the API client after receiving the 422.
- The DOM replay shows the button returning to its default state with no user-facing error message displayed.
You now know: the server rejected the request because the card expiry format was wrong, the API client logged the error, and the UI error handler never surfaced the message to the user. A complete diagnosis in two minutes — without a screen share, without a reproduction environment, without asking the user anything further. For more on this on-demand debugging workflow without screen sharing, see our full guide.
This kind of diagnosis is only possible when the three data streams are correlated on a single timeline. If you have to cross-reference a separate console log export with a replay video and a network HAR file, the cognitive overhead of the investigation goes up by an order of magnitude. The tool needs to present all three together.
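Under the hood, that correlation can be as simple as merging the three event streams by a shared clock. The event shapes below are assumptions for illustration:

```ts
// Sketch: three streams, one timeline. A shared clock and a sort is the
// whole trick. The event shapes here are illustrative assumptions.
type ReplayEvent =
  | { kind: "dom"; timestamp: number; description: string }
  | { kind: "console"; timestamp: number; level: string; message: string }
  | { kind: "network"; timestamp: number; method: string; url: string; status: number };

function mergeTimeline(...streams: ReplayEvent[][]): ReplayEvent[] {
  return streams.flat().sort((a, b) => a.timestamp - b.timestamp);
}
```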
What to Look for When Evaluating Tools
Not all session replay tools that advertise console and network capture implement it with equal depth. A few questions worth asking:
Console capture
- Does it capture all log levels (log, warn, error, info, debug) or only errors?
- Does it preserve structured log arguments (objects, arrays) or serialize everything to strings?
- Are log entries timestamped and correlated to the replay timeline, or displayed as a separate flat list?
Network capture
- Does it capture both XHR and fetch requests?
- Does it record request and response bodies, or only headers and status codes?
- Are authentication headers (Authorization, Cookie) automatically redacted?
- Is response timing captured alongside status codes?
- Can you filter or exclude specific endpoints (e.g., analytics pings) from capture?
Recording model
Always-on tools that record every user session capture network payloads and console output for every session — which means storing potentially sensitive data at scale. If a user logs in and your app logs their profile data to the console, that data is in every session recording. This creates a non-trivial compliance surface. If you're subject to GDPR or CCPA, see our guide on session replay and regulatory compliance.
On-demand tools that only record when a session is explicitly triggered — via a support ticket, magic link, or manual activation — dramatically narrow this surface. Sensitive console and network data is only captured for the specific sessions where you need diagnostic detail.
Performance overhead
Intercepting all fetch and XHR requests, capturing all console output, and serializing structured log data all add overhead beyond DOM recording alone. Evaluate whether the tool implements lazy serialization (capturing references, serializing only on send) or eager serialization (serializing everything immediately). The former is substantially cheaper on the hot path.
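In sketch form, with hypothetical names, the two models look like this:

```ts
// Eager: serialize on the hot path of every log call.
function captureEager(args: unknown[]): string {
  return JSON.stringify(args);
}

// Lazy: hold references now, serialize in one batch during idle time.
const pending: unknown[][] = [];
let flushScheduled = false;

function captureLazy(args: unknown[]): void {
  pending.push(args); // O(1): no serialization cost at log time
  if (!flushScheduled) {
    flushScheduled = true;
    requestIdleCallback(() => {
      flushScheduled = false;
      const payload = JSON.stringify(pending.splice(0));
      void payload; // hand off to the transport, omitted in this sketch
    });
  }
}
```

The lazy model has one known tradeoff: if a logged object mutates before the flush, the serialized snapshot reflects the later state. That's worth asking vendors about alongside the raw overhead numbers.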
How Clairvio Handles This
Clairvio captures console output at all levels — log, warn, error, info, debug — with structured argument
preservation. Log entries appear on the replay timeline alongside DOM events, so you can see
exactly when a warning fired relative to the user action that preceded the failure.
Network capture covers both XHR and fetch requests, recording the URL, method,
status code, response time, and response body. Authentication headers are automatically stripped
— Authorization, Cookie, and common API key headers are never stored.
This happens by default, not as an opt-in configuration.
Clairvio's recording model is on-demand: sessions are only captured when a magic link is activated or a support widget is triggered. This means console and network data is only collected for sessions where a user is actively receiving support — not for every session across your entire user base. The narrower data collection scope simplifies both your privacy posture and your data retention obligations.
If you're evaluating tools specifically for debugging and support workflows, the complete buyer's guide covers Clairvio alongside FullStory, LogRocket, Hotjar, Microsoft Clarity, and Datadog RUM — with honest notes on which tools actually surface console and network data in a usable form.