Replacing sendBeacon with fetch in the tracking code
March 24, 2026
In October 2025, we updated our tracking code to replace navigator.sendBeacon with window.fetch + keepalive. It's a backend change that's invisible to most users, but we thought it was worth writing up because there's some interesting browser behavior involved, and the tradeoffs aren't necessarily obvious.
This is one of those things we tweeted about but never documented otherwise. We're going to try to get better about documenting these changes on our blog, to make sure as many people know about them as possible.
A little history
Back in November 2018 we migrated most of our tracking beacons from script tag injection to sendBeacon. That was a meaningful upgrade. It removed the artificial 500ms pause we'd been adding before navigating users away from tracked outbound and download links. Before sendBeacon, we needed that pause to give the network request time to complete before the browser killed it. sendBeacon fixed that entirely.
But we couldn't use it for everything. Specifically, the initial pageview had to stay on the old script injection method, because our tracking server needs to echo some data back to the page when it processes a first visit (things like setting up visitor state). sendBeacon is purely fire-and-forget. It completes reliably in the background, and you never hear from it again (there's no response). So we were left maintaining two different code paths: script injection for the initial pageview, sendBeacon for everything after.
That's the situation that fetch with keepalive finally resolves.
The core problem both methods solve
Browser network requests are tied to the page lifecycle. If a user closes a tab or navigates away before a request completes, the browser cancels it. For most web applications this doesn't matter much, but for analytics it's a significant problem. You often need to fire tracking requests at exactly the moment a user is leaving: capturing time-on-page, logging an outbound link click, recording the last action before they closed the tab.
The naive solution is to fire your request in a beforeunload or visibilitychange handler. The problem is that browsers don't guarantee those requests complete. (beforeunload is also a poor fit for analytics, since it can trigger a confirmation prompt for the visitor.) If the page is being torn down, in-flight XHR and standard fetch requests often get dropped.
Both sendBeacon and fetch with keepalive: true solve this by decoupling the request from the page lifecycle. The browser queues the request and completes it even after the page is gone.
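A minimal sketch of that pattern, using the fetch variant (the endpoint path and payload fields here are invented for illustration, not Clicky's actual API):

```javascript
// Build the query string for a hypothetical tracking endpoint.
function buildBeaconUrl(base, params) {
  const qs = new URLSearchParams(params).toString();
  return qs ? `${base}?${qs}` : base;
}

// Fire a request that survives page teardown. With keepalive: true the
// browser queues the request and completes it even after navigation.
function sendTrackingHit(params) {
  return fetch(buildBeaconUrl('/track', params), {
    method: 'GET',
    keepalive: true,
  });
}

// Fire the final hit when the tab is hidden or being closed.
// visibilitychange fires reliably in these cases, unlike beforeunload.
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') {
      sendTrackingHit({ event: 'page_exit', t: String(Date.now()) });
    }
  });
}
```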
sendBeacon
sendBeacon was designed specifically for this use case. The API is about as simple as it gets:
```javascript
navigator.sendBeacon( '/page', data );
```
It returns a boolean (whether the request was successfully queued), and that's it. No callbacks, no promises, no response. The browser handles completion entirely in the background.
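Since that boolean only tells you whether the request was queued, a defensive wrapper looks something like this (the function name and fallback behavior are our illustration, not a standard pattern):

```javascript
// sendBeacon returns true if the request was queued, false if the
// browser refused (e.g. the payload exceeds the beacon size quota).
function trackWithBeacon(url, params) {
  // URLSearchParams bodies are POSTed as application/x-www-form-urlencoded.
  const body = new URLSearchParams(params);
  const canBeacon =
    typeof navigator !== 'undefined' &&
    typeof navigator.sendBeacon === 'function';
  // true: queued, the browser completes it in the background.
  // false: the caller can fall back to another transport (e.g. fetch).
  return canBeacon ? navigator.sendBeacon(url, body) : false;
}
```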
What's good about it: it's purpose-built, reliable, and simple. Browsers have implemented it correctly across the board for years, and it just works.
What's bad about it: the fire-and-forget nature is a fundamental limitation, not just a missing feature. There is no mechanism to read a response, because the request completes outside of any page context that could receive one. For tracking pixels and beacons where you only care about logging data, this is fine. But if your tracking server needs to send anything back to the page (data to parse, scripts to conditionally load, state to set), sendBeacon can't help you.
It's also trivially blockable. Because sendBeacon was purpose-built for analytics and tracking, its name is essentially an advertisement for what it does, and ad-blockers have always targeted it specifically. Some browsers even have preferences to disable it directly, Firefox and Vivaldi among them.
fetch with keepalive
The keepalive option on the Fetch API does what it says: it tells the browser to keep the request alive and complete it even if the page is unloaded. You get the same lifecycle guarantee as sendBeacon, but with the full fetch API on top of it.
```javascript
fetch( src, {
	method: 'GET',
	keepalive: true,
	credentials: 'include',
	cache: 'no-store',
})
.then( r => r.json() )
.then( r => {
	// do stuff with the response
});
```
This meant we could finally use a single code path for everything. Initial pageview, pings, events, goals, clicks: all of it goes through fetch with keepalive. And since we can receive a response, we updated our beacon endpoint to return JSON instead of executable JavaScript, which the tracking code now parses and acts on client-side.
That JSON change also fixed a long-standing CSP compatibility issue. The old approach of echoing executable JS that gets evaluated via a script element requires either unsafe-eval or unsafe-inline in a site's Content Security Policy. With fetch returning JSON, none of that is needed. We parse structured data and do everything in our own already-loaded JavaScript. Sites with strict CSP headers that previously had problems with tracking now work correctly.
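As a sketch of what acting on that JSON might look like (the field names and response shape here are invented, not Clicky's actual API), the point is that all the logic lives in already-loaded code, so no eval of server-supplied JavaScript is needed:

```javascript
// Hypothetical shape of what a tracking endpoint might return as JSON
// instead of executable JavaScript, e.g.:
//   { "visitor_id": "abc123", "first_visit": true }
function applyTrackingResponse(data, store) {
  // Parsing structured data and branching in our own code avoids any
  // need for unsafe-eval / unsafe-inline in the site's CSP.
  if (data.visitor_id) {
    store.set('visitor_id', data.visitor_id);
  }
  if (data.first_visit) {
    store.set('first_visit_at', String(Date.now()));
  }
  return store;
}
```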
And because fetch can be used for just about anything (not just analytics), nothing blocks it by default. Doing that would break almost every website quite badly.
Fallback to script injection? No.
When we shipped this, we kept a fallback to script injection in case fetch itself failed. The idea was that something like a strict CSP could theoretically block a fetch request, and we'd rather fall back than drop the tracking entirely.
But after monitoring it in production for a while, we disabled the fallback entirely. Every single failure we observed was a bot getting 403'd by our Cloudflare bot-blocking rules, not a legitimate visitor. CSP is the only realistic scenario where a fetch request would fail for a real user, and if a site has CSP configured, they're already accustomed to updating it when they add third-party scripts. So we pulled the fallback and kept the code smaller and simpler.
Firefox quirks
During testing, we noticed that keepalive requests weren't showing up in Firefox's dev tools network panel, whereas all Chromium browsers displayed them fine. But tracking was somehow still working, which was quite the mystery.
Turns out, Firefox hides keepalive requests from DevTools. The requests are still sent and received, they just don't appear in the panel. If you're debugging something similar and it looks like requests aren't going out in Firefox, try disabling keepalive temporarily and they'll show up. Then re-enable it when you're done.
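One way to make that workaround easy to flip on and off is a single debug switch (the flag and function names here are ours, purely illustrative):

```javascript
// Build fetch options with keepalive controlled by a debug flag.
// keepalive requests are hidden from Firefox's network panel, so turning
// it off temporarily makes them visible again, at the cost of unload
// reliability. Flip it back when you're done debugging.
function beaconOptions(debug = false, extra = {}) {
  return {
    method: 'GET',
    cache: 'no-store',
    ...extra,
    keepalive: !debug,
  };
}

function beaconFetch(url, debug = false) {
  return fetch(url, beaconOptions(debug));
}
```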
We have to assume this is a Firefox bug, as there's no obvious reason someone using developer tools wouldn't want to see those requests.
End result
The tracking code is simpler and more consistent than it's been in years. Everything goes through one path, the initial pageview gets the same reliability guarantees as every other beacon, and the CSP story is cleaner. For most Clicky users, none of this is visible... which is exactly how infrastructure changes should work!