December 2, 2025
Our last post on the Turbopack migration for Next.js caused a bit of a stir. For context, we did some research on an open source project (Cal.com), comparing bundle sizes with the new Turbopack bundler vs the older Webpack one.
Large bundle sizes are the #1 issue we see causing serious performance degradation in Next.js apps. We've written an article looking at a 9MB production bundle, and a deeper dive into bundle sizes across the web, if you're interested in reading more about this.
At the heart of our research was the next build output. This used to provide a very helpful breakdown of how big each page route is from a client-side bundle perspective. It was extremely useful to monitor in CI/CD: you could fail the build whenever a change inflated the bundle, which was the best way to avoid performance bloat.
After we published the potential regression, Luke Sandberg from Vercel helpfully reached out. Having done a deep dive into our findings, Vercel explained that the per-route statistics in the next build output are unreliable: they have been undercounting Webpack and overcounting Turbopack, showing a regression that doesn't really exist.
As such, Vercel have deleted these metrics in Next.js 16 - unfortunately they were deemed to be unfixable.
If you upgrade to Next.js 16 you will lose per-route stats on your bundle at build time. We believe this is a major problem, and we hope Vercel can come up with an alternative mechanism for tracking this.
The main alternative Vercel suggests is to look at RUM analytics data. While we believe RUM is essential (and our RUM platform includes many Next.js statistics that are hard to find elsewhere!), it's only one part of the solution. You do not want to push a new version of your app and regress performance; it's much better to catch this before users experience it in production.
You need to be able to check this before you go live with a new version, and many RUM tools do not adequately capture the impact of Next.js script execution, as it often happens after LCP and so stays hidden.
Option 1: include bundle size metrics as part of e2e tests
If you already have a browser-based e2e test workflow, you can start instrumenting network calls, especially those with /_next/ in the URL. Keep track of their sizes and add a step to verify them as part of your key workflows.
Here's a simple example in Playwright of how to check the size of each response from Next.js. You can keep a list of these sizes and assert at the end of your e2e test that the total hasn't increased.
page.on('response', async (response) => {
  const url = response.url();
  if (url.includes('/_next/') && url.endsWith('.js')) {
    const sizeKB = (await response.body()).length / 1024;
    // keep track of this across your tests, and assert it later
  }
});
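For completeness, here's a minimal sketch of a full test that collects those responses and asserts a total budget at the end. The localhost URL and the 450 KB budget are placeholder assumptions - set the budget from your own current baseline.

const { test, expect } = require('@playwright/test');

test('home page JS payload stays within budget', async ({ page }) => {
  const jsResponses = [];

  // Collect every Next.js client chunk the page downloads
  page.on('response', (response) => {
    const url = response.url();
    if (url.includes('/_next/') && url.endsWith('.js')) {
      jsResponses.push(response);
    }
  });

  await page.goto('http://localhost:3000/'); // assumes a locally running production build
  await page.waitForLoadState('networkidle');

  // Sum the (uncompressed) body sizes and fail if the total creeps past the budget
  let totalKB = 0;
  for (const response of jsResponses) {
    totalKB += (await response.body()).length / 1024;
  }
  expect(totalKB).toBeLessThan(450);
});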
However, this may miss important parts of the bundle if you have routes and behaviours that aren't covered by e2e, and it would be extremely time consuming to make this completely comprehensive.
Option 2: assert total build output size at CI/CD
You can still check the total size of the output bundle files in CI/CD as a very rough metric. However, this has significant limitations: regressions on one route can be "covered up" by reductions elsewhere.
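As a rough sketch, a CI step could simply sum the client JS that Next.js emits under .next/static and fail if it goes over a budget. The BUNDLE_BUDGET_KB variable and its 2048 KB default are made-up placeholders for illustration.

// check-bundle-size.js - run after `next build` in CI
const { readdirSync, statSync } = require('node:fs');
const { join, extname } = require('node:path');

const BUDGET_KB = Number(process.env.BUNDLE_BUDGET_KB ?? 2048);

// Recursively sum the size of every .js file in a directory
function totalJsKB(dir) {
  let total = 0;
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) total += totalJsKB(path);
    else if (extname(entry.name) === '.js') total += statSync(path).size / 1024;
  }
  return total;
}

const totalKB = totalJsKB('.next/static');
console.log(`Client JS output: ${totalKB.toFixed(0)} KB (budget: ${BUDGET_KB} KB)`);
if (totalKB > BUDGET_KB) {
  console.error('Bundle budget exceeded - failing the build.');
  process.exit(1);
}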
The bigger problem, of course, is that adding new pages or functionality will increase the total output size - but your users will almost certainly not be downloading those new chunks on existing routes.
This may work as a quick check, but be prepared to battle with it - we suspect many engineers will end up ignoring it due to the high false-positive rate.
Option 3: manually check with bundle-analyzer
The only other way we can suggest is to manually check before release. The Next.js bundle-analyzer plugin (@next/bundle-analyzer) can help visualize this. It's time consuming to look into, but it may be worth it, especially after a big refactor (though we would point out that sometimes minor-looking changes end up inflating bundles badly).
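If you haven't set it up before, a typical configuration (assuming a webpack-based build) wraps your Next.js config and is toggled via an environment variable, e.g. ANALYZE=true next build:

// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // your existing Next.js config goes here
});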
Vercel are working on an improved bundle analyzer, and Tim Neutkens shared a preview showing some promising features.
Vercel have also indicated they plan to make the outputs more toolable, which should make it easier to write regression monitoring in CI. Combined with the e2e approach above, this could provide a solid solution for catching bundle size issues before they hit production.
Bundle bloat remains a significant issue for many teams, and if it's not well monitored can have serious user experience and commercial impacts. We're glad to see Vercel investing in better tooling here.
CatchMetrics are specialists in fixing Next.js performance issues. We have a unique RUM product designed with Next.js performance in mind, and offer expert support and advisory to help you mitigate these issues. Get in touch.
Download our service guide to understand how we can help you optimise your site speed