GoodFirstPicks

Bug: Performance regression due to deferTask not batching

facebook/react · 6 comments · 1mo ago
View on GitHub
Difficulty: high · Status: open · Scope: somewhat clear · Skill match: maybe · Tags: React, JavaScript, TypeScript

Why this is a good first issue

Performance regression in server components due to inefficient batching of deferred tasks.

AI Summary

The issue describes a performance regression in React's server components where deferred tasks are not batched efficiently, leading to many small rows and increased overhead. The problem is illustrated with a reproduction showing significant slowdowns. The solution likely involves modifying the batching strategy in `renderModelDestructive` and `deferTask`, but requires deep knowledge of React's server-side rendering internals.

Issue Description

Overview

Pulling this out of #35089 (read this for background)

As mentioned above, https://github.com/facebook/react/pull/33030 introduced a fixed MAX_ROW_SIZE=3200, above which elements are deferred. This caused a performance regression for certain payloads.

A large part of the issue appears to be that the children aren't deferred in batches; they're deferred individually. This can result in many rows that are far smaller than the MAX_ROW_SIZE, because renderModelDestructive calls deferTask on each child, and each deferred child becomes its own lazy chunk, until all children have finished being serialized and the serializedSize is reset again.

Example

I've created a small reproduction that illustrates the problem. It's a ~120kb page with 20 sections each containing ~100 paragraphs. It can be made roughly 1.75x faster by better batching.

Using plain React and renderToReadableStream, this page renders in 1.02ms on Bun and 1.27ms on Node.js. In Next.js (16.0.3) it's roughly 15x slower with the current batching strategy.

This screenshot shows an example of what happens with the RSC stream: the first row reaches its limit after a few children, and starts deferring, but as you can see, each deferred child becomes its own row (a lazy chunk). So you can easily end up with hundreds or thousands of tiny rows if you're just synchronously rendering a table or similar.

[Screenshot: RSC stream output showing each deferred child as its own tiny row]

This example ends up with ~2000 rows, but each row is on average only 60 chars in length – far below the 3200 limit. And each row has non-trivial overhead, as (in the Next.js case) it needs to be serialized, de-serialized, and then re-serialized again (NB: this would be another optimization worth tackling).
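The row counts above follow directly from the stated figures. A back-of-envelope check (the constants just restate the numbers from the reproduction; they are not measurements of my own):

```javascript
// ~2000 rows averaging ~60 chars gives the ~120kb payload from the repro.
const ROWS = 2000;
const AVG_ROW_CHARS = 60;
const payloadChars = ROWS * AVG_ROW_CHARS;
console.log(payloadChars); // 120000 chars of content

// If rows were instead packed up to the 3200-char limit, the same
// payload would need far fewer rows -- and far fewer trips through the
// serialize / de-serialize / re-serialize path.
const MAX_ROW_SIZE = 3200;
const batchedRows = Math.ceil(payloadChars / MAX_ROW_SIZE);
console.log(batchedRows); // 38 rows
```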

Increasing the MAX_ROW_SIZE is one way to reduce the number of rendered rows which is

GitHub Labels

Component: Server Components

Want to work on this?

Claim this issue to let others know you're working on it. You'll earn 35 points when you complete it!

Risk Flags

  • requires deep understanding of React internals
  • potential impact on server-side rendering

Details

Points: 35 pts
Difficulty: high
Scope: somewhat clear
Skill Match: maybe
Test Focused: no