Many small fixes I found.
The most important is probably the TypeScript transform.
The remaining bits should be self-explanatory from the commit messages.
With React.memo:
```
bench_hmr_to_commit/Turbopack CSR/30000 modules
time: [50.608 ms 51.659 ms 52.553 ms]
```
Without React.memo:
```
bench_hmr_to_commit/Turbopack CSR/30000 modules
time: [853.47 ms 1.0191 s 1.1873 s]
change: [+1543.4% +1872.7% +2207.8%] (p = 0.00 < 0.05)
Performance has regressed.
```
Since we're only ever editing the top-level triangle in our HMR benchmarks, we're incurring the time it takes for React to re-render the whole tree, which is a function of the number of components in said tree. By using `React.memo`, we can skip updating children components during HMR.
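The bail-out behavior can be sketched in plain JavaScript (a hypothetical `memo` helper mirroring `React.memo`'s shallow-props comparison, not React's actual implementation — `Triangle` and its string output are illustrative only):

```javascript
// Skip re-rendering a component when its props are shallow-equal to the
// previous render's props — the idea behind React.memo.
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  return (
    aKeys.length === bKeys.length &&
    aKeys.every((key) => Object.is(a[key], b[key]))
  );
}

function memo(render) {
  let lastProps = null;
  let lastResult = null;
  return (props) => {
    if (lastProps !== null && shallowEqual(lastProps, props)) {
      return lastResult; // props unchanged: reuse cached output, no re-render
    }
    lastProps = props;
    lastResult = render(props);
    return lastResult;
  };
}

let renders = 0;
const Triangle = memo((props) => {
  renders += 1;
  return `<triangle depth=${props.depth}>`;
});

Triangle({ depth: 3 });
Triangle({ depth: 3 }); // same props: skipped
Triangle({ depth: 4 }); // changed props: renders again
console.log(renders); // 2
```

During HMR, only the edited top-level component's props change, so memoized children bail out instead of re-rendering the whole tree.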
Moves the logic of creating source map assets into the asset reference. This avoids depending on the chunk items in the `references()` method directly. It also defers calling `CodeVc::source_map()` until the source map is actually read.
Avoids a circular dependency in the call graph.
It also stops checking `has_source_map()` and instead inserts a potential source map asset for every chunk item. Checking `has_source_map()` for every chunk item seems like unnecessary work when we can just send an empty source map.
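The fallback can be sketched as follows (hypothetical `sourceMapFor` helper in JavaScript; the actual logic lives in Rust). An empty but valid version-3 source map is served whenever a chunk item has none:

```javascript
// A valid, empty source map per the Source Map v3 format.
const EMPTY_SOURCE_MAP = {
  version: 3,
  sources: [],
  names: [],
  mappings: "",
};

// Return the chunk item's source map if it has one, otherwise the empty map —
// no up-front has_source_map() check needed.
function sourceMapFor(chunkItem) {
  return chunkItem.sourceMap ?? EMPTY_SOURCE_MAP;
}

console.log(sourceMapFor({}).version); // 3
```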
Only expose source maps for chunk items when HMR is enabled.
Picked the TypeScript transform from https://github.com/vercel/turbo-tooling/pull/341:
* add the resolve options context as a global resolve config
* enable TypeScript only for next-dev
* move the emulation logic from the environment to the resolve options context
Co-authored-by: Leah <8845940+ForsakenHarmony@users.noreply.github.com>
This implements support for styled-jsx in next-dev using swc's styled_jsx crate.
It's only applied in next-dev, and is only applied as a transform to app code, much like the react-refresh transform.
To do:
* [x] The transform doesn't seem to be applied. Pass the added test.
Test Plan: `cargo test -p next-dev -- test_crates_next_dev_tests_integration_turbopack_basic_styled_jsx --nocapture`
Remaining questions:
* Should we have some static analysis for `getStaticProps` instead of looking into exports at runtime?
* For now, the output of `getStaticProps` (if defined) will always trump the value passed in as `data`. If we consider `data` to be the cached output of `getStaticProps` (in the future, as this is not yet implemented), this logic should be adapted.
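The precedence described above can be sketched like this (hypothetical `resolveProps` helper; the real runtime logic may differ):

```javascript
// If the module defines getStaticProps, its output trumps the `data` value;
// otherwise fall back to `data`.
async function resolveProps(mod, data) {
  if (typeof mod.getStaticProps === "function") {
    const result = await mod.getStaticProps();
    return result.props;
  }
  return data;
}

const withGsp = { getStaticProps: async () => ({ props: { from: "gsp" } }) };
const withoutGsp = {};

(async () => {
  console.log((await resolveProps(withGsp, { from: "data" })).from); // gsp
  console.log((await resolveProps(withoutGsp, { from: "data" })).from); // data
})();
```

If `data` later becomes the cached output of `getStaticProps`, the branch order here would need to flip.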
Previously, we ran multiple `npm install` operations serially using multiple calls to `install_from_npm`. Instead, this expresses all dependencies at once as a single command to the npm CLI, which should reduce the time we spend installing from npm and updating package.json.
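The batching amounts to building one argument list for a single npm invocation (hypothetical `npmInstallArgs` helper with illustrative package versions; the real code drives the npm CLI from Rust):

```javascript
// One `npm install foo@1.0.0 bar@2.0.0 ...` instead of one call per package.
function npmInstallArgs(deps) {
  return ["install", ...deps.map((d) => `${d.name}@${d.version}`)];
}

console.log(
  npmInstallArgs([
    { name: "react", version: "18.2.0" },
    { name: "react-dom", version: "18.2.0" },
  ]).join(" ")
);
// install react@18.2.0 react-dom@18.2.0
```

A single invocation also means package.json is rewritten once rather than once per dependency.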
Test Plan: Manually confirmed that package.json was updated correctly. `cargo bench`.
This adds webpack 5 to the benchmark suite.
Test Plan: Manually confirmed package.json updates and webpack config written to temp dir correctly. `cargo bench`.
This splits the benchmark code into more modules. Notes:
* ~Moved/left `get_bundlers()` and `get_module_counts()` to/in mod.rs. In particular, moving `get_bundlers()` to either bundle.rs or util.rs would lead to a circular dependency. These both also rely on env var configuration, so I figured this was a reasonable place for them.~
* The Bundler trait has its own module (not moved to util), since it's a top-level concern and not really a miscellaneous utility.
* Each bundler has its own module file.
Test Plan: `TURBOPACK_BENCH_BUNDLERS=all cargo test --benches -p next-dev -- --nocapture` and verify same output as before change.
* Benchmark Parcel
* add Parcel to the CI benchmarks
* move some turbopack dependencies to the bundler as they conflict with other bundlers
Co-authored-by: Tobias Koppers <tobias.koppers@googlemail.com>
This implements benchmark support for Next.js 12. Next.js (the tool) expects to be able to resolve the `next` package from the cwd, so it must be installed alongside the other node_modules in the test. `prepare` was added to the Bundler trait to handle this case.
Test Plan: `TURBOPACK_BENCH_ALL=all cargo bench -p next-dev`
Co-authored-by: Alex Kirszenberg <1621758+alexkirsz@users.noreply.github.com>
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>
This adds a comparison against Vite to our benchmark suite, running the startup, change, and restart benchmarks.
Test Plan: `cargo bench`
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>
* avoid cloning Strings and reading/calling many functions when building ecmascript chunk content
* introduce `ReadRef` to allow storing snapshot of a value in values
* use snapshot trees to allow caching more function calls and reads
Before:
```
updated in 35ms (18 tasks)
updated in 34ms (18 tasks)
updated in 37ms (18 tasks)
updated in 31ms (18 tasks)
updated in 40ms (19 tasks)
updated in 37ms (18 tasks)
updated in 37ms (18 tasks)
updated in 34ms (18 tasks)
updated in 35ms (18 tasks)
updated in 52ms (18 tasks)
```
After:
```
updated in 6.105ms (19 tasks)
updated in 5.279ms (18 tasks)
updated in 10.471ms (19 tasks)
updated in 6.863ms (18 tasks)
updated in 4.593ms (18 tasks)
updated in 4.173ms (18 tasks)
updated in 5.352ms (18 tasks)
updated in 10.69ms (18 tasks)
updated in 5.065ms (18 tasks)
updated in 6.309ms (19 tasks)
```
a roughly 5x performance improvement
This implements the basics of parameterizing the tool/devserver used in these tests. Follow-up PRs will implement benchmarking of Vite, Bun, Parcel, etc.
Test Plan: `cargo bench -p next-dev` and verify no change in performance.
This implements a benchmark of restarting the devserver after successfully starting it and shutting it down.
## Question/TODO:
Since our goal is metrics that don't scale with project size, should we assert that the small/medium benchmark results don't differ?
Test Plan: `cargo bench -p next-dev`
This builds on vercel/turbo#240, starting up a server and then benchmarking the response to a small file change.
This change does not introduce or remove any dependencies. A follow-up benchmark should do so.
Test Plan: `cargo bench -p next-dev`
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>
Instead of falling back to the prefix calling form `try_join_all(iterator_of_intofutures).await?`, we now use the postfix form `iterator_of_intofutures.try_join().await?`.
This adds an assertion that no runtime (browser) errors occurred when loading a benchmark page.
Test Plan: Temporarily removed the npm install for the test app and verified the benchmark failed, as the test app requires react and react-dom. Restored the npm install and verified the benchmark runs to completion.
This:
* Runs `npm install` in test directories to provide turbopack with modules necessary to bundle them.
* Reuses test directories for iterations across the given benchmark. This prevents unnecessary file writing and `npm install` for each iteration, improving the times to run benchmarks.
Currently cherry-picks vercel/turbo#278 as it's necessary along with vercel/turbo#277.
Test Plan: Connected to the running devserver mid-test and confirmed no errors are thrown and the triangle is rendered correctly.
This PR implements HMR support with React Refresh built-in.
For now, in order for React Refresh to be enabled, you'll need the `@next/react-refresh-utils` package to be resolvable: `yarn add @next/react-refresh-utils` in your app folder.
* Depends on vercel/turbo#266
* Integrated both HMR- and React-Refresh-specific logic directly into the ES chunks' runtime. Webpack has a more complex setup here, but for now this makes the logic much easier to follow since everything is in one place. I have yet to implement the "dependencies" signature for `hot.accept`/`hot.dispose`, since React Refresh does not depend on them. We'll have to see if they're even used in the wild or if we should deprecate them.
* Only implemented the [module API](https://webpack.js.org/api/hot-module-replacement/#module-api), not the [management API](https://webpack.js.org/api/hot-module-replacement/#management-api). We apply all updates as soon as we receive them.
* Added support for "runtime entries" to ES chunks. These are assets that will be executed *before* the main entry of an ES chunk. They'll be useful for polyfills in the future, but for now they're here to evaluate the react refresh runtime before any module is instantiated.
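The module API semantics above can be sketched with a minimal hot context (hypothetical `createHotContext` registry, not Turbopack's actual runtime): dispose handlers run against the old module instance and may stash state, then accept handlers fire, with updates applied as soon as they arrive:

```javascript
// Minimal sketch of module.hot semantics: accept/dispose handlers and
// immediate update application.
function createHotContext() {
  const acceptHandlers = [];
  const disposeHandlers = [];
  return {
    accept(handler) {
      acceptHandlers.push(handler);
    },
    dispose(handler) {
      disposeHandlers.push(handler);
    },
    applyUpdate() {
      const data = {};
      for (const dispose of disposeHandlers) dispose(data); // tear down old instance
      for (const accept of acceptHandlers) accept(); // accept the new one
      return data; // state handed to the next instance
    },
  };
}

const hot = createHotContext();
const log = [];
hot.dispose((data) => {
  data.counter = 42; // preserve state across the update
  log.push("dispose");
});
hot.accept(() => log.push("accept"));
const data = hot.applyUpdate();
console.log(log.join(","), data.counter); // dispose,accept 42
```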
Next steps for HMR:
* Implement CSS HMR
* Implement (or decide to deprecate) the [dependencies form](https://webpack.js.org/api/hot-module-replacement/#accept) of `hot.accept`/`hot.dispose`
* Clean up `runtime.js` some more: switch to TypeScript, split into multiple files, etc. It'd be nice if all of this could be done at compile time, but how to achieve this is unclear at the moment. _Can we run turbopack to compile turbopack?_
* Basic startup bench for dev server
* fixes to benchmarking (vercel/turbo#268)
* use bench profile for benchmarking
* make setup and teardown not part of the measurement
* add support for async setup and teardown
* share the browser between measurements
* updates for changes in TestApp
Co-authored-by: Tobias Koppers <tobias.koppers@googlemail.com>
The snapshot tests were failing because Windows paths were sneaking into `FileSystemPathVc::path`. On review, the `::new` method didn't normalize `\` into `/`, and neither did the various `::join` methods. This came up in both the chunk IDs and the request pathnames used by the server.
No path with a backslash should enter the `FileSystemPath` APIs; backslashes should be normalized during the conversion from `Path` to `String`.
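The normalization boils down to this (hypothetical `normalizePath` helper shown in JavaScript; the actual fix lives in the Rust `Path`-to-`String` conversion):

```javascript
// Turn Windows separators into forward slashes before a path enters the
// virtual filesystem, so chunk IDs and request pathnames stay consistent.
function normalizePath(osPath) {
  return osPath.replace(/\\/g, "/");
}

console.log(normalizePath("crates\\next-dev\\tests\\index.js"));
// crates/next-dev/tests/index.js
```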
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>