Closes vercel/turbo#440
This uses swc's preset_env to automatically downlevel code according to the environment's browser targets, and sets next-dev to use a limited, modern target.
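As a rough illustration (not actual swc output), downleveling rewrites modern syntax into code compatible with older targets — for example, optional chaining and nullish coalescing into explicit checks:

```javascript
// Modern source, as an author would write it:
const modern = (user) => user?.name ?? "anonymous";

// Roughly what an ES5-targeted downlevel of the same logic looks like:
var downleveled = function (user) {
  var name = user == null ? void 0 : user.name;
  return name != null ? name : "anonymous";
};
```

Both behave identically; preset_env decides which rewrites apply based on the configured browser targets.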
To do:
* [x] Add snapshot test
* [x] turbotrace test failures — these appear related to the Buffer module; we hit swc's unimplemented "multiple constructors" case: f655488cfa/crates/swc_ecma_transforms_compat/src/es2015/classes/mod.rs (L545)
* [x] ~Benchmark downleveling node_modules (probably on front) and make a decision re: downleveling everything vs. just workspaces~ Filed as vercel/turbo#457
* Fix DiskFileSystem::read_link
* pnpm-like integration test
* Introduce AssetContent to handle symlink assets
* Fix read_link on Windows
* Run clippy fix
* Rename `path` to `target`
Co-authored-by: Justin Ridgewell <justin@ridgewell.name>
* Split Windows-specific code
* Add comments noting that `Redirect` only represents a directory
* Handle symlink while reading content
* Clippy fix
* Revert previous changes in FileSystemPathVc::get_type
* Fix Unix read_link
* cleanup
* handle symlink while resolving native bindings
* Make `LinkContent::Link` contain only the target
* Add LinkType to represent link type
* Cleanup VersionedAsset
* Cleanup LinkType
* Normalize the LinkContent::target
* Comments
* Revert special case workaround for sharp on Windows
* comments
* node_native_binding follow file link
* Apply CR suggestion
Co-authored-by: Justin Ridgewell <justin@ridgewell.name>
With React.memo:
```
bench_hmr_to_commit/Turbopack CSR/30000 modules
time: [50.608 ms 51.659 ms 52.553 ms]
```
Without React.memo:
```
bench_hmr_to_commit/Turbopack CSR/30000 modules
time: [853.47 ms 1.0191 s 1.1873 s]
change: [+1543.4% +1872.7% +2207.8%] (p = 0.00 < 0.05)
Performance has regressed.
```
Since we're only ever editing the top-level triangle in our HMR benchmarks, we incur the time it takes for React to re-render the whole tree, which is a function of the number of components in that tree. By using `React.memo`, we can skip updating child components during HMR.
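The skipping behavior can be sketched as follows. This is an illustration of the `React.memo` idea, not React's implementation: a memoized component re-renders only when its props change shallowly.

```javascript
// Shallow prop comparison, as React.memo does by default.
function shallowEqual(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k) => a[k] === b[k]);
}

// Wrap a render function so repeated calls with equal props are skipped.
function memo(render) {
  let lastProps = null;
  let lastResult = null;
  return function (props) {
    if (lastProps !== null && shallowEqual(lastProps, props)) {
      return lastResult; // props unchanged: reuse the previous render
    }
    lastProps = props;
    lastResult = render(props);
    return lastResult;
  };
}

// A child whose props never change renders once, no matter how often
// the parent re-renders.
let renders = 0;
const Child = memo((props) => {
  renders += 1;
  return `<span>${props.label}</span>`;
});
Child({ label: "leaf" });
Child({ label: "leaf" });
// renders === 1
```

In the benchmark, this means an HMR update to the top-level component no longer pays a re-render cost proportional to the tree size.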
Previously, we ran multiple `npm install` operations serially, one call to `install_from_npm` per dependency. Instead, this expresses all dependencies at once as a single command to the npm CLI, which should reduce the time we spend installing from npm and updating package.json.
Test Plan: Manually confirmed that package.json was updated correctly. `cargo bench`.
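The batching above amounts to collapsing per-package installs into one invocation. A hypothetical helper (`npmInstallCommand` is illustrative, not the harness's actual API):

```javascript
// Build a single `npm install` invocation from a list of dependency
// specifiers, instead of spawning npm once per package.
function npmInstallCommand(packages) {
  return ["npm", "install", ...packages].join(" ");
}

// npmInstallCommand(["react", "react-dom"]) → "npm install react react-dom"
```

A single invocation lets npm resolve the full dependency set once and write package.json once.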
This adds webpack 5 to the benchmark suite.
Test Plan: Manually confirmed package.json updates and webpack config written to temp dir correctly. `cargo bench`.
This splits the benchmark code into more modules. Notes:
* ~Moved/left `get_bundlers()` and `get_module_counts()` to/in mod.rs. In particular, moving `get_bundlers()` to either bundle.rs or util.rs would lead to a circular dependency. These both also rely on env var configuration, so I figured this was a reasonable place for them.~
* The Bundler trait has its own module (not moved to util), since it's a top-level concern and not really a miscellaneous utility.
* Each bundler has its own module file.
Test Plan: `TURBOPACK_BENCH_BUNDLERS=all cargo test --benches -p next-dev -- --nocapture` and verify same output as before change.
* Benchmark Parcel
* add Parcel to the CI benchmarks
* move some turbopack dependencies to the bundler as they conflict with other bundlers
Co-authored-by: Tobias Koppers <tobias.koppers@googlemail.com>
This implements benchmark support for Next.js 12. Next.js (the tool) expects to be able to resolve the `next` package from the cwd, so it must be installed alongside the other node_modules in the test. `prepare` was added to the `Bundler` trait to handle this case.
Test Plan: `TURBOPACK_BENCH_ALL=all cargo bench -p next-dev`
Co-authored-by: Alex Kirszenberg <1621758+alexkirsz@users.noreply.github.com>
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>
This adds a comparison against Vite to our benchmark suite, running the startup, change, and restart benchmarks.
Test Plan: `cargo bench`
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>
This implements the basics of parameterizing the tool/devserver used in these tests. Follow-up PRs will implement benchmarking of Vite, bun, Parcel, etc.
Test Plan: `cargo bench -p next-dev` and verify no change in performance.
This implements a benchmark of restarting the devserver after successfully starting it and shutting it down.
## Question/TODO:
Since our goal is metrics that don't scale with project size, should we assert that the small/medium benchmark results don't differ?
Test Plan: `cargo bench -p next-dev`
This builds on vercel/turbo#240, starting up a server and then benchmarking the response to a small file change.
This change neither introduces nor removes any dependencies. A follow-up benchmark should do so.
Test Plan: `cargo bench -p next-dev`
Co-authored-by: Tobias Koppers <1365881+sokra@users.noreply.github.com>
This adds an assertion that no runtime (browser) errors occurred when
loading a benchmark page.
Test Plan: Temporarily removed the npm install for the test app and verified the benchmark failed, since the test app requires react and react-dom. Restored the npm install and verified the benchmark runs to completion.
This:
* Runs `npm install` in test directories to provide turbopack with modules necessary to bundle them.
* Reuses test directories for iterations across the given benchmark. This avoids unnecessary file writing and `npm install` on every iteration, speeding up benchmark runs.
Currently cherry-picks vercel/turbo#278 as it's necessary along with vercel/turbo#277.
Test Plan: Connected to the running devserver mid-test and confirmed no errors are thrown and the triangle is rendered correctly.
* Basic startup bench for dev server
* fixes to benchmarking (vercel/turbo#268)
* use bench profile for benchmarking
* make setup and teardown not part of the measurement
add support for async setup and teardown
share browser between measurements
* updates for changes to TestApp
Co-authored-by: Tobias Koppers <tobias.koppers@googlemail.com>
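The measurement-scoping idea above — setup and teardown excluded from the timed window — can be sketched as follows. This is a hypothetical shape, not the actual criterion-based harness:

```javascript
// Run one benchmark iteration: async setup and teardown happen outside
// the timed window, so only the measured work counts toward the result.
async function benchIteration(setup, measured, teardown) {
  const ctx = await setup();                  // not timed
  const start = process.hrtime.bigint();
  const result = await measured(ctx);         // timed
  const elapsedNs = process.hrtime.bigint() - start;
  await teardown(ctx);                        // not timed
  return { result, elapsedNs };
}
```

Sharing the browser between measurements follows the same principle: expensive fixtures move into setup so they are paid once, not per sample.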