There are a bunch of problems with invalidations:
* The fs implementation watches paths case-insensitively, which means two paths might conflict on a case-sensitive filesystem. It now uses an array of Invalidators.
* Move the next-dev bootstrapping logic out of the run_once scope (which is not updated when invalidations occur). Instead, it's executed inside the request handling or update stream, where changes can be handled.
* `TransientValue` was not actually a Value type. This fixes that.
* Adds a new `TransientInstance` wrapper to pass transient by reference.
* `strongly_consistent` was broken when using nested TaskScopes. This fixes that.
Server Rendering:
* This adds an additional ContentSource to next-dev which takes care of handling the `pages` directory.
* The content source creates a ServerRenderedAsset from each file in the `src/pages` or `pages` directory, and an AssetGraphContentSource for that.
* The ServerRenderedAsset will reference an underlying asset for the Node.js context, which will be passed to the Node.js executable for rendering. It uses a WrapperAsset to add additional communication logic.
Client Transition:
* When annotating `import`s with `transition: "next-client"`, the NextClientTransition is used.
* This transition changes the environment to browser
* It wraps the referenced asset with a next-hydration wrapper asset.
* It leaves a little module in the previous context which exports a list of URLs for chunks needed.
* The NextClientTransition takes a client_chunking_context as argument which specifies how the client code is chunked.
These integration tests have been flaky, failing when a "free" port turns out to be in use. Since nextest parallelizes test runs and portpicker guesses and checks free ports [0], I'm guessing that there's a collision occurring.
Instead, ask the operating system for a free port by binding to port 0 and read the port back from the resulting address.
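The approach above can be sketched with only the standard library; how the port is then plumbed into the dev server is up to the caller:

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Binding to port 0 asks the OS to pick a free port for us,
    // avoiding the guess-and-check race that portpicker has.
    let listener = TcpListener::bind("127.0.0.1:0")?;

    // Read the actual port back from the bound address.
    let port = listener.local_addr()?.port();
    assert!(port > 0);
    println!("OS assigned port {}", port);
    Ok(())
}
```

Because the listener holds the port until it is dropped, the window for another process to grab the same port is much smaller than with guess-and-check.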
Test Plan: Tried local runs with nextest, but those succeeded before as well. I'll probably retry things on CI a few times.
[0] 912f913ac3/src/lib.rs (L53)
This adds test runs for integration tests in `__skipped__` directories,
ensuring that they fail, otherwise they should probably be unskipped.
Test Plan: Temporarily moved a succeeding test into a `__skipped__`
directory and ensured that cargo test began failing that test.
This commits webpack's chunk tests (test/cases/chunks) and skips those
that do not pass yet.
Test Plan: `cargo test -p next-dev -- --nocapture` and verify the
non-skipped tests run.
This prepares the way for HMR (vercel/turbo#160) by letting us diff assets between
versions.
1. Add `Asset::versioned_content` which returns a `VersionedContentVc`.
2. `VersionedContent`s have a built-in versioning mechanism as they must implement `version() -> VersionVc`. `Version` is a trait, so `VersionVc` can contain a specific version implementation by asset type. This is particularly important because...
3. A `VersionedContentVc` can be diffed with a `VersionVc` from the same underlying `VersionedContent` type with `content.update(from: version)`. This returns an `UpdateVc` which describes the steps necessary to update from one version to the next. In the case of ES chunks, this will be a map of added and modified module IDs with their respective factories, and a set of deleted module IDs.
4. Implement diffing for ES chunks.
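The shape of this API can be sketched as follows. This is a hypothetical, heavily simplified version: the names `EsChunkVersion`, `EsChunkContent`, and the `Update` variants are illustrative, and the real turbopack traits are async and operate on `Vc` cells rather than plain values:

```rust
// A version identifier for a piece of content; each asset type can
// implement this with its own representation (here, a content hash).
trait Version {
    fn id(&self) -> String;
}

struct EsChunkVersion {
    hash: String,
}

impl Version for EsChunkVersion {
    fn id(&self) -> String {
        self.hash.clone()
    }
}

// The result of diffing two versions of the same content.
enum Update {
    // Nothing changed.
    None,
    // A module-level patch: added/modified factories and deleted ids.
    Partial { added: Vec<String>, deleted: Vec<String> },
    // Too different (or unknown version): fall back to a full reload.
    Total,
}

struct EsChunkContent {
    version: EsChunkVersion,
}

impl EsChunkContent {
    // Diff this content against an older version of the same chunk.
    fn update(&self, from: &dyn Version) -> Update {
        if from.id() == self.version.id() {
            Update::None
        } else {
            // A real implementation would compare module maps and
            // emit a Partial update where possible.
            Update::Total
        }
    }
}

fn main() {
    let content = EsChunkContent {
        version: EsChunkVersion { hash: "v2".into() },
    };
    let old = EsChunkVersion { hash: "v1".into() };
    match content.update(&old) {
        Update::None => println!("no update"),
        Update::Partial { .. } => println!("partial update"),
        Update::Total => println!("total update"),
    }
}
```

The key design point carried over from the description above: the diff lives on the content side (`content.update(from)`), so each asset type can choose its own version representation and its own update granularity.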
This implements a manual debug mode for next-dev tests, enabled by
setting the environment variable TURBOPACK_DEBUG_BROWSER to any value.
It launches the test browser in non-headless mode and holds it open
~~indefinitely~~ until the user closes it, so it can be inspected.
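A minimal sketch of how such a flag could be read (the variable name comes from the description above; the surrounding launch logic is hypothetical):

```rust
use std::env;

fn main() {
    // Only the variable's presence matters: any value, even an empty
    // string, enables debug mode.
    let debug_browser = env::var_os("TURBOPACK_DEBUG_BROWSER").is_some();

    // Hypothetical use: launch headless unless debugging.
    println!("headless = {}", !debug_browser);
}
```

Using `var_os` rather than `var` avoids failing on non-UTF-8 values, since the value itself is never inspected.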
Test Plan: `TURBOPACK_DEBUG_BROWSER=1 cargo test -p next-dev --
test_crates_next_dev_tests_integration_chunks_circular_correctness
--nocapture` and verify the browser is opened non-headless and is held
open
This is a very early version of the next-dev test runner. I'm opening this early to get thoughts from folks re: the direction of the design and implementation.
Fixes vercel/turbo#204
Currently it:
* Discovers integration test fixtures from the filesystem. Right now these are expected to be single files that get bundled and will eventually include assertions. This is powered by the test-generator crate, which saves us from manually enumerating each case. We could consider using this for the node-file-trace tests as well.
* Starts the dev server on a free port and opens a headless browser to its root. The browser control is implemented with the https://crates.io/crates/chromiumoxide crate, which expects Chrome or Chromium to already be available.
Eventually it will:
* [x] Implement a minimal test environment loaded in the browser so that assertions can be run there from bundled code.
* [x] Report back the results of these assertions to rust, where we can pass/fail cargo tests with those results.
In the future it could:
* Possibly include snapshot-style tests to assert on transformed results. This could be in the form of fixture directories instead of files cc @jridgewell
* Support expressing special configuration of turbopack in a fixture, possibly as another file in the fixture directory.
* [x] ~Possibly support distributing tests to a pool of open browsers instead of opening and closing for each test.~
Test Plan: See next PRs
This PR builds our own `Debug`-like derive-macro machinery for formatting structs, relying on `std::fmt::Formatter` for the actual formatting.
### Usage
A new `ValueDebug` trait is automatically implemented for all `#[turbo_tasks::value]`s. It has a single `.dbg()` method, which resolves to a debug representation that can then be printed to the screen. `#[turbo_tasks::value_trait]`s also implement the `.dbg()` method directly.
```rust
dbg!(any_vc.dbg().await?);
```
If you have a `#[turbo_tasks::value]` struct with a field that doesn't implement `Debug`, you'll want to declare that field as `#[debug_ignore]`. For instance:
```rust
#[turbo_tasks::value(ContentSource, serialization: none, eq: manual, cell: new, into: new)]
pub struct TurboTasksSource {
#[debug_ignore]
#[trace_ignore]
pub turbo_tasks: Arc<TurboTasks<MemoryBackend>>,
}
```
### Why not use `Debug` directly?
We can't use `Debug` because our values are resolved asynchronously and can nest `Vc`s arbitrarily. I tried using `futures::executor::block_on` to resolve them synchronously in a `Debug` implementation but that causes deadlocks.
Cherry-picking this from my work on the next-dev test runner.
This moves browser opening from the turbopack-dev-server crate into the next-dev crate, which has the CLI entrypoint that runs the dev server. The dev server package appears to be meant for use as a library (it's only a library crate), so this external side effect is unexpected and makes the crate hard to use in situations like a test runner for next-dev, where we should test with a headless web browser.
Alternatively, opening the browser could be an option passed when creating the dev server, but this feels a bit cleaner to me.
Test Plan: `cargo run -p next-dev` and verify the browser still opens and successfully connects to the dev server.
* Initialize Node.js/TypeScript workspace
* node-module-trace Webpack plugin
* Add new fmt checks to pipeline
* Popup unwind error
* Implement --exact flag
* Yarn 3.2.2
* Reformat toml files
* Fix socket io test, 100ms timeout is too long
* remove unnecessary CI cache config
* regenerate lockfile from old lockfile, align the dependencies version
* Run nmt tests in system tmp dir
* Apply code review suggestions
* allow to wait for task completion and propagate errors and panics
* revert method addition
* spawn_root_task should be sync
Co-authored-by: Tobias Koppers <tobias.koppers@googlemail.com>
make TransientValue functional
make console-subscriber an optional feature
avoid cloning when calling a function
capture timings of task executions
fix LazyAsset
expose graph from dev server