rsnext/packages/next/server/lib/squoosh/main.ts

Fix memory leak in image optimization (#23565)
2021-03-31 10:24:42 +02:00

This PR fixes the problem that the image optimization API uses a large amount of memory and does not free it correctly afterwards. There are multiple causes:

### 1. Too many WebAssembly instances are created

We used to run every image processing operation (decode, resize, rotate, encodeJpeg, encodePng, encodeWebp) inside each worker thread, where each operation creates at least one WASM instance, and we create `os.cpus().length - 1` workers by default. In the worst case that means `N*6` WASM instances (N being the number of CPU cores minus one). This PR changes the setup to a pipeline-like architecture: there are at most 6 workers, and the same type of operation is always assigned to the same worker, so at most 6 WASM instances are created in the worst case. (See the sketch below.)
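A minimal sketch of the routing idea, assuming only jest-worker's documented `computeWorkerKey` option; the pool below is illustrative, not the exact code in this file:

```ts
import { Worker } from 'jest-worker'

// jest-worker routes calls whose keys are equal to the same worker thread.
// Keying on the exported method name therefore pins each operation type
// (decodeBuffer, rotate, resize, encodeJpeg, encodePng, encodeWebp) to a
// single worker, and so to a single WASM instance.
const pool = new Worker(require.resolve('./impl'), {
  enableWorkerThreads: true,
  numWorkers: 6, // at most one worker per operation type
  computeWorkerKey: (method) => method,
})
```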
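A sketch of the before/after pattern, where `instantiateWasm` is a hypothetical stand-in for the real codec initializer:

```ts
// Hypothetical stand-in for a codec's WASM initializer, not the real API.
declare function instantiateWasm(): Promise<{ decode(buf: Buffer): Buffer }>

// Before: `const getWasm = execOnce(() => instantiateWasm())` cached one
// instance forever, so its linear memory could only grow.
// After: create a fresh instance per call. Once the call returns, nothing
// references the instance, so the whole module (and its memory) can be GC'd.
async function decodeOnce(buffer: Buffer): Promise<Buffer> {
  const wasm = await instantiateWasm() // new instance on every call
  return wasm.decode(buffer) // `wasm` becomes unreachable after this returns
}
```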
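A hedged sketch of both cleanup steps; the global names come from the description above, while the module shape and helper functions are illustrative assumptions:

```ts
// Assumed shape of a wasm-bindgen glue module holding the leaky globals.
type CodecModule = {
  wasm?: unknown
  cachegetUint8Memory0?: unknown
}

// Fix 3: drop the generated globals that pin the WASM memory.
function releaseWasmRefs(codec: CodecModule): void {
  delete codec.wasm
  delete codec.cachegetUint8Memory0
}

// Fix 4: strip the process-level error listeners registered by the
// emscripten glue. Simplified: clearing them wholesale is acceptable only
// because the code runs in a worker thread that retries on failure.
function removeGlueListeners(): void {
  process.removeAllListeners('uncaughtException')
  process.removeAllListeners('unhandledRejection')
}
```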

import { Worker } from 'jest-worker'
import * as path from 'path'
import { execOnce } from '../../../shared/lib/utils'
import { cpus } from 'os'

type RotateOperation = {
  type: 'rotate'
  numRotations: number
}
type ResizeOperation = {
  type: 'resize'
} & ({ width: number; height?: never } | { height: number; width?: never })
export type Operation = RotateOperation | ResizeOperation
export type Encoding = 'jpeg' | 'png' | 'webp'

const getWorker = execOnce(
  () =>
    new Worker(path.resolve(__dirname, 'impl'), {
      enableWorkerThreads: true,
      // There will be at most 6 workers needed since each worker will take
      // at least 1 operation type.
      numWorkers: Math.max(1, Math.min(cpus().length - 1, 6)),
      computeWorkerKey: (method) => method,
    })
)
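
// Decode the buffer once, apply the requested operations in order, then
// encode to the target format. Each `worker.*` call is routed (via
// computeWorkerKey above) to the worker thread dedicated to that method.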
export async function processBuffer(
  buffer: Buffer,
  operations: Operation[],
  encoding: Encoding,
  quality: number
): Promise<Buffer> {
  const worker: typeof import('./impl') = getWorker() as any
  let imageData = await worker.decodeBuffer(buffer)
  for (const operation of operations) {
    if (operation.type === 'rotate') {
      imageData = await worker.rotate(imageData, operation.numRotations)
    } else if (operation.type === 'resize') {
      if (
        operation.width &&
        imageData.width &&
        imageData.width > operation.width
      ) {
        imageData = await worker.resize({
          image: imageData,
          width: operation.width,
        })
      } else if (
        operation.height &&
        imageData.height &&
        imageData.height > operation.height
      ) {
        imageData = await worker.resize({
          image: imageData,
          height: operation.height,
        })
      }
    }
  }

  switch (encoding) {
    case 'jpeg':
      return Buffer.from(await worker.encodeJpeg(imageData, { quality }))
    case 'webp':
      return Buffer.from(await worker.encodeWebp(imageData, { quality }))
    case 'png':
      return Buffer.from(await worker.encodePng(imageData))
    default:
      throw Error(`Unsupported encoding format`)
  }
}
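
For reference, a hypothetical usage of `processBuffer`, downscaling an image to 800px wide (per `ResizeOperation`, only one dimension may be given) and re-encoding it as JPEG at quality 75; the file paths are illustrative:

```ts
import { promises as fs } from 'fs'
import { processBuffer } from './main'

async function optimize(): Promise<void> {
  const input = await fs.readFile('photo.png') // any decodable source image
  const output = await processBuffer(
    input,
    [{ type: 'resize', width: 800 }],
    'jpeg',
    75
  )
  await fs.writeFile('photo.jpg', output)
}
```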