# TinyUFO

TinyUFO is a fast and efficient in-memory cache. It adopts the state-of-the-art S3-FIFO as well as TinyLFU algorithms to achieve both high throughput and high hit ratio at the same time.

## Usage

See docs
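
A minimal sketch of what usage might look like, assuming the `TinyUfo::new(total_weight_limit, estimated_size)` constructor and the `put`/`get` methods described in the crate docs (check the docs for the exact signatures):

```rust
use tinyufo::TinyUfo;

fn main() {
    // Cache with a total weight limit of 5 and an estimated 5 entries
    // (assumed constructor arguments; see the crate docs).
    let cache: TinyUfo<u64, String> = TinyUfo::new(5, 5);

    // Insert a value with weight 1.
    cache.put(1, "hello".to_string(), 1);

    // `get` returns a clone of the cached value, if present.
    assert_eq!(cache.get(&1), Some("hello".to_string()));
    assert_eq!(cache.get(&2), None);
}
```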

## Performance Comparison

We compare TinyUFO with lru, the most commonly used cache algorithm, and moka, another great cache library that implements TinyLFU.

### Hit Ratio

The table below shows the cache hit ratio of the compared algorithms under different cache sizes, zipf=1.

| cache size / total assets | TinyUFO | TinyUFO - LRU | TinyUFO - moka (TinyLFU) |
| ------------------------- | ------- | ------------- | ------------------------ |
| 0.5%                      | 45.26%  | +14.21pp      | -0.33pp                  |
| 1%                        | 52.35%  | +13.19pp      | +1.69pp                  |
| 5%                        | 68.89%  | +10.14pp      | +1.91pp                  |
| 10%                       | 75.98%  | +8.39pp       | +1.59pp                  |
| 25%                       | 85.34%  | +5.39pp       | +0.95pp                  |

Both TinyUFO and moka greatly improve the hit ratio over lru, and TinyUFO is the better of the two in this workload. This paper contains more thorough evaluations of S3-FIFO, the algorithm TinyUFO varies from, against many caching algorithms under a variety of workloads.

### Speed

The table below shows the number of operations performed per second for each cache library. The tests are performed using 8 threads on an x64 Linux desktop.

| Setup            | TinyUFO           | LRU             | moka             |
| ---------------- | ----------------- | --------------- | ---------------- |
| Pure read        | 148.7 million ops | 7.0 million ops | 14.1 million ops |
| Mixed read/write | 80.9 million ops  | 6.8 million ops | 16.6 million ops |

Thanks to its lock-free design, TinyUFO greatly outperforms the others.
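
For context, a throughput test of this shape might look like the sketch below. This is only an illustration of the 8-thread pure-read setup, not the actual benchmark in `benches/`, and the key distribution and cache sizing here are made up for the example:

```rust
use std::sync::Arc;
use std::time::Instant;
use tinyufo::TinyUfo;

fn main() {
    const THREADS: usize = 8;
    const OPS_PER_THREAD: u64 = 1_000_000;

    // Pre-populate the cache so the read loop mostly hits existing keys.
    let cache: Arc<TinyUfo<u64, u64>> = Arc::new(TinyUfo::new(10_000, 10_000));
    for k in 0..10_000u64 {
        cache.put(k, k, 1);
    }

    let start = Instant::now();
    let handles: Vec<_> = (0..THREADS)
        .map(|t| {
            let cache = cache.clone();
            std::thread::spawn(move || {
                // Pure-read workload: every thread only calls `get`.
                for i in 0..OPS_PER_THREAD {
                    let key = i.wrapping_mul(t as u64 + 1) % 10_000;
                    std::hint::black_box(cache.get(&key));
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    let elapsed = start.elapsed().as_secs_f64();
    let total_ops = (THREADS as u64 * OPS_PER_THREAD) as f64;
    println!("{:.1} million ops/s", total_ops / elapsed / 1e6);
}
```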

### Memory overhead

The table below shows the memory allocation (in bytes) of each compared cache library under certain workloads storing zero-sized assets.

| cache size | TinyUFO   | LRU       | moka      |
| ---------- | --------- | --------- | --------- |
| 100        | 39,409    | 9,408     | 354,376   |
| 1000       | 236,053   | 128,512   | 535,888   |
| 10000      | 2,290,635 | 1,075,648 | 2,489,088 |

Whether these overheads matter depends on the actual sizes and volume of the assets. The more advanced algorithms are likely to be less memory efficient than the simple LRU.