Fix typos and grammar issues in docs

And other things.

Co-authored-by: DimanNe <causal.consistency@proton.me>
Co-authored-by: Xiaobo Liu <cppcoffee@gmail.com>
Co-authored-by: houseme <housemecn@gmail.com>
Co-authored-by: lilo <luolee.me@gmail.com>
Co-authored-by: Yang He <abruzzi.hraig@gmail.com>
Authored by Yuchen Wu on 2024-03-29 15:42:20 -07:00; committed by Andrew Hauck
parent 3c5d99c3f4
commit dfcd3d1d9f
43 changed files with 110 additions and 110 deletions

.bleep
View file

@ -1 +1 @@
c687a394930e83b98dc057612d27258c62ee3c01
3ca77e7ac5f665afa94f77ef9d3852db3195470d

View file

@ -1,7 +1,7 @@
# Pingora User Manual
## Quick Start
In this section we show you how to build a barebones load balancer.
In this section we show you how to build a bare-bones load balancer.
[Read the quick start here.](quick_start.md)

View file

@ -60,7 +60,7 @@ Any object that implements the `ProxyHttp` trait essentially defines how a reque
the proxy. The only required method in the `ProxyHttp` trait is `upstream_peer()` which returns
the address where the request should be proxied to.
In the body of the `upstream_peer()`, let's use the `select()` method for the `LoadBalancer` to round-robin across the upstream IPs. In this example we use HTTPS to connect to the backends, so we also need to specify to `use_tls` and set the SNI when constructing our [`Peer`](peer.md) object.
In the body of the `upstream_peer()`, let's use the `select()` method for the `LoadBalancer` to round-robin across the upstream IPs. In this example we use HTTPS to connect to the backends, so we also need to enable `use_tls` and set the SNI when constructing our [`Peer`](user_guide/peer.md) object.
```rust
#[async_trait]
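// The snippet is truncated in this diff view; below is a sketch of how the
// rest of the quick-start impl plausibly continues, assuming `LB` wraps an
// Arc<LoadBalancer<RoundRobin>> (as earlier in the quick start) and the
// backends expect the SNI "one.one.one.one".
impl ProxyHttp for LB {
    /// This small example doesn't need per-request state
    type CTX = ();
    fn new_ctx(&self) -> Self::CTX {}

    async fn upstream_peer(&self, _session: &mut Session, _ctx: &mut ()) -> Result<Box<HttpPeer>> {
        let upstream = self
            .0
            .select(b"", 256) // the hash key doesn't matter for round robin
            .unwrap();
        // HTTPS to the backend: use_tls = true, plus the SNI
        let peer = Box::new(HttpPeer::new(upstream, true, "one.one.one.one".to_string()));
        Ok(peer)
    }
}
```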

View file

@ -14,10 +14,10 @@ Otherwise, once the response header is already sent downstream, there is nothing
In order to implement retry or failover, `fail_to_connect()` / `error_while_proxy()` needs to mark the error as "retry-able." For failover, `fail_to_connect() / error_while_proxy()` also needs to update the `CTX` to tell `upstream_peer()` not to use the same `Peer` again.
### Safety
In general, idempotent HTTP requests, e.g., `GET`, are safe to retry. Other requests, e.g., `POST`, are not safe to retry if the requests have already been sent. When `fail_to_connect()` is called, pingora-proxy guarantees that nothing was sent upstream. Users are not recommended to retry an non-idempotent request after `error_while_proxy()` unless they know the upstream server enough to know whether it is safe.
In general, idempotent HTTP requests, e.g., `GET`, are safe to retry. Other requests, e.g., `POST`, are not safe to retry if the requests have already been sent. When `fail_to_connect()` is called, pingora-proxy guarantees that nothing was sent upstream. Users are not recommended to retry a non-idempotent request after `error_while_proxy()` unless they know the upstream server well enough to know whether it is safe.
### Example
In the following example we set a `tries` variable on the `CTX` to track how many connection attempts we've made. When setting our peer in `upstream_peer` we check if `tries` is less than one and connect to 192.0.2.1. On connect failure we increment `tries` in `fail_to_connect` and set `e.set_retry(true)` which tells Pingora this a retryable error. On retry we enter `upstream_peer` again and this time connect to 1.1.1.1. If we're unable to connect to 1.1.1.1 we return a 502 since we only set `e.set_retry(true)` in `fail_to_connect` when `tries` is zero.
In the following example we set a `tries` variable on the `CTX` to track how many connection attempts we've made. When setting our peer in `upstream_peer` we check if `tries` is less than one and connect to 192.0.2.1. On connect failure we increment `tries` in `fail_to_connect` and set `e.set_retry(true)` which tells Pingora this is a retryable error. On retry, we enter `upstream_peer` again and this time connect to 1.1.1.1. If we're unable to connect to 1.1.1.1 we return a 502 since we only set `e.set_retry(true)` in `fail_to_connect` when `tries` is zero.
```Rust
pub struct MyProxy();
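// The example is truncated in this diff view; a sketch of the rest, matching
// the prose above (the ctx struct and field names beyond `MyProxy` are
// assumptions):
pub struct MyCtx {
    tries: usize,
}

#[async_trait]
impl ProxyHttp for MyProxy {
    type CTX = MyCtx;
    fn new_ctx(&self) -> Self::CTX {
        MyCtx { tries: 0 }
    }

    fn fail_to_connect(
        &self,
        _session: &mut Session,
        _peer: &HttpPeer,
        ctx: &mut Self::CTX,
        mut e: Box<Error>,
    ) -> Box<Error> {
        // only the first connect failure is marked retry-able
        if ctx.tries > 0 {
            return e;
        }
        ctx.tries += 1;
        e.set_retry(true);
        e
    }

    async fn upstream_peer(&self, _session: &mut Session, ctx: &mut Self::CTX) -> Result<Box<HttpPeer>> {
        // the first attempt goes to 192.0.2.1; the retry goes to 1.1.1.1
        let addr = if ctx.tries < 1 {
            ("192.0.2.1", 443)
        } else {
            ("1.1.1.1", 443)
        };
        let peer = Box::new(HttpPeer::new(addr, true, "one.one.one.one".to_string()));
        Ok(peer)
    }
}
```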

View file

@ -140,7 +140,7 @@ pub fn ssl_use_second_key_share(_ssl: &mut SslRef, _enabled: bool) {}
/// Clear the error stack
///
/// SSL calls should check and clear the BoringSSL error stack. But some calls fail to do so.
/// This causes the next unrelated SSL call to fail due to the leftover errors. This function allow
/// This causes the next unrelated SSL call to fail due to the leftover errors. This function allows
/// the caller to clear the error stack before performing SSL calls to avoid this issue.
pub fn clear_error_stack() {
let _ = ErrorStack::get();
@ -186,7 +186,7 @@ pub fn is_suspended_for_cert(error: &boring::ssl::Error) -> bool {
#[allow(clippy::mut_from_ref)]
/// Get a mutable SslRef out of SslRef, which is a missing functionality for certain SslStream
/// # Safety
/// the caller need to make sure that they hold a &mut SslRef
/// the caller needs to make sure that they hold a &mut SslRef
pub unsafe fn ssl_mut(ssl: &SslRef) -> &mut SslRef {
unsafe { SslRef::from_ptr_mut(ssl.as_ptr()) }
}

View file

@ -219,7 +219,7 @@ impl CacheControl {
self.has_key("public")
}
/// Whether the given directive exists and it has no value.
/// Whether the given directive exists, and it has no value.
fn has_key_without_value(&self, key: &str) -> bool {
matches!(self.directives.get(key), Some(None))
}
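The `Some(None)` pattern above is the crux: `directives` distinguishes a directive that is present without a value from one that is absent entirely. A standalone sketch of that distinction, assuming `directives` maps a name to an optional value as the `matches!` call implies:

```rust
use std::collections::HashMap;

fn main() {
    // name -> optional value, mirroring the directive map implied above
    let mut directives: HashMap<String, Option<String>> = HashMap::new();
    directives.insert("public".into(), None); // `public` (no value)
    directives.insert("max-age".into(), Some("3600".into())); // `max-age=3600`

    // Some(None): the directive exists and has no value
    assert!(matches!(directives.get("public"), Some(None)));
    // Some(Some(_)): the directive exists with a value
    assert!(matches!(directives.get("max-age"), Some(Some(_))));
    // None: the directive is absent
    assert!(directives.get("no-store").is_none());
}
```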

View file

@ -55,7 +55,7 @@ impl<const N: usize> Manager<N> {
assert!(shard < N);
// NOTE: This could use a lot memory to buffer the serialized data in memory
// NOTE: This could use a lot of memory to buffer the serialized data in memory
// NOTE: This for loop could lock the LRU for too long
let mut nodes = Vec::with_capacity(self.0.shard_len(shard));
self.0.iter_for_each(shard, |(node, size)| {

View file

@ -39,7 +39,7 @@ struct Node {
/// A simple LRU eviction manager
///
/// The implementation is not optimized. All operation require global locks.
/// The implementation is not optimized. All operations require global locks.
pub struct Manager {
lru: RwLock<LruCache<u64, Node>>,
limit: usize,
@ -86,7 +86,7 @@ impl Manager {
}
}
// evict items until the used capacity is below limit
// evict items until the used capacity is below the limit
fn evict(&self) -> Vec<CompactCacheKey> {
if self.used.load(Ordering::Relaxed) <= self.limit {
return vec![];
@ -107,13 +107,13 @@ impl Manager {
to_evict
}
// This could use a lot memory to buffer the serialized data in memory and could lock the LRU
// This could use a lot of memory to buffer the serialized data in memory and could lock the LRU
// for too long
fn serialize(&self) -> Result<Vec<u8>> {
use rmp_serde::encode::Serializer;
use serde::ser::SerializeSeq;
use serde::ser::Serializer as _;
// NOTE: This could use a lot memory to buffer the serialized data in memory
// NOTE: This could use a lot of memory to buffer the serialized data in memory
let mut ser = Serializer::new(vec![]);
// NOTE: This long for loop could lock the LRU for too long
let lru = self.lru.read();

View file

@ -31,7 +31,7 @@ pub fn request_cacheable(req_header: &ReqHeader) -> bool {
/// Decide if the response is cacheable.
///
/// `cache_control` is the parsed [CacheControl] from the response header. It is an standalone
/// `cache_control` is the parsed [CacheControl] from the response header. It is a standalone
/// argument so that the caller has the flexibility to choose to use, change or ignore it.
// TODO: vary processing
pub fn resp_cacheable(

View file

@ -67,7 +67,7 @@ where
pub struct LruShard<V>(RwLock<LruCache<u128, V>>);
impl<V> Default for LruShard<V> {
fn default() -> Self {
// help satisfy default construction of array
// help satisfy default construction of arrays
LruShard(RwLock::new(LruCache::unbounded()))
}
}

View file

@ -73,7 +73,7 @@ pub trait CacheHashKey {
/// An extra tag for identifying users
///
/// For example if the storage backend implements per user quota, this tag can be used.
/// For example, if the storage backend implements per user quota, this tag can be used.
fn user_tag(&self) -> &str;
/// The hex string of [Self::primary_bin()]
@ -95,13 +95,13 @@ pub trait CacheHashKey {
/// General purpose cache key
#[derive(Debug, Clone)]
pub struct CacheKey {
// All strings for now, can be more structural as long as it can hash
// All strings for now. It can be more structural as long as it can hash
namespace: String,
primary: String,
variance: Option<HashBinary>,
/// An extra tag for identifying users
///
/// For example if the storage backend implements per user quota, this tag can be used.
/// For example, if the storage backend implements per user quota, this tag can be used.
pub user_tag: String,
}

View file

@ -69,7 +69,7 @@ pub enum CachePhase {
/// Cache was enabled, the request decided not to use it
// HttpCache.inner is kept
Bypass,
/// Awaiting cache key to be generated
/// Awaiting the cache key to be generated
CacheKey,
/// Cache hit
Hit,
@ -81,7 +81,7 @@ pub enum CachePhase {
Expired,
/// A stale (expired) asset was found, and it was revalidated to be fresh
Revalidated,
/// Revalidated, but deemed uncacheable so we do not freshen it
/// Revalidated, but deemed uncacheable, so we do not freshen it
RevalidatedNoCache(NoCacheReason),
}
@ -114,12 +114,12 @@ pub enum NoCacheReason {
ResponseTooLarge,
/// Due to internal caching storage error
StorageError,
/// Due to other type of internal issues
/// Due to other types of internal issues
InternalError,
/// will be cacheable but skip cache admission now
///
/// This happens when the cache predictor predicted that this request is not cacheable but
/// the response turns out to be OK to cache. However it might be too large to re-enable caching
/// This happens when the cache predictor predicted that this request is not cacheable, but
/// the response turns out to be OK to cache. However, it might be too large to re-enable caching
/// for this request.
Deferred,
/// The writer of the cache lock sees that the request is not cacheable (Could be OriginNotCache)
@ -285,7 +285,7 @@ impl HttpCache {
let lock = inner.lock.take();
if let Some(Locked::Write(_r)) = lock {
let lock_status = match reason {
// let next request try to fetch it
// let the next request try to fetch it
InternalError | StorageError | Deferred => LockStatus::TransientError,
// no need for the lock anymore
OriginNotCache | ResponseTooLarge => LockStatus::GiveUp,
@ -414,7 +414,7 @@ impl HttpCache {
/// Return the cache key used for asset lookup
/// # Panic
/// Can only be called after cache key is set and cache is not disabled. Panic otherwise.
/// Can only be called after the cache key is set and the cache is not disabled. Panic otherwise.
pub fn cache_key(&self) -> &CacheKey {
match self.phase {
CachePhase::Disabled(_) | CachePhase::Uninit => panic!("wrong phase {:?}", self.phase),
@ -511,7 +511,7 @@ impl HttpCache {
}
}
/// Return the body reader during a cache admission(miss/expired) which decouples the downstream
/// Return the body reader during a cache admission (miss/expired) which decouples the downstream
/// read and upstream cache write
pub fn miss_body_reader(&mut self) -> Option<&mut HitHandler> {
match self.phase {
@ -588,7 +588,7 @@ impl HttpCache {
};
if inner.storage.support_streaming_partial_write() {
// If reader can access partial write, the cache lock can be released here
// If a reader can access partial write, the cache lock can be released here
// to let readers start reading the body.
let lock = inner.lock.take();
if let Some(Locked::Write(_r)) = lock {
@ -929,7 +929,7 @@ impl HttpCache {
match self.phase {
CachePhase::CacheKey => {
let inner = self.inner_mut();
// make sure that all variance found are fresher than this asset
// make sure that all variances found are fresher than this asset
// this is because when purging all the variance, only the primary slot is deleted
// the created TS of the primary is the tombstone of all the variances
inner.valid_after = Some(meta.created());
@ -943,8 +943,8 @@ impl HttpCache {
let matches_variance = meta.variance() == variance_binary;
// We should remove the variance in the lookup `key` if this is the primary variant
// slot. We know this is the primary variant slot if this is the initial cache hit
// AND the variance in the `key` already matches the `meta`'s.)
// slot. We know this is the primary variant slot if this is the initial cache hit,
// AND the variance in the `key` already matches the `meta`'s.
//
// For the primary variant slot, the storage backend needs to use the primary key
// for both cache lookup and updating the meta. Otherwise it will look for the
@ -1039,7 +1039,7 @@ impl HttpCache {
/// Delete the asset from the cache storage
/// # Panic
/// Need to be called after cache key is set. Panic otherwise.
/// Need to be called after the cache key is set. Panic otherwise.
pub async fn purge(&mut self) -> Result<bool> {
match self.phase {
CachePhase::CacheKey => {
@ -1066,7 +1066,7 @@ impl HttpCache {
}
}
/// Tell the predictor that this response which is previously predicted to be uncacheable
/// Tell the predictor that this response, which is previously predicted to be uncacheable,
/// is cacheable now.
pub fn response_became_cacheable(&self) {
if let Some(predictor) = self.inner().predictor {
@ -1083,7 +1083,7 @@ impl HttpCache {
}
}
/// Set the header compression dictionary that help serialize http header.
/// Set the header compression dictionary, which helps serialize HTTP headers.
///
/// Return false if it is already set.
pub fn set_compression_dict_path(path: &str) -> bool {

View file

@ -28,7 +28,7 @@ pub struct CacheLock {
timeout: Duration, // fixed timeout value for now
}
/// A struct prepresenting a locked cache access
/// A struct representing locked cache access
#[derive(Debug)]
pub enum Locked {
/// The writer is allowed to fetch the asset
@ -174,8 +174,8 @@ impl LockCore {
fn unlock(&self, reason: LockStatus) {
self.lock_status.store(reason.into(), Ordering::SeqCst);
// any small positive number will do, 10 is used for RwLock too
// no need to wake up all at once
// Any small positive number will do, 10 is used for RwLock as well.
// No need to wake up all at once.
self.lock.add_permits(10);
}
@ -186,7 +186,7 @@ impl LockCore {
// all 3 structs below are just Arc<LockCore> with different interfaces
/// ReadLock: requests who get it need to wait until it is released
/// ReadLock: the requests that get it need to wait until it is released
#[derive(Debug)]
pub struct ReadLock(Arc<LockCore>);
@ -245,7 +245,7 @@ impl WritePermit {
impl Drop for WritePermit {
fn drop(&mut self) {
// writer exit without properly unlock, let others to compete for the write lock again
// Writer exited without properly unlocking. We let others compete for the write lock again
if self.0.locked() {
self.unlock(LockStatus::Dangling);
}
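The semaphore-based unlock above, together with the `Dangling` cleanup in `Drop`, boils down to this pattern. A minimal sketch using `tokio::sync::Semaphore`, not the actual `LockCore` API:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Waiters park on a zero-permit semaphore; unlock() adds a small number of
// permits so waiters wake a few at a time instead of all at once. Dropping
// a permit returns it, letting the next waiter proceed.
struct Lock {
    sem: Arc<Semaphore>,
}

impl Lock {
    fn new() -> Self {
        Lock { sem: Arc::new(Semaphore::new(0)) }
    }

    async fn wait(&self) {
        // resolves only after unlock() has added permits
        let _permit = self.sem.acquire().await.expect("semaphore closed");
    }

    fn unlock(&self) {
        // any small positive number will do; 10 bounds concurrent wake-ups
        self.sem.add_permits(10);
    }
}
```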

View file

@ -41,7 +41,7 @@ pub(crate) struct CacheObject {
pub(crate) struct TempObject {
pub meta: BinaryMeta,
// these are Arc because they need to continue exist after this TempObject is removed
// these are Arc because they need to continue to exist after this TempObject is removed
pub body: Arc<RwLock<Vec<u8>>>,
bytes_written: Arc<watch::Sender<PartialState>>, // this should match body.len()
}
@ -307,7 +307,7 @@ impl Storage for MemCache {
}
async fn purge(&'static self, key: &CompactCacheKey, _trace: &SpanHandle) -> Result<bool> {
// This usually purges the primary key because, without a lookup, variance key is usually
// This usually purges the primary key because, without a lookup, the variance key is usually
// empty
let hash = key.combined();
let temp_removed = self.temp.write().remove(&hash).is_some();

View file

@ -166,9 +166,9 @@ mod internal_meta {
{
// v0 has 4 items and no version number
4 => Ok(InternalMetaV0::deserialize(buf)?.into()),
// other V should has version number encoded
// other versions should have the version number encoded
_ => {
// rmp will encode version < 128 into a fixint (one byte),
// rmp will encode `version` < 128 into a fixint (one byte),
// so we use read_pfix
let version = rmp::decode::read_pfix(preread_buf)
.or_err(InternalError, "failed to decode meta version")?;

View file

@ -481,7 +481,7 @@ mod parse_response {
resp.status,
StatusCode::NO_CONTENT | StatusCode::NOT_MODIFIED
) {
// these status code cannot have body by definition
// these status codes cannot have a body by definition
return ParseState::Done(0);
}
if let Some(encoding) = resp.headers.get(http::header::TRANSFER_ENCODING) {

View file

@ -54,7 +54,7 @@ pub trait ServerApp {
fn cleanup(&self) {}
}
/// This trait defines the interface of a HTTP application.
/// This trait defines the interface of an HTTP application.
#[cfg_attr(not(doc_async_trait), async_trait)]
pub trait HttpServerApp {
/// Similar to the [`ServerApp`], this function is called whenever a new HTTP session is established.
@ -118,7 +118,7 @@ where
server::HttpSession::from_h2_conn(&mut h2_conn, digest.clone()).await;
let h2_stream = match h2_stream {
Err(e) => {
// It is common for client to just disconnect TCP without properly
// It is common for the client to just disconnect TCP without properly
// closing H2. So we don't log the errors here
debug!("H2 error when accepting new stream {e}");
return None;

View file

@ -23,7 +23,7 @@ use crate::apps::http_app::ServeHttp;
use crate::modules::http::compression::ResponseCompressionBuilder;
use crate::protocols::http::ServerSession;
/// A HTTP application that reports Prometheus metrics.
/// An HTTP application that reports Prometheus metrics.
///
/// This application will report all the [static metrics](https://docs.rs/prometheus/latest/prometheus/index.html#static-metrics)
/// collected via the [Prometheus](https://docs.rs/prometheus/) crate;

View file

@ -209,7 +209,7 @@ impl TransportConnector {
/// Return the [Stream] to the [TransportConnector] for connection reuse.
///
/// Not all TCP/TLS connection can be reused. It is the caller's responsibility to make sure
/// Not all TCP/TLS connections can be reused. It is the caller's responsibility to make sure
/// that protocol over the [Stream] supports connection reuse and the [Stream] itself is ready
/// to be reused.
///

View file

@ -164,7 +164,7 @@ async fn bind_tcp(addr: &str, opt: Option<TcpSocketOptions>) -> Result<Listener>
.or_err_with(BindError, || format!("fail to create address {sock_addr}"))?;
// NOTE: this is to preserve the current TcpListener::bind() behavior.
// We have a few test relying on this behavior to allow multiple identical
// We have a few tests relying on this behavior to allow multiple identical
// test servers to coexist.
listener_socket
.set_reuseaddr(true)

View file

@ -17,7 +17,7 @@
//! This mod implements the most rudimentary CONNECT client over raw stream.
//! The idea is to yield raw stream once the CONNECT handshake is complete
//! so that the protocol encapsulated can use the stream directly.
//! this idea only works for CONNECT over HTTP 1.1 and localhost (or where the server is close by).
//! This idea only works for CONNECT over HTTP 1.1 and localhost (or where the server is close by).
use super::http::v1::client::HttpSession;
use super::http::v1::common::*;

View file

@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! TLS server specific implementation
use super::SslStream;
use crate::protocols::{Shutdown, IO};
@ -73,7 +73,7 @@ pub async fn handshake_with_callback<S: IO>(
#[async_trait]
pub trait TlsAccept {
// TODO: return error?
/// This function is called in the middle of a TLS handshake. Structs who implements this function
/// This function is called in the middle of a TLS handshake. Structs that implement this function
/// should provide tls certificate and key to the [SslRef] via [ext::ssl_use_certificate] and [ext::ssl_use_private_key].
async fn certificate_callback(&self, _ssl: &mut SslRef) -> () {
// does nothing by default

View file

@ -27,7 +27,7 @@ use structopt::StructOpt;
/// The configuration file
///
/// Pingora configuration files are by default YAML files but any key value format can potentially
/// Pingora configuration files are by default YAML files, but any key value format can potentially
/// be used.
///
/// # Extension

View file

@ -34,11 +34,11 @@ pub use transfer_fd::Fds;
use pingora_error::{Error, ErrorType, Result};
/* time to wait before exiting the program
this is the graceful period for all existing session to finish */
/* Time to wait before exiting the program.
This is the graceful period for all existing sessions to finish */
const EXIT_TIMEOUT: u64 = 60 * 5;
/* time to wait before shutting down listening sockets
this is the graceful period for the new service to get ready */
/* Time to wait before shutting down listening sockets.
This is the graceful period for the new service to get ready */
const CLOSE_TIMEOUT: u64 = 5;
enum ShutdownType {

View file

@ -95,7 +95,7 @@ impl Fds {
}
fn serialize_vec_string(vec_string: &[String], mut buf: &mut [u8]) -> usize {
// There are many way to do this. serde is probably the way to go
// There are many ways to do this. Serde is probably the way to go
// But let's start with something simple: space separated strings
let joined = vec_string.join(" ");
// TODO: check the buf is large enough

View file

@ -64,7 +64,7 @@ pub trait Peer: Display + Clone {
fn sni(&self) -> &str;
/// To decide whether a [`Peer`] can use the connection established by another [`Peer`].
///
/// The connection to two peers are considered reusable to each other if their reuse hashes are
/// The connections to two peers are considered reusable to each other if their reuse hashes are
/// the same
fn reuse_hash(&self) -> u64;
/// Get the proxy setting to connect to the remote server

View file

@ -148,7 +148,7 @@ impl RequestHeader {
/// Append the header name and value to `self`.
///
/// If there are already some header under the same name, a new value will be added without
/// If there are already some headers under the same name, a new value will be added without
/// any others being removed.
pub fn append_header(
&mut self,
@ -377,7 +377,7 @@ impl ResponseHeader {
/// Append the header name and value to `self`.
///
/// If there are already some header under the same name, a new value will be added without
/// If there are already some headers under the same name, a new value will be added without
/// any others being removed.
pub fn append_header(
&mut self,

View file

@ -101,7 +101,7 @@ struct Point {
hash: u32,
}
// We only want to compare the hash when sorting so we implement these traits by hand.
// We only want to compare the hash when sorting, so we implement these traits by hand.
impl Ord for Point {
fn cmp(&self, other: &Self) -> Ordering {
self.hash.cmp(&other.hash)
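The hunk cuts off after `Ord`; the full set of by-hand impls this comment describes looks like the following sketch, where `hash` is the only field compared (the payload field name is assumed):

```rust
use std::cmp::Ordering;

struct Point {
    node: u32, // payload; deliberately ignored when comparing
    hash: u32,
}

impl Ord for Point {
    fn cmp(&self, other: &Self) -> Ordering {
        self.hash.cmp(&other.hash)
    }
}

impl PartialOrd for Point {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl PartialEq for Point {
    fn eq(&self, other: &Self) -> bool {
        self.hash == other.hash
    }
}

impl Eq for Point {}
```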

View file

@ -29,7 +29,7 @@ pub struct Rate {
// 2 slots so that we use one to collect the current events and the other to report rate
red_slot: Estimator,
blue_slot: Estimator,
red_or_blue: AtomicBool, // true: current slot is red, otherwise blue
red_or_blue: AtomicBool, // true: the current slot is red, otherwise blue
start: Instant,
// Use u64 below instead of Instant because we want atomic operation
reset_interval_ms: u64, // the time interval to reset `current` and move it to `previous`
@ -160,6 +160,6 @@ mod tests {
// second: 3
sleep(Duration::from_secs(1));
assert_eq!(r.rate(&key), 0f64); // no event observed in the past 2 second
assert_eq!(r.rate(&key), 0f64); // no event observed in the past 2 seconds
}
}
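The red/blue slots above implement a rolling rate estimate: one slot collects the current interval's events while the other reports the previous interval, and the roles flip every `reset_interval_ms`. A single-threaded sketch of the idea (the real version uses atomics and per-key `Estimator`s):

```rust
use std::time::{Duration, Instant};

struct TwoSlotRate {
    red: u64,
    blue: u64,
    red_is_current: bool, // true: collect into red, report from blue
    interval: Duration,
    last_flip: Instant,
}

impl TwoSlotRate {
    fn observe(&mut self, events: u64) {
        self.maybe_flip();
        if self.red_is_current { self.red += events } else { self.blue += events }
    }

    fn rate(&mut self) -> f64 {
        self.maybe_flip();
        // report the last *complete* interval, not the partial current one
        let previous = if self.red_is_current { self.blue } else { self.red };
        previous as f64 / self.interval.as_secs_f64()
    }

    fn maybe_flip(&mut self) {
        let elapsed = self.last_flip.elapsed();
        if elapsed >= self.interval * 2 {
            // idle for more than a full interval: both slots are stale
            self.red = 0;
            self.blue = 0;
            self.red_is_current = true;
            self.last_flip = Instant::now();
        } else if elapsed >= self.interval {
            // swap roles; the new current slot starts from zero
            self.red_is_current = !self.red_is_current;
            if self.red_is_current { self.red = 0 } else { self.blue = 0 }
            self.last_flip = Instant::now();
        }
    }
}
```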

View file

@ -42,9 +42,9 @@ pub trait HealthCheck {
///
/// This health check checks if a TCP (or TLS) connection can be established to a given backend.
pub struct TcpHealthCheck {
/// Number of successful check to flip from unhealthy to healthy.
/// Number of successful checks to flip from unhealthy to healthy.
pub consecutive_success: usize,
/// Number of failed check to flip from healthy to unhealthy.
/// Number of failed checks to flip from healthy to unhealthy.
pub consecutive_failure: usize,
/// How to connect to the backend.
///
@ -52,7 +52,7 @@ pub struct TcpHealthCheck {
/// The SocketAddr of `peer_template` is just a placeholder which will be replaced by the
/// actual address of the backend when the health check runs.
///
/// By default this check will try to establish a TCP connection. When the `sni` field is
/// By default, this check will try to establish a TCP connection. When the `sni` field is
/// set, it will also try to establish a TLS connection on top of the TCP connection.
pub peer_template: BasicPeer,
connector: TransportConnector,
@ -118,9 +118,9 @@ type Validator = Box<dyn Fn(&ResponseHeader) -> Result<()> + Send + Sync>;
///
/// This health check checks if it can receive the expected HTTP(s) response from the given backend.
pub struct HttpHealthCheck {
/// Number of successful check to flip from unhealthy to healthy.
/// Number of successful checks to flip from unhealthy to healthy.
pub consecutive_success: usize,
/// Number of failed check to flip from healthy to unhealthy.
/// Number of failed checks to flip from healthy to unhealthy.
pub consecutive_failure: usize,
/// How to connect to the backend.
///
@ -157,7 +157,7 @@ impl HttpHealthCheck {
/// * consecutive_success: 1
/// * consecutive_failure: 1
/// * reuse_connection: false
/// * validator: `None`, any 200 response is consider successful
/// * validator: `None`, any 200 response is considered successful
pub fn new(host: &str, tls: bool) -> Self {
let mut req = RequestHeader::build("GET", b"/", None).unwrap();
req.append_header("Host", host).unwrap();
@ -241,7 +241,7 @@ impl HealthCheck for HttpHealthCheck {
struct HealthInner {
/// Whether the endpoint is healthy to serve traffic
healthy: bool,
/// Whether the endpoint is allowed to serve traffic independent from its health
/// Whether the endpoint is allowed to serve traffic independent of its health
enabled: bool,
/// The counter for stateful transition between healthy and unhealthy.
/// When [healthy] is true, this counts the number of consecutive health check failures
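The consecutive counter described here drives hysteresis between the two states: failures only count while healthy, successes only count while unhealthy, and the state flips once the relevant threshold is reached. A sketch of that transition logic (names assumed; not the actual pingora-load-balancing code):

```rust
struct HealthInner {
    healthy: bool,
    enabled: bool, // serves traffic only if healthy && enabled
    consecutive_counter: usize,
}

fn observe_check(
    h: &mut HealthInner,
    check_passed: bool,
    success_to_healthy: usize,
    failure_to_unhealthy: usize,
) -> bool {
    // A flip-worthy observation runs against the current state:
    // a failure while healthy, or a success while unhealthy.
    if check_passed != h.healthy {
        h.consecutive_counter += 1;
        let threshold = if h.healthy { failure_to_unhealthy } else { success_to_healthy };
        if h.consecutive_counter >= threshold {
            h.healthy = !h.healthy;
            h.consecutive_counter = 0;
            return true; // state flipped
        }
    } else {
        // streak broken, start over
        h.consecutive_counter = 0;
    }
    false
}
```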

View file

@ -32,7 +32,7 @@ pub trait BackendSelection {
/// Select backends for a given key.
///
/// A [BackendIter] should be returned. The first item in the iter is the first
/// choice backend. The user should continue iterate over it if the first backend
/// choice backend. The user should continue to iterate over it if the first backend
/// cannot be used due to its health or other reasons.
fn iter(self: &Arc<Self>, key: &[u8]) -> Self::Iter
where

View file

@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! An implementation of a LRU that focuses on memory efficiency, concurrency and persistence
//! An implementation of an LRU that focuses on memory efficiency, concurrency and persistence
//!
//! Features
//! - keys can have different sizes
@ -85,7 +85,7 @@ impl<T, const N: usize> Lru<T, N> {
/// Promote the key to the head of the LRU
///
/// Return `true` if the key exist.
/// Return `true` if the key exists.
pub fn promote(&self, key: u64) -> bool {
self.units[get_shard(key, N)].write().access(key)
}

View file

@ -18,7 +18,7 @@
//!
//! Features
//! - Preallocate consecutive memory, no memory fragmentation.
//! - No shrink function: for Lru cache that grows to a certain size but never shrink.
//! - No shrink function: for an LRU cache that grows to a certain size but never shrinks.
//! - Relatively fast and efficient.
// inspired by clru::FixedSizeList (Élie!)
@ -72,10 +72,10 @@ impl Nodes {
next: NULL,
data,
};
// Constrain the growth of vec: vec always double its capacity when it needs to grow
// Constrain the growth of vec: vec always doubles its capacity when it needs to grow.
// It could waste too much memory when it is already very large.
// Here we limit the memory waste to 10% once it grows beyond the cap.
// The amortized growth cost is O(n) beyond the max of the initial reserved capacity and
// The amortized growth cost is O(n) beyond the max of the initially reserved capacity and
// the cap. But this list is for limited sized LRU and we recycle released node, so
// hopefully insertions are rare beyond certain sizes
if self.data_nodes.capacity() > VEC_EXP_GROWTH_CAP
@ -129,7 +129,7 @@ pub struct LinkedList {
free: Vec<Index>, // to keep track of freed node to be used again
}
// Panic when index used as parameters are invalid
// Index returned by push_* are always valid.
// Index returned by push_* is always valid.
impl LinkedList {
/// Create a [LinkedList] with the given predicted capacity.
pub fn with_capacity(capacity: usize) -> Self {
@ -178,7 +178,7 @@ impl LinkedList {
self.node(index).map(|n| n.data)
}
// safe because index still needs to be in the range of the vec
// safe because the index still needs to be in the range of the vec
fn peek_unchecked(&self, index: Index) -> &u64 {
&self.nodes[index].data
}

View file

@ -25,7 +25,7 @@ pub use read_through::{Lookup, MultiLookup, RTCache};
#[derive(Debug, PartialEq, Eq)]
/// [CacheStatus] indicates the response type for a query.
pub enum CacheStatus {
/// The key was found in cache
/// The key was found in the cache
Hit,
/// The key was not found.
Miss,
@ -109,7 +109,7 @@ impl<K: Hash, T: Clone + Send + Sync + 'static> MemoryCache<K, T> {
/// Insert a key and value pair with an optional TTL into the cache.
///
/// An item with zero TTL of zero not inserted.
/// An item with a TTL of zero will not be inserted.
pub fn put(&self, key: &K, value: T, ttl: Option<Duration>) {
if let Some(t) = ttl {
if t.is_zero() {

View file

@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! An async read through cache where cache miss are populated via the provided
//! An async read through cache where cache misses are populated via the provided
//! async callback.
use super::{CacheStatus, MemoryCache};
@ -293,7 +293,7 @@ where
{
/// Same behavior as [RTCache::get] but for an arbitrary amount of keys.
///
/// If there are keys that are missing from cache, `multi_lookup` is invoked to populate the
/// If there are keys that are missing from the cache, `multi_lookup` is invoked to populate the
/// cache before returning the final results. This is useful if your type supports batch
/// queries.
///
@ -316,7 +316,7 @@ where
match CB::multi_lookup(&misses, extra).await {
Ok(miss_results) => {
// assert! here to prevent index panic when building results,
// final_results has full list of misses but miss_results might not
// final_results has the full list of misses but miss_results might not
assert!(
miss_results.len() == misses.len(),
"multi_lookup() failed to return the matching number of results"
@ -657,7 +657,7 @@ mod tests {
assert_eq!(resp[1].1, CacheStatus::Miss);
assert_eq!(resp[2].0, 3);
assert_eq!(resp[2].1, CacheStatus::Miss);
// all hit after a fetch
// all hits after a fetch
let resp = cache
.multi_get([1, 2, 3].iter(), None, opt1.as_ref())
.await
@ -673,7 +673,7 @@ mod tests {
#[tokio::test]
#[should_panic(expected = "multi_lookup() failed to return the matching number of results")]
async fn test_inconsistent_miss_results() {
// force empty result
// force an empty result
let opt1 = Some(ExtraOpt {
error: false,
empty: true,

View file

@ -157,7 +157,7 @@ pub fn ssl_use_second_key_share(_ssl: &mut SslRef, _enabled: bool) {}
/// Clear the error stack
///
/// SSL calls should check and clear the OpenSSL error stack. But some calls fail to do so.
/// This causes the next unrelated SSL call to fail due to the leftover errors. This function allow
/// This causes the next unrelated SSL call to fail due to the leftover errors. This function allows
/// the caller to clear the error stack before performing SSL calls to avoid this issue.
pub fn clear_error_stack() {
let _ = ErrorStack::get();
@ -203,7 +203,7 @@ pub fn is_suspended_for_cert(error: &openssl::ssl::Error) -> bool {
#[allow(clippy::mut_from_ref)]
/// Get a mutable SslRef out of SslRef, which is a missing functionality even when holding &mut SslStream
/// # Safety
/// the caller need to make sure that they hold a &mut SslStream (or other mutable ref to the Ssl)
/// the caller needs to make sure that they hold a &mut SslStream (or other types of mutable ref to the Ssl)
pub unsafe fn ssl_mut(ssl: &SslRef) -> &mut SslRef {
SslRef::from_ptr_mut(ssl.as_ptr())
}

View file

@ -121,8 +121,8 @@ impl<T> PoolNode<T> {
}
}
// This function acquires 2 locks and iterates over the entire hot queue
// But it should be fine because remove() rarely happens on a busy PoolNode
// This function acquires 2 locks and iterates over the entire hot queue.
// But it should be fine because remove() rarely happens on a busy PoolNode.
/// Remove the item associated with the id from the pool. The item is returned
/// if it is found and removed.
pub fn remove(&self, id: ID) -> Option<T> {
@ -141,7 +141,7 @@ impl<T> PoolNode<T> {
// this is the item, it is already popped
return Some(conn);
} else {
// not this item, put back to hot queue but it could also be full
// not this item, put back to hot queue, but it could also be full
self.insert(conn_id, conn);
}
} else {
@ -167,7 +167,7 @@ pub struct ConnectionPool<S> {
impl<S> ConnectionPool<S> {
/// Create a new [ConnectionPool] with a size limit.
///
/// when a connection is released to this pool, the least recently used connection will be dropped.
/// When a connection is released to this pool, the least recently used connection will be dropped.
pub fn new(size: usize) -> Self {
ConnectionPool {
pool: RwLock::new(HashMap::with_capacity(size)), // this is oversized since some connections will have the same key
@ -187,7 +187,7 @@ impl<S> ConnectionPool<S> {
{
// write lock section
let mut pool = self.pool.write();
// check again since another task might already added it
// check again since another task might have already added it
if let Some(v) = pool.get(&key) {
return (*v).clone();
}
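The "check again" comment marks a classic double-checked pattern: look up under the read lock, and if that misses, re-check under the write lock before inserting, because another task may have won the race in between. A generic sketch with `std::sync::RwLock` (the real code uses its own pool and LRU types):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, RwLock};

fn get_or_create<K, V>(pool: &RwLock<HashMap<K, Arc<V>>>, key: K, make: impl FnOnce() -> V) -> Arc<V>
where
    K: Hash + Eq,
{
    // fast path: shared read lock
    if let Some(v) = pool.read().unwrap().get(&key) {
        return v.clone();
    }
    // slow path: take the exclusive write lock...
    let mut pool = pool.write().unwrap();
    // ...and check again, since another task might have already added it
    if let Some(v) = pool.get(&key) {
        return v.clone();
    }
    let v = Arc::new(make());
    pool.insert(key, v.clone());
    v
}
```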
@ -198,7 +198,7 @@ impl<S> ConnectionPool<S> {
}
}
// only remove from pool because lru already removed it
// only remove from the pool because lru already removed it
fn pop_evicted(&self, meta: &ConnectionMeta) {
let pool_node = {
let pool = self.pool.read();
@ -309,7 +309,7 @@ impl<S> ConnectionPool<S> {
/// Passively wait to close the connection after the timeout
///
/// If this connection is not being picked up or evicted before the timeout is reached, this
/// function will removed it from the pool and close the connection.
/// function will remove it from the pool and close the connection.
pub async fn idle_timeout(
&self,
meta: &ConnectionMeta,

View file

@ -114,7 +114,7 @@ impl<SV> HttpProxy<SV> {
// phase 1 read request header
let res = tokio::select! {
biased; // biased select is cheaper and we don't want to drop already buffered requests
biased; // biased select is cheaper, and we don't want to drop already buffered requests
res = downstream_session.read_request() => { res }
_ = self.shutdown.notified() => {
// service shutting down, dropping the connection to stop more req from coming in
@ -416,7 +416,7 @@ impl<SV> HttpProxy<SV> {
{
Ok(proxy_to_upstream) => {
if !proxy_to_upstream {
// The hook can choose to write its own response, but if it doesn't we respond
// The hook can choose to write its own response, but if it doesn't, we respond
// with a generic 502
if session.response_written().is_none() {
match session.write_response_header_ref(&BAD_GATEWAY).await {
@ -489,7 +489,7 @@ impl<SV> HttpProxy<SV> {
}
// serve stale if error
// check both error and cache before calling the function because await is not cheap
// Check both error and cache before calling the function because await is not cheap
let serve_stale_result = if proxy_error.is_some() && session.cache.can_serve_stale_error() {
self.handle_stale_if_error(&mut session, &mut ctx, proxy_error.as_ref().unwrap())
.await
@ -562,7 +562,7 @@ where
None => return, // bad request
};
// no real downstream to keepalive but it doesn't matter what is set here because at the end
// no real downstream to keepalive, but it doesn't matter what is set here because at the end
// of this fn the dummy connection will be dropped
session.set_keepalive(None);
@ -606,7 +606,7 @@ where
}
fn http_cleanup(&self) {
// Notify all keepalived request blocking on read_request() to abort
// Notify all keepalived requests blocking on read_request() to abort
self.shutdown.notify_waiters();
// TODO: impl shutting down flag so that we don't need to read stack.is_shutting_down()

View file

@ -269,7 +269,7 @@ impl<SV> HttpProxy<SV> {
Ok(b) => b,
Err(e) => {
if serve_from_cache.is_miss() {
// ignore downstream error so that upstream can continue write cache
// ignore downstream error so that the upstream can continue to write to the cache
downstream_state.to_errored();
warn!(
"Downstream Error ignored during caching: {}, {}",

View file

@ -247,7 +247,7 @@ impl<SV> HttpProxy<SV> {
Ok(b) => b,
Err(e) => {
if serve_from_cache.is_miss() {
// ignore downstream error so that upstream can continue write cache
// ignore downstream error so that the upstream can continue to write to the cache
downstream_state.to_errored();
warn!(
"Downstream Error ignored during caching: {}, {}",

View file

@ -20,7 +20,7 @@ use pingora_cache::{key::HashBinary, CacheKey, CacheMeta, RespCacheable, RespCac
/// The methods in [ProxyHttp] are filters/callbacks which will be performed on all requests at their
/// particular stage (if applicable).
///
/// If any of the filters returns [Result::Err], the request will fail and the error will be logged.
/// If any of the filters returns [Result::Err], the request will fail, and the error will be logged.
#[cfg_attr(not(doc_async_trait), async_trait)]
pub trait ProxyHttp {
/// The per request object to share state across the different filters
@ -29,7 +29,7 @@ pub trait ProxyHttp {
/// Define how the `ctx` should be created.
fn new_ctx(&self) -> Self::CTX;
/// Define where the proxy should sent the request to.
/// Define where the proxy should send the request to.
///
/// The returned [HttpPeer] contains the information regarding where and how this request should
/// be forwarded to.
@ -44,7 +44,7 @@ pub trait ProxyHttp {
/// In this phase, users can parse, validate, rate limit, perform access control and/or
/// return a response for this request.
///
/// If the user already sent a response to this request, a `Ok(true)` should be returned so that
/// If the user already sent a response to this request, an `Ok(true)` should be returned so that
/// the proxy would exit. The proxy continues to the next phases when `Ok(false)` is returned.
///
/// By default this filter does nothing and returns `Ok(false)`.
@ -156,7 +156,7 @@ pub trait ProxyHttp {
/// Modify the response header from the upstream
///
/// The modification is before caching so any change here will be stored in cache if enabled.
/// The modification is before caching, so any change here will be stored in the cache if enabled.
///
/// Responses served from cache won't trigger this filter. If the cache needed revalidation,
/// only the 304 from upstream will trigger the filter (though it will be merged into the
@ -172,7 +172,7 @@ pub trait ProxyHttp {
/// Modify the response header before it is send to the downstream
///
/// The modification is after caching. This filter is called for all responses including
/// responses served from cache..
/// responses served from cache.
async fn response_filter(
&self,
_session: &mut Session,
@ -351,7 +351,7 @@ pub trait ProxyHttp {
/// This callback is invoked every time a request-related error log needs to be generated
///
/// Users can define what is the important to be written about this request via the returned string.
/// Users can define what is important to be written about this request via the returned string.
fn request_summary(&self, session: &Session, _ctx: &Self::CTX) -> String {
session.as_ref().request_summary()
}
@ -359,7 +359,7 @@ pub trait ProxyHttp {
/// Whether the request should be used to invalidate(delete) the HTTP cache
///
/// - `true`: this request will be used to invalidate the cache.
/// - `false`: this request is a treated as an normal request
/// - `false`: this request is treated as a normal request
fn is_purge(&self, _session: &Session, _ctx: &Self::CTX) -> bool {
false
}

View file

@ -85,7 +85,7 @@ impl std::error::Error for Elapsed {}
///
/// The timer is created the first time the `future` is pending. This avoids unnecessary timer
/// creation and cancellation on busy IOs with a good chance to be already ready (e.g., reading
/// data from TCP where the recv buffer already has a lot data to read right away).
/// data from TCP where the recv buffer already has a lot of data to read right away).
pub fn tokio_timeout<T>(duration: Duration, future: T) -> Timeout<T, TokioTimeout>
where
T: Future,

View file

@ -126,7 +126,7 @@ impl TimerManager {
Self::default()
}
// this thread sleep a resolution time and fire all Timers that a due to fire
// This thread sleeps for a resolution time and then fires all the timers that are due to fire
pub(crate) fn clock_thread(&self) {
loop {
std::thread::sleep(RESOLUTION_DURATION);
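The loop body is truncated here. A sketch of what a resolution-based clock thread like this does, with hypothetical fields (timers bucketed by due time; every tick, fire everything that has come due):

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

const RESOLUTION_DURATION: Duration = Duration::from_millis(100);

// Hypothetical manager: timers keyed by their due time.
struct TimerManager {
    timers: Mutex<BTreeMap<Instant, Vec<Box<dyn FnOnce() + Send>>>>,
}

impl TimerManager {
    fn clock_thread(&self) {
        loop {
            std::thread::sleep(RESOLUTION_DURATION);
            let now = Instant::now();
            // drain every timer whose due time has passed, then fire outside the lock
            let due: Vec<Box<dyn FnOnce() + Send>> = {
                let mut timers = self.timers.lock().unwrap();
                let keys: Vec<Instant> = timers.range(..=now).map(|(k, _)| *k).collect();
                keys.into_iter().flat_map(|k| timers.remove(&k).unwrap()).collect()
            };
            for fire in due {
                fire();
            }
        }
    }
}
```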