Cluster configuration (proto)

config.cluster.v3.Cluster

[config.cluster.v3.Cluster proto]

Configuration for a single upstream cluster.

{
  "transport_socket_matches": [],
  "name": ...,
  "alt_stat_name": ...,
  "type": ...,
  "cluster_type": {...},
  "eds_cluster_config": {...},
  "connect_timeout": {...},
  "per_connection_buffer_limit_bytes": {...},
  "lb_policy": ...,
  "load_assignment": {...},
  "health_checks": [],
  "max_requests_per_connection": {...},
  "circuit_breakers": {...},
  "upstream_http_protocol_options": {...},
  "common_http_protocol_options": {...},
  "http_protocol_options": {...},
  "http2_protocol_options": {...},
  "typed_extension_protocol_options": {...},
  "dns_refresh_rate": {...},
  "dns_jitter": {...},
  "dns_failure_refresh_rate": {...},
  "respect_dns_ttl": ...,
  "dns_lookup_family": ...,
  "dns_resolvers": [],
  "use_tcp_for_dns_lookups": ...,
  "dns_resolution_config": {...},
  "typed_dns_resolver_config": {...},
  "wait_for_warm_on_init": {...},
  "outlier_detection": {...},
  "cleanup_interval": {...},
  "upstream_bind_config": {...},
  "lb_subset_config": {...},
  "ring_hash_lb_config": {...},
  "maglev_lb_config": {...},
  "original_dst_lb_config": {...},
  "least_request_lb_config": {...},
  "round_robin_lb_config": {...},
  "common_lb_config": {...},
  "transport_socket": {...},
  "metadata": {...},
  "protocol_selection": ...,
  "upstream_connection_options": {...},
  "close_connections_on_host_health_failure": ...,
  "ignore_health_on_host_removal": ...,
  "filters": [],
  "load_balancing_policy": {...},
  "lrs_report_endpoint_metrics": [],
  "track_timeout_budgets": ...,
  "upstream_config": {...},
  "track_cluster_stats": {...},
  "preconnect_policy": {...},
  "connection_pool_per_downstream_connection": ...
}
transport_socket_matches

(repeated config.cluster.v3.Cluster.TransportSocketMatch) Configuration to use different transport sockets for different endpoints. The entry of envoy.transport_socket_match in the LbEndpoint.Metadata is used to match against the transport sockets as they appear in the list. If a match is not found, the search continues in LocalityLbEndpoints.Metadata. The first match is used. For example, with the following match criteria:

transport_socket_matches:
- name: "enableMTLS"
  match:
    acceptMTLS: true
  transport_socket:
    name: envoy.transport_sockets.tls
    config: { ... } # tls socket configuration
- name: "defaultToPlaintext"
  match: {}
  transport_socket:
    name: envoy.transport_sockets.raw_buffer

Connections to endpoints whose metadata under envoy.transport_socket_match contains the “acceptMTLS”: “true” key/value pair use the “enableMTLS” socket configuration.

If a socket match with empty match criteria is provided, it always matches any endpoint; the “defaultToPlaintext” socket match in the case above is an example.

If an endpoint metadata’s value under envoy.transport_socket_match does not match any TransportSocketMatch, the locality metadata is then checked for a match. Barring any matches in the endpoint or locality metadata, the socket configuration falls back to the tls_context or transport_socket specified in this cluster.

This field allows gradual and flexible transport socket configuration changes.

The metadata of endpoints in EDS can indicate transport socket capabilities. For example, an endpoint’s metadata can have two key/value pairs, “acceptMTLS”: “true” and “acceptPlaintext”: “true”, while other endpoints that only accept plaintext traffic carry just “acceptPlaintext”: “true”.

Then the xDS server can configure CDS for a client, Envoy A, to send mutual TLS traffic to endpoints with “acceptMTLS”: “true” by adding a corresponding TransportSocketMatch to this field. Other client Envoys receive CDS without transport_socket_match set, and still send plaintext traffic to the same cluster.

This field can be used to specify custom transport socket configurations for health checks by adding matching key/value pairs in a health check’s transport socket match criteria field.
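
For reference, a sketch of an EDS endpoint whose metadata would select the “enableMTLS” socket above (the address and port are placeholders):

endpoints:
- lb_endpoints:
  - endpoint:
      address:
        socket_address:
          address: 10.0.0.1
          port_value: 8443
    metadata:
      filter_metadata:
        envoy.transport_socket_match:
          acceptMTLS: true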

name

(string, REQUIRED) Supplies the name of the cluster which must be unique across all clusters. The cluster name is used when emitting statistics if alt_stat_name is not provided. Any : in the cluster name will be converted to _ when emitting statistics.

alt_stat_name

(string) An optional alternative to the cluster name to be used for observability. This name is used when emitting stats for the cluster and when access logging the cluster name. It will appear as additional information in a cluster’s configuration dump as observability_name and as an additional tag “upstream_cluster.name” while tracing. Note: Any : in the name will be converted to _ when emitting statistics. This should not be confused with the Router Filter header x-envoy-upstream-alt-stat-name.

type

(config.cluster.v3.Cluster.DiscoveryType) The service discovery type to use for resolving the cluster.

Only one of type, cluster_type may be set.

cluster_type

(config.cluster.v3.Cluster.CustomClusterType) The custom cluster type.

Only one of type, cluster_type may be set.

eds_cluster_config

(config.cluster.v3.Cluster.EdsClusterConfig) Configuration to use for EDS updates for the Cluster.

connect_timeout

(Duration) The timeout for new network connections to hosts in the cluster. If not set, a default value of 5s will be used.

per_connection_buffer_limit_bytes

(UInt32Value) Soft limit on the size of the cluster’s connection read and write buffers. If unspecified, an implementation defined default is applied (1MiB).

Attention

This field should be configured in the presence of untrusted upstreams.

Example configuration for untrusted environments:

per_connection_buffer_limit_bytes: 32768
lb_policy

(config.cluster.v3.Cluster.LbPolicy) The load balancer type to use when picking a host in the cluster.

load_assignment

(config.endpoint.v3.ClusterLoadAssignment) Setting this is required for specifying members of STATIC, STRICT_DNS or LOGICAL_DNS clusters. This field supersedes the hosts field in the v2 API.

Attention

Setting this allows non-EDS cluster types to contain embedded EDS equivalent endpoint assignments.
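
For illustration, a minimal sketch of a STATIC cluster populated through load_assignment (the cluster name, address and port are placeholders):

name: some_service
type: STATIC
connect_timeout: 1s
load_assignment:
  cluster_name: some_service
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 127.0.0.1
            port_value: 8080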

health_checks

(repeated config.core.v3.HealthCheck) Optional active health checking configuration for the cluster. If no configuration is specified no health checking will be done and all cluster members will be considered healthy at all times.

max_requests_per_connection

(UInt32Value) Optional maximum requests for a single upstream connection. This parameter is respected by both the HTTP/1.1 and HTTP/2 connection pool implementations. If not specified, there is no limit. Setting this parameter to 1 will effectively disable keep alive.

Attention

This field has been deprecated in favor of the max_requests_per_connection field in HttpProtocolOptions.

circuit_breakers

(config.cluster.v3.CircuitBreakers) Optional circuit breaking for the cluster.

upstream_http_protocol_options

(config.core.v3.UpstreamHttpProtocolOptions) HTTP protocol options that are applied only to upstream HTTP connections. These options apply to all HTTP versions. This has been deprecated in favor of upstream_http_protocol_options in the http_protocol_options message. upstream_http_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

common_http_protocol_options

(config.core.v3.HttpProtocolOptions) Additional options when handling HTTP requests upstream. These options will be applicable to both HTTP1 and HTTP2 requests. This has been deprecated in favor of common_http_protocol_options in the http_protocol_options message. common_http_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

http_protocol_options

(config.core.v3.Http1ProtocolOptions) Additional options when handling HTTP1 requests. This has been deprecated in favor of http_protocol_options fields in the http_protocol_options message. http_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

http2_protocol_options

(config.core.v3.Http2ProtocolOptions) Even if default HTTP2 protocol options are desired, this field must be set so that Envoy will assume that the upstream supports HTTP/2 when making new HTTP connection pool connections. Currently, Envoy only supports prior knowledge for upstream connections. Even if TLS is used with ALPN, http2_protocol_options must be specified. As an aside this allows HTTP/2 connections to happen over plain text. This has been deprecated in favor of http2_protocol_options fields in the http_protocol_options message. http2_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

Attention

This field should be configured in the presence of untrusted upstreams.

Example configuration for untrusted environments:

http2_protocol_options:
  initial_connection_window_size: 1048576 # 1 MiB
  initial_stream_window_size: 65536 # 64 KiB
typed_extension_protocol_options

(repeated map<string, Any>) The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections. The key should match the extension filter name, such as “envoy.filters.network.thrift_proxy”. See the extension’s documentation for details on specific options.
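
For example, assuming the upstream HTTP protocol options extension, HTTP/2 for upstream connections can be enabled through this field rather than the deprecated fields above; a sketch:

typed_extension_protocol_options:
  envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
    "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
    explicit_http_config:
      http2_protocol_options: {}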

dns_refresh_rate

(Duration) If the DNS refresh rate is specified and the cluster type is either STRICT_DNS, or LOGICAL_DNS, this value is used as the cluster’s DNS refresh rate. The value configured must be at least 1ms. If this setting is not specified, the value defaults to 5000ms. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored. This field is deprecated in favor of using the cluster_type extension point and configuring it with DnsCluster. If cluster_type is configured with DnsCluster, this field will be ignored.

dns_jitter

(Duration) DNS jitter can be optionally specified if the cluster type is either STRICT_DNS, or LOGICAL_DNS. DNS jitter causes the cluster to refresh DNS entries later by a random amount of time to avoid a stampede of DNS requests. This value sets the upper bound (exclusive) for the random amount. There will be no jitter if this value is omitted. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored. This field is deprecated in favor of using the cluster_type extension point and configuring it with DnsCluster. If cluster_type is configured with DnsCluster, this field will be ignored.

dns_failure_refresh_rate

(config.cluster.v3.Cluster.RefreshRate) If the DNS failure refresh rate is specified and the cluster type is either STRICT_DNS, or LOGICAL_DNS, this is used as the cluster’s DNS refresh rate when requests are failing. If this setting is not specified, the failure refresh rate defaults to the DNS refresh rate. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored. This field is deprecated in favor of using the cluster_type extension point and configuring it with DnsCluster. If cluster_type is configured with DnsCluster, this field will be ignored.
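
A brief sketch of these DNS refresh settings on a STRICT_DNS cluster (values are illustrative):

type: STRICT_DNS
dns_refresh_rate: 5s
dns_jitter: 1s
dns_failure_refresh_rate:
  base_interval: 2s
  max_interval: 10s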

respect_dns_ttl

(bool) Optional configuration for setting cluster’s DNS refresh rate. If the value is set to true, cluster’s DNS refresh rate will be set to resource record’s TTL which comes from DNS resolution. This field is deprecated in favor of using the cluster_type extension point and configuring it with DnsCluster. If cluster_type is configured with DnsCluster, this field will be ignored.

dns_lookup_family

(config.cluster.v3.Cluster.DnsLookupFamily) The DNS IP address resolution policy. If this setting is not specified, the value defaults to AUTO. For logical and strict DNS clusters, this field is deprecated in favor of using the cluster_type extension point and configuring it with DnsCluster. If cluster_type is configured with DnsCluster, this field will be ignored.

dns_resolvers

(repeated config.core.v3.Address) If DNS resolvers are specified and the cluster type is either STRICT_DNS, or LOGICAL_DNS, this value is used to specify the cluster’s dns resolvers. If this setting is not specified, the value defaults to the default resolver, which uses /etc/resolv.conf for configuration. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored. This field is deprecated in favor of dns_resolution_config which aggregates all of the DNS resolver configuration in a single message.

use_tcp_for_dns_lookups

(bool) Always use TCP queries instead of UDP queries for DNS lookups. This field is deprecated in favor of dns_resolution_config which aggregates all of the DNS resolver configuration in a single message.

dns_resolution_config

(config.core.v3.DnsResolutionConfig) DNS resolution configuration which includes the underlying dns resolver addresses and options. This field is deprecated in favor of typed_dns_resolver_config.

typed_dns_resolver_config

(config.core.v3.TypedExtensionConfig) DNS resolver type configuration extension. This extension can be used to configure c-ares, apple, or any other DNS resolver types and the related parameters. For example, an object of CaresDnsResolverConfig can be packed into this typed_dns_resolver_config. This configuration replaces the dns_resolution_config configuration. During the transition period when both dns_resolution_config and typed_dns_resolver_config exists, when typed_dns_resolver_config is in place, Envoy will use it and ignore dns_resolution_config. When typed_dns_resolver_config is missing, the default behavior is in place. Also note that this field is deprecated for logical dns and strict dns clusters and will be ignored when cluster_type is configured with DnsCluster.
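
As a sketch, assuming the c-ares resolver extension, packing a CaresDnsResolverConfig into this field might look like the following (the resolver address is a placeholder):

typed_dns_resolver_config:
  name: envoy.network.dns_resolver.cares
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.network.dns_resolver.cares.v3.CaresDnsResolverConfig
    resolvers:
    - socket_address:
        address: 8.8.8.8
        port_value: 53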

Tip

This extension category has the following known extensions:

wait_for_warm_on_init

(BoolValue) Optional configuration for having cluster readiness block on warm-up. Currently, only applicable for STRICT_DNS, or LOGICAL_DNS, or Redis Cluster. If true, cluster readiness blocks on warm-up. If false, the cluster will complete initialization whether or not warm-up has completed. Defaults to true.

outlier_detection

(config.cluster.v3.OutlierDetection) If specified, outlier detection will be enabled for this upstream cluster. Each of the configuration values can be overridden via runtime values.

cleanup_interval

(Duration) The interval for removing stale hosts from a cluster of type ORIGINAL_DST. Hosts are considered stale if they have not been used as upstream destinations during this interval. New hosts are added to original destination clusters on demand as new connections are redirected to Envoy, causing the number of hosts in the cluster to grow over time. Hosts that are not stale (they are actively used as destinations) are kept in the cluster, which allows connections to them to remain open, saving the latency that would otherwise be spent on opening new connections. If this setting is not specified, the value defaults to 5000ms. For cluster types other than ORIGINAL_DST this setting is ignored.

upstream_bind_config

(config.core.v3.BindConfig) Optional configuration used to bind newly established upstream connections. This overrides any bind_config specified in the bootstrap proto. If the address and port are empty, no bind will be performed.

lb_subset_config

(config.cluster.v3.Cluster.LbSubsetConfig) Configuration for load balancing subsetting.

ring_hash_lb_config

(config.cluster.v3.Cluster.RingHashLbConfig) Optional configuration for the Ring Hash load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config, round_robin_lb_config may be set.

maglev_lb_config

(config.cluster.v3.Cluster.MaglevLbConfig) Optional configuration for the Maglev load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config, round_robin_lb_config may be set.

original_dst_lb_config

(config.cluster.v3.Cluster.OriginalDstLbConfig) Optional configuration for the Original Destination load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config, round_robin_lb_config may be set.

least_request_lb_config

(config.cluster.v3.Cluster.LeastRequestLbConfig) Optional configuration for the LeastRequest load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config, round_robin_lb_config may be set.

round_robin_lb_config

(config.cluster.v3.Cluster.RoundRobinLbConfig) Optional configuration for the RoundRobin load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config, round_robin_lb_config may be set.

common_lb_config

(config.cluster.v3.Cluster.CommonLbConfig) Common configuration for all load balancer implementations.

transport_socket

(config.core.v3.TransportSocket) Optional custom transport socket implementation to use for upstream connections. To set up TLS, set a transport socket with name envoy.transport_sockets.tls and an UpstreamTlsContext in the typed_config. If no transport socket configuration is specified, new connections will be set up with plaintext.
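
A sketch of the TLS setup described above (the SNI value is a placeholder):

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    sni: example.com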

metadata

(config.core.v3.Metadata) The Metadata field can be used to provide additional information about the cluster. It can be used for stats, logging, and varying filter behavior. Fields should use reverse DNS notation to denote which entity within Envoy will need the information. For instance, if the metadata is intended for the Router filter, the filter name should be specified as envoy.filters.http.router.

protocol_selection

(config.cluster.v3.Cluster.ClusterProtocolSelection) Determines how Envoy selects the protocol used to speak to upstream hosts. This has been deprecated in favor of setting explicit protocol selection in the http_protocol_options message. http_protocol_options can be set via the cluster’s extension_protocol_options.

upstream_connection_options

(config.cluster.v3.UpstreamConnectionOptions) Optional options for upstream connections.

close_connections_on_host_health_failure

(bool) If an upstream host becomes unhealthy (as determined by the configured health checks or outlier detection), immediately close all connections to the failed host.

Note

This is currently only supported for connections created by tcp_proxy.

Note

The current implementation of this feature closes all connections immediately when the unhealthy status is detected. If there are a large number of connections open to an upstream host that becomes unhealthy, Envoy may spend a substantial amount of time exclusively closing these connections, and not processing any other traffic.

ignore_health_on_host_removal

(bool) If set to true, Envoy will ignore the health value of a host when processing its removal from service discovery. This means that if active health checking is used, Envoy will not wait for the endpoint to go unhealthy before removing it.

filters

(repeated config.cluster.v3.Filter) An (optional) network filter chain, listed in the order the filters should be applied. The chain will be applied to all outgoing connections that Envoy makes to the upstream servers of this cluster.

load_balancing_policy

(config.cluster.v3.LoadBalancingPolicy) If this field is set and is supported by the client, it will supersede the value of lb_policy.

lrs_report_endpoint_metrics

(repeated string) A list of metric names from ORCA load reports to propagate to LRS.

If not specified, then ORCA load reports will not be propagated to LRS.

For map fields in the ORCA proto, the string will be of the form <map_field_name>.<map_key>. For example, the string named_metrics.foo will mean to look for the key foo in the ORCA named_metrics field.

The special map key * means to report all entries in the map (e.g., named_metrics.* means to report all entries in the ORCA named_metrics field). Note that this should be used only with trusted backends.

The metric names in LRS will follow the same semantics as this field. In other words, if this field contains named_metrics.foo, then the LRS load report will include the data with that same string as the key.

track_timeout_budgets

(bool) If track_timeout_budgets is true, the timeout budget histograms will be published for each request. These show what percentage of a request’s per try and global timeout was used. A value of 0 would indicate that none of the timeout was used or that the timeout was infinite. A value of 100 would indicate that the request took the entirety of the timeout given to it.

Attention

This field has been deprecated in favor of timeout_budgets, part of track_cluster_stats.

upstream_config

(config.core.v3.TypedExtensionConfig) Optional customization and configuration of upstream connection pool, and upstream type.

Currently this field only applies for HTTP traffic but is designed for eventual use for custom TCP upstreams.

For HTTP traffic, Envoy will generally take downstream HTTP and send it upstream as upstream HTTP, using the HTTP connection pool and the codec from http2_protocol_options.

For routes where CONNECT termination is configured, Envoy will take downstream CONNECT requests and forward the CONNECT payload upstream over raw TCP using the tcp connection pool.

The default pool used is the generic connection pool which creates the HTTP upstream for most HTTP requests, and the TCP upstream if CONNECT termination is configured.

If users desire custom connection pool or upstream behavior, for example terminating CONNECT only if a custom filter indicates it is appropriate, the custom factories can be registered and configured here.

track_cluster_stats

(config.cluster.v3.TrackClusterStats) Configuration to track optional cluster stats.

preconnect_policy

(config.cluster.v3.Cluster.PreconnectPolicy) Preconnect configuration for this cluster.

connection_pool_per_downstream_connection

(bool) If connection_pool_per_downstream_connection is true, the cluster will use a separate connection pool for every downstream connection.

config.cluster.v3.Cluster.TransportSocketMatch

[config.cluster.v3.Cluster.TransportSocketMatch proto]

TransportSocketMatch specifies what transport socket config will be used when the match conditions are satisfied.

{
  "name": ...,
  "match": {...},
  "transport_socket": {...}
}
name

(string, REQUIRED) The name of the match, used in stats generation.

match

(Struct) Optional metadata match criteria. The connection to the endpoint with metadata matching what is set in this field will use the transport socket configuration specified here. The endpoint’s metadata entry in envoy.transport_socket_match is used to match against the values specified in this field.

transport_socket

(config.core.v3.TransportSocket) The configuration of the transport socket.

config.cluster.v3.Cluster.CustomClusterType

[config.cluster.v3.Cluster.CustomClusterType proto]

Extended cluster type.

{
  "name": ...,
  "typed_config": {...}
}
name

(string, REQUIRED) The type of the cluster to instantiate. The name must match a supported cluster type.

typed_config

(Any) Cluster specific configuration which depends on the cluster being instantiated. See the supported cluster for further documentation.

config.cluster.v3.Cluster.EdsClusterConfig

[config.cluster.v3.Cluster.EdsClusterConfig proto]

Only valid when discovery type is EDS.

{
  "eds_config": {...},
  "service_name": ...
}
eds_config

(config.core.v3.ConfigSource) Configuration for the source of EDS updates for this Cluster.

service_name

(string) Optional alternative to the cluster name to present to EDS. This does not have the same restrictions as the cluster name, i.e. it may be arbitrary length. This may be an xdstp:// URL.

config.cluster.v3.Cluster.LbSubsetConfig

[config.cluster.v3.Cluster.LbSubsetConfig proto]

Optionally divide the endpoints in this cluster into subsets defined by endpoint metadata and selected by route and weighted cluster metadata.

{
  "fallback_policy": ...,
  "default_subset": {...},
  "subset_selectors": [],
  "locality_weight_aware": ...,
  "scale_locality_weight": ...,
  "panic_mode_any": ...,
  "list_as_any": ...,
  "metadata_fallback_policy": ...
}
fallback_policy

(config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetFallbackPolicy) The behavior used when no endpoint subset matches the selected route’s metadata. The value defaults to NO_FALLBACK.

default_subset

(Struct) Specifies the default subset of endpoints used during fallback if fallback_policy is DEFAULT_SUBSET. Each field in default_subset is compared to the matching LbEndpoint.Metadata under the envoy.lb namespace. It is valid for no hosts to match, in which case the behavior is the same as a fallback_policy of NO_FALLBACK.

subset_selectors

(repeated config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector) For each entry, LbEndpoint.Metadata’s envoy.lb namespace is traversed and a subset is created for each unique combination of key and value. For example:

{ "subset_selectors": [
    { "keys": [ "version" ] },
    { "keys": [ "stage", "hardware_type" ] }
]}

A subset is matched when the metadata from the selected route and weighted cluster contains the same keys and values as the subset’s metadata. The same host may appear in multiple subsets.
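
To illustrate, a route or weighted cluster selecting the second subset above could carry metadata_match under the envoy.lb namespace such as the following sketch (the values are placeholders):

metadata_match:
  filter_metadata:
    envoy.lb:
      stage: canary
      hardware_type: c64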

locality_weight_aware

(bool) If true, routing to subsets will take into account the localities and locality weights of the endpoints when making the routing decision.

There are some potential pitfalls associated with enabling this feature, as the resulting traffic split after applying both a subset match and locality weights might be undesirable.

Consider for example a situation in which you have 50/50 split across two localities X/Y which have 100 hosts each without subsetting. If the subset LB results in X having only 1 host selected but Y having 100, then a lot more load is being dumped on the single host in X than originally anticipated in the load balancing assignment delivered via EDS.

scale_locality_weight

(bool) When used with locality_weight_aware, scales the weight of each locality by the ratio of hosts in the subset vs hosts in the original subset. This aims to even out the load going to an individual locality if said locality is disproportionately affected by the subset predicate.

panic_mode_any

(bool) If true, when a fallback policy is configured and its corresponding subset fails to find a host this will cause any host to be selected instead.

This is useful when using the default subset as the fallback policy, given the default subset might become empty. With this option enabled, if that happens the LB will attempt to select a host from the entire cluster.

list_as_any

(bool) If true, metadata specified for a metadata key will be matched against the corresponding endpoint metadata if the endpoint metadata matches the value exactly OR it is a list value and any of the elements in the list matches the criteria.

metadata_fallback_policy

(config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetMetadataFallbackPolicy) Fallback mechanism that allows trying different route metadata until a host is found. If the load balancing process, including all of its mechanisms (like fallback_policy), fails to select a host, this policy decides if and how the process is repeated using other metadata.

The value defaults to METADATA_NO_FALLBACK.

config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector proto]

Specifications for subsets.

{
  "keys": [],
  "single_host_per_subset": ...,
  "fallback_policy": ...,
  "fallback_keys_subset": []
}
keys

(repeated string) List of keys to match with the weighted cluster metadata.

single_host_per_subset

(bool) Selects a mode of operation in which each subset has only one host. This mode uses the same rules for choosing a host, but updating hosts is faster, especially for large numbers of hosts.

If a match is found to a host, that host will be used regardless of priority levels.

When this mode is enabled, configurations that contain more than one host with the same metadata value for the single key in keys will use only one of the hosts with the given key; no requests will be routed to the others. The cluster gauge lb_subsets_single_host_per_subset_duplicate indicates how many duplicates are present in the current configuration.

fallback_policy

(config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector.LbSubsetSelectorFallbackPolicy) The behavior used when no endpoint subset matches the selected route’s metadata.

fallback_keys_subset

(repeated string) Subset of keys used by KEYS_SUBSET fallback policy. It has to be a non empty list if KEYS_SUBSET fallback policy is selected. For any other fallback policy the parameter is not used and should not be set. Only values also present in keys are allowed, but fallback_keys_subset cannot be equal to keys.
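
For instance, a selector that falls back from the {stage, version} keys to just {stage} could look like this sketch:

subset_selectors:
- keys: [ "stage", "version" ]
  fallback_policy: KEYS_SUBSET
  fallback_keys_subset: [ "stage" ]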

Enum config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector.LbSubsetSelectorFallbackPolicy

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector.LbSubsetSelectorFallbackPolicy proto]

Allows overriding the top-level fallback policy per selector.

NOT_DEFINED

(DEFAULT) If NOT_DEFINED is selected, the top-level config fallback policy is used instead.

NO_FALLBACK

⁣If NO_FALLBACK is selected, a result equivalent to no healthy hosts is reported.

ANY_ENDPOINT

⁣If ANY_ENDPOINT is selected, any cluster endpoint may be returned (subject to policy, health checks, etc).

DEFAULT_SUBSET

⁣If DEFAULT_SUBSET is selected, load balancing is performed over the endpoints matching the values from the default_subset field.

KEYS_SUBSET

⁣If KEYS_SUBSET is selected, subset selector matching is performed again with metadata keys reduced to fallback_keys_subset. It allows for a fallback to a different, less specific selector if some of the keys of the selector are considered optional.

Enum config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetFallbackPolicy

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetFallbackPolicy proto]

If NO_FALLBACK is selected, a result equivalent to no healthy hosts is reported. If ANY_ENDPOINT is selected, any cluster endpoint may be returned (subject to policy, health checks, etc). If DEFAULT_SUBSET is selected, load balancing is performed over the endpoints matching the values from the default_subset field.

NO_FALLBACK

(DEFAULT)

ANY_ENDPOINT

DEFAULT_SUBSET

Enum config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetMetadataFallbackPolicy

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetMetadataFallbackPolicy proto]

METADATA_NO_FALLBACK

(DEFAULT) ⁣No fallback. Route metadata will be used as-is.

FALLBACK_LIST

A special metadata key fallback_list will be used to provide variants of metadata to try. The value of the fallback_list key has to be a list. Every list element has to be a struct; it will be merged with the route metadata, overriding keys that appear in both places. fallback_list entries will be used in order until a host is found.

The fallback_list key itself is removed from the metadata before subset load balancing is performed.

Example:

for metadata:

version: 1.0
fallback_list:
  - version: 2.0
    hardware: c64
  - hardware: c32
  - version: 3.0

at first, metadata:

{"version": "2.0", "hardware": "c64"}

will be used for load balancing. If no host is found, metadata:

{"version": "1.0", "hardware": "c32"}

is next to try. If it still results in no host, finally metadata:

{"version": "3.0"}

is used.

config.cluster.v3.Cluster.SlowStartConfig

[config.cluster.v3.Cluster.SlowStartConfig proto]

Configuration for slow start mode.

{
  "slow_start_window": {...},
  "aggression": {...},
  "min_weight_percent": {...}
}
slow_start_window

(Duration) Represents the size of slow start window. If set, the newly created host remains in slow start mode starting from its creation time for the duration of slow start window.

aggression

(config.core.v3.RuntimeDouble) This parameter controls the speed of traffic increase over the slow start window. Defaults to 1.0, so that the endpoint receives a linearly increasing amount of traffic. Increasing the value of this parameter makes the traffic ramp-up non-linear. The value of the aggression parameter must be greater than 0.0. By tuning the parameter, it is possible to achieve a polynomial or exponential ramp-up curve.

During slow start window, effective weight of an endpoint would be scaled with time factor and aggression: new_weight = weight * max(min_weight_percent, time_factor ^ (1 / aggression)), where time_factor=(time_since_start_seconds / slow_start_time_seconds).

As time progresses, more and more traffic is sent to an endpoint that is within its slow start window. Once a host exits slow start, time_factor and aggression no longer affect its weight.
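
A worked sketch of the formula above (the values and the runtime key are illustrative):

slow_start_config:
  slow_start_window: 60s
  aggression:
    default_value: 1.0
    runtime_key: upstream.slow_start.aggression
  min_weight_percent:
    value: 10
# 30s after host creation: time_factor = 30 / 60 = 0.5 and 0.5 ^ (1 / 1.0) = 0.5,
# which is above the 10% floor, so new_weight = 0.5 * weight.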

min_weight_percent

(type.v3.Percent) Configures the minimum percentage of the original weight, which prevents the new weight from becoming so small that endpoints in slow start mode receive no traffic during the slow start window. If not specified, the default is 10%.

config.cluster.v3.Cluster.RoundRobinLbConfig

[config.cluster.v3.Cluster.RoundRobinLbConfig proto]

Specific configuration for the RoundRobin load balancing policy.

{
  "slow_start_config": {...}
}
slow_start_config

(config.cluster.v3.Cluster.SlowStartConfig) Configuration for slow start mode. If this configuration is not set, slow start will not be enabled.

config.cluster.v3.Cluster.LeastRequestLbConfig

[config.cluster.v3.Cluster.LeastRequestLbConfig proto]

Specific configuration for the LeastRequest load balancing policy.

{
  "choice_count": {...},
  "active_request_bias": {...},
  "slow_start_config": {...}
}
choice_count

(UInt32Value) The number of random healthy hosts from which the host with the fewest active requests will be chosen. Defaults to 2 so that we perform two-choice selection if the field is not set.

active_request_bias

(config.core.v3.RuntimeDouble) The following formula is used to calculate the dynamic weights when hosts have different load balancing weights:

weight = load_balancing_weight / (active_requests + 1)^active_request_bias

The larger the active request bias is, the more aggressively active requests will lower the effective weight when all host weights are not equal.

active_request_bias must be greater than or equal to 0.0.

When active_request_bias == 0.0 the Least Request Load Balancer doesn’t consider the number of active requests at the time it picks a host and behaves like the Round Robin Load Balancer.

When active_request_bias > 0.0 the Least Request Load Balancer scales the load balancing weight by the number of active requests at the time it does a pick.

The value is cached for performance reasons and refreshed whenever one of the Load Balancer’s host sets changes, e.g., whenever there is a host membership update or a host load balancing weight change.

Note

This setting only takes effect if all host weights are not equal.
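
A brief sketch of a cluster using this policy with a non-default choice count and bias (the values and the runtime key are illustrative):

lb_policy: LEAST_REQUEST
least_request_lb_config:
  choice_count: 3
  active_request_bias:
    default_value: 1.0
    runtime_key: upstream.lb.active_request_bias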

slow_start_config

(config.cluster.v3.Cluster.SlowStartConfig) Configuration for slow start mode. If this configuration is not set, slow start will not be enabled.

config.cluster.v3.Cluster.RingHashLbConfig

[config.cluster.v3.Cluster.RingHashLbConfig proto]

Specific configuration for the RingHash load balancing policy.

{
  "minimum_ring_size": {...},
  "hash_function": ...,
  "maximum_ring_size": {...}
}
minimum_ring_size

(UInt64Value) Minimum hash ring size. The larger the ring is (that is, the more hashes there are for each provided host) the better the request distribution will reflect the desired weights. Defaults to 1024 entries, and limited to 8M entries. See also maximum_ring_size.

hash_function

(config.cluster.v3.Cluster.RingHashLbConfig.HashFunction) The hash function used to hash hosts onto the ketama ring. The value defaults to XX_HASH.

maximum_ring_size

(UInt64Value) Maximum hash ring size. Defaults to 8M entries, and limited to 8M entries, but can be lowered to further constrain resource use. See also minimum_ring_size.
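
A minimal sketch bounding the ring size (values are illustrative):

lb_policy: RING_HASH
ring_hash_lb_config:
  minimum_ring_size: 2048
  maximum_ring_size: 1048576
  hash_function: XX_HASH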

Enum config.cluster.v3.Cluster.RingHashLbConfig.HashFunction

[config.cluster.v3.Cluster.RingHashLbConfig.HashFunction proto]

The hash function used to hash hosts onto the ketama ring.

XX_HASH

(DEFAULT) ⁣Use xxHash, this is the default hash function.

MURMUR_HASH_2

Use MurmurHash2; this is compatible with std::hash<string> in GNU libstdc++ 3.4.20 or above. This is typically the case when compiled on Linux and not macOS.

config.cluster.v3.Cluster.MaglevLbConfig

[config.cluster.v3.Cluster.MaglevLbConfig proto]

Specific configuration for the Maglev load balancing policy.

{
  "table_size": {...}
}
table_size

(UInt64Value) The table size for Maglev hashing. Maglev aims for “minimal disruption” rather than an absolute guarantee. Minimal disruption means that when the set of upstream hosts changes, a connection will likely be sent to the same upstream as it was before. Increasing the table size reduces the amount of disruption. The table size must be a prime number and is limited to 5000011. If it is not specified, the default is 65537.

config.cluster.v3.Cluster.OriginalDstLbConfig

[config.cluster.v3.Cluster.OriginalDstLbConfig proto]

Specific configuration for the Original Destination load balancing policy.

This extension has the qualified name envoy.clusters.original_dst

Note

This extension is intended to be robust against both untrusted downstream and upstream traffic.

Tip

This extension extends and can be used with the following extension category:

{
  "use_http_header": ...,
  "http_header_name": ...,
  "upstream_port_override": {...},
  "metadata_key": {...}
}
use_http_header

(bool) When true, an HTTP header can be used to override the original destination address. The default header is x-envoy-original-dst-host.

Attention

This header isn’t sanitized by default, so enabling this feature allows HTTP clients to route traffic to arbitrary hosts and/or ports, which may have serious security consequences.

Note

If the header appears multiple times only the first value is used.

http_header_name

(string) The HTTP header used to override the destination address if use_http_header is set to true. If the value is empty, x-envoy-original-dst-host will be used.

upstream_port_override

(UInt32Value) The port to override for the original destination address. This port will take precedence over filter state and header override ports.

metadata_key

(type.metadata.v3.MetadataKey) The dynamic metadata key to override destination address. First the request metadata is considered, then the connection one.
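
A sketch of an ORIGINAL_DST cluster that enables the header override described above (the cluster name and port are placeholders):

name: original_dst_cluster
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
original_dst_lb_config:
  use_http_header: true
  http_header_name: x-envoy-original-dst-host
  upstream_port_override: 443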

config.cluster.v3.Cluster.CommonLbConfig

[config.cluster.v3.Cluster.CommonLbConfig proto]

Common configuration for all load balancer implementations.

{
  "healthy_panic_threshold": {...},
  "zone_aware_lb_config": {...},
  "locality_weighted_lb_config": {...},
  "update_merge_window": {...},
  "ignore_new_hosts_until_first_hc": ...,
  "close_connections_on_host_set_change": ...,
  "consistent_hashing_lb_config": {...},
  "override_host_status": {...}
}
healthy_panic_threshold

(type.v3.Percent) Configures the healthy panic threshold. If not specified, the default is 50%. To disable panic mode, set to 0%.

Note

The specified percent will be truncated to the nearest 1%.

zone_aware_lb_config

(config.cluster.v3.Cluster.CommonLbConfig.ZoneAwareLbConfig)

Only one of zone_aware_lb_config, locality_weighted_lb_config may be set.

locality_weighted_lb_config

(config.cluster.v3.Cluster.CommonLbConfig.LocalityWeightedLbConfig)

Only one of zone_aware_lb_config, locality_weighted_lb_config may be set.

update_merge_window

(Duration) If set, all health check/weight/metadata updates that happen within this duration will be merged and delivered in one shot when the duration expires. The start of the duration is when the first update happens. This is useful for big clusters, with potentially noisy deploys that might trigger excessive CPU usage due to a constant stream of healthcheck state changes or metadata updates. The first set of updates to be seen apply immediately (e.g.: a new cluster). Please always keep in mind that the use of sandbox technologies may change this behavior.

If this is not set, we default to a merge window of 1000ms. To disable it, set the merge window to 0.

Note: merging does not apply to cluster membership changes (e.g.: adds/removes); this is because merging those updates isn’t currently safe. See https://github.com/envoyproxy/envoy/pull/3941.

ignore_new_hosts_until_first_hc

(bool) If set to true, Envoy will exclude new hosts when computing load balancing weights until they have been health checked for the first time. This will have no effect unless active health checking is also configured.

close_connections_on_host_set_change

(bool) If set to true, the cluster manager will drain all existing connections to upstream hosts whenever hosts are added or removed from the cluster.

consistent_hashing_lb_config

(config.cluster.v3.Cluster.CommonLbConfig.ConsistentHashingLbConfig) Common Configuration for all consistent hashing load balancers (MaglevLb, RingHashLb, etc.)

override_host_status

(config.core.v3.HealthStatusSet) This controls what hosts are considered valid when using host overrides, which is used by some filters to modify the load balancing decision.

If this is unset then [UNKNOWN, HEALTHY, DEGRADED] will be applied by default. If this is set with an empty set of statuses then host overrides will be ignored by the load balancing.

config.cluster.v3.Cluster.CommonLbConfig.ZoneAwareLbConfig

[config.cluster.v3.Cluster.CommonLbConfig.ZoneAwareLbConfig proto]

Configuration for zone aware routing.

{
  "routing_enabled": {...},
  "min_cluster_size": {...},
  "fail_traffic_on_panic": ...
}
routing_enabled

(type.v3.Percent) Configures the percentage of requests that will be considered for zone aware routing if zone aware routing is configured. If not specified, the default is 100%. See also runtime values and zone aware routing support.

min_cluster_size

(UInt64Value) Configures the minimum upstream cluster size required for zone aware routing. If the upstream cluster size is less than the specified value, zone aware routing is not performed even if zone aware routing is configured. If not specified, the default is 6. See also runtime values and zone aware routing support.

fail_traffic_on_panic

(bool) If set to true, Envoy will not consider any hosts when the cluster is in panic mode. Instead, the cluster will fail all requests as if all hosts are unhealthy. This can help avoid potentially overwhelming a failing service.

config.cluster.v3.Cluster.CommonLbConfig.LocalityWeightedLbConfig

[config.cluster.v3.Cluster.CommonLbConfig.LocalityWeightedLbConfig proto]

Configuration for locality weighted load balancing

config.cluster.v3.Cluster.CommonLbConfig.ConsistentHashingLbConfig

[config.cluster.v3.Cluster.CommonLbConfig.ConsistentHashingLbConfig proto]

Common Configuration for all consistent hashing load balancers (MaglevLb, RingHashLb, etc.)

{
  "use_hostname_for_hashing": ...,
  "hash_balance_factor": {...}
}
use_hostname_for_hashing

(bool) If set to true, the cluster will use hostname instead of the resolved address as the key to consistently hash to an upstream host. Only valid for StrictDNS clusters with hostnames which resolve to a single IP address.

hash_balance_factor

(UInt32Value) Configures percentage of average cluster load to bound per upstream host. For example, with a value of 150 no upstream host will get a load more than 1.5 times the average load of all the hosts in the cluster. If not specified, the load is not bounded for any upstream host. Typical value for this parameter is between 120 and 200. Minimum is 100.

Applies to both Ring Hash and Maglev load balancers.

This is implemented based on the method described in the paper https://arxiv.org/abs/1608.01350. For the specified hash_balance_factor, requests to any upstream host are capped at hash_balance_factor/100 times the average number of requests across the cluster. When a request arrives for an upstream host that is currently serving at its max capacity, linear probing is used to identify an eligible host. Further, the linear probe is implemented using a random jump in hosts ring/table to identify the eligible host (this technique is as described in the paper https://arxiv.org/abs/1908.08762 - the random jump avoids the cascading overflow effect when choosing the next host in the ring/table).

If weights are specified on the hosts, they are respected.

This is an O(N) algorithm, unlike other load balancers. Using a lower hash_balance_factor results in more hosts being probed, so use a higher value if you require better performance.
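
For example, bounding per-host load to 1.5 times the cluster average might look like this sketch:

common_lb_config:
  consistent_hashing_lb_config:
    hash_balance_factor: 150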

config.cluster.v3.Cluster.RefreshRate

[config.cluster.v3.Cluster.RefreshRate proto]

{
  "base_interval": {...},
  "max_interval": {...}
}
base_interval

(Duration, REQUIRED) Specifies the base interval between refreshes. This parameter is required and must be greater than zero and less than max_interval.

max_interval

(Duration) Specifies the maximum interval between refreshes. This parameter is optional, but must be greater than or equal to the base_interval if set. The default is 10 times the base_interval.

config.cluster.v3.Cluster.PreconnectPolicy

[config.cluster.v3.Cluster.PreconnectPolicy proto]

{
  "per_upstream_preconnect_ratio": {...},
  "predictive_preconnect_ratio": {...}
}
per_upstream_preconnect_ratio

(DoubleValue) Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. This is useful for high-QPS or latency-sensitive services. Preconnecting will only be done if the upstream is healthy and the cluster has traffic.

For example if this is 2, for an incoming HTTP/1.1 stream, 2 connections will be established, one for the new incoming stream, and one for a presumed follow-up stream. For HTTP/2, only one connection would be established by default as one connection can serve both the original and presumed follow-up stream.

In steady state for non-multiplexed connections a value of 1.5 would mean if there were 100 active streams, there would be 100 connections in use, and 50 connections preconnected. This might be a useful value for something like short lived single-use connections, for example proxying HTTP/1.1 if keep-alive were false and each stream resulted in connection termination. It would likely be overkill for long lived connections, such as TCP proxying SMTP or regular HTTP/1.1 with keep-alive. For long lived traffic, a value of 1.05 would be more reasonable, where for every 100 connections, 5 preconnected connections would be in the queue in case of unexpected disconnects where the connection could not be reused.

If this value is not set, or is set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight. This means that in steady state, if a connection is torn down, a subsequent stream will pay an upstream-rtt latency penalty waiting for a new connection.

This is limited somewhat arbitrarily to 3 because preconnecting too aggressively can harm latency more than the preconnecting helps.

predictive_preconnect_ratio

(DoubleValue) Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. This is currently supported for a subset of deterministic non-hash-based load-balancing algorithms (weighted round robin, random). Unlike per_upstream_preconnect_ratio this preconnects across the upstream instances in a cluster, doing best effort predictions of what upstream would be picked next and pre-establishing a connection.

Preconnecting will be limited to one preconnect per configured upstream in the cluster and will only be done if there are healthy upstreams and the cluster has traffic.

For example if preconnecting is set to 2 for a round robin HTTP/2 cluster, on the first incoming stream, 2 connections will be preconnected - one to the first upstream for this cluster, one to the second on the assumption there will be a follow-up stream.

If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight, so during warm up and in steady state if a connection is closed (and per_upstream_preconnect_ratio is not set), there will be a latency hit for connection establishment.

If both this and preconnect_ratio are set, Envoy will make sure both predicted needs are met, basically preconnecting max(predictive-preconnect, per-upstream-preconnect), for each upstream.
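
A short sketch combining both ratios (values are illustrative):

preconnect_policy:
  per_upstream_preconnect_ratio: 1.5
  predictive_preconnect_ratio: 2.0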

Enum config.cluster.v3.Cluster.DiscoveryType

[config.cluster.v3.Cluster.DiscoveryType proto]

Refer to service discovery type for an explanation on each type.

STATIC

(DEFAULT) ⁣Refer to the static discovery type for an explanation.

STRICT_DNS

⁣Refer to the strict DNS discovery type for an explanation.

LOGICAL_DNS

⁣Refer to the logical DNS discovery type for an explanation.

EDS

⁣Refer to the service discovery type for an explanation.

ORIGINAL_DST

⁣Refer to the original destination discovery type for an explanation.

Enum config.cluster.v3.Cluster.LbPolicy

[config.cluster.v3.Cluster.LbPolicy proto]

Refer to load balancer type architecture overview section for information on each type.

ROUND_ROBIN

(DEFAULT) ⁣Refer to the round robin load balancing policy for an explanation.

LEAST_REQUEST

⁣Refer to the least request load balancing policy for an explanation.

RING_HASH

⁣Refer to the ring hash load balancing policy for an explanation.

RANDOM

⁣Refer to the random load balancing policy for an explanation.

MAGLEV

⁣Refer to the Maglev load balancing policy for an explanation.

CLUSTER_PROVIDED

⁣This load balancer type must be specified if the configured cluster provides a cluster specific load balancer. Consult the configured cluster’s documentation for whether to set this option or not.

LOAD_BALANCING_POLICY_CONFIG

⁣Use the new load_balancing_policy field to determine the LB policy. This has been deprecated in favor of using the load_balancing_policy field without setting any value in lb_policy.

Enum config.cluster.v3.Cluster.DnsLookupFamily

[config.cluster.v3.Cluster.DnsLookupFamily proto]

When V4_ONLY is selected, the DNS resolver will only perform a lookup for addresses in the IPv4 family.

If V6_ONLY is selected, the DNS resolver will only perform a lookup for addresses in the IPv6 family.

If AUTO is specified, the DNS resolver will first perform a lookup for addresses in the IPv6 family and fall back to a lookup for addresses in the IPv4 family. This is semantically equivalent to a non-existent V6_PREFERRED option. AUTO is a legacy name that is more opaque than necessary and will be deprecated in favor of V6_PREFERRED in a future major version of the API.

If V4_PREFERRED is specified, the DNS resolver will first perform a lookup for addresses in the IPv4 family and fall back to a lookup for addresses in the IPv6 family. In other words, the callback target will only get v6 addresses if there were NO v4 addresses to return.

If ALL is specified, the DNS resolver will perform a lookup for both IPv4 and IPv6 families, and return all resolved addresses. When this is used, Happy Eyeballs will be enabled for upstream connections. Refer to Happy Eyeballs Support for more information.

For cluster types other than STRICT_DNS and LOGICAL_DNS, this setting is ignored.

AUTO

(DEFAULT)

V4_ONLY

V6_ONLY

V4_PREFERRED

ALL

Enum config.cluster.v3.Cluster.ClusterProtocolSelection

[config.cluster.v3.Cluster.ClusterProtocolSelection proto]

USE_CONFIGURED_PROTOCOL

(DEFAULT) ⁣Cluster can only operate on one of the possible upstream protocols (HTTP1.1, HTTP2). If http2_protocol_options are present, HTTP2 will be used, otherwise HTTP1.1 will be used.

USE_DOWNSTREAM_PROTOCOL

⁣Use HTTP1.1 or HTTP2, depending on which one is used on the downstream connection.

config.cluster.v3.LoadBalancingPolicy

[config.cluster.v3.LoadBalancingPolicy proto]

Extensible load balancing policy configuration.

Every LB policy defined via this mechanism will be identified via a unique name using reverse DNS notation. If the policy needs configuration parameters, it must define a message for its own configuration, which will be stored in the config field. The name of the policy will tell clients which type of message they should expect to see in the config field.

Note that there are cases where it is useful to be able to independently select LB policies for choosing a locality and for choosing an endpoint within that locality. For example, a given deployment may always use the same policy to choose the locality, but for choosing the endpoint within the locality, some clusters may use weighted-round-robin, while others may use some sort of session-based balancing.

This can be accomplished via hierarchical LB policies, where the parent LB policy creates a child LB policy for each locality. For each request, the parent chooses the locality and then delegates to the child policy for that locality to choose the endpoint within the locality.

To facilitate this, the config message for the top-level LB policy may include a field of type LoadBalancingPolicy that specifies the child policy.

{
  "policies": []
}
policies

(repeated config.cluster.v3.LoadBalancingPolicy.Policy) Each client will iterate over the list in order and stop at the first policy that it supports. This provides a mechanism for starting to use new LB policies that are not yet supported by all clients.
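
As an illustrative sketch, assuming the round robin policy extension, a cluster delegating its LB policy to this field could look like:

load_balancing_policy:
  policies:
  - typed_extension_config:
      name: envoy.load_balancing_policies.round_robin
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.load_balancing_policies.round_robin.v3.RoundRobin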

config.cluster.v3.LoadBalancingPolicy.Policy

[config.cluster.v3.LoadBalancingPolicy.Policy proto]

{
  "typed_extension_config": {...}
}
typed_extension_config

(config.core.v3.TypedExtensionConfig)

config.cluster.v3.UpstreamConnectionOptions

[config.cluster.v3.UpstreamConnectionOptions proto]

{
  "tcp_keepalive": {...},
  "set_local_interface_name_on_upstream_connections": ...,
  "happy_eyeballs_config": {...}
}
tcp_keepalive

(config.core.v3.TcpKeepalive) If set then set SO_KEEPALIVE on the socket to enable TCP Keepalives.

set_local_interface_name_on_upstream_connections

(bool) If enabled, associates the interface name of the local address with the upstream connection. This can be used by extensions during processing of requests. The association mechanism is implementation specific. Defaults to false due to performance concerns.

happy_eyeballs_config

(config.cluster.v3.UpstreamConnectionOptions.HappyEyeballsConfig) Configuration for the Happy Eyeballs algorithm. Sets first_address_family_version and first_address_family_count, which are used when sorting destination IP addresses.

config.cluster.v3.UpstreamConnectionOptions.HappyEyeballsConfig

[config.cluster.v3.UpstreamConnectionOptions.HappyEyeballsConfig proto]

{
  "first_address_family_version": ...,
  "first_address_family_count": {...}
}
first_address_family_version

(config.cluster.v3.UpstreamConnectionOptions.FirstAddressFamilyVersion) Specify the IP address family to attempt connection first in happy eyeballs algorithm according to RFC8305#section-4.

first_address_family_count

(UInt32Value) Specify the number of addresses of the first_address_family_version being attempted for connection before the other address family.

Enum config.cluster.v3.UpstreamConnectionOptions.FirstAddressFamilyVersion

[config.cluster.v3.UpstreamConnectionOptions.FirstAddressFamilyVersion proto]

DEFAULT

(DEFAULT) Respect the native ranking of destination IP addresses returned from DNS resolution.

V4

V6

config.cluster.v3.TrackClusterStats

[config.cluster.v3.TrackClusterStats proto]

{
  "timeout_budgets": ...,
  "request_response_sizes": ...,
  "per_endpoint_stats": ...
}
timeout_budgets

(bool) If timeout_budgets is true, the timeout budget histograms will be published for each request. These show what percentage of a request’s per try and global timeout was used. A value of 0 would indicate that none of the timeout was used or that the timeout was infinite. A value of 100 would indicate that the request took the entirety of the timeout given to it.

request_response_sizes

(bool) If request_response_sizes is true, then the histograms tracking header and body sizes of requests and responses will be published.

per_endpoint_stats

(bool) If true, some stats will be emitted per-endpoint, similar to the stats in admin /clusters output.

This does not currently output correct stats during a hot-restart.

This is not currently implemented by all stat sinks.

These stats do not honor filtering or tag extraction rules in StatsConfig (but fixed-value tags are supported). Admin endpoint filtering is supported.

This may not be used at the same time as load_stats_config.