
Using Kubernetes Annotations, Labels, and Selectors



Annotations, labels, and selectors are used to manage metadata attached to your Kubernetes objects. Annotations and labels define the data while selectors provide a way to query it.

Here are the differences between the three concepts, what they're designed for, and how you can use them to manage your resources.

Annotations

The Kubernetes documentation defines annotations as "arbitrary non-identifying metadata" which you add to your objects. Their status as "non-identifying" means they aren't used internally by Kubernetes as part of its object selection system.

This means annotations are best used for data that's independent of the object and its role in your cluster. You could use them to add information about the tool that created an object, define who's responsible for its management, or add tags to be picked up by external tools.

There are no restrictions on what you use annotations for. As long as your data can be expressed as a key-value pair, you can create an annotation that encapsulates it. The model lets you store useful data directly alongside your objects, instead of having to refer to external documentation or databases.

Setting Annotations

Annotations are defined as part of a resource's metadata field. They're simple key-value pairs. The key needs to be 63 characters or less and can include alphanumeric characters, dashes, underscores, and dots. It must start and end with an alphanumeric character.

Keys also support an optional prefix which must be a valid DNS subdomain. Prefixes are used to namespace your annotation keys, avoiding collisions between common annotation names. When a prefix is used, a slash character separates it from the key.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-annotations
  annotations:
    unprefixed-annotation: "value"
    cloudsavvyit.com/prefixed-annotation: "another value"
spec:
  # <omitted>

This example demonstrates both prefixed and unprefixed annotations.
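You can also add or update annotations imperatively with kubectl annotate (the key and value here are arbitrary examples):

kubectl annotate pod pod-with-annotations cloudsavvyit.com/imperative-annotation="a third value"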

You can retrieve the annotations that have been set on an object using Kubectl. There's no built-in command to list them, so you need to get the JSON or YAML definition of the object, then extract the value of the metadata.annotations field. Here's an example that displays the annotations associated with the Pod shown above:

kubectl get pod pod-with-annotations -o jsonpath='{.metadata.annotations}'

Labels

Labels are another form of metadata which you can attach to your resources. The documentation describes the role of labels as "identifying attributes of objects that are meaningful and relevant to users" but independent of the properties of the core system.

Whereas annotations are intentionally purposeless, capable of representing any arbitrary data, labels are meant for more formal situations. They're commonly used to represent processes and organizational structures. You might use labels to denote a resource's release status (such as beta or stable ) or the development stage it maps to ( build or qa ).

Labels can be used as selectors when referencing objects. This is a key difference compared to annotations, which aren't supported as selectors. Here's a Pod which selects Nodes that have the node-environment: production label:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-node-selector
spec:
  containers:
    # <omitted>
  nodeSelector:
    node-environment: production

Setting Labels

Labels are attached to objects in the same way as annotations. Add a labels field to the object's metadata , then populate it with key-value pairs. Labels possess the same constraints around key names and prefixes.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-labels
  labels:
    cloudsavvyit.com/environment: stable

You can retrieve an object's labels using Kubectl with the same technique as shown earlier. Get the object's JSON representation, then extract the labels field:

kubectl get pod pod-with-labels -o jsonpath='{.metadata.labels}'

Kubectl also supports a --show-labels flag to include labels in human-readable output tables:

kubectl get pods --show-labels

Selectors

Selectors are used within Kubernetes object definitions to reference other objects. Different types of selectors are available to pull in objects that possess certain characteristics.

In the example above, we used a selector to identify Nodes with a particular label. Here's a Deployment object that uses an explicit label selector to identify the Pods it should manage:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      cloudsavvyit.com/app: selectors-demo
  # template: <omitted>

The Deployment's template will create Pods that have the cloudsavvyit.com/app label set. These will be matched by the selector so they become part of the Deployment.

Selectors in Kubectl

You can use a form of selector to filter the objects that are returned by Kubectl:

kubectl get pods -l cloudsavvyit.com/app=selectors-demo

Label-based queries support several kinds of comparison operator. These include equality-based and set-based comparisons:

  • = - The label value is equal to a given value.
  • == - A synonym for = ; the label value is equal to a given value.
  • != - The label value is not equal to a given value.
  • in - The label value is one of a set of given values.
  • notin - The label value is not in a set of given values.
  • exists - The label key is present on the object, regardless of its value.

Here's an example of using the in operator to query objects that are in the staging or production environments:

kubectl get pods -l "environment in (dev, production)"

Annotations and labels are two ways to add metadata to your Kubernetes objects. Annotations are for non-identifying data that won't be referenced by Kubernetes. Labels are used to identify objects so that they can be selected by other Kubernetes resources.

It's best to use an annotation if you won't be querying for objects with the key-value pair. Use a label if you'll be referencing it within another resource or using it to filter Kubectl output in your terminal.

When working with labels, several forms of selector are available to help you access the data you need. Annotations are a little trickier to access as they're not meant to be queried, but you can still list them by viewing the JSON representation of a Kubernetes object.

Annotations

You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.

Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. "true" , "false" , "100" .

The annotation prefix can be changed using the --annotations-prefix command line argument , but the default is nginx.ingress.kubernetes.io , as described in the table below.

Canary

In some cases, you may want to "canary" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to, depending on the rules applied. The following annotations can be used to configure canary behavior after nginx.ingress.kubernetes.io/canary: "true" is set:

nginx.ingress.kubernetes.io/canary-by-header : The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always , it will be routed to the canary. When the header is set to never , it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.

nginx.ingress.kubernetes.io/canary-by-header-value : The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header . The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined.

nginx.ingress.kubernetes.io/canary-by-header-pattern : This works the same way as canary-by-header-value except it does PCRE Regex matching. Note that when canary-by-header-value is set this annotation will be ignored. When the given Regex causes error during request processing, the request will be considered as not matching.

nginx.ingress.kubernetes.io/canary-by-cookie : The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always , it will be routed to the canary. When the cookie is set to never , it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.

nginx.ingress.kubernetes.io/canary-weight : The integer based (0 - <weight-total>) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of <weight-total> implies all requests will be sent to the alternative service specified in the Ingress. <weight-total> defaults to 100, and can be increased via nginx.ingress.kubernetes.io/canary-weight-total .

nginx.ingress.kubernetes.io/canary-weight-total : The total weight of traffic. If unspecified, it defaults to 100.

Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight
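As an illustration, a minimal canary Ingress that routes roughly 10% of traffic to an alternative service might look like the following sketch (the name, host, and service are hypothetical, and a main Ingress for the same host and path is assumed to exist separately):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary                 # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-canary-svc   # hypothetical canary Service
                port:
                  number: 80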

Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance , nginx.ingress.kubernetes.io/upstream-hash-by , and annotations related to session affinity . If you want to restore the original behavior of canaries when session affinity was ignored, set nginx.ingress.kubernetes.io/affinity-canary-behavior annotation with value legacy on the canary ingress definition.

Known Limitations

Currently a maximum of one canary ingress can be applied per Ingress rule.

Rewrite

In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.

If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for / .

Please check the rewrite example.

Session Affinity

The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie .

The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.

The annotation nginx.ingress.kubernetes.io/affinity-canary-behavior defines the behavior of canaries when session affinity is enabled. Setting this to sticky (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to legacy will restore original canary behavior, when session affinity was ignored.

If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie , then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.

Please check the affinity example.

Cookie affinity

If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'.

The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; Session cookie paths do not support regex.

Use nginx.ingress.kubernetes.io/session-cookie-domain to set the Domain attribute of the sticky cookie.

Use nginx.ingress.kubernetes.io/session-cookie-samesite to apply a SameSite attribute to the sticky cookie. Browser accepted values are None , Lax , and Strict . Some browsers reject cookies with SameSite=None , including those created before the SameSite=None specification (e.g. Chrome 5X). Other browsers mistakenly treat SameSite=None cookies as SameSite=Strict (e.g. Safari running on OSX 14). To omit SameSite=None from browsers with these incompatibilities, add the annotation nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: "true" .
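Combining these, the metadata of a sticky Ingress might look like the following sketch (the cookie name route is arbitrary):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"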

Authentication

It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.

The annotations are:

nginx.ingress.kubernetes.io/auth-type: [basic|digest]

Indicates the HTTP Authentication Type: Basic or Digest Access Authentication .

nginx.ingress.kubernetes.io/auth-secret: secretName

The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.

nginx.ingress.kubernetes.io/auth-secret-type: [auth-file|auth-map]

The auth-secret can have two forms:

  • auth-file - default, an htpasswd file in the key auth within the secret
  • auth-map - the keys of the secret are the usernames, and the values are the hashed passwords

Please check the auth example.

Custom NGINX upstream hashing

NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.

There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. This provides a balance between stickiness and load distribution.

To enable consistent hashing for a backend:

nginx.ingress.kubernetes.io/upstream-hash-by : the nginx variable, text value or any combination thereof to use for consistent hashing. For example: nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri" or nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri$host" or nginx.ingress.kubernetes.io/upstream-hash-by: "${request_uri}-text-value" to consistently hash upstream requests by the current request URI.

"subset" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset : "true". This maps requests to subset of nodes instead of a single one. nginx.ingress.kubernetes.io/upstream-hash-by-subset-size determines the size of each subset (default 3).

Please check the chashsubset example.

Custom NGINX load balancing

This is similar to load-balance in ConfigMap , but configures the load balancing algorithm per ingress via the nginx.ingress.kubernetes.io/load-balance annotation.

Note that nginx.ingress.kubernetes.io/upstream-hash-by takes precedence over this. If neither this nor nginx.ingress.kubernetes.io/upstream-hash-by is set, the globally configured load balancing algorithm is used.

Custom NGINX upstream vhost

The nginx.ingress.kubernetes.io/upstream-vhost annotation allows you to control the value for host in the following statement: proxy_set_header Host $host , which forms part of the location block. This is useful if you need to call the upstream server by something other than $host .

Client Certificate Authentication

It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.

Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.

To enable, add the annotation nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName . This secret must have a file named ca.crt containing the full Certificate Authority chain that is enabled to authenticate against this Ingress.

You can further customize client certificate authentication and behavior with these annotations:

  • nginx.ingress.kubernetes.io/auth-tls-verify-depth : The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
  • nginx.ingress.kubernetes.io/auth-tls-verify-client : Enables verification of client certificates. Possible values are:
  • on : Request a client certificate that must be signed by a certificate that is included in the secret key ca.crt of the secret specified by nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName . Failed certificate verification will result in a status code 400 (Bad Request) (default)
  • off : Don't request client certificates and don't do client certificate verification.
  • optional : Do optional client certificate validation against the CAs from auth-tls-secret . The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
  • optional_no_ca : Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from auth-tls-secret . Certificate verification result is sent to the upstream service.
  • nginx.ingress.kubernetes.io/auth-tls-error-page : The URL/Page that user should be redirected in case of a Certificate Authentication Error
  • nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream : Indicates if the received certificates should be passed or not to the upstream server in the header ssl-client-cert . Possible values are "true" or "false" (default).
  • nginx.ingress.kubernetes.io/auth-tls-match-cn : Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with "CN=", example: "CN=myvalidclient" . If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: "CN=(option1|option2|myvalidclient)" . In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.

The following headers are sent to the upstream service according to the auth-tls-* annotations:

  • ssl-client-issuer-dn : The issuer information of the client certificate. Example: "CN=My CA"
  • ssl-client-subject-dn : The subject information of the client certificate. Example: "CN=My Client"
  • ssl-client-verify : The result of the client verification. Possible values: "SUCCESS", "FAILED: <description of why the verification failed>", and "NONE"
  • ssl-client-cert : The full client certificate in PEM format. Will only be sent when nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream is set to "true". Example: -----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A

Please check the client-certs example.

TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior.

Cloudflare only allows Authenticated Origin Pulls and requires the use of its own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/

Only Authenticated Origin Pulls are allowed, and they can be configured by following Cloudflare's tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls

Backend Certificate Authentication

It is possible to authenticate to a proxied HTTPS backend with a certificate, using additional annotations in the Ingress rule.

  • nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName : Specifies a Secret with the certificate tls.crt , key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form "namespace/secretName".
  • nginx.ingress.kubernetes.io/proxy-ssl-verify : Enables or disables verification of the proxied HTTPS server certificate. (default: off)
  • nginx.ingress.kubernetes.io/proxy-ssl-verify-depth : Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
  • nginx.ingress.kubernetes.io/proxy-ssl-ciphers : Specifies the enabled ciphers for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
  • nginx.ingress.kubernetes.io/proxy-ssl-name : Allows setting proxy_ssl_name . This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
  • nginx.ingress.kubernetes.io/proxy-ssl-protocols : Enables the specified protocols for requests to a proxied HTTPS server.
  • nginx.ingress.kubernetes.io/proxy-ssl-server-name : Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.

Configuration snippet

Using this annotation you can add additional configuration to the NGINX location. For example:
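The snippet below is a representative sketch that adds a response header (it assumes the headers-more module's more_set_headers directive, which ships with ingress-nginx):

nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";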

Since version 1.9.0, "configuration-snippet" annotation is disabled by default and has to be explicitly enabled, see allow-snippet-annotations . Enabling it can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. See CVE-2021-25742 and the related issue on github for more information.

Custom HTTP Errors

Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors , but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.

Example usage: nginx.ingress.kubernetes.io/custom-http-errors: "404,415"

Disable Proxy intercept Errors

Like the disable-proxy-intercept-errors value in the ConfigMap, this annotation allows you to disable NGINX proxy-intercept-errors when custom-http-errors are set, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes, and there are use cases where NGINX should not intercept all errors returned from upstream. If disable-proxy-intercept-errors is also specified globally, the annotation will override the global value for the given ingress' hostname and path.

Example usage: nginx.ingress.kubernetes.io/disable-proxy-intercept-errors: "false"

Default Backend

This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has multiple ports, the first one will receive the backend traffic.

This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the custom-http-errors annotation are set.

Enable CORS

To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: "true" . This will add a section in the server location enabling this functionality.

CORS can be controlled with the following annotations:

nginx.ingress.kubernetes.io/cors-allow-methods : Controls which methods are accepted.

This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).

  • Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
  • Example: nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"

nginx.ingress.kubernetes.io/cors-allow-headers : Controls which headers are accepted.

This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.

  • Default: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
  • Example: nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"

nginx.ingress.kubernetes.io/cors-expose-headers : Controls which headers are exposed in the response.

This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.

  • Default: empty
  • Example: nginx.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader"

nginx.ingress.kubernetes.io/cors-allow-origin : Controls the accepted origin for CORS.

This is a multi-valued field, separated by ','. It must follow this format: http(s)://origin-site.com or http(s)://origin-site.com:port

  • Example: nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com:4443, http://origin-site.com, https://example.org:1199"

It also supports single level wildcard subdomains and follows this format: http(s)://*.foo.bar , http(s)://*.bar.foo:8080 or http(s)://*.abc.bar.foo:9000 - Example: nginx.ingress.kubernetes.io/cors-allow-origin: "https://*.origin-site.com:4443, http://*.origin-site.com, https://example.org:1199"

nginx.ingress.kubernetes.io/cors-allow-credentials : Controls if credentials can be passed during CORS operations.

  • Default: true
  • Example: nginx.ingress.kubernetes.io/cors-allow-credentials: "false"

nginx.ingress.kubernetes.io/cors-max-age : Controls how long preflight requests can be cached.

  • Default: 1728000
  • Example: nginx.ingress.kubernetes.io/cors-max-age: 600

For more information please see https://enable-cors.org

HTTP2 Push Preload

Enables automatic conversion of preload links specified in the “Link” response header fields into push requests.

  • nginx.ingress.kubernetes.io/http2-push-preload: "true"

Server Alias

Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: "<alias 1>,<alias 2>" . This will create a server with the same configuration, but adding new values to the server_name directive.

A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.

For more information please see the server_name documentation .

Server snippet

Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.

Since version 1.9.0, "server-snippet" annotation is disabled by default and has to be explicitly enabled, see allow-snippet-annotations . Enabling it can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. See CVE-2021-25742 and the related issue on github for more information.

This annotation can be used only once per host.

Client Body Buffer Size

Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.

The annotation value must be given in a format understood by Nginx.

  • nginx.ingress.kubernetes.io/client-body-buffer-size: "1000" # 1000 bytes
  • nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte
  • nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte
  • nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte
  • nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte

For more information please see https://nginx.org

External Authentication

To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.
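For example (the URL is illustrative):

nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"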

Additionally it is possible to set:

  • nginx.ingress.kubernetes.io/auth-keepalive : <Connections> to specify the maximum number of keepalive connections to auth-url . Only takes effect when no variables are used in the host part of the URL. Defaults to 0 (keepalive disabled).
Note: does not work with an HTTP/2 listener because of a limitation in Lua subrequests . The use-http2 configuration should be disabled!
  • nginx.ingress.kubernetes.io/auth-keepalive-share-vars : Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to "true" X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to "false".
  • nginx.ingress.kubernetes.io/auth-keepalive-requests : <Requests> to specify the maximum number of requests that can be served through one keepalive connection. Defaults to 1000 and only applied if auth-keepalive is set to higher than 0 .
  • nginx.ingress.kubernetes.io/auth-keepalive-timeout : <Timeout> to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to 60 and only applied if auth-keepalive is set to higher than 0 .
  • nginx.ingress.kubernetes.io/auth-method : <Method> to specify the HTTP method to use.
  • nginx.ingress.kubernetes.io/auth-signin : <SignIn_URL> to specify the location of the error page.
  • nginx.ingress.kubernetes.io/auth-signin-redirect-param : <SignIn_URL> to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
  • nginx.ingress.kubernetes.io/auth-response-headers : <Response_Header_1, ..., Response_Header_n> to specify headers to pass to backend once authentication request completes.
  • nginx.ingress.kubernetes.io/auth-proxy-set-headers : <ConfigMap> the name of a ConfigMap that specifies headers to pass to the authentication service
  • nginx.ingress.kubernetes.io/auth-request-redirect : <Request_Redirect_URL> to specify the X-Auth-Request-Redirect header value.
  • nginx.ingress.kubernetes.io/auth-cache-key : <Cache_Key> this enables caching for auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization . Each server and location has its own keyspace. Hence a cached response is only valid on a per-server and per-location basis.
  • nginx.ingress.kubernetes.io/auth-cache-duration : <Cache_duration> to specify a caching time for auth responses based on their response codes, e.g. 200 202 30m . See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m . Defaults to 200 202 401 5m .
  • nginx.ingress.kubernetes.io/auth-always-set-cookie : <Boolean_Flag> to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
  • nginx.ingress.kubernetes.io/auth-snippet : <Auth_Snippet> to specify a custom snippet to use with external authentication.
Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set.

Since version 1.9.0, "auth-snippet" annotation is disabled by default and has to be explicitly enabled, see allow-snippet-annotations . Enabling it can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. See CVE-2021-25742 and the related issue on github for more information.

Please check the external-auth example.

Global External Authentication

By default the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. If you want to disable this behavior for a specific ingress, you can use the annotation nginx.ingress.kubernetes.io/enable-global-auth: "false" . nginx.ingress.kubernetes.io/enable-global-auth indicates whether the GlobalExternalAuth configuration should be applied to this Ingress rule. The default value is "true" .

For more information please see global-auth-url .

Rate Limiting

These annotations define limits on connections and transmission rates. These can be used to mitigate DDoS Attacks .

  • nginx.ingress.kubernetes.io/limit-connections : number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
  • nginx.ingress.kubernetes.io/limit-rps : number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, the configured limit-req-status-code (default: 503) is returned.
  • nginx.ingress.kubernetes.io/limit-rpm : number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, the configured limit-req-status-code (default: 503) is returned.
  • nginx.ingress.kubernetes.io/limit-burst-multiplier : multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed the limit, the configured limit-req-status-code (default: 503) is returned.
  • nginx.ingress.kubernetes.io/limit-rate-after : initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with proxy-buffering enabled.
  • nginx.ingress.kubernetes.io/limit-rate : number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with proxy-buffering enabled.
  • nginx.ingress.kubernetes.io/limit-whitelist : client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.

If you specify multiple annotations in a single Ingress rule, limits are applied in the order limit-connections , limit-rpm , limit-rps .

To configure settings globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap . The value set in an Ingress annotation will override the global setting.

The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.
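As a sketch, limiting clients of a hypothetical Ingress to 5 requests per second while exempting an internal range could look like:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "5"
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"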

Global Rate Limiting

Note: Be careful when configuring both (Local) Rate Limiting and Global Rate Limiting at the same time. They are two completely different rate limiting implementations. Whichever limit is exceeded first will reject requests. It might be a good idea to configure both of them to ease the load on the Global Rate Limiting backend in case of a spike in traffic.

The stock NGINX rate limiting does not share its counters among different NGINX instances. Given that most ingress-nginx deployments are elastic and the number of replicas can change at any time, it is impossible to configure a proper rate limit using stock NGINX functionality. Global Rate Limiting overcomes this by using lua-resty-global-throttle . lua-resty-global-throttle shares its counters via a central store such as memcached . The obvious shortcoming of this is that users have to deploy and operate a memcached instance in order to benefit from this functionality. Configure the memcached instance using these configmap settings .

Here are a few remarks for ingress-nginx integration of lua-resty-global-throttle :

  • We minimize memcached access by caching exceeding limit decisions. The expiry of cache entry is the desired delay lua-resty-global-throttle calculates for us. The Lua Shared Dictionary used for that is global_throttle_cache . Currently its size defaults to 10M. Customize it as per your needs using lua-shared-dicts . When we fail to cache the exceeding limit decision then we log an NGINX error. You can monitor for that error to decide if you need to bump the cache size. Without cache the cost of processing a request is two memcached commands: GET , and INCR . With the cache it is only INCR .
  • Log NGINX variable $global_rate_limit_exceeding 's value to have some visibility into what portion of requests are rejected (value y ), whether they are rejected using cached decision (value c ), or if they are not rejected (default value n ). You can use log-format-upstream to include that in access logs.
  • In case of an error it will log the error message and fail open .

The annotations below create a Global Rate Limiting instance per ingress. That means if there are multiple paths configured under the same ingress, Global Rate Limiting will count requests to all the paths under the same counter. Extract a path out into its own ingress if you need to isolate a certain path.

  • nginx.ingress.kubernetes.io/global-rate-limit : Configures the maximum allowed number of requests per window. Required.
  • nginx.ingress.kubernetes.io/global-rate-limit-window : Configures a time window (e.g. 1m ) that the limit is applied to. Required.
  • nginx.ingress.kubernetes.io/global-rate-limit-key : Configures a key for counting the samples. Defaults to $remote_addr . You can also combine multiple NGINX variables here, like ${remote_addr}-${http_x_api_client} which would mean the limit will be applied to requests coming from the same API client (indicated by X-API-Client HTTP request header) with the same source IP address.
  • nginx.ingress.kubernetes.io/global-rate-limit-ignored-cidrs : comma separated list of IPs and CIDRs to match the client IP against. When there's a match, the request is not considered for rate limiting.
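For example, a sketch that limits each client IP to 100 requests per minute:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/global-rate-limit: "100"
    nginx.ingress.kubernetes.io/global-rate-limit-window: "1m"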

Permanent Redirect

This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.

Permanent Redirect Code

This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.

Temporal Redirect

This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily).

SSL Passthrough

The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide.

SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.

Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.

Service Upstream

By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

This can be desirable for things like zero-downtime deployments . See issue #257 .

Known Issues

If the service-upstream annotation is specified the following things should be taken into consideration:

  • Sticky Sessions will not work as only round-robin load balancing is supported.
  • The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.

Server-side HTTPS enforcement through redirect

By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: "false" in the NGINX ConfigMap .

To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.

To preserve the trailing slash in the URI with ssl-redirect , set nginx.ingress.kubernetes.io/preserve-trailing-slash: "true" annotation for that particular resource.

Redirect from/to www

In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: "true"

If at some point a new Ingress is created with a host equal to one of the options (like domain.com ) the annotation will be omitted.

For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.

Denylist source range

You can specify blocked client IP source ranges through the nginx.ingress.kubernetes.io/denylist-source-range annotation. The value is a comma separated list of CIDRs , e.g. 10.0.0.0/24,172.10.0.1 .

To configure this setting globally for all Ingress rules, the denylist-source-range value may be set in the NGINX ConfigMap .

Adding an annotation to an Ingress rule overrides any global restriction.

Whitelist source range

You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs , e.g. 10.0.0.0/24,172.10.0.1 .

To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap .

Custom timeouts

Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios different values are required. To allow this customization, the following annotations are provided:

  • nginx.ingress.kubernetes.io/proxy-connect-timeout
  • nginx.ingress.kubernetes.io/proxy-send-timeout
  • nginx.ingress.kubernetes.io/proxy-read-timeout
  • nginx.ingress.kubernetes.io/proxy-next-upstream
  • nginx.ingress.kubernetes.io/proxy-next-upstream-timeout
  • nginx.ingress.kubernetes.io/proxy-next-upstream-tries
  • nginx.ingress.kubernetes.io/proxy-request-buffering

Note: All timeout values are unitless and in seconds, e.g. nginx.ingress.kubernetes.io/proxy-read-timeout: "120" sets a 120-second proxy read timeout.

Proxy redirect

The annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response.

Setting "off" or "default" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to , otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.

By default the value of each annotation is "off".
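As a sketch (the hostnames are illustrative), rewriting Location headers from a backend's internal name to the public name could look like:

nginx.ingress.kubernetes.io/proxy-redirect-from: "http://backend.internal/"
nginx.ingress.kubernetes.io/proxy-redirect-to: "https://app.example.com/"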

Custom max body size

For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size .

To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap . To use a custom value in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-body-size: 8m

Proxy cookie domain

Sets a text that should be changed in the domain attribute of the "Set-Cookie" header fields of a proxied server response.

To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap .

Proxy cookie path

Sets a text that should be changed in the path attribute of the "Set-Cookie" header fields of a proxied server response.

To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap .

Proxy buffering

Enable or disable proxy buffering proxy_buffering . By default proxy buffering is disabled in the NGINX config.

To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap . To use a custom value in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffering: "on"

Proxy buffers Number

Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the proxy buffers number is set to 4.

To configure this setting globally, set proxy-buffers-number in the NGINX ConfigMap . To use a custom value in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffers-number: "4"

Proxy buffer size

Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to "4k".

To configure this setting globally, set proxy-buffer-size in the NGINX ConfigMap . To use a custom value in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"

Proxy max temp file size

When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This annotation sets the maximum size of the temporary file via proxy_max_temp_file_size . The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.

The zero value disables buffering of responses to temporary files.

To use a custom value in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"

Proxy HTTP version

Using the nginx.ingress.kubernetes.io/proxy-http-version annotation sets the proxy_http_version that the NGINX reverse proxy will use to communicate with the backend. By default this is set to "1.1".

SSL ciphers

Specifies the enabled ciphers .

Using the nginx.ingress.kubernetes.io/ssl-ciphers annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host.

The nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers annotation will set the ssl_prefer_server_ciphers directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.

Connection proxy header

Using this annotation will override the default connection header set by NGINX. To use a custom value in an Ingress rule, define the annotation: nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"

Enable Access Log

Access logs are enabled by default, but in some scenarios access logs might need to be disabled for a given ingress. To do this, use the annotation: nginx.ingress.kubernetes.io/enable-access-log: "false"

Enable Rewrite Log

Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation: nginx.ingress.kubernetes.io/enable-rewrite-log: "true"

Enable Opentracing

Opentracing can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. to turn off tracing of external health check endpoints): nginx.ingress.kubernetes.io/enable-opentracing: "true"

Opentracing Trust Incoming Span

The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. only enable on a private endpoint): nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: "true"

Enable Opentelemetry

Opentelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. to turn off telemetry of external health check endpoints): nginx.ingress.kubernetes.io/enable-opentelemetry: "true"

Opentelemetry Trust Incoming Span

The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden for a specific ingress: nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"

X-Forwarded-Prefix Header

To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used: nginx.ingress.kubernetes.io/x-forwarded-prefix: "/path"

ModSecurity

ModSecurity is an open-source web application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap . Note this will enable ModSecurity for all paths, and each path must be disabled manually.

It can be enabled using the following annotation: nginx.ingress.kubernetes.io/enable-modsecurity: "true" . ModSecurity will run in "Detection-Only" mode using the recommended configuration .

You can enable the OWASP Core Rule Set by setting the following annotation: nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"

You can pass transactionIDs from nginx by setting up the following: nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"

You can also add your own set of modsecurity rules via a snippet:

nginx.ingress.kubernetes.io/modsecurity-snippet: |
  SecRuleEngine On
  SecDebugLog /tmp/modsec_debug.log

Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement:

nginx 0.24.1 and below:

nginx.ingress.kubernetes.io/modsecurity-snippet: |
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
  Include /etc/nginx/modsecurity/modsecurity.conf

nginx 0.25.0 and above:

nginx.ingress.kubernetes.io/modsecurity-snippet: |
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

Since version 1.9.0, "modsecurity-snippet" annotation is disabled by default and has to be explicitly enabled, see allow-snippet-annotations . Enabling it can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. See CVE-2021-25742 and the related issue on github for more information.

Backend Protocol

Using the nginx.ingress.kubernetes.io/backend-protocol annotation it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions.) Valid values: HTTP, HTTPS, GRPC, GRPCS and FCGI.

By default NGINX uses HTTP .

Use Regex

When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie , nginx.ingress.kubernetes.io/session-cookie-path must also be set; session cookie paths do not support regex.

Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false .

The following will indicate that regular expression paths are being used: nginx.ingress.kubernetes.io/use-regex: "true"

The following will indicate that regular expression paths are not being used: nginx.ingress.kubernetes.io/use-regex: "false"

When this annotation is set to true , the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

Please read about ingress path matching before using this modifier.

Satisfy

By default, a request would need to satisfy all authentication requirements in order to be allowed. By using the nginx.ingress.kubernetes.io/satisfy annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value, e.g. nginx.ingress.kubernetes.io/satisfy: "any"

Mirror

Enables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful to see how requests will behave in "test" backends.

The mirror backend can be set by applying: nginx.ingress.kubernetes.io/mirror-target: https://test.env.com/$request_uri

By default the request body is sent to the mirror backend, but this can be turned off by applying: nginx.ingress.kubernetes.io/mirror-request-body: "off"

Also, by default the Host header for mirrored requests will be set the same as the host part of the URI in the "mirror-target" annotation. You can override it with the "mirror-host" annotation: nginx.ingress.kubernetes.io/mirror-host: "test.env.com"

Note: The mirror directive will be applied to all paths within the ingress resource.

The request sent to the mirror is linked to the original request. If you have a slow mirror backend, then the original request will throttle.

For more information on the mirror module see ngx_http_mirror_module

Stream snippet

Using the annotation nginx.ingress.kubernetes.io/stream-snippet it is possible to add custom stream configuration.
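For example, the following sketch (the port and upstream address are illustrative) adds an extra TCP server to the stream configuration:

nginx.ingress.kubernetes.io/stream-snippet: |
  server {
    listen 8000;
    proxy_pass 127.0.0.1:80;
  }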

Since version 1.9.0, "stream-snippet" annotation is disabled by default and has to be explicitly enabled, see allow-snippet-annotations . Enabling it can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. See CVE-2021-25742 and the related issue on github for more information.


Kubernetes labels, selectors, and annotations

Cameron Pavey

Kubernetes has many moving parts, and it is essential to wrap your head around quite a few of them if you want to work within Kubernetes efficiently. One of these important aspects is “metadata,” namely labels , selectors , and annotations . These three types of metadata each have their role to play when configuring and working with Kubernetes, whether it is stitching multiple resources together or just providing some more context for developers and DevOps engineers.

In this article, you will see some examples of different types of metadata in action and understand how to work with them—adding, editing, and removing them in various ways—as well as some of the benefits they can each provide.

Kubernetes labels

Labels are a type of metadata in Kubernetes that take on the form of a key-value pair attached to objects such as pods and services. Labels are often used to describe identifying aspects of the object, possibly for use by the user at a later stage. However, like other metadata, labels do not directly change any functionality as they imply no semantics to Kubernetes by default. One of the nice things about labels is that they let you map your own data structures onto objects in a loosely coupled fashion.

For example, your team might have different “release” types, such as “alpha,” “beta,” and “stable.” This would be a solid use case for labels, allowing you to indicate which of these release types a given object falls under. Although selectors can use labels for identification purposes, it is important to remember that they are not unique, as many objects carry the same labels.

You can add labels to your resources in a few different ways. The first and most common way is to add them directly to your config files to set them when the resource is created or updated. To do this, you can specify label values at metadata.labels like so:
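
A minimal sketch (the pod name metadata-demo is reused in the commands below; the label values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: metadata-demo
  labels:
    environment: dev
    release: beta
spec:
  containers:
  - name: nginx
    image: nginx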

The other way to work with labels is via the kubectl CLI tool. This is handy for making small tweaks to your resources, but it is important to remember that the changes will not be reflected back to your config files automatically.

To add a label to an existing resource, you can use the following command:
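
For instance, assuming the metadata-demo pod and an illustrative tier label:

kubectl label pod metadata-demo tier=frontend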

You can also remove the label using this command:
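
kubectl label pod metadata-demo tier-   # the trailing dash removes the key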

Finally, you can also use the edit command to change your running configurations in a more imperative way using kubectl edit pod/metadata-demo . This will open your CLI editor of choice and allow you to add and remove labels and other details. When you save and exit, the changes will apply.

Kubernetes selectors

As their name suggests, label selectors allow you to identify the objects you have tagged with particular labels. Label selectors can either be equality-based or set-based. Equality-based label selectors work by specifying an exact value that you want to match against. If you provide multiple selectors, all of them must be satisfied to qualify as a match.

Set-based selectors work in a similar fashion, except you can specify multiple values in a single selector, and only one of them needs to match for the object to qualify.

Selectors come in handy for a few different things. Probably the most common usage is for grouping the correct resources for something like a service. Consider the following configuration file where a deployment will create a pod with a particular label, and that label is then used by a service to determine its association:
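
A sketch of what such a file might look like (names and label values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metadata-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metadata-demo
  template:
    metadata:
      labels:
        app: metadata-demo
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: metadata-demo-service
spec:
  selector:
    app: metadata-demo
  ports:
  - port: 80
    targetPort: 80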

The significant part of this config is the selector field for the service, which tells the service which pods it should associate with and send traffic to.

Selectors are also commonly used for more “human” operations via the command-line tool. In clusters with many resources running, it can be beneficial to use the selectors to discriminate and quickly identify the resources you are interested in. For example, suppose you wanted to find all of the resources in the above configuration file. In that case, you could use the aforementioned set-based selectors to see everything with one of the matching labels, like so:
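
One possible form, using the labels from the sketch above (the in operator matches any of the listed values):

kubectl get all -l 'app in (metadata-demo)'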

If you are only looking for a specific label, the syntax is similar, if simpler:
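
kubectl get all -l app=metadata-demo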

Kubernetes annotations

Annotations are another type of metadata you can use in Kubernetes. While labels can be used to identify and select objects, annotations cannot. Their intended use is to store arbitrary, non-identifying information about objects. This data is often used to provide context about objects to the human operators of the system. One good example of how you can use annotations is the a8r projects , which establish a convention for “using annotations to help developers manage Kubernetes services.” This includes annotations such as a8r.io/description and a8r.io/bugs , among others. These can be used to store an unstructured text description of the service for humans, as well as a link to an external bug-tracker for the service, respectively.

Because annotations have fewer restrictions on them than labels do, you can store special characters not permitted by labels, as well as large, structured values if your use case requires it. This can be seen when you use the kubectl edit command. For example, when editing a deployment configuration, you can see that the previous configuration state is stored in an annotation as structured data:
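
Trimmed for brevity, the stored state looks roughly like this:

metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",...}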

Like labels, you can add annotations in a few ways, namely via config files or the kubectl command line. Say, for example, you wanted to add some of those a8r.io annotations onto the config example above. That might look something like this if you want to do it as config:
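
A sketch (the description and bug-tracker URL are placeholders):

metadata:
  name: metadata-demo
  annotations:
    a8r.io/description: "Demo pod for the metadata examples"
    a8r.io/bugs: "https://github.com/example/example/issues"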

Or, it might look like this, if you want to do it via the CLI:
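
kubectl annotate pod metadata-demo a8r.io/description="Demo pod for the metadata examples"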

Finally, there is always the option of using the kubectl edit command to alter the configuration on the fly.

While annotations do not inherently imply semantics to the Kubernetes core, it is still possible for them to affect operation in some cases. A good example of this is the NGINX Ingress controller (among others). The NGINX Ingress controller allows you to add Kubernetes annotations to your ingress objects to affect their behavior. Most of these map cleanly to the configuration options available in NGINX, and as such, it is a nice way to map NGINX-specific concepts onto your Kubernetes resources. The NGINX Ingress controller is then able to read these annotations and apply them as needed. An example of this is the nginx.ingress.kubernetes.io/rewrite-target annotation. Much like the a8r.io annotations discussed above, this NGINX annotation is prefixed with a specific scope to avoid conflicts with other annotations that may be similarly named.

There are multiple types of metadata in Kubernetes, with just as many ways to work with them and even more use cases. Metadata is essential to managing larger deployments and keeping everything organized, as it gives you a way to impose your own organizational model onto the Kubernetes resources in a loosely coupled fashion, without directly implying semantics.

As exemplified by features like human-readable annotations, this can provide a lot of value for the humans interacting with the system, as it guides them through the resources and paints a picture of how things work together. Metadata isn't only useful for humans, though; it also provides Kubernetes with a way to group, organize, and associate resources, allowing for larger and more complex structures than would be reasonably viable without it.


Best practices guide for kubernetes labels and annotations.

Aviad Shikloshi, Software Engineering Team Lead

Kubernetes is the de facto container-management technology in the cloud world due to its scalability and reliability. It also provides a very flexible and developer-friendly API, which is the foundation of its control plane.

The effectiveness of the Kubernetes API comes from how it manages the Kubernetes resources via metadata: labels and annotations. Metadata is essential for grouping resources, redirecting requests, and managing deployments. In addition, it is also used to troubleshoot Kubernetes applications .

In this blog post, you will learn the basics and best practices of using labels and annotations.

Kubernetes Labels

Kubernetes labels are metadata attached to Kubernetes resources in order to group, view, and operate on them. Labels are key-value string pairs, where each key must be unique for a given resource.

Let’s take a look at them in action:
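
For example, on a minikube cluster:

kubectl get node minikube --show-labels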

In the previous command, you retrieved the labels of the minikube node, which include information related to the operating system, hostname, and the minikube version running on the node. You can use the labels for retrieving and filtering the data from the Kubernetes API.

Let’s assume you want to get all the pods running the Kubernetes dashboard . You can use the selector k8s-app=kubernetes-dashboard over labels with the following command:
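
kubectl get pods --all-namespaces --selector k8s-app=kubernetes-dashboard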

The hidden gem of Kubernetes labels is that they are heavily used by Kubernetes itself, for example for scheduling pods to nodes, managing replicas of deployments, and routing network traffic for services.

Let's look at some labels and how they are used as selectors in Kubernetes by checking the spec of the kubernetes-dashboard service, for example with kubectl get service kubernetes-dashboard -n kubernetes-dashboard -o yaml (adjust the namespace to wherever the dashboard runs).

Kubernetes uses the labels defined in the selector section to distribute the incoming requests to the kubernetes-dashboard service. With a similar approach, replica sets track the number of pods to maintain replicas running on the cluster. Now let’s check the selector of the replica set for the dashboard:
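
One way to inspect it (assuming the dashboard runs in the kubernetes-dashboard namespace):

kubectl get replicaset -n kubernetes-dashboard -o jsonpath='{.items[0].spec.selector}'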

The matchLabels section tells the replica set controller to keep enough pods with the listed labels running in the cluster. When you release a new version, Kubernetes generates a new pod-template-hash, and the replica set controllers create new pods to match it.

Kubernetes Annotations

Kubernetes annotations are the second way of attaching metadata to the Kubernetes resources. They are pairs of key and value strings that are similar to labels, but which store arbitrary non-identifying data. For instance, you can keep the contact details of the responsible people in the deployment annotations. Similarly, you can attach logging, monitoring, or auditing information for the resources in the annotations format.

The main difference between annotations and labels is that annotations are not used to filter, group, or operate on the resources. Rather, they are used to easily access additional information about the Kubernetes resources.

For instance, CRI socket or volume controller annotations show how the node works, instead of its characteristics, in the following example:
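
For example, a node often carries annotations like these (values vary by cluster):

kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
volumes.kubernetes.io/controller-managed-attach-detach: "true"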

Client tools and Kubernetes users can retrieve this metadata and act on it accordingly. You can think of the data kept in annotations as the kind of information you might otherwise keep in spreadsheets or databases, except that it is attached directly to the resources. Accordingly, there is no selector implementation for annotations in the Kubernetes API as there is for labels.

Best Practices

Now that we’ve covered the fundamentals of Kubernetes labels and annotations, it’s time to explore the best practices for using them most beneficially.

Use the Correct Syntax

Annotations and labels are key-value pairs. Keys consist of two parts: an optional (but highly suggested) prefix and a name:

  • Prefix: If specified, the prefix should be a DNS subdomain no longer than 253 characters and ending with a slash. For example: k8s.komodor.com/
  • Name: This is required and limited to 63 characters.

When the prefix is omitted, the label or annotation is presumed to be private to your cluster and users. When a prefix and name are used together, the data is intended to be consumed by multiple clients, as with the following well-known keys:

  • app.kubernetes.io/version
  • app.kubernetes.io/component
  • helm.sh/chart

Using the correct syntax for labels and annotations makes it easier to communicate within your team and use the cluster with client tools and libraries such as kubectl, Helm, and operators. Therefore, it is suggested to choose a prefix for your company and sub-prefixes for your projects. This company-wide consensus will help you utilize labels and annotations to their full power.

Learn When to Use Labels and Annotations

As mentioned earlier, the main difference between labels and annotations is whether they are identifiers or not. If you want to attach information to group resources and filter, you should keep the data as labels. Use annotations if the metadata is not an identifier, but rather additional data related to the Kubernetes resources.

For instance, the following pod has two labels and two annotations:
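
A sketch of such a pod (the annotation values are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: nginx
    environment: production
  annotations:
    komodor.com/owner: "@example-owner"
    komodor.com/slack-channel: "#platform-team"
spec:
  containers:
  - name: nginx
    image: nginx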

In the demo pod, labels classify it as being an nginx application running in production. Annotations show the owner and communication data. If you plan to group pods by owners in the future, it is suggested to move komodor.com/owner to labels.

Using labels and annotations with the correct use cases is vital to have an easy-to-operate cluster with automated tools. Therefore, ensure that your labels and annotations are not overlapping in terms of data and usage.

Exploit the Standard Labels and Annotations

Kubernetes reserves all the labels and annotations under the kubernetes.io domain name and keeps a list of well-known ones in the official documentation . You may have seen some of them in the Kubernetes dashboard or in resource definitions, such as the following (a small sample of the documented list):
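
  • kubernetes.io/hostname
  • kubernetes.io/os and kubernetes.io/arch
  • node.kubernetes.io/instance-type
  • topology.kubernetes.io/region and topology.kubernetes.io/zone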

The main advantage of this metadata is that the Kubernetes machinery automatically fills values of the standard labels and annotations. Thus, it is suggested to use the well-known labels and annotations in your daily operations and client tools, such as Helm, Terraform, or kubectl.

Use Labels for Release Management

Releasing distributed microservices applications to the cloud is not straightforward, as you have an excessively high number of small applications—each with its own version. Therefore, most developers only change the version of a single application out of a hundred and test the rest of the system. Fortunately, you can use labels for grouping and filtering the applications running on Kubernetes.

Let’s assume you have a backend service that has multiple pods running behind it with the labels version:v1 and app:backend . You can deploy a new set of backend instances to the cluster and change the service label selector to version:v2 and app:backend . Now, all requests coming to the backend service will reach v2 instances. Luckily, switching back to v1 is pretty easy, as you only need to change the service specification.
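
A sketch of the switch (only the selector changes between releases):

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
    version: v2   # previously v1; changing this re-routes all traffic
  ports:
  - port: 80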

This procedure is also known as the Blue/Green deployment strategy. In addition, you can easily implement A/B testing and canary release strategies with the help of Kubernetes labels.

Learn How To Manipulate Labels for Troubleshooting

The last best practice is for the Kubernetes operators who need to debug applications running inside the cluster. Let’s assume you have a deployment with the following selector labels:

  • app.kubernetes.io/name: my-complex-app
  • app.kubernetes.io/instance: prod-1
  • app.kubernetes.io/version: "1.1.0"

All pods of the deployment will also have the same set of labels. You cannot modify a Deployment's selector in place (it is immutable in apps/v1), but you can change the labels on a running pod so that it no longer matches the selector. This orphans the pod from its replica set, and you can exec into it for debugging.

The replica set will create a replacement pod carrying the original labels, so your production setup will continue living as expected, with an additional, orphaned pod that you can analyze further for troubleshooting. You can interfere with the operations of Kubernetes and troubleshoot your applications when you know how labels are designed and used by Kubernetes.
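
A sketch of the label change (the pod name and replacement value are illustrative):

kubectl label pod my-complex-app-7d9f8b6c4-x2x9k app.kubernetes.io/name=my-complex-app-debug --overwrite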

In this blog post, we covered the fundamentals of Kubernetes labels and annotations through examples and best practices that are essential to bringing the power of metadata tools to light. Using the correct syntax with the intended aim will make your labels and annotations more meaningful and maintainable. In addition, you can exploit the standard labels of Kubernetes with prepopulated data in your applications. Finally, labels are helpful for cloud-native release management and application debugging.

In order to gain overall control and visibility into your Kubernetes clusters, check out Komodor and our Kubernetes-native troubleshooting solution. This will simplify the complex and distributed environment of Kubernetes and help you understand what is actually happening in your clusters.

Sign up for a free trial to see how troubleshooting intelligently while leveraging your existing stack can make a difference.


Kubernetes Labels | Labels And Annotations In Kubernetes – Everything You Need To Know

May 10, 2023 by Piyush Jain

Kubernetes is a container orchestration tool that provides a platform for automating the deployment, scaling, and management of our containers. Kubernetes Labels and Kubernetes Annotations are two of its main metadata components. They both provide a way to add additional metadata to our Kubernetes objects.

Are you new to Kubernetes? Check out our blog Kubernetes for Beginners  to know in detail.

In this blog, we will be covering:

  • What are Kubernetes Labels?
  • Creating Labels

Label Selectors

  • Service and Labels
  • What are Annotations?
  • Creating Annotations
  • Kubernetes Labels vs Annotations

What Are Kubernetes Labels?

Labels in Kubernetes are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users but are not used by Kubernetes itself. Labels are fundamental qualities of the object that will be used for grouping, viewing, and operating. Each object can have a set of key/value labels defined, and each key must be unique for a given object.

Labels have a simple syntax, where both the key and value are represented by strings. Label keys can be broken down into two parts: an optional prefix and a name, separated by a slash (/). The prefix can be a DNS sub-domain with a 253-character limit. The key name is required and must be shorter than 63 characters, beginning and ending with an alphanumeric character (a-z0-9A-Z), with dashes (-), underscores (_), dots (.), and alphanumerics between.

Label values are strings with a maximum length of 63 characters. The contents of label values follow the same rules as label keys. Here are some examples of labels in Kubernetes:

  • key: kubernetes.io/cluster-service, value: true
  • key: appVersion, value: 1.0.0


Creating Labels In Kubernetes

Labels are attached to Kubernetes objects and are always defined in the metadata section of the configuration files, as in the example below.


Here's the configuration file for a Pod that has two labels, "environment: dev" and "app: <name of the app>".
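
One plausible version of the file (the app value is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: dev
    app: my-app
spec:
  containers:
  - name: nginx
    image: nginx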

Save the file as pod.yaml, create the Pod, and then view its labels with the following commands:
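
kubectl apply -f pod.yaml
kubectl get pods --show-labels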


Label Selectors

Label selectors are used to filter Kubernetes objects based on a set of labels. Selectors use a simple Boolean language. There are two kinds of selectors: equality-based and set-based.

Equality based

Equality-based selectors allow filtering by label keys and values. Three operators are used: =, ==, and !=. For example, if we wanted to list all the Pods that have the environment label set to dev, we can use the selector flag:
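
kubectl get pods --selector environment=dev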


Set based

Set-based label selectors allow filtering keys according to a set of values. Three operators are used: in, notin, and exists. For example, if we wanted to list all the Pods that have the environment label set to dev or production:
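
kubectl get pods -l 'environment in (dev, production)'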


Services And Labels

A Service sits in front of our Pods and distributes requests to them, routing traffic across the Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting the application. They use Kubernetes Labels and Selectors to match a set of Pods, as in the sketch below.
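
A minimal sketch of a Service selecting Pods by label (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80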


What Are Kubernetes Annotations?

Annotations in Kubernetes provide a place to store non-identifying metadata for Kubernetes Objects which can be used to get a more elaborate context for an object. Annotations are also key/value pairs like Labels. Annotation keys use the same format as Label keys.

Why Are Annotations Used?

  • Keep track of a “reason” for the latest upgrade to an object.
  • Communicate a specialized scheduling policy to a specialized scheduler.
  • Describe the last tool that updated the resource and how it was updated (used for detecting changes by other tools and doing a smart merge).
  • Attach build, release, or image information that isn’t appropriate for labels.
  • Enable the Deployment object to keep track of ReplicaSets that it is managing for rollouts.
  • Phone or pager numbers of persons responsible, or directory entries that specify where that information can be found, such as a team web site.


Creating Annotations In Kubernetes

Annotations are defined in the common metadata section of every Kubernetes object. Here's the configuration file for a Pod that has an annotation:
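
A plausible version, borrowing the well-known example from the Kubernetes documentation:

apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx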

Save the above file as annotations.yaml and run the following command:
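
kubectl apply -f annotations.yaml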


To see the annotations, use the following command:
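
kubectl describe pod annotations-demo   # Pod name from the sketch above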


Difference Between Labels And Annotations

Kubernetes Labels and Kubernetes Annotations are both used to add metadata to our Kubernetes objects, but there is a difference between them. Kubernetes Labels allow us to group our objects so that we can perform queries for viewing and operating on them.

Kubernetes Annotations are used for adding non-identifying metadata to Kubernetes objects. This metadata information is only for the user. Annotations can hold any kind of information that is useful and can provide context to DevOps teams. Examples include phone numbers of persons responsible for the object or tool information for debugging purposes.


Annotations

You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata.

Attaching metadata to objects

Syntax and character set

  • What's next

You can use either labels or annotations to attach metadata to Kubernetes objects. Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.

Annotations, like labels, are key/value maps:
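
"metadata": {
  "annotations": {
    "key1": "value1",
    "key2": "value2"
  }
}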

Here are some examples of information that could be recorded in annotations:

Fields managed by a declarative configuration layer. Attaching these fields as annotations distinguishes them from default values set by clients or servers, and from auto-generated fields and fields set by auto-sizing or auto-scaling systems.

Build, release, or image information like timestamps, release IDs, git branch, PR numbers, image hashes, and registry address.

Pointers to logging, monitoring, analytics, or audit repositories.

Client library or tool information that can be used for debugging purposes: for example, name, version, and build information.

User or tool/system provenance information, such as URLs of related objects from other ecosystem components.

Lightweight rollout tool metadata: for example, config or checkpoints.

Phone or pager numbers of persons responsible, or directory entries that specify where that information can be found, such as a team web site.

Directives from the end-user to the implementations to modify behavior or engage non-standard features.

Instead of using annotations, you could store this type of information in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, and the like.

Annotations are key/value pairs. Valid annotation keys have two segments: an optional prefix and name, separated by a slash ( / ). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character ( [a-z0-9A-Z] ) with dashes ( - ), underscores ( _ ), dots ( . ), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots ( . ), not longer than 253 characters in total, followed by a slash ( / ).

If the prefix is omitted, the annotation Key is presumed to be private to the user. Automated system components (e.g. kube-scheduler , kube-controller-manager , kube-apiserver , kubectl , or other third-party automation) which add annotations to end-user objects must specify a prefix.

The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.

For example, here’s the configuration file for a Pod that has the annotation imageregistry: https://hub.docker.com/ :
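
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80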

What's next

Learn more about Labels and Selectors .


GoLinuxCloud

Kubernetes labels, selectors & annotations with examples

Table of Contents

Kubernetes provides two basic ways to document your infrastructure: labels and annotations. We have used labels in some of the examples in previous articles, but here I will explain the usage of labels and other related terminology.

  • Labels give us another level of categorization, which becomes very helpful in terms of everyday operations and management.
  • Labels are attached to Kubernetes objects and are simple key: value pairs.
  • You will see them on pods, replication controllers, replica sets, services, and so on.
  • Labels themselves and the keys/values inside of them are based on a constrained set of variables, so that queries against them can be evaluated efficiently using optimized algorithms and data structures.
  • Labels are used for organization and selection of subsets of objects, and can be added to objects at creation time and/or modified at any time during cluster operations.

Let’s use an easy example to demonstrate. Suppose you wanted to identify a pod as being part of the front-end tier of your application. You might create a label named tier and assign it a value of frontend—like so:
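
metadata:
  labels:
    tier: frontend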

The text “ tier ” is the key, and the text “ frontend ” is the value.

Labels are queryable — which makes them especially useful in organizing things. The mechanism for this query is a label selector . A label selector is a string that identifies which labels you are trying to match. There are currently two types of selectors: equality-based and set-based selectors .

Equality-based selector

An equality-based test is just a “ IS/IS NOT ” test. For example:
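
tier = frontend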

will return all pods that have a label with the key “tier” and the value “frontend”. On the other hand, if we wanted to get all the pods that were not in the frontend tier, we would say:
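
tier != frontend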

You can also combine requirements with commas like so:
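
game = super-shooter-2, tier != frontend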

This would return all pods that were part of the game named super-shooter-2 but were not in its frontend tier.

Set-based selectors

Set-based tests, on the other hand, are of the “ IN/NOT IN ” variety. For example:
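
environment in (production, qa)
tier notin (frontend, backend)
partition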

The first test returns pods that have the environment label and a value of either production or qa . The next test returns all the pods not in the frontend or backend tiers. Finally, the third test will return all pods that have the partition label—no matter what value it contains.

Like equality-based tests, these can also be combined with commas to perform an AND operation like so:
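
environment in (production, qa), tier notin (frontend, backend), partition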

This test returns all pods that are in either the production or qa environment, also not in either the frontend or backend tiers, and have a partition label of some kind.

Annotations

Annotations are bits of useful information you might want to store about a pod (or cluster, node, etc.) that you will not have to query against. They are also key/value pairs and have the same rules as labels.

Examples of things you might put there are the pager contact, the build date, or a pointer to more information someplace else—like a URL.

Labels are used to store identifying information about a thing that you might need to query against. Annotations are used to store other arbitrary information that would be handy to have close but won’t need to be filtered or searched.

Assigning a label to a Deployment

Method-1: Assign labels while creating a new object

It is always a good idea to start from the YAML template of an existing resource object. Since we plan to create a deployment, I can use a YAML template from any existing deployment; but if you don't have an existing template, you can generate one using --dry-run and export it into a YAML file:
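
One way to generate such a template (the image is illustrative):

kubectl create deployment label-nginx-example --image=nginx --dry-run=client -o yaml > label-nginx-example.yaml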

So we now have a template file for a new deployment with the following content:
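
Roughly like the following (trimmed; exact fields vary by kubectl version):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: label-nginx-example
  name: label-nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: label-nginx-example
  template:
    metadata:
      labels:
        app: label-nginx-example
    spec:
      containers:
      - image: nginx
        name: nginx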

We can perform some cleanup in this file and remove unwanted content. You can see that by default kubectl has created and assigned the label app: label-nginx-example; I will replace that and assign a new label, app: prod, to our deployment.

Next, let us create the new deployment and then list the available deployments and pods with their labels:
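
kubectl create -f label-nginx-example.yaml
kubectl get deployments --show-labels
kubectl get pods --show-labels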

Method-2: Assign a new label to existing Pods at runtime as a patch

In this example we will assign a new label tier: frontend to the existing Pods from the deployment label-nginx-example which we created in the previous example. To achieve this, we need to create a spec file with the required properties:
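
A sketch of the patch file (saved here as label-patch.yaml):

spec:
  template:
    metadata:
      labels:
        tier: frontend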

Next patch the deployment with this YAML file:
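
kubectl patch deployment label-nginx-example --patch "$(cat label-patch.yaml)"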

Now you can use kubectl describe to check whether the label was applied to the deployment, for example with kubectl describe deployment label-nginx-example. The patch also applies the label to the running Pods.

Method-3: Assign a new label to existing deployments at runtime using kubectl

In this example we assign a new label at runtime using the kubectl command. I have another deployment, nginx-deploy, on my cluster, so I will assign the label tier: backend to this deployment and then verify that it was applied:
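
kubectl label deployment nginx-deploy tier=backend
kubectl get deployment nginx-deploy --show-labels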

Using labels to list resource objects

Now, I have already used some of these commands in the previous examples, but let me summarize them all here for your reference.

The commands below list all the pods with their label details, all the deployments with their label details, and all resources with assigned labels, and then filter deployments by the type: dev label and pods by the app: prod label:
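
kubectl get pods --show-labels          # pods with their labels
kubectl get deployments --show-labels   # deployments with their labels
kubectl get all --show-labels           # all resources with their labels
kubectl get deployments -l type=dev     # deployments using the type: dev label
kubectl get pods -l app=prod            # pods using the app: prod label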

Using selector to list resource objects

In this section we will use selectors to list the deployments and pods. The selector determines which Pods a Deployment's ReplicaSet manages, which is why we defined a selector when creating our deployments.

But to demonstrate, I will create another deployment here with two labels and use one of the labels as the selector:
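
A sketch consistent with the rest of this example (two labels on the deployment, selector on app: dev):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-nginx
  labels:
    app: dev
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dev
  template:
    metadata:
      labels:
        app: dev
    spec:
      containers:
      - name: nginx
        image: nginx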

Now we will create this deployment and list the applied labels on the newly created pods and deployment:
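
kubectl apply -f lab-nginx.yaml
kubectl get deployments,pods --show-labels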

So, our lab-nginx deployment carries two labels, but its Pod template only carries the selector label app=dev, so we can use that label to filter the list of pods:
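
kubectl get pods -l app=dev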

But will we get any output if we use tier=backend as the selector?
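
kubectl get pods -l tier=backend   # returns: No resources found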

No resources found, as the Pods created by the lab-nginx deployment only carry the selector label app=dev.

To list all the deployments with the selector app: prod we can use an equality-based selector, as in the earlier examples; we can also use a set-based selector for the same purpose. Both are shown below:
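
kubectl get deployments -l app=prod               # equality-based
kubectl get deployments -l 'app in (prod, dev)'   # set-based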

Removing labels

We can also remove a label from a resource object using the following syntax:
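
kubectl label <resource-type> <resource-name> <label-key>-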

For example, to remove the label app: dev from a Pod created by the lab-nginx deployment (use the Pod name from the previous listing):
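
kubectl label pod <pod-name> app-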

As soon as we remove the selector label from a Pod, the ReplicaSet will create another Pod to fulfil the replica requirement of the deployment. Since lab-nginx expects two replica Pods with the label app=dev, a new one will be created as soon as we remove the label from an existing Pod.

So, as expected, you can see that as soon as I removed the app label from an existing lab-nginx Pod, a new one started getting created.

Similarly, we can also remove a label from a deployment. Here we remove the app label from the lab-nginx deployment and then verify the labels that remain:
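
kubectl label deployment lab-nginx app-
kubectl get deployment lab-nginx --show-labels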

In this Kubernetes tutorial we learned about the usage of labels, selectors, and annotations with different examples. To summarize, labels and annotations help you organize your Pods as your cluster grows in size and scope. They are mostly used with replication controllers and replica sets in a deployment. You also learned that we can assign, modify, and remove labels from different Kubernetes resources at runtime.


Daniele Polencic

In Kubernetes, you can use labels to assign key-value pairs to any resources.

Labels are ubiquitous and necessary to everyday operations such as creating services.

However, how should you name and use those labels?

Any resource in Kubernetes can have labels.

Some labels are vital (e.g. service’s selector, operators, etc.), and others are useful to tag resources (e.g. labelling a deployment).

Kubectl offers a --show-labels flag to help you list resources and their labels.

If you list pods, deployments and services in an empty cluster, you might notice that Kubernetes uses the component=<name> label to tag pods.

Kubernetes recommends six labels for your resources:
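
  • app.kubernetes.io/name
  • app.kubernetes.io/instance
  • app.kubernetes.io/version
  • app.kubernetes.io/component
  • app.kubernetes.io/part-of
  • app.kubernetes.io/managed-by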

Let’s look at an excellent example of using those labels: the Prometheus Helm chart.

The chart installs five pods (i.e. server, alertmanager, node exporter, push gateway and kube state metrics).

Notice how not all labels are applied to all pods.

Labelling resources properly helps you make sense of what’s deployed.

For example, you can filter results with kubectl:
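
kubectl get pods -l 'environment in (staging, dev)'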

The command above only lists pods in staging and dev.

If those labels are not what you are after, you can always create your own.

A <prefix>/<name> key is recommended — e.g. company.com/database .

The following labels could be used in a multitenant cluster:

  • Business unit
  • Development team
  • Application
  • Shared services
  • Environment
  • Asset classification

Alongside labels, you have annotations.

Whereas labels are used to select resources, annotations decorate resources with metadata.

You cannot select resources with annotations.

Administrators can assign annotations to any workload.

However, more often, Kubernetes and operators decorate resources with extra annotations.

A good example is the annotation kubernetes.io/ingress-bandwidth to assign bandwidth to pods.

The official documentation has a list of well-known labels and annotations.

Here are some examples:

  • kubectl.kubernetes.io/default-container
  • topology.kubernetes.io/region
  • node.kubernetes.io/instance-type
  • kubernetes.io/egress-bandwidth

Annotations are used extensively in operators.

Look at all the annotations you can use with the ingress-nginx controller.

Unfortunately, using operators/cloud providers/etc. annotations is not always a good idea if you wish to stay vendor-neutral.

However, sometimes it’s also the only option (e.g. having an AWS ALB deployed in the correct subnet when using a service of type LoadBalancer).

Here are a few links if you want to learn more:

  • The Guide to Kubernetes Labels
  • Well-Known Labels, Annotations and Taints
  • Recommended Labels
  • Label standard and best practices for Kubernetes security

And finally, if you’ve enjoyed this thread, you might also like:

  • The Kubernetes workshops that we run at Learnk8s.
  • This collection of past threads.
  • The Kubernetes newsletter I publish every week.


kubectl Cheat Sheet

This page contains a list of commonly used kubectl commands and flags.

Kubectl autocomplete
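
In bash, for example (requires the bash-completion package):

source <(kubectl completion bash)                       # set up autocomplete in the current shell
echo "source <(kubectl completion bash)" >> ~/.bashrc   # add autocomplete permanently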

You can also use a shorthand alias for kubectl that also works with completion:
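
alias k=kubectl
complete -o default -F __start_kubectl k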

Requires kubectl version 1.23 or above.

A note on --all-namespaces

Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces :
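
kubectl get pods -A   # equivalent to kubectl get pods --all-namespaces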

Kubectl context and configuration

Set which Kubernetes cluster kubectl communicates with and modifies configuration information. See Authenticating Across Clusters with kubeconfig documentation for detailed config file information.

Kubectl apply

apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply . This is the recommended way of managing Kubernetes applications on production. See Kubectl Book .

Creating objects

Kubernetes manifests can be defined in YAML or JSON. The file extension .yaml , .yml , and .json can be used.
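
For example (manifest paths are illustrative):

kubectl apply -f ./my-manifest.yaml         # create resource(s)
kubectl apply -f ./my1.yaml -f ./my2.yaml   # create from multiple files
kubectl apply -f ./dir                      # create resource(s) from all manifests in dir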

Viewing and finding resources

Updating, patching, and editing resources

Edit any API resource in your preferred editor.

Scaling resources

Deleting resources; interacting with running pods; copying files and directories to and from containers; interacting with deployments, services, nodes, and the cluster; resource types.

List all supported resource types along with their shortnames, API group , whether they are namespaced , and kind :
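
kubectl api-resources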

Other operations for exploring API resources:
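
kubectl api-resources --namespaced=true    # all namespaced resources
kubectl api-resources --namespaced=false   # all non-namespaced resources
kubectl api-resources -o wide              # expanded ("wide") output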

Formatting output

To output details to your terminal window in a specific format, add the -o (or --output ) flag to a supported kubectl command.

Examples using -o=custom-columns :
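
kubectl get pods -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[*].image'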

More examples in the kubectl reference documentation .

Kubectl output verbosity and debugging

Kubectl verbosity is controlled with the -v or --v flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described here .

What's next

Read the kubectl overview and learn about JsonPath .

See kubectl options.

Also read kubectl Usage Conventions to understand how to use kubectl in reusable scripts.

See more community kubectl cheatsheets .


How to Use Kubernetes Annotations?

Annotations are key-value pairs that are used to attach non-identifying metadata to Kubernetes objects. Various tools that are built over Kubernetes use this metadata attached by annotations to perform actions or enhance resource management. Labels and Annotations are used to attach metadata to Kubernetes objects. This makes annotations important for working with Kubernetes clusters. To learn more about Kubernetes cluster and its architecture refer to Kubernetes – Architecture .

Types of Information That Can Be Stored in Annotations

  • Fields managed by a declarative config layer.
  • Information related to build and release.
  • Name and version details.
  • Registry address, branch, pull request number, and image hashes.
  • Pointers to various repositories for logging, monitoring, analytics, and audit.

Kubernetes Objects

Objects are the fundamental units in Kubernetes that represent the desired state of the cluster. The desired state means what containers should be running, on what nodes those containers should be running, what resources should be used for those containers, and some other features like policies, upgrades, fault tolerance, etc. To know more about Kubernetes Objects read this GeeksforGeeks article: Kubernetes – Services

Kubernetes Metadata

Metadata is the part of a Kubernetes configuration file that consists of labels, resources, and attributes. In the UI, this metadata is displayed alongside the container details. In Kubernetes configuration files, metadata mostly consists of "name" and "label" fields; an example is given below. To know more about Kubernetes deployments, refer to Kubernetes – Deployments. Even the Kubernetes ReplicaSet makes use of annotations.

nginx-deployment.yaml
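
A plausible version of the file (image and replica count are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx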

Kubernetes Metadata includes the following:

  • Kubernetes Attributes

How To Write Annotations

Annotations, just like labels, are key-value pairs. Therefore, to write an annotation, we add a "key" and a corresponding "value" for the key. Kubernetes only allows keys and values to be strings, so any other data type (boolean, int, etc.) must be avoided. The syntax for annotations looks like the following:
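
metadata:
  annotations:
    key1: "value1"
    key2: "value2"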

1. Annotation Keys: A valid annotation key must have a name; a prefix is optional. You can use alphanumeric characters for names and prefixes, with dashes ("-"), underscores ("_"), and dots (".") as separators. The name and prefix are separated by a slash ("/"). According to Kubernetes convention, private user keys don't have a prefix, while system components that add annotations to end-user objects (such as kube-scheduler or kube-controller-manager) always hold a prefix.

An example of an annotation key that has a name and a prefix, shown together with its value, is the following:
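
a8r.io/owner: "@1shubham7"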

Here @1shubham7 is my GitHub username. An example of an annotation key that has a name but not a prefix is the following:
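
owner: "@1shubham7"   # no prefix, so the key is presumed private to the user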

An example service file using Annotations looks like this:
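
A sketch of example.yaml (the selector and port are illustrative; the annotation matches the steps below):

apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    a8r.io/owner: "@1shubham7"
spec:
  selector:
    app: example
  ports:
  - port: 80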


a8r.io/owner represents the GitHub username, email address linked to the GitHub account, or unstructured owner description.

2. Values: Annotation Values are the values or information associated with the corresponding Annotation key. In the above example, Annotation value for a8r.io/owner is “@1shubham7”.

Convention For Annotations In Kubernetes Services

There is a convention for annotations in Kubernetes to ensure consistency and readability. A documented set of conventions is given below:
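
A sampling of the a8r.io convention keys (see the a8r.io project for the full set):

  • a8r.io/description: unstructured text description of the service
  • a8r.io/owner: owner or team responsible for the service
  • a8r.io/chat: chat channel for questions about the service
  • a8r.io/bugs: link to the external bug tracker
  • a8r.io/logs: link to the service's logs
  • a8r.io/repository: link to the source code repository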

Step By Step Guide To Use Annotations

For using annotations in a Kubernetes services file, follow the given steps:

Step 1. Create a file and name it example.yaml. You can create the empty file from your terminal with touch example.yaml.

Step 2. Open a text editor to add code to the file. You can run vim example.yaml in your terminal to open the file in the Vim text editor.

Step 3. Add the Service definition shown earlier to the configuration file. Here we have created a Service file using the annotation key "a8r.io/owner", which corresponds to the value "@1shubham7", a username.

Step 4. Run kubectl get services in your terminal to list all the services that have been created on your local machine. If you have never created any services, only the auto-generated default Kubernetes service will be shown in the output.

Step 5. Apply the configuration file with kubectl apply -f example.yaml to create a Kubernetes service named example. Now when you list the services again with kubectl get services, you will see the example service in the output along with the other services.

Step 6. Run kubectl describe service example to display detailed information about the example service. In the "Annotations" section of the output you will find the annotations created in the previous steps.

Alternatively, you can read the annotations directly with kubectl get service example -o jsonpath='{.metadata.annotations}'.

Here we can see that we have an annotation key (“a8r.io/owner”) and a value (“@1shubham7”) corresponding to it.

How To Add Annotations Using CLI Commands

We discussed above how to add annotations in a configuration file; now we will discuss how to add annotations from the CLI itself. Note that we will continue with the same example.yaml service file from above and add annotations to the example service created earlier. To add annotations with CLI commands, follow the given steps:

Step 1. Run kubectl describe service example in your terminal to list the current annotations. The output will include the annotation we created in the step-by-step guide above.

Step 2. "kubectl annotate" is used to add annotations to the service directly from the CLI. Enter the following command in the terminal:
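
A sketch (the description text is a placeholder):

kubectl annotate service example a8r.io/description="An example service"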

The annotation has been added to the example service.

Step 3. You can view the current annotations with the same describe command as before. Since we have added a new annotation, it will now appear in the output.

In general, you can find the annotations of any service with the "kubectl describe service" command, for example kubectl describe service example.

Performing CRUD In Kubernetes Annotations

In this section we will discuss how to create, update and delete Kubernetes annotations. All these operations are performed by the “annotate” command in the CLI. We will learn how to add an annotation, update a single annotation, update all annotations in a namespace and how to delete an annotation.

Adding An Annotation

For adding an annotation, we simply use the "annotate" command as we used earlier. Enter the following command in your terminal:
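
For instance, adding an illustrative chat-channel annotation:

kubectl annotate service example a8r.io/chat="#example-channel"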

You can then check the service with the "kubectl describe" command as before; the output will show the additional annotation on our example service.

Updating An Annotation

For updating an annotation, we use the --overwrite flag with the annotate command. Since we are working with a service here, the command syntax for updating an annotation is:
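
kubectl annotate service <service-name> <annotation-key>=<new-value> --overwrite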

For example, to update the description annotation, enter the following command in your terminal (the new value is a placeholder):
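
kubectl annotate service example a8r.io/description="An updated description" --overwrite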

Now if you check the service information with the "kubectl describe" command as before, you will see the updated annotation in the output.

Updating Annotation In All The Services

To update an annotation on all the services in a namespace, we use the --all flag. We use the following command syntax in the terminal:
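
kubectl annotate services --all <annotation-key>=<value> --overwrite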


Updating Annotation In All The Pods

Similar to updating annotations on all the services together, to update annotations on all the pods in a namespace we use the --all flag, with the following command syntax:
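
kubectl annotate pods --all <annotation-key>=<value> --overwrite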

Deleting An Annotation

For deleting an annotation, we add a dash ("-") at the end of the annotation key. To delete the description annotation, enter the following command in your terminal:
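
kubectl annotate service example a8r.io/description-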

This will delete the description annotation. Now when you check the example service information with kubectl describe service example, you will see that only two annotations are left and the description annotation has been removed.

Benefits of using Annotations

  • Human Service Discovery: We often realize the need for human service discovery when production breaks or when we have better metrics to implement. Using annotations early helps with human service discovery later.
  • Building Versions: Using annotations is an essential part of building an effective "version 0" of your services.
  • Documentation: Annotations also help document the services so that it is easier for others to understand them.
  • Tooling: Various tools and Kubernetes client-side libraries use the metadata attached by annotations.
  • Versioning: As discussed in the introduction of the article, annotations can be used to provide version information, the registry address, the Git branch and pull request number, and image hashes.
  • Integration With External Systems: Annotations can be used to store references or metadata required by external systems or services. This helps with integration with CI/CD pipelines , external databases, or configuration management systems.

Difference Between Annotations And Labels In Kubernetes

Similarities Between Annotations And Labels In Kubernetes

  • Both annotations and labels are key-value maps. In both Annotation and labels we have a key and a corresponding value attached with it. These keys and values in both annotations and labels are strings and cannot be of any other datatype (boolean, int etc.)
  • Both annotations and labels are used to attach metadata to Kubernetes resources. They are used to provide context, documentation, or categorization for resources. This metadata is then used later on by libraries and tools build upon Kubernetes.
  • Both annotations and labels offer flexibility for storing data. We can use annotations and labels both to store custom metadata respective to our needs.

In this article we discussed annotations in Kubernetes. We started with the definition: annotations are key-value pairs associated with Kubernetes objects that attach non-identifying metadata to those objects. We then discussed topics like Kubernetes objects and metadata, and walked through a step-by-step guide to creating annotations, which was the central theme of this article, with examples and proper code snippets. We also learned about adding, updating, and deleting annotations, which is mostly done using the "kubectl annotate" command. After that, we discussed the benefits of annotations and the differences and similarities between annotations and labels in Kubernetes. Finally, we answered some frequently asked questions about annotations.

FAQs On Kubernetes Annotations

1. Is It Necessary To Add Annotations In A Service File?

No, it is not necessary to add annotations in a service file.

2. Can We Add Annotations On The Services In CLI?

Yes, we can add annotations on the services in the CLI by using the "kubectl annotate" command, but it is not the recommended method.

3. Are Annotations Same As Labels?

No, annotations and labels are different and serve different purposes.


Kubernetes annotations and labels

This page provides a complete list of all the annotations you can specify when you run Kong Mesh in Kubernetes mode.

kuma.io/sidecar-injection

Enable or disable sidecar injection.

When used on a namespace, it injects the sidecar into all Pods created in that namespace:
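A minimal sketch (the namespace name is a placeholder; in Kuma-based meshes sidecar injection is set as a label):

apiVersion: v1
kind: Namespace
metadata:
  name: kuma-demo
  labels:
    kuma.io/sidecar-injection: enabled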

When used on a Deployment's Pod template, it injects the sidecar into all Pods managed by that Deployment:
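A corresponding sketch for a Deployment (names and the image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
        kuma.io/sidecar-injection: enabled
    spec:
      containers:
        - name: app
          image: nginx:1.25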

Labeling Pods or Deployments takes precedence over the namespace annotation.

Annotations

kuma.io/mesh

Associate Pods with a particular Mesh. Annotation value must be the name of a Mesh resource.

It can be used on an entire namespace:
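For example (a sketch; the namespace and mesh names are placeholders):

apiVersion: v1
kind: Namespace
metadata:
  name: kuma-demo
  annotations:
    kuma.io/mesh: default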

It can be used on a pod:
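And a sketch for a single Pod (again, names and the image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    kuma.io/mesh: default
spec:
  containers:
    - name: app
      image: nginx:1.25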

Annotating Pods or Deployments takes precedence over the namespace annotation.

kuma.io/gateway

Lets you specify that the Pod should run in gateway mode. Inbound listeners are not generated.

kuma.io/ingress

Marks the Pod as the Zone Ingress. Needed for multizone communication – provides the entry point for traffic from other zones.

kuma.io/ingress-public-address

Specifies the public address for Ingress. If not provided, Kong Mesh picks the address from the Ingress Service.

kuma.io/ingress-public-port

Specifies the public port for Ingress. If not provided, Kong Mesh picks the port from the Ingress Service.

kuma.io/direct-access-services

Defines a comma-separated list of Services that can be accessed directly.

When you provide this annotation, Kong Mesh generates a listener for each IP address and redirects traffic through a direct-access cluster that’s configured to encrypt connections.

These listeners are needed because transparent proxy and mTLS assume a single IP per cluster (for example, the ClusterIP of a Kubernetes Service). If you pass requests to direct IP addresses, Envoy considers them unknown destinations and manages them in passthrough mode – which means they’re not encrypted with mTLS. The direct-access cluster enables encryption anyway.

WARNING : You should specify this annotation only if you really need it. Generating listeners for every endpoint makes the xDS snapshot very large.

kuma.io/virtual-probes

Enables automatic conversion of HttpGet probes to virtual probes. The virtual probe is served on a sub-path of the insecure port specified with kuma.io/virtual-probes-port – for example, :8080/health/readiness -> :9000/8080/health/readiness , where 9000 is the value of the kuma.io/virtual-probes-port annotation.

kuma.io/virtual-probes-port

Specifies the insecure port for listening on virtual probes.

kuma.io/sidecar-env-vars

Semicolon ( ; ) separated list of environment variables for the Kong Mesh sidecar.

kuma.io/container-patches

Specifies the list of names of ContainerPatch resources to be applied on kuma-init and kuma-sidecar containers.

More information about how to use ContainerPatch can be found at Custom Container Configuration .

It can be used on a resource describing a workload (i.e. a Deployment , DaemonSet , or Pod ):
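A sketch of the annotation on a Deployment's Pod template (the patch names are placeholders for your own ContainerPatch resources):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        kuma.io/container-patches: container-patch-1,container-patch-2
    spec:
      containers:
        - name: app
          image: nginx:1.25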

prometheus.metrics.kuma.io/port

Lets you override the Mesh -wide default port that Prometheus should scrape metrics from.

prometheus.metrics.kuma.io/path

Lets you override the Mesh -wide default path that Prometheus should scrape metrics from.

kuma.io/builtindns

Tells the sidecar to use its builtin DNS server.

kuma.io/builtindnsport

Port the builtin DNS server should listen on for DNS queries.

kuma.io/ignore

A boolean to mark a resource as ignored by Kong Mesh. It currently only works for services. This is useful when transitioning to Kong Mesh or to temporarily ignore some entities.

traffic.kuma.io/exclude-inbound-ports

List of inbound ports to exclude from traffic interception by the Kong Mesh sidecar.

traffic.kuma.io/exclude-outbound-ports

List of outbound ports to exclude from traffic interception by the Kong Mesh sidecar.

kuma.io/transparent-proxying-experimental-engine

Enable or disable experimental transparent proxy engine on Pod. Default is disabled .

kuma.io/envoy-admin-port

Specifies the port for Envoy Admin API. If not set, default admin port 9901 will be used.

kuma.io/service-account-token-volume

Volume (specified in the pod spec) containing a service account token for Kong Mesh to inject into the sidecar.

kuma.io/transparent-proxying-reachable-services

A comma-separated list of kuma.io/service values indicating which services this dataplane communicates with. For more details see the reachable services docs .

kuma.io/transparent-proxying-ebpf

When transparent proxy is installed with ebpf mode, you can disable it for particular workloads if necessary.

For more details see the transparent proxying with ebpf docs .

kuma.io/transparent-proxying-ebpf-bpf-fs-path

Path to BPF FS if different than default ( /sys/fs/bpf )

kuma.io/transparent-proxying-ebpf-cgroup-path

cgroup2 path if different than default ( /sys/fs/cgroup )

kuma.io/transparent-proxying-ebpf-programs-source-path

Custom path for ebpf programs to be loaded when installing transparent proxy

kuma.io/transparent-proxying-ebpf-tc-attach-iface

Name of the network interface that TC-related eBPF programs should be attached to. By default, Kong Mesh uses the first non-loopback interface it finds.

kuma.io/wait-for-dataplane-ready

Defines whether the kuma-sidecar container should wait for the dataplane to be ready before starting the app container. Read the relevant Data plane on Kubernetes section for more information.

prometheus.metrics.kuma.io/aggregate-<name>-enabled

Defines whether kuma-dp should scrape metrics from the application defined in the Mesh configuration. Default value: true . For more details see the applications metrics docs .

prometheus.metrics.kuma.io/aggregate-<name>-path

Defines the path that the kuma-dp sidecar has to scrape for Prometheus metrics. Default value: /metrics . For more details see the applications metrics docs .

prometheus.metrics.kuma.io/aggregate-<name>-port

Defines the port that the kuma-dp sidecar has to scrape for Prometheus metrics. For more details see the applications metrics docs .

kuma.io/transparent-proxying-inbound-v6-port

Define the port to use for IPv6 traffic. To turn off IPv6 set this to 0.

kuma.io/sidecar-drain-time

Allows specifying the drain time of the Kong Mesh DP sidecar. The default value is 30s. The default can be changed using the control-plane configuration or the KUMA_RUNTIME_KUBERNETES_INJECTOR_SIDECAR_CONTAINER_DRAIN_TIME env variable.

kuma.io/init-first

Allows specifying that the Kong Mesh init container should run first (ahead of any other init containers). The default is false if omitted. Setting this to true may be desirable for security, as it would prevent network access for other init containers. The order is not guaranteed, as other mutating admission webhooks may further manipulate this ordering.


Install Contour for Ingress Control

This topic gives an overview of the Contour package, which you can install in Tanzu Kubernetes Grid (TKG) workload clusters to provide ingress control services for the cluster.

Contour is a Kubernetes ingress controller that uses the Envoy reverse HTTP proxy. Contour with Envoy is commonly used with other packages, such as External DNS, Prometheus, and Harbor.

The Contour package includes the Contour ingress controller and the Envoy reverse HTTP proxy.

Installation : Install the Contour package in one of the following ways, based on its deployment option:

Supervisor Service : Install and Configure Contour as a Supervisor Service

TKG on Supervisor :

  • Install Contour Using the Tanzu CLI
  • Install Contour Using Kubectl

Standalone management cluster : Install Contour in Workload Clusters Deployed by a Standalone Management Cluster

Contour Components

The Contour package installs two containers on the cluster: one for the Contour ingress controller and one for the Envoy reverse HTTP proxy. For more information, see https://projectcontour.io/ . The containers are pulled from the VMware public registry specified in the Package Repository.

Contour Data Values

Below is an example contour-data-values.yaml .
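A minimal sketch of such a file (hedged: real deployments typically set additional fields required by the package version in use):

envoy:
  service:
    type: LoadBalancer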

The only customization is that the Envoy service is of type LoadBalancer (the default is NodePort). This means that the Envoy service will be accessible from outside of the cluster for ingress.

Contour Package Configuration Parameters

You can customize your configuration by editing the default values in the Contour package configuration file.

The table below contains information about the values that you can customize in the contour-data-values.yaml file and how they can be used to modify the default behavior of Contour when deployed into a workload cluster.

If you reconfigure your Contour settings after the initial deployment, you must follow the steps in Update a Running Contour Deployment to apply the new configuration to the cluster.

* new parameter in Contour v1.25.2, not in v1.24.5.

Contour Config File Contents

As described above, the package configuration field contour.configFileContents can be used to specify the desired content for the Contour config file . The Contour package will use the contents of the contour.configFileContents field to create a ConfigMap which is mounted into the Contour pods as a volume. The format and exhaustive list of options for this config file are provided in the open-source Contour documentation .

For example, to customize the Contour config file to require TLS 1.3, use a data values file like the following:
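A hedged sketch of such a data values file, assuming the Contour config file's tls.minimum-protocol-version key:

contour:
  configFileContents:
    tls:
      minimum-protocol-version: "1.3"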

Some of the commonly used Contour config file settings are described below for convenience:

Route Timeout for File Downloads

By default, Envoy has a 15-second timeout for backend services to return a response. If you are using Contour for file transfer, or for other services that are slow to respond, you may need to adjust this value.

To set a custom response timeout, configure your HTTPProxy like the following:
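A sketch of an HTTPProxy with a two-minute response timeout (the hostname and backend service names are placeholders):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: file-downloads
spec:
  virtualhost:
    fqdn: files.example.com
  routes:
    - conditions:
        - prefix: /
      timeoutPolicy:
        response: 120s
      services:
        - name: file-server
          port: 80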

If you are using an Ingress resource instead, you can add the projectcontour.io/response-timeout annotation like the following:
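A corresponding Ingress sketch (again, names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: file-downloads
  annotations:
    projectcontour.io/response-timeout: 120s
spec:
  rules:
    - host: files.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: file-server
                port:
                  number: 80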

See the open-source Contour documentation for HTTPProxy response timeouts and Ingress annotations for more information.

RTFM! DevOps[at]UA

Kubernetes: ensuring High Availability for Pods

Setting up high availability for Kubernetes Pods with Deployment replicas, Pod Topology Spread Constraints, PodDisruptionBudget, and annotations for Karpenter.


We have a Kubernetes cluster where WorkerNodes are scaled by Karpenter , and Karpenter has the disruption.consolidationPolicy=WhenUnderutilized parameter set for its NodePool. This means that Karpenter will try to "consolidate" the placement of Pods on Nodes in order to maximize the use of CPU and memory resources.

In general, everything works, but it leads to WorkerNodes being recreated fairly often, which causes our Pods to be "migrated" to other Nodes.

So the task now is to make sure that scaling and the consolidation process do not cause interruptions in the operation of our services.

Actually, this topic is not so much about Karpenter itself as about ensuring the stability of Pods in Kubernetes in general. But I ran into it while using Karpenter, so we will talk a little about Karpenter as well.

Karpenter Disruption Flow

To better understand what's happening with our Pods, let's take a quick look at how Karpenter removes a WorkerNode from the pool. See Termination Controller .

After Karpenter discovers that there are Nodes that need to be terminated, it:

adds a finalizer on a Kubernetes WorkerNode

adds the karpenter.sh/disruption:NoSchedule taint on such a Node so that Kubernetes does not schedule new Pods on it

if necessary, creates a new Node to which it will move the Pods from the Node being taken out of service (or uses an existing Node if it can accept additional Pods according to their requests )

performs Pod Eviction of the Pods from the Node (see Safely Drain a Node and API-initiated Eviction )

after all Pods except DaemonSets are removed from the Node, Karpenter deletes the corresponding NodeClaim

removes the finalizer from the Node, which allows Kubernetes to delete the Node

Kubernetes Pod Eviction Flow

And briefly, the process of how Kubernetes itself performs the Pods Eviction:

The API Server receives an Eviction request and checks whether this Pod can be evicted (for example, whether its eviction would violate the restrictions of a PodDisruptionBudget – we will talk about PodDisruptionBudgets later in this post)

marks the resource of this Pod for deletion

the kubelet starts the graceful shutdown process – that is, it sends the SIGTERM signal

Kubernetes removes the IP of this Pod from the list of endpoints

if the Pod has not stopped within the specified time, then kubelet sends a SIGKILL signal to kill the process immediately

the kubelet signals the API Server that the Pod can be removed from the list of objects

API Server removes the Pod from the database

See How API-initiated eviction works and Pod Lifecycle – Termination of Pods .

Kubernetes Pod High Availability Options

So, what can we do with our Pods to make the service work without interruption, regardless of Karpenter's activities?

have at least 2 Pods on critical services

have Pod Topology Spread Constraints so that Pods are placed on different WorkerNodes – then if the Node with the first Pod is killed, the Pod on another Node stays alive

have a PodDisruptionBudget so that at least 1 Pod is always alive - this will prevent Karpenter from evicting all the Pods at once, because it monitors compliance with the PDB

and to guarantee that Pod Eviction will not be performed at all, we can set the karpenter.sh/do-not-disrupt Pod annotation – then Karpenter will ignore this Pod (and, accordingly, the Node on which such a Pod is running)

Let's take a look at these options in more detail.

Kubernetes Deployment replicas

The simplest and most obvious solution is to have at least 2 simultaneously working Pods.

Although this does not guarantee that Kubernetes will not evict them at the same time, it is a minimum condition for further actions.

So either run kubectl scale deployment --replicas=2 manually, or update the replicas field in a Deployment/StatefulSet/ReplicaSet (see Workload Resources ):
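A minimal sketch of a Deployment with two replicas (all names and the image are placeholders; the topologySpreadConstraints block implements the "different WorkerNodes" option from the list above, and the PDB example below builds on the same app: my-app label):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: app
          image: nginx:1.25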

Kubernetes PodDisruptionBudget

With a PodDisruptionBudget, we can set a rule for the minimum number of available or the maximum number of unavailable Pods. The value can be either a number or a percentage of the total number of Pods in the replicas of a Deployment/StatefulSet/ReplicaSet.

In the case of a Deployment that has two Pods and topologySpreadConstraints placing them on different WorkerNodes, this ensures that Karpenter will not drain two WorkerNodes at the same time. Instead, it will "relocate" one Pod first, kill its Node, and then repeat the process for the other Node.

See Specifying a Disruption Budget for your Application .

Let's create a PDB for our Deployment:
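A sketch, assuming the app: my-app labels from the Deployment sketch above:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app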

Deploy the manifest and check the PDB status with kubectl get pdb .

The karpenter.sh/do-not-disrupt annotation

In addition to the settings on the Kubernetes side, we can explicitly prohibit the deletion of a Pod by Karpenter itself by adding the karpenter.sh/do-not-disrupt annotation (previously, before the Beta, these were the karpenter.sh/do-not-evict and karpenter.sh/do-not-consolidate annotations).

This may be necessary, for example, for Pods that are to be run in a single instance (like VictoriaMetrics VMSingle instance) and that you do not want to stop.

To do this, add an annotation to the template of this Pod:
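A sketch of the annotation in a Deployment's Pod template (the names and image are placeholders for your single-instance workload):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vmsingle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vmsingle
  template:
    metadata:
      labels:
        app: vmsingle
      annotations:
        karpenter.sh/do-not-disrupt: "true"
    spec:
      containers:
        - name: vmsingle
          image: victoriametrics/victoria-metrics:latest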

See Pod-Level Controls .

In general, these seem to be all the main solutions that will help ensure the continuous operation of the Pods.

Originally published at RTFM: Linux, DevOps, and system administration .


MarketSplash

How To Manage Pod Lifecycle In Kubernetes

Managing pod lifecycles in Kubernetes is a crucial skill for developers. This article guides you through creating, monitoring, and troubleshooting pods, ensuring your applications run smoothly in a Kubernetes environment. Learn practical tips for efficient pod management.

Navigating the intricacies of pod lifecycle management in Kubernetes can be a complex yet essential skill for developers. This article breaks down the key concepts and best practices, ensuring a smoother, more efficient workflow. From creation to termination, understanding these fundamentals is crucial for anyone working in a Kubernetes environment.


Understanding Pod Lifecycle Basics


In Kubernetes, a Pod is the smallest deployable unit that can be created and managed. It's crucial to understand the lifecycle of a Pod for effective application management in a Kubernetes environment.

A Pod's life begins with its creation and progresses through several phases. These phases include:

  • Pending : The Pod has been accepted by the Kubernetes system, but one or more of its containers are not yet running.
  • Running : All containers in the Pod have been created, and at least one is running.
  • Succeeded : All containers in the Pod have terminated successfully and will not be restarted.
  • Failed : All containers in the Pod have terminated, and at least one container has failed.
  • Unknown : The state of the Pod cannot be determined.

Creating a Pod involves defining a YAML file that specifies the Pod's configuration. Here's a basic example:
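A minimal sketch (the Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80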

Once a Pod is defined, Kubernetes schedules it to run on a node. The scheduler determines the best node based on resource availability and constraints.

To check the status of a Pod, use the kubectl get pods command. This provides information about the Pod's phase, IP address, and node it's running on.

Kubernetes provides lifecycle events that can be used to manage the behavior of containers within a Pod. These include:

  • PostStart : This event occurs after a container is created.
  • PreStop : This event occurs before a container is terminated.

Here's an example of defining lifecycle hooks in a Pod's configuration:
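A hedged sketch using exec hooks (the commands are placeholders for whatever startup or shutdown work your application needs):

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo started > /tmp/started"]
        preStop:
          exec:
            command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]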

Defining A Pod


Creating and scheduling Pods in Kubernetes is a foundational skill for any developer working in a containerized environment. This section delves into the practical steps and best practices.

The first step in creating a Pod is defining it in a YAML File . This file contains all the necessary specifications for your Pod, including the container image, ports, and volume mounts.

With the YAML file ready, deploy the Pod using the kubectl apply command. This tells Kubernetes to create the Pod according to the specifications in the YAML file.

Kubernetes automatically schedules Pods on Nodes based on resource availability. However, you can influence this decision using node selectors, taints, and tolerations.

Node selectors allow you to specify the node where a Pod should be placed. This is done by labeling nodes and then specifying the label in the Pod's YAML file.

Taints and tolerations work together to ensure that Pods are not scheduled onto inappropriate nodes. Taints are applied to nodes, and tolerations are applied to Pods.
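A combined sketch (the disktype label and the dedicated taint are placeholder examples you would have applied to your nodes beforehand):

apiVersion: v1
kind: Pod
metadata:
  name: selective-pod
spec:
  nodeSelector:
    disktype: ssd
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx:1.25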

Using Kubectl For Basic Monitoring


Effective management of Kubernetes involves keeping a close eye on the Status and Health of Pods. This section covers the essential commands and practices for monitoring.

The kubectl get pods command is the first step in monitoring. It provides a snapshot of all Pods in the current namespace, showing their status.

For more detailed information, use kubectl describe pod <pod-name> . This command offers insights into the Pod's events, configuration, and status.

Logs are crucial for understanding the behavior of applications running in Pods. Use kubectl logs <pod-name> to retrieve log data.

Kubernetes uses Liveness and Readiness Probes to check the health of containers in a Pod. These probes help Kubernetes make decisions about restarting containers and routing traffic.

Liveness Probes

Liveness probes determine if a container is running. If a probe fails, Kubernetes restarts the container.

Readiness Probes

Readiness probes determine if a container is ready to serve traffic. If a probe fails, Kubernetes stops routing traffic to the container but does not restart it.
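A sketch of both probes on one container (the /healthz and /ready paths are hypothetical endpoints your application would need to expose):

apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5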

Monitoring Pod status and health is a continuous process that ensures the smooth operation of applications in a Kubernetes environment.

Graceful Pod Termination


In Kubernetes, managing the Termination and Cleanup of Pods is crucial for maintaining a healthy and efficient cluster. This section focuses on the key steps and commands involved.

When terminating a Pod, Kubernetes follows a graceful shutdown process. This process allows applications to save state and finish current tasks. Initiate Pod termination using the kubectl delete command.

You can customize the Grace Period by specifying the --grace-period flag in the delete command. This is useful for applications that need more time to shut down.
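For example (my-pod is a placeholder):

kubectl delete pod my-pod --grace-period=60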

Use PreStop Hooks to run specific commands or scripts before a Pod is terminated. This is defined in the Pod's YAML configuration.

After Pod termination, it's important to clean up associated resources like Persistent Volumes and Services . This prevents resource leakage and conflicts.

Handling Pod termination and cleanup efficiently ensures the smooth operation and resource optimization of your Kubernetes environment.

Define Resource Limits


Effective management of Pod lifecycles in Kubernetes is key to ensuring high availability and efficient resource utilization. This section highlights the best practices to follow.

Always Define Resource Limits and requests for your Pods. This practice prevents overconsumption of resources on a node, ensuring stable operation of all Pods.
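A sketch of requests and limits on a container (the values are placeholders to tune per workload):

apiVersion: v1
kind: Pod
metadata:
  name: limited-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"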

Implement Liveness and Readiness Probes to manage container states effectively. These probes help Kubernetes understand when to restart a container and when to send traffic to a Pod.

For deployments, use Rolling Updates to update Pods without downtime. This strategy gradually replaces old Pods with new ones, maintaining application availability.

Regularly Monitor and Log Pod activities. This helps in identifying issues early and understanding the behavior of applications in the cluster.

Adhering to these best practices in Pod lifecycle management can significantly enhance the reliability and efficiency of your Kubernetes environment.

Pod Fails To Start


In the Kubernetes ecosystem, encountering issues with Pod lifecycles is common. Understanding how to Troubleshoot Effectively is crucial for maintaining a healthy cluster.

One common issue is when a Pod Fails to Start . This can be due to various reasons like configuration errors or resource constraints.

A Pod stuck in the Pending State often indicates scheduling issues, possibly due to insufficient resources or taints and tolerations mismatch.

Frequent Restarts of a Pod can be caused by application crashes or liveness probe failures. Examining logs and probe configurations is key.

Pods consuming High Resources can affect the performance of other applications. Monitoring resource usage is essential for optimization.

Troubleshooting these common issues effectively ensures the stability and performance of your Kubernetes applications.

How Do I Check the Health of a Pod?

To check the health of a Pod, use kubectl get pod <pod-name> . This will show the status of the Pod. For more detailed health information, set up liveness and readiness probes in your Pod's configuration.

Can I Change the Configuration of a Running Pod?

Directly changing the configuration of a running Pod is not possible. Instead, you need to update the Pod's YAML file and then apply the changes using kubectl apply -f <file.yaml> . This may result in the Pod being recreated with the new configuration.

What Happens When a Pod is Terminated?

When a Pod is terminated, Kubernetes sends a SIGTERM signal to the containers, allowing them to gracefully shut down. If the containers don't shut down within the grace period, a SIGKILL signal is sent to forcefully stop them.

How Can I Monitor Resource Usage of Pods?

Use the kubectl top pod command to monitor the CPU and memory usage of your Pods. This helps in identifying Pods that are using excessive resources and may need optimization.


