What is Kubernetes?
With the widespread adoption of containers among organizations, Kubernetes, the container-centric management software, has become the de facto standard to deploy and operate containerized applications. Google Cloud is the birthplace of Kubernetes—originally developed at Google and released as open source in 2014. Kubernetes builds on 15 years of running Google's containerized workloads and the valuable contributions from the open source community. Inspired by Google's internal cluster management system, Borg, Kubernetes makes everything associated with deploying and managing your application easier. By providing automated container orchestration, Kubernetes improves reliability and reduces the time and resources spent on daily operations.
Learn more about GKE, Google Cloud’s managed Kubernetes.
Kubernetes (sometimes shortened to K8s with the 8 standing for the number of letters between the “K” and the “s”) is an open source system to deploy, scale, and manage containerized applications anywhere.
Kubernetes automates operational tasks of container management and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to fit changing needs, monitoring your applications, and more—making it easier to manage applications.
What are the benefits of Kubernetes?
Kubernetes has built-in commands to handle a lot of the heavy lifting that goes into application management, allowing you to automate day-to-day operations. You can make sure applications are always running the way you intended them to run.
When you install Kubernetes, it handles the compute, networking, and storage on behalf of your workloads. This allows developers to focus on applications and not worry about the underlying environment.
Service health monitoring
Kubernetes continuously runs health checks against your services, restarting containers that fail or stall, and only advertises services to users once it has confirmed they are running.
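The restart-and-readiness behavior described above can be sketched as a small control loop. This is an illustrative toy, not Kubernetes source code; the container names and probe functions are made up for the example.

```python
# Conceptual sketch: a kubelet-style loop that restarts containers failing
# their liveness check and routes traffic only to containers that pass
# their readiness check. Purely illustrative.

def reconcile(containers, liveness, readiness):
    """Mark failed containers for restart; return names safe to route to."""
    routable = []
    for name in containers:
        if not liveness(name):          # failed liveness probe
            containers[name] = "restarting"
            continue
        containers[name] = "running"
        if readiness(name):             # only confirmed-ready containers get traffic
            routable.append(name)
    return routable

# Example: one healthy container, one that fails its liveness check.
state = {"web": "running", "worker": "running"}
alive = lambda name: name != "worker"
ready = lambda name: True
print(reconcile(state, alive, ready))   # ['web']
print(state["worker"])                  # restarting
```

Real probes are HTTP, TCP, or exec checks configured per container; the point here is only the shape of the loop: check, restart on failure, expose only what is confirmed healthy.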
Kubernetes vs. Docker
Often misunderstood as a choice between one or the other, Kubernetes and Docker are different yet complementary technologies for running containerized applications.
Docker lets you put everything you need to run your application into a box that can be stored and opened when and where it is required. Once you start boxing up your applications, you need a way to manage them; and that's what Kubernetes does.
Kubernetes is the Greek word for 'helmsman' or 'pilot.' Just as a captain is responsible for the safe journey of a ship at sea, Kubernetes is responsible for carrying and delivering those boxes safely to the locations where they can be used.
- Kubernetes can be used with or without Docker
- Docker is not an alternative to Kubernetes, so it’s less of a “Kubernetes vs. Docker” question. It’s about using Kubernetes with Docker to containerize your applications and run them at scale
- The difference between Docker and Kubernetes relates to the role each plays in containerizing and running your applications
- Docker is an open industry standard for packaging and distributing applications in containers
- Kubernetes uses Docker to deploy, manage, and scale containerized applications
What is Kubernetes used for?
Increasing development velocity
Kubernetes helps you to build cloud-native microservices-based apps. It also supports containerization of existing apps, thereby becoming the foundation of application modernization and letting you develop apps faster.
Deploying applications anywhere
Kubernetes is built to be used anywhere, allowing you to run your applications across on-site deployments, public clouds, and hybrid deployments in between, so you can run your applications wherever you need them.
Running efficient services
Kubernetes can automatically adjust the size of a cluster required to run a service. This enables you to automatically scale your applications, up and down, based on the demand and run them efficiently.
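The demand-based scaling just described follows a simple proportional rule in Kubernetes' Horizontal Pod Autoscaler. The sketch below shows a simplified form of that calculation; it omits the tolerance window and stabilization behavior the real autoscaler applies.

```python
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric):
    """Simplified Horizontal Pod Autoscaler rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_metric / target_metric)

# 4 replicas averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))  # 6
# Demand drops to 20% average CPU -> scale in to 2.
print(desired_replicas(4, 20, 60))  # 2
```

The same rule works for custom metrics (requests per second, queue depth), which is what makes autoscaling policy declarative: you state the target, and the controller converges on it.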
Related products and services
Created by the same developers that built Kubernetes, Google Kubernetes Engine (GKE) is an easy-to-use, cloud-based Kubernetes service for running containerized applications. GKE can help you implement a successful Kubernetes strategy for your applications in the cloud. With Anthos, Google offers a consistent Kubernetes experience for your applications across on-premises environments and multiple clouds. Using Anthos, you get a reliable, efficient, and trusted way to run Kubernetes clusters anywhere.
Kubernetes — also known as “k8s” or “kube” — is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.
Kubernetes was first developed by engineers at Google before being open sourced in 2014. It is a descendant of Borg, a container orchestration platform used internally at Google. Kubernetes is Greek for helmsman or pilot, hence the helm in the Kubernetes logo.
Today, Kubernetes and the broader container ecosystem are maturing into a general-purpose computing platform that rivals—if not surpasses—virtual machines (VMs) as the basic building block of modern cloud infrastructure and applications. This ecosystem enables organizations to deliver a high-productivity Platform-as-a-Service (PaaS) that addresses the infrastructure-related and operations-related tasks surrounding cloud-native development, so that development teams can focus solely on coding and innovation.
Containers are lightweight, executable application components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.
Containers take advantage of a form of operating system (OS) virtualization that lets multiple applications share a single instance of an OS by isolating processes and controlling the amount of CPU, memory, and disk those processes can access. Because they are smaller, more resource-efficient, and more portable than virtual machines (VMs), containers have become the de facto compute units of modern cloud-native applications.
In a recent IBM study (PDF, 1.4 MB), users reported several specific technical and business benefits resulting from their adoption of containers and related technologies.
Containers vs. virtual machines vs. traditional infrastructure
It may be easier or more helpful to understand containers as the latest point on the continuum of IT infrastructure automation and abstraction.
In traditional infrastructure, applications run on a physical server and grab all the resources they can get. This leaves you the choice of running multiple applications on a single server and hoping one doesn't hog resources at the expense of the others, or dedicating one server per application, which wastes resources and doesn't scale.
Virtual machines (VMs) are servers abstracted from the actual computer hardware, enabling you to run multiple VMs on one physical server or a single VM that spans more than one physical server. Each VM runs its own OS instance, and you can isolate each application in its own VM, reducing the chance that applications running on the same underlying physical hardware will impact each other. VMs make better use of resources and are much easier and more cost-effective to scale than traditional infrastructure. And, they’re disposable — when you no longer need to run the application, you take down the VM.
For more information on VMs, see "What are virtual machines?"
Containers take this abstraction to a higher level—specifically, in addition to sharing the underlying virtualized hardware, they share an underlying, virtualized OS kernel as well. Containers offer the same isolation, scalability, and disposability of VMs, but because they don’t carry the payload of their own OS instance, they’re lighter weight (that is, they take up less space) than VMs. They’re more resource-efficient—they let you run more applications on fewer machines (virtual and physical), with fewer OS instances. Containers are more easily portable across desktop, data center, and cloud environments. And they’re an excellent fit for Agile and DevOps development practices.
" What are containers? " provides a complete explanation of containers and containerization . And the blog post " Containers vs. VMs: What's the difference? " gives a full rundown of the differences.
What is Docker?
Docker is the most popular tool for creating and running Linux® containers. While early forms of containers were introduced decades ago (with technologies such as FreeBSD Jails and AIX Workload Partitions), containers were democratized in 2013 when Docker brought them to the masses with a new developer-friendly and cloud-friendly implementation.
Docker began as an open source project, but today it also refers to Docker Inc., the company that produces Docker—a commercial container toolkit that builds on the open source project (and contributes those improvements back to the open source community).
Docker was built on traditional Linux container (LXC) technology, but enables more granular virtualization of Linux kernel processes and adds features to make containers easier for developers to build, deploy, manage, and secure.
While alternative container platforms exist today (such as Open Container Initiative (OCI) runtimes, CoreOS, and Canonical (Ubuntu) LXD), Docker is so widely preferred that it is virtually synonymous with containers and is sometimes mistaken as a competitor to complementary technologies such as Kubernetes (see "Kubernetes vs. Docker" below).
As containers proliferated — today, an organization might have hundreds or thousands of them — operations teams needed to schedule and automate container deployment, networking , scalability, and availability. And so, the container orchestration market was born.
While other container orchestration options — most notably Docker Swarm and Apache Mesos — gained some traction early on, Kubernetes quickly became the most widely adopted (in fact, at one point, it was the fastest-growing project in the history of open source software).
Developers chose and continue to choose Kubernetes for its breadth of functionality, its vast and growing ecosystem of open source supporting tools, and its support and portability across cloud service providers. All leading public cloud providers—including Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure—offer fully managed Kubernetes services.
What does Kubernetes do?
Kubernetes schedules and automates container-related tasks throughout the application lifecycle, including:
- Deployment : Deploy a specified number of containers to a specified host and keep them running in a desired state.
- Rollouts : A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
- Service discovery : Kubernetes can automatically expose a container to the internet or to other containers using a DNS name or IP address.
- Storage provisioning : Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
- Load balancing : Based on CPU utilization or custom metrics, Kubernetes load balancing can distribute the workload across the network to maintain performance and stability.
- Autoscaling : When traffic spikes, Kubernetes autoscaling can spin up new pods (and, with cluster autoscaling, new nodes) as needed to handle the additional workload.
- Self-healing for high availability : When a container fails, Kubernetes can restart or replace it automatically to prevent downtime. It can also take down containers that don’t meet your health-check requirements.
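The rollout task in the list above is worth a closer look. A sketch of the idea, assuming nothing beyond the list itself: replace replicas incrementally so some copies keep serving traffic throughout. Real Deployments honor maxSurge and maxUnavailable settings; this toy version replaces one replica per step.

```python
# Illustrative rolling-update simulation: upgrade a set of pod replicas
# one at a time, recording the mixed-version state after each step.

def rolling_update(pods, new_version):
    history = []
    for i, pod in enumerate(pods):
        if pod["version"] != new_version:
            pods[i] = {"name": pod["name"], "version": new_version}
        history.append([p["version"] for p in pods])
    return history

pods = [{"name": f"web-{i}", "version": "v1"} for i in range(3)]
steps = rolling_update(pods, "v2")
print(steps[0])   # ['v2', 'v1', 'v1']  -- old replicas still serving
print(steps[-1])  # ['v2', 'v2', 'v2']
```

Because each intermediate state keeps healthy replicas of some version online, the service never goes dark, and pausing or rolling back a rollout is just stopping or reversing this walk.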
Kubernetes vs. Docker
If you’ve read this far, you already understand that while Kubernetes is an alternative to Docker Swarm, it is not (contrary to persistent popular misconception) an alternative or competitor to Docker itself.
In fact, if you’ve enthusiastically adopted Docker and are creating large-scale Docker-based container deployments, Kubernetes orchestration is a logical next step for managing these workloads.
The chief components of Kubernetes architecture include the following:
Clusters and nodes (compute)
Clusters are the building blocks of Kubernetes architecture. Clusters are made up of nodes, each of which represents a single compute host (virtual or physical machine).
Each cluster consists of a master node that serves as the control plane for the cluster, and multiple worker nodes that deploy, run, and manage containerized applications. The master node runs a scheduler service that automates when and where the containers are deployed based on developer-set deployment requirements and available computing capacity. Each worker node includes the tool that is being used to manage the containers — such as Docker — and a software agent called a kubelet that receives and executes orders from the master node.
Developers manage cluster operations using kubectl, a command-line interface (CLI) that communicates directly with the Kubernetes API.
For a deeper dive into Kubernetes clusters, read: "Kubernetes Clusters: Architecture for Rapid, Controlled Cloud App Delivery."
Pods and deployments (software)
Pods are groups of containers that share the same compute resources and the same network. They are also the unit of scalability in Kubernetes: if a container in a pod is getting more traffic than it can handle, Kubernetes will replicate the pod to other nodes in the cluster. For this reason, it’s a good practice to keep pods compact so that they contain only containers that must share resources.
The deployment controls the creation and state of the containerized application and keeps it running. It specifies how many replicas of a pod should run on the cluster. If a pod fails, the deployment creates a new one.
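The deployment's keep-it-running behavior is a reconciliation loop: compare the desired replica count with what is actually observed, then create or delete pods until they match. A toy sketch, with made-up pod names:

```python
# Illustrative reconciliation in the spirit of a Deployment controller.
# Not real Kubernetes code; names are hypothetical.

def reconcile_replicas(observed, desired, template="web"):
    """Mutate the observed pod list toward the desired count;
    return the actions taken."""
    actions = []
    while len(observed) < desired:
        name = f"{template}-{len(observed)}"
        observed.append(name)
        actions.append(("create", name))
    while len(observed) > desired:
        actions.append(("delete", observed.pop()))
    return actions

pods = ["web-0", "web-1"]           # one pod of three has failed and is gone
print(reconcile_replicas(pods, 3))  # [('create', 'web-2')]
print(pods)                         # ['web-0', 'web-1', 'web-2']
```

The same loop run continuously is what makes the desired state self-maintaining: a crashed pod simply shows up as a gap between desired and observed on the next pass.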
Kubernetes can deploy and scale pods, but it can’t manage or automate routing between them and doesn’t provide any tools to monitor, secure, or debug these connections. As the number of containers in a cluster grows, the number of possible connection paths between them escalates rapidly (for example, two containers have two potential connections, but 10 containers have 90), creating a potential configuration and management nightmare.
Enter Istio, an open source service mesh layer for Kubernetes clusters. Istio adds a sidecar container to each pod — essentially invisible to the programmer and the administrator — that configures, monitors, and manages interactions between the other containers.
With Istio, you set a single policy that configures connections between containers so that you don’t have to configure each connection individually. This makes connections between containers easier to debug.
Istio also provides a dashboard that DevOps teams and administrators can use to monitor latency, time-in-service errors, and other characteristics of the connections between containers. And, it builds in security — specifically, identity management that keeps unauthorized users from spoofing a service call between containers — and authentication, authorization and auditing (AAA) capabilities that security professionals can use to monitor the cluster.
Knative (pronounced ‘kay-native’) is an open source platform that sits on top of Kubernetes and provides two important classes of benefits for cloud-native development:
Knative provides an easy onramp to serverless computing
Serverless computing is a relatively new way of deploying code that makes cloud-native applications more efficient and cost-effective. Instead of deploying an ongoing instance of code that sits idle while waiting for requests, serverless brings up the code as needed — scaling it up or down as demand fluctuates — and then takes the code down when not in use. Serverless prevents wasted computing capacity and power and reduces costs because you only pay to run the code when it’s actually running.
Knative enables developers to build a container once and run it as a software service or as a serverless function. It’s all transparent to the developer: Knative handles the details in the background, and the developer can focus on code.
Knative simplifies container development and orchestration
For developers, containerizing code requires lots of repetitive steps, and orchestrating containers requires lots of configuration and scripting (such as generating configuration files, installing dependencies, managing logging and tracing, and writing continuous integration / continuous deployment (CI/CD) scripts.)
Knative makes these tasks easier by automating them through three components:
Build: Knative’s Build component automatically transforms source code into a cloud-native container or function. Specifically, it pulls the code from the repository, installs the required dependencies, builds the container image, and puts it in a container registry for other developers to use. Developers need to specify the location of these components so Knative can find them, but once that’s done, Knative automates the build.
Serve: The Serve component runs containers as scalable services; it can scale up to thousands of container instances or scale down to none (called scaling to zero). In addition, Serve has two very useful features: configuration, which saves versions of a container (called snapshots) every time you push the container to production and lets you run those versions concurrently; and service routing, which lets you direct different amounts of traffic to these versions. You can use these features together to gradually phase in a container rollout or to stage a canary test of a containerized application before putting it into global production.
Event: Event enables specified events to trigger container-based services or functions. This is especially integral to Knative’s serverless capabilities; something needs to tell the system to bring up a function when needed. Event allows teams to express interest in types of events, and it then automatically connects to the event producer and routes the events to the container, eliminating the need to program these connections.
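The service routing feature described under Serve can be sketched in a few lines. This is a conceptual illustration of a weighted traffic split, not Knative's implementation (Knative expresses the split declaratively in a Service's traffic block); the deterministic modulo split below is an assumption made for the example.

```python
# Illustrative canary traffic split: send a fixed percentage of requests
# to a new revision and the rest to the stable one.

def split_traffic(requests, canary_percent):
    stable, canary = [], []
    for i, req in enumerate(requests):
        # deterministic split: canary_percent requests out of every 100
        if canary_percent and i % 100 < canary_percent:
            canary.append(req)
        else:
            stable.append(req)
    return stable, canary

stable, canary = split_traffic(list(range(200)), 10)
print(len(stable), len(canary))  # 180 20
```

Raising the canary percentage step by step while watching error rates is exactly the gradual, staged rollout the text describes.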
Kubernetes is one of the fastest-growing open source projects in history, and growth is accelerating. Adoption continues to soar among developers and the companies that employ them. A few data points worth noting:
- At this writing, over 120,190 commits have been made to the Kubernetes repository on GitHub (an increase of nearly 34,000 commits in the past 18 months), and there are more than 3,100 active contributors to the project. According to the Cloud Native Computing Foundation (CNCF), there have been more than 148,000 commits across all Kubernetes-related repositories (including Kubernetes Dashboard and Kubernetes MiniKube).
- More than 2,000 companies use Kubernetes in their production software stacks, including well-known enterprises such as Airbnb, Ancestry, Bose, Capital One, Intuit, Nordstrom, Philips, Reddit, Slack, Spotify, Tinder, and, of course, IBM.
- A 2021 survey cited in Container Journal found that 68% of IT professionals increased use of Kubernetes during the COVID-19 pandemic.
- According to ZipRecruiter, the average annual salary (in North America) for a Kubernetes-related job is USD 147,732. At this writing, there are more than 57,000 Kubernetes-related positions listed on LinkedIn, compared to 21,000 positions listed just 18 months ago.
If you're ready to start working with Kubernetes or looking to build your skills with Kubernetes and Kubernetes ecosystem tools, try one of these tutorials:
- Kubernetes tutorials: Free hands-on labs with certification
- Kubernetes Tutorials: 5 Ways to Get You Building Fast
- 8 Kubernetes Tips and Tricks
- Deploy a microservices app on IBM Cloud by using Kubernetes
- Debug and log your Kubernetes applications
- Kubernetes Networking: A lab on basic networking concepts
- Istio 101: Lab for learning how to use Istio on Kubernetes
- Knative 101: Exercises designed to help you achieve an understanding of Knative
With Red Hat OpenShift on IBM Cloud, OpenShift developers have a fast and secure way to containerize and deploy enterprise workloads in Kubernetes clusters.
Deploy and run apps consistently across on-premises, edge computing and public cloud environments from any cloud vendor, using a common set of cloud services including toolchains, databases and AI.
A fully managed serverless platform, IBM Cloud Code Engine lets you run your container, application code or batch job on a fully managed container runtime.
New IBM research documents the surging momentum of container and Kubernetes adoption.
Containers are part of a hybrid cloud strategy that lets you build and manage workloads from anywhere.
Serverless is a cloud application development and execution model that lets developers build and run code without managing servers or paying for idle cloud infrastructure.
Red Hat OpenShift on IBM Cloud gives OpenShift developers a fast and secure way to containerize and deploy enterprise workloads in Kubernetes clusters. Deploy highly available, fully managed Kubernetes clusters for your containerized applications with a single click. Because IBM manages OpenShift Container Platform (OCP), you'll have more time to focus on your core tasks.
What is Kubernetes?
Kubernetes is open-source orchestration software for deploying, managing, and scaling containers.
Modern applications are increasingly built using containers, which are microservices packaged with their dependencies and configurations. Kubernetes (pronounced “koo-ber-net-ees”) is open-source software for deploying and managing those containers at scale—and it’s also the Greek word for the helmsman of a ship, or pilot. Build, deliver, and scale containerized apps faster with Kubernetes, sometimes referred to as “k8s” or “k-eights.”
How Kubernetes works
As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes provides an open source API that controls how and where those containers will run.
Kubernetes orchestrates clusters of virtual machines and schedules containers to run on those virtual machines based on their available compute resources and the resource requirements of each container. Containers are grouped into pods, the basic operational unit for Kubernetes, and those pods scale to your desired state.
Kubernetes also automatically manages service discovery, incorporates load balancing, tracks resource allocation, and scales based on compute utilization. And, it checks the health of individual resources and enables apps to self-heal by automatically restarting or replicating containers.
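The service discovery and load balancing mentioned above boil down to a stable name resolving to whatever healthy pod endpoints currently exist, with requests spread across them. A hedged sketch, with hypothetical pod names, of the round-robin idea:

```python
from itertools import cycle

# Conceptual Service-style load balancer: clients address a stable service,
# and requests rotate across the current pod endpoints. Illustrative only;
# real kube-proxy behavior is more involved.

class Service:
    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self._rr = cycle(self.endpoints)

    def route(self):
        """Pick the next endpoint in round-robin order."""
        return next(self._rr)

svc = Service(["pod-a", "pod-b", "pod-c"])
print([svc.route() for _ in range(4)])  # ['pod-a', 'pod-b', 'pod-c', 'pod-a']
```

Because callers only ever see the service name, pods can be restarted or replicated underneath it without clients noticing, which is what makes the self-healing described above transparent.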
Why use Kubernetes?
Keeping containerized apps up and running can be complex because they often involve many containers deployed across different machines. Kubernetes provides a way to schedule and deploy those containers—plus scale them to your desired state and manage their lifecycles. Use Kubernetes to implement your container-based applications in a portable, scalable, and extensible way.
Make workloads portable
Because container apps are separate from their infrastructure, they become portable when you run them on Kubernetes. Move them from local machines to production among on-premises, hybrid, and multiple cloud environments—all while maintaining consistency across environments.
Scale containers easily
Define complex containerized applications and deploy them across a cluster of servers or even multiple clusters with Kubernetes. As Kubernetes scales applications according to your desired state, it automatically monitors and maintains container health.
Build more extensible apps
A large open-source community of developers and companies actively builds extensions and plugins that add capabilities such as security, monitoring, and management to Kubernetes. Plus, the Certified Kubernetes Conformance Program requires every Kubernetes version to support APIs that make it easier to use those community offerings.
Get started with Kubernetes
See how to begin deploying and managing containerized applications.
Follow the learning path
Get hands-on experience with Kubernetes components, capabilities, and solutions.
Build on a complete Kubernetes platform
While Kubernetes itself offers portability, scalability, and extensibility, adding end-to-end development, operations, and security control allows you to deploy updates faster—without compromising security or reliability—and save time on infrastructure management. As you adopt Kubernetes, also consider implementing:
Infrastructure automation or serverless Kubernetes to eliminate routine tasks like provisioning, patching, and upgrading.
Tools for containerized app development and continuous integration and continuous deployment (CI/CD) workflows.
Services to manage security, governance, identity, and access.
Harness Kubernetes with DevOps practices
As a Kubernetes app grows—adding containers, environments, and teams—release frequency tends to increase, along with developmental and operational complexity. Employing DevOps practices in Kubernetes environments allows you to move quickly at scale with enhanced security.
Deliver code faster with CI/CD
While containers provide a consistent application packaging format that eases collaboration between development and operations teams, CI/CD can accelerate the move from code to container to Kubernetes cluster in minutes by automating those tasks.
Manage resources effectively with infrastructure as code
Infrastructure as code establishes consistency and visibility of compute resources across teams—reducing the likelihood of human error. This practice works with the declarative nature of Kubernetes applications powered by Helm. Combining the two allows you to define apps, resources, and configurations in a reliable, trackable, and repeatable way.
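The declarative model behind infrastructure as code can be sketched as a diff between desired and actual configuration: you state the end state, and tooling computes the changes. The keys and values below are illustrative, not real Helm chart fields.

```python
# Toy "plan" step in the spirit of declarative IaC tools: compare the
# desired configuration against what currently exists and list the
# changes needed to converge. Illustrative only.

def plan(desired, actual):
    changes = []
    for key, value in desired.items():
        if key not in actual:
            changes.append(f"add {key}={value}")
        elif actual[key] != value:
            changes.append(f"update {key}: {actual[key]} -> {value}")
    for key in actual:
        if key not in desired:
            changes.append(f"remove {key}")
    return changes

desired = {"replicas": 3, "image": "web:v2"}
actual = {"replicas": 2, "image": "web:v2", "debug": True}
print(plan(desired, actual))  # ['update replicas: 2 -> 3', 'remove debug']
```

Because the plan is derived from versioned configuration rather than manual steps, every change is reviewable and repeatable, which is the consistency benefit the paragraph above describes.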
Accelerate the feedback loop with constant monitoring
Shorten the time between bugs and fixes with a complete view of your resources, cluster, Kubernetes API, containers, and code—from container health monitoring to centralized logging. That view helps you prevent resource bottlenecks, trace malicious requests, and keep your Kubernetes applications healthy.
Balance speed and security with DevOps
Bring real-time observability into your DevOps workflow without sacrificing velocity. Apply compliance checks and reconfigurations automatically to secure your build and release pipeline—and your Kubernetes application as a result.
Example DevOps workflow with Kubernetes
- Rapidly iterate, test, and debug different parts of an application together in the same Kubernetes cluster.
- Merge and check code into a GitHub repository for continuous integration. Then, run automated builds and tests as a part of continuous delivery.
- Verify the source and integrity of container images. Images are held in quarantine until they pass scanning.
- Provision Kubernetes clusters with tools like Terraform. Helm charts installed by Terraform define the desired state of app resources and configurations.
- Enforce policies to govern deployments to the Kubernetes cluster.
- The release pipeline automatically executes a predefined deployment strategy with each code change.
- Add policy audit and automatic remediation to the CI/CD pipeline. For example, only the release pipeline has permission to create new pods in your Kubernetes environment.
- Enable app telemetry, container health monitoring, and real-time log analytics.
- Address issues with insights and inform plans for the next sprint.
Build on the strengths of Kubernetes with Azure
Automate provisioning, upgrading, monitoring, and scaling with the fully managed Microsoft Azure Kubernetes Service (AKS). Get serverless Kubernetes, a simpler development-to-production experience, and enterprise-grade security and governance.
Draw inspiration and innovation from the Kubernetes community
Kubernetes was created by—and thrives because of—the thousands of individuals and hundreds of organizations who have given their wisdom, code, and continuing support to the people who use it. Build the success of your software on top of their impassioned contributions.
Microsoft contributions to Kubernetes
Bringing open-source ingenuity to enterprises
To make Kubernetes easier for organizations to adopt—and easier for developers to use—Microsoft has tripled the number of employees who participate in the open source project in just three years. Now the third-leading corporate contributor, Microsoft works to make Kubernetes more enterprise-friendly and accessible by bringing the latest learnings and best practices from working with diverse customers to the Kubernetes community.
FAQs – Kubernetes
Follow this curated journey to begin learning Kubernetes.
Kubernetes is useful in scenarios ranging from moving applications to the cloud to simplifying challenges in machine learning and AI.
Key use cases include:
- Migrating existing applications to the cloud
- Simplifying deployment and management of microservices-based applications
- Scaling with ease
- IoT device deployment and management
- Machine learning
See best practices and architectural patterns created by the thousands of technical professionals and partners who use Kubernetes.
A Kubernetes deployment allows you to describe your desired application deployment state. The Kubernetes control plane ensures the actual state matches your desired state—and maintains that state in the event one or more pods crash. Kubernetes deployments also allow you to consistently upgrade your applications without downtime.
Deployment to Kubernetes using DevOps typically involves a repository such as Git for version management. The repository serves as the beginning of the CI/CD pipeline. Depending on the approach you use, changes in the repository trigger integration, build, delivery, and deployment activities.
Kubernetes and Docker work together.
Docker provides an open standard for packaging and distributing containerized applications. Using Docker, you can build and run containers, and store and share container images.
Kubernetes orchestrates and manages the distributed, containerized applications that Docker creates. It also provides the infrastructure needed to deploy and run those applications on a cluster of machines.
More about Kubernetes
- Learn Kubernetes basics
- See Kubernetes best practices
- Learn more about containers
Learn about AKS
- Explore Azure Kubernetes Service (AKS)
Watch AKS videos and on-demand Azure webinars for demos, top features, and technical sessions.
- Take the self-paced Azure Kubernetes workshop
- See Azure quickstart templates for Kubernetes
- See AKS regional availability
Join other AKS users on GitHub, at KubeCon, or at a Kubernetes Meetup near you.
Follow step-by-step AKS tutorials:
- Create container images from an application
- Upload container images to the Azure Container Registry
- Deploy an AKS cluster
- Run container images in Kubernetes
- Scale an application and Kubernetes infrastructure
- Update an application running in Kubernetes
- Upgrade AKS cluster
Open source and Azure
- Find out more about open source on Azure
- See APIs, SDKs, and open source projects from Azure
- Read the Designing Distributed Systems e-book
Ready when you are—try Kubernetes free on Azure
What is Kubernetes and how it is different from other similar software
- by Refresh Science
- May 3, 2022
Kubernetes is a free and open source technology for automating the deployment, scaling and management of containerized applications. It is being used by major cloud providers like Google, Microsoft, IBM and Alibaba to run their container-based workloads.
So, what is Kubernetes, and how is it different from other similar software?
The history of Kubernetes:
Kubernetes was first released in 2014 by Google. The developers wanted to create a tool for automating the deployment, scaling and management of containerized applications.
Kubernetes has become one of the most popular open source projects because of its simplicity and easy integration with cloud computing services.
What is a container and why is it used in Kubernetes?
Despite a common misconception, a container is not a virtual machine. It is a lightweight, isolated environment that runs on a host and shares the host operating system's kernel.
A container image packages a software application together with all the libraries and dependencies it needs to run.
So, if you want to build a software application, you install its dependencies into the container image, and if you want to run the application, you start containers from that image on the host machine.
This is the basic concept of a container, and Kubernetes uses this concept to manage applications.
Features of Kubernetes:
One of the main reasons for using Kubernetes is to automate the process of creating, deploying, managing and monitoring the containers.
This allows the developers to focus on their primary task, which is writing the code. Kubernetes does all the work for them.
This automation also allows the developers to scale their applications as per their needs.
Kubernetes supports many tools that make it easy for the developers to deploy, scale and manage the applications.
Kubernetes can easily be scaled up or down to meet the requirements of a cloud-based service provider.
Kubernetes supports horizontal scaling, which means that you can add more machines to a cluster to increase the number of containers.
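As one hedged example of this, a HorizontalPodAutoscaler can grow or shrink a Deployment's replica count based on observed CPU usage (the `hello-web` target name is illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-web            # illustrative Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # add replicas above 70% average CPU
```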
The Kubernetes architecture is highly scalable.
Kubernetes is a free and open source project, and many cloud providers use it to run their container-based workloads.
Because it is open source, there are no licensing fees, so you can save a lot of money by adopting it.
Download Kubernetes PowerPoint Presentation:
Kubernetes security and observability.
Kubernetes is one of the most popular container management systems available today. The rise of container technology has increased the demand for containers within organizations. Because of this, Kubernetes adoption is on the rise, and so is the demand for Kubernetes security.
For example, companies like Google, Netflix, and Amazon are using Kubernetes extensively. In fact, Kubernetes is the leading platform for container orchestration and management. As such, there are a number of tools that you can use to monitor Kubernetes security.
In this blog post, we will review the top open source tools for monitoring and securing Kubernetes. We will also discuss how you can use them to protect your Kubernetes clusters.
Best open source monitoring tools for Kubernetes
There are a number of open source tools that you can use to monitor and secure Kubernetes. Here are some of the most popular tools for doing so:
1. Heptio Ark (Velero)
Heptio Ark, now known as Velero, is a cloud-native backup and disaster recovery tool for Kubernetes. It backs up cluster resources and persistent volumes and can restore them after a failure, and it provides a number of features to help you recover from issues.
Ark/Velero is one of the most popular tools for protecting Kubernetes clusters.
2. New Relic
New Relic is a commercial service that offers a lot of features to help you troubleshoot issues with your Kubernetes deployments. New Relic can help you monitor and report on performance issues and can even alert you if a Kubernetes cluster is under stress.
3. Prometheus
Prometheus is an open source service for monitoring and reporting on Kubernetes. It provides a number of features that you can use to monitor and report on Kubernetes clusters.
4. CloudWatch
CloudWatch is a commercial service from Amazon that helps you monitor and report on your Kubernetes clusters.
5. Grafana
Grafana is an open source tool that helps you monitor and report on your Kubernetes clusters.
6. Alertmanager
Alertmanager is an open source tool that helps you monitor and manage alerts for your Kubernetes clusters.
Commvault Kubernetes backup
One of the most important ways to secure your Kubernetes clusters is to back up your data.
Kubernetes is designed for high availability, but high availability is not the same as backup—without a backup strategy, it is not easy to recover your data if a failure occurs.
For example, if you’re using Kubernetes on Google Cloud, you can use Google Kubernetes Engine to create a cluster. If you’re using Kubernetes on AWS, you can use Amazon EKS (optionally with AWS Fargate) to create a cluster.
If you’re using a public cloud, you can use your provider’s backup services to back up your Kubernetes cluster.
However, if you’re using Kubernetes in your own data center, then you can use Commvault to backup your Kubernetes clusters.
Commvault is a commercial data protection product that is used to back up and restore your Kubernetes clusters.
There are a number of reasons why you should consider backing up your Kubernetes clusters.
First, you can restore your data easily if a failure occurs.
Second, you can use the Commvault Backup API to automate your backups.
Third, you can create a daily backup schedule to ensure that your data is backed up.
Fourth, you can restore your data from the Commvault Backup API.
You can read more about Commvault’s Kubernetes backup capabilities in upcoming articles. Subscribe to get notified.
I hope you liked this post about “What is Kubernetes and how it is different from other similar software”. Let us know your questions in the comments section.
References – Kubernetes site.
Top 10 Kubernetes PowerPoint Presentations For Organizations
What is Kubernetes?
It is open-source, container-centric management software that has made deployment easy for various organizations. Also known as "K8s" or "kube", it automates manual deployment processes to scale containerized applications.
Kubernetes is an open-source platform that helps in managing containerized workloads. With this definition, you may come up with questions like: what are containers, and what are they used for? Let's understand this in depth. Kubernetes enables an automated deployment process; in the past, organizations relied on traditional deployment methods. They ran applications on physical servers without any knowledge of the resources those applications consumed. Suppose you deployed 10 applications on a physical server without any clue about the resources consumed by each one. This traditional deployment approach caused issues in resource allocation.
Now, understand from this diagram:
In a traditional deployment, applications were run on physical servers, and there was no resource allocation. No boundaries were set for an application to consume resources. The solution to this was Virtualized deployment.
Organizations ran multiple virtual machines on a single server in the virtualized deployment era. Virtualization provides security to applications: the information of one application cannot be accessed by another. It also allows better utilization of resources. Container deployment is similar to virtual machines, but containers are lightweight, and their creation and deployment are easy and efficient.
What is a container and why is it used in Kubernetes?
A container is a lightweight and isolated environment that runs on a physical server. It contains everything needed for a software application to run, including the operating system, dependencies, libraries, and runtime environments. Containers offer portability and consistency, making them popular for deploying applications across different computing environments.
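As a minimal sketch of how such a container is run under Kubernetes, the Pod manifest below starts a single container from an image (the names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod           # illustrative pod name
spec:
  containers:
  - name: app
    image: nginx:1.25      # example container image
```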
Template 1: Kubernetes Concept PowerPoint Presentation
This is a complete Kubernetes PowerPoint presentation deck where you can explain everything from basic to advanced concepts. These slides contain highly researched content, and you can use this pre-made presentation to deliver an excellent talk. It also contains a 30-60-90 day plan, the benefits of Kubernetes, containerization, and everything around it. With the help of these appealing slides, you can save a great deal of time compared with creating a presentation from scratch. The best part is explaining the Kubernetes concept via diagrams, and these slides do that perfectly. Grab this complete PowerPoint presentation today and ace your upcoming presentation.
Template 2: Kubernetes Docker Container PPT
We are introducing our meticulous Kubernetes Docker container PPT. This presentation contains everything needed to equip an organization with Kubernetes knowledge. This complete deck is beautifully crafted and highly researched. The images and graphics are top-notch quality, and the slides cover the reasons for adopting Kubernetes. The colors and graphical icons are all used very carefully. Don’t miss this PowerPoint presentation—grab it today!
Template 3: Kubernetes Architecture With Diagram
The Kubernetes architecture comprises three main components: the cluster, the master, and the nodes. A cluster is a group of servers that includes disk storage, CPU resources, and other devices necessary for running applications. A node runs pods and hosts application workloads. The Kubernetes control plane sits on top of the cluster and handles the scheduling of events within the system.
To provide a comprehensive understanding of Kubernetes, Docker, containers, and their requirements, our Kubernetes PowerPoint presentation (PPT) offers an in-depth demonstration. Through this presentation, you can explore the intricacies of these technologies and gain insights into their significance in modern application development and deployment.
Template 4: Diving Digital Transformation With Containers
The Kubernetes Concepts And Architecture PowerPoint Presentation Slides are of significant importance for several reasons. Firstly, they provide a clear and concise explanation of the concept of containers, allowing the audience to grasp their significance. Additionally, these slides visually depict the architecture of containers and microservices, facilitating a better understanding of their structure and benefits. The slides offer a roadmap for implementation, highlight the benefits of Kubernetes, explain its components, and provide insights for efficient cluster management. Download this Kubernetes presentation ppt today!
Template 5: Kubernetes Architecture and Containers PPT
The need for a PowerPoint presentation (PPT) explaining the before and after aspects of Kubernetes arises from the complexity of the platform and its transformative impact on infrastructure and application deployment. The PPT provides an overview of Kubernetes, compares the traditional "before" state with the improved "after" state, and highlights the benefits of adopting Kubernetes, including simplified deployment, optimized hardware utilization, and enhanced scalability through containerization.
Template 6: Before and After Kubernetes PPT
Template 7: Security Measures in Kubernetes Presentation PPT
Download our "Best Security Practices in Kubernetes Production Environment" PowerPoint presentation to demonstrate the most effective security measures for corporations. This concise and refined presentation provides valuable insights into the top security practices that should be implemented in Kubernetes production environments. By following these best practices, corporations can enhance their security posture, protect against unauthorized access and data breaches, and ensure the reliability of their infrastructure. Proactive risk mitigation, compliance with regulatory requirements, and protection of business continuity are key reasons why corporations should download this presentation. It offers industry-recognized best practices and recommendations, empowering corporations to align with industry standards and leverage the collective knowledge of the Kubernetes community. Safeguard your Kubernetes environment and demonstrate a strong commitment to security by downloading our professional PowerPoint presentation today.
Template 8: Kubernetes Use Case
Experience the power of Kubernetes in our captivating PowerPoint presentation, "Kubernetes 7 Use Cases Heavy Computing." With seven stages highlighting its applications, this professionally designed slideshow is a must-download for anyone looking to impress their audience. Seamlessly blending creativity and professionalism, our visually appealing slides simplify complex concepts, making it easy for viewers to understand the potential of Kubernetes. Whether you're a seasoned presenter or a beginner, this versatile and editable presentation is a valuable resource across industries. Instantly download "Kubernetes 7 Use Cases Heavy Computing" and elevate your presentations to pro-level status.
Template 9: Kubernetes Components PPT
This Kubernetes presentation is a professionally designed PowerPoint template that focuses on Kubernetes components and the kubelet. Our Kubernetes PowerPoint templates are visually engaging and will take your presentation to the next level. The slides explain Kubernetes components via a diagram, covering everything from the cloud-controller-manager to the kube-controller-manager and the kube-apiserver. You can deliver a stunning presentation using this PPT.
Template 10: Kubernetes Networking Model
The Kubernetes networking model allows communication between containers. This Kubernetes presentation helps you explain the container-to-container communication concept. This communication requires creating a pod that can run two containers. A pod is a small unit managed by the Kubernetes platform, and it can hold more than one container when those containers are tightly coupled. The same information can be conveyed using our Kubernetes PPT, so download it today and save your valuable time and energy.
Kubernetes Development Environment with Kind.
Starting your journey into Kubernetes development can be overwhelming, especially when faced with the task of deploying a Kubernetes cluster. However, there is a tool called kind that makes this process much easier and more accessible, particularly for beginners in container-based development.
kind allows you to run local Kubernetes clusters using Docker container "nodes." While its primary purpose is testing Kubernetes, it can also be used for local development or continuous integration. With kind, you can quickly set up and start using a Kubernetes cluster in a matter of minutes, eliminating the complexity and time-consuming aspects of cluster deployment.
By utilizing kind, you can dive into Kubernetes development with ease, whether you're building your first application or service. It provides a user-friendly experience and eliminates the steep learning curve often associated with setting up Kubernetes environments. Within minutes of installing and running kind, you'll have a fully functional Kubernetes cluster ready for your development work.
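A minimal kind configuration file might look like the following sketch (the node layout is illustrative); passing it to `kind create cluster --config kind-config.yaml` starts a multi-node cluster locally:

```yaml
# kind-config.yaml — one control-plane node plus two workers,
# each running as a Docker container on your machine
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```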
Frequently Asked Questions
What is Kubernetes and why is it used?
Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. It is widely used by companies to effectively handle their containerized workloads.
Key features of Kubernetes include automatic scaling, which adjusts application resources based on demand, optimizing performance and cost-efficiency. Additionally, it offers load balancing to evenly distribute traffic among containers, ensuring high availability and preventing overload. Kubernetes also performs health checks, enabling proactive issue identification and resolution before users are affected. Lastly, it ensures application resilience by automatically restarting failed containers, guaranteeing continuous availability.
What are the main features of Kubernetes?
Kubernetes offers a broad array of features, including automated deployment and scaling of containerized applications, dynamic load balancing to distribute traffic across containers, continuous health monitoring to detect and restart unhealthy containers, centralized logging for collecting application logs, auto-scaling capabilities to adjust cluster size based on demand, storage management for applications, networking management, and security provisions for containerized applications.
If you know only the basics of Kubernetes, you know it’s an open source container orchestration platform designed for running distributed applications and services at scale. But you might not understand its components and how they interact.
Let’s take a brief look at the design principles that underpin Kubernetes, then explore how the different components of Kubernetes work together.
Kubernetes design principles
The design of a Kubernetes cluster is based on 3 principles, as explained in the Kubernetes implementation details.
A Kubernetes cluster should be:
- Secure. It should follow the latest security best practices.
- Easy to use. It should be operable using a few simple commands.
- Extendable. It shouldn’t favor one provider and should be customizable from a configuration file.
What are the components of a Kubernetes cluster?
A working Kubernetes deployment is called a cluster. You can visualize a Kubernetes cluster as two parts: the control plane and the compute machines, or nodes. Each node is its own Linux® environment, and could be either a physical or virtual machine. Each node runs pods, which are made up of containers.
This diagram shows how the parts of a Kubernetes cluster relate to one another:
What happens in the Kubernetes control plane?
Let’s begin in the nerve center of our Kubernetes cluster: The control plane. Here we find the Kubernetes components that control the cluster, along with data about the cluster’s state and configuration. These core Kubernetes components handle the important work of making sure your containers are running in sufficient numbers and with the necessary resources.
The control plane is in constant contact with your compute machines. You’ve configured your cluster to run a certain way. The control plane makes sure it does.
Need to interact with your Kubernetes cluster? Talk to the API. The Kubernetes API is the front end of the Kubernetes control plane, handling internal and external requests. The API server determines if a request is valid and, if it is, processes it. You can access the API through REST calls, through the kubectl command-line interface, or through other command-line tools such as kubeadm.
Is your cluster healthy? If new containers are needed, where will they fit? These are the concerns of the Kubernetes scheduler.
The scheduler considers the resource needs of a pod, such as CPU or memory, along with the health of the cluster. Then it schedules the pod to an appropriate compute node.
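As an illustrative sketch, those resource needs are declared per container in the pod spec; the scheduler uses the requests when choosing a node, while the limits cap usage at runtime (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # illustrative pod name
spec:
  containers:
  - name: app
    image: nginx:1.25      # example image
    resources:
      requests:            # what the scheduler uses to place the pod
        cpu: "250m"
        memory: "128Mi"
      limits:              # upper bound enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```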
Controllers take care of actually running the cluster, and the Kubernetes controller-manager contains several controller functions in one. One controller consults the scheduler and makes sure the correct number of pods is running. If a pod goes down, another controller notices and responds. A controller connects services to pods, so requests go to the right endpoints. And there are controllers for creating accounts and API access tokens.
Configuration data and information about the state of the cluster lives in etcd, a key-value store database. Fault-tolerant and distributed, etcd is designed to be the ultimate source of truth about your cluster.
What happens in a Kubernetes node?
A Kubernetes cluster needs at least one compute node, but will normally have many. Pods are scheduled and orchestrated to run on nodes. Need to scale up the capacity of your cluster? Add more nodes.
A pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application. Each pod is made up of a container or a series of tightly coupled containers, along with options that govern how the containers are run. Pods can be connected to persistent storage in order to run stateful applications.
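A sketch of a pod with two tightly coupled containers sharing a scratch volume (all names and images are illustrative): one writes a log file, the other tails it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo       # illustrative pod name
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}           # scratch volume shared by both containers
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  - name: log-reader
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/app.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```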
Container runtime engine
To run the containers, each compute node has a container runtime engine. Docker is one example, but Kubernetes supports other Open Container Initiative-compliant runtimes as well, such as rkt and CRI-O.
Each compute node contains a kubelet, a tiny application that communicates with the control plane. The kubelet makes sure containers are running in a pod. When the control plane needs something to happen in a node, the kubelet executes the action.
Each compute node also contains kube-proxy, a network proxy for facilitating Kubernetes networking services. The kube-proxy handles network communications inside or outside of your cluster—relying either on your operating system’s packet filtering layer, or forwarding the traffic itself.
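As a hedged example, the Service below gives a set of pods a stable virtual IP and port, and kube-proxy programs the routing from that address to the matching pods (the selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web         # route to pods carrying this label
  ports:
  - port: 80               # the Service's stable port
    targetPort: 80         # the container port behind it
  type: ClusterIP          # internal virtual IP; kube-proxy handles routing
```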
What else does a Kubernetes cluster need?
Beyond just managing the containers that run an application, Kubernetes can also manage the application data attached to a cluster. Kubernetes allows users to request storage resources without having to know the details of the underlying storage infrastructure. Persistent volumes are specific to a cluster, rather than a pod, and thus can outlive the life of a pod.
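As a sketch of that request, a PersistentVolumeClaim asks for storage by size and access mode without naming the underlying storage infrastructure (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim         # illustrative claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # request 1 GiB without naming the backing storage
```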
The container images that Kubernetes relies on are stored in a container registry. This can be a registry you configure, or a third-party registry.
Where you run Kubernetes is up to you. This can be bare metal servers, virtual machines, public cloud providers, private clouds, and hybrid cloud environments. One of Kubernetes’s key advantages is it works on many different kinds of infrastructure.
Nobody said this would be easy
This simplified overview of Kubernetes architecture just scratches the surface. As you consider how these components communicate with each other—and with external resources and infrastructure—you can appreciate the challenges of configuring and securing a Kubernetes cluster.
Kubernetes offers the tools to orchestrate a large and complex containerized application, but it also leaves many decisions up to you. You choose the operating system, container runtime, continuous integration/continuous delivery (CI/CD) tooling, application services, storage, and most other components. There’s also the work of managing roles, access control, multitenancy , and secure default settings. Additionally, you can choose to run Kubernetes on your own or work with a vendor who can provide a supported version.
This freedom of choice is part of the flexible nature of Kubernetes. While it can be complex to implement, Kubernetes gives you tremendous power to run containerized applications on your own terms, and to react to changes in your organization with agility.
Build cloud-native applications with Kubernetes
Watch this webinar series to get expert perspectives to help you establish the data platform on enterprise Kubernetes you need to build, run, deploy, and modernize applications.
Why choose Red Hat OpenShift for Kubernetes?
Red Hat is a leader and active builder of open source container technology, including Kubernetes, and creates essential tools for securing, simplifying, and automatically updating your container infrastructure.
Red Hat® OpenShift® is an enterprise-grade Kubernetes distribution. With Red Hat OpenShift, teams gain a single, integrated platform for DevOps. Red Hat OpenShift offers developers their choice of languages, frameworks, middleware, and databases, along with build and deploy automation through CI/CD to supercharge productivity. Also available is a data and storage services platform engineered for containers, Red Hat OpenShift Data Foundation.
- Cloud Computing
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Operating System
- Computer Network
- Write an Interview Experience
- Share Your Campus Experience
- Kubernetes Tutorial
Introduction to Kubernetes
Kubernetes is an open-source container management tool that automates container deployment, scaling, descaling, and load balancing (it is often called a container orchestration tool). It is written in Go and has a vast community because it was first developed by Google and later donated to the CNCF (Cloud Native Computing Foundation). Kubernetes can group any number of containers into one logical unit for easy management and deployment, and it works well across public cloud, hybrid, and on-premises environments.
Kubernetes is an open-source platform that manages Docker containers in the form of a cluster. Along with automated deployment and scaling of containers, it provides self-healing by automatically restarting failed containers and rescheduling them when their hosts die. This capability improves the application’s availability.
Features of Kubernetes:
- Automated Scheduling – Kubernetes provides an advanced scheduler to launch containers on cluster nodes while optimizing resource use.
- Self-Healing Capabilities – It reschedules, replaces, and restarts containers that have died.
- Automated Rollouts and Rollbacks – It supports rollouts and rollbacks toward the desired state of the containerized application.
- Horizontal Scaling and Load Balancing – Kubernetes can scale applications up and down as requirements change and balance load across instances.
- Resource Utilization – Kubernetes monitors and optimizes resource utilization, ensuring containers use their resources efficiently.
- Support for Multiple Clouds and Hybrid Clouds – Kubernetes can be deployed on different cloud platforms and run containerized applications across multiple clouds.
- Extensibility – Kubernetes is very extensible and can be extended with custom plugins and controllers.
- Community Support – Kubernetes has a large and active community, with frequent updates, bug fixes, and new features being added.
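Several of these features come together in a single manifest. As a minimal sketch (the name, image, and resource values below are illustrative, not from the article), the Deployment declares a desired state of three replicas with a rolling-update strategy; Kubernetes continuously reschedules and restarts Pods to keep the actual state matching it:

```yaml
# Illustrative Deployment: Kubernetes keeps 3 replicas running (self-healing,
# horizontal scaling) and replaces them gradually on updates (rollouts/rollbacks).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # hypothetical name
spec:
  replicas: 3                # desired state: three Pods at all times
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # keep at least two Pods serving during an update
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25    # changing this tag triggers an automated rollout
        resources:
          requests:
            cpu: 100m        # the scheduler uses requests to place each Pod
            memory: 128Mi
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to the cluster, which then works to converge the running state toward it.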
Architecture of Kubernetes
Kubernetes follows a client-server architecture: the master is installed on one machine and the nodes on separate Linux machines. In this master-worker model, one master manages Docker containers across multiple Kubernetes nodes. A master and its worker nodes constitute a “Kubernetes cluster”. A developer can deploy an application in Docker containers with the assistance of the Kubernetes master.
1. Kubernetes- Master Node Components –
The Kubernetes master is responsible for managing the entire cluster: it coordinates all activities inside the cluster and communicates with the worker nodes to keep Kubernetes and your application running. It is the entry point for all administrative tasks. When we install Kubernetes, four primary master components are installed. The components of the Kubernetes master node are:
a.) API Server – The API server is the entry point for all the REST commands used to control the cluster, and all administrative tasks within the master node go through it. Any request to create, delete, update, or display a Kubernetes object must pass through the API server. It validates and configures API objects such as pods, services, replication controllers, and deployments, and it is responsible for exposing an API for every operation. We can interact with these APIs using a tool called kubectl, a small Go binary that talks to the API server to perform whatever operations we issue from the command line; it is the command-line interface for running commands against Kubernetes clusters.
b.) Scheduler – The scheduler is the master service responsible for distributing the workload. It tracks resource utilization on each worker node and places new pods on nodes that have resources available to accept them, honoring any constraints you specify in the pod’s configuration file.
c.) Controller Manager – Also known as the controllers, it is a daemon that runs in a non-terminating loop and is responsible for collecting information and sending it to the API server. It regulates the Kubernetes cluster by performing lifecycle functions such as namespace creation, lifecycle-event garbage collection, terminated-pod garbage collection, cascading-deletion garbage collection, and node garbage collection. Each controller watches the desired state of the cluster; if the current state does not match, its control loop takes corrective steps until the two agree. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. In this way the controllers keep the entire cluster healthy, ensuring that nodes are up and running at all times and that the correct pods are running as specified.
d.) etcd – etcd is a lightweight, distributed key-value database. In Kubernetes it is the central store for the current cluster state at any point in time, and it also holds configuration details such as subnets and ConfigMaps. It is written in the Go programming language.
2. Kubernetes- Worker Node Components –
Kubernetes Worker node contains all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the containers scheduled. The components of the Kubernetes Worker node are:
a.) Kubelet – It is a primary node agent which communicates with the master node and executes on each worker node inside the cluster. It gets the pod specifications through the API server and executes the container associated with the pods and ensures that the containers described in the pods are running and healthy. If kubelet notices any issues with the pods running on the worker nodes then it tries to restart the pod on the same node. If the issue is with the worker node itself then the Kubernetes master node detects the node failure and decides to recreate the pods on the other healthy node.
b.) Kube-Proxy – It is the core networking component inside the Kubernetes cluster, responsible for maintaining the cluster’s network configuration. Kube-Proxy maintains the distributed network across all the nodes, pods, and containers and exposes services to the outside world. It acts as a network proxy and load balancer for services on a single worker node and manages the network routing for TCP and UDP packets. It listens to the API server for each service endpoint creation and deletion, and for each endpoint it sets up a route so that the service can be reached.
c.) Pods – A pod is a group of containers deployed together on the same host. Pods let us deploy multiple dependent containers together: the pod acts as a wrapper around them, and we interact with and manage the containers primarily through their pod.
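A minimal multi-container Pod sketch (the names and images below are illustrative): both containers share the Pod’s network namespace and can share volumes, which is what makes the Pod a useful wrapper for tightly coupled containers.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # hypothetical name
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-tailer           # sidecar reading the same files
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```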
d.) Docker – Docker is the containerization platform used to package your application together with all its dependencies as containers, so that the application works seamlessly in any environment, whether development, test, or production. Docker is a tool designed to make it easier to create, deploy, and run applications using containers, and it is the world’s leading software container platform. It was launched in 2013 by a company called dotCloud and is written in the Go language. Docker is designed to benefit both developers and system administrators, making it a part of many DevOps toolchains: developers can write code without worrying about the test and production environments, and sysadmins need not worry about infrastructure because Docker can easily scale the number of systems up and down. Docker comes into play at the deployment stage of the software development cycle.
Applications of Kubernetes
- Microservices architecture: Kubernetes is well-suited for managing microservices architectures, which involve breaking down complex applications into smaller, modular components that can be independently deployed and managed.
- Cloud-native development: Kubernetes is a key component of cloud-native development, which involves building applications that are designed to run on cloud infrastructure and take advantage of the scalability, flexibility, and resilience of the cloud.
- Continuous integration and delivery: Kubernetes integrates well with CI/CD pipelines, making it easier to automate the deployment process and roll out new versions of your application with minimal downtime.
- Hybrid and multi-cloud deployments: Kubernetes provides a consistent deployment and management experience across different cloud providers, on-premises data centers, and even developer laptops, making it easier to build and manage hybrid and multi-cloud deployments.
- High-performance computing: Kubernetes can be used to manage high-performance computing workloads, such as scientific simulations, machine learning, and big data processing.
- Edge computing: Kubernetes is also being used in edge computing applications, where it can be used to manage containerized applications running on edge devices such as IoT devices or network appliances.
Demystifying containers, Docker, and Kubernetes
Modern application infrastructure is being transformed by containers. The question is: How do you get started?
Understanding what problems containers, Docker, and Kubernetes solve is essential if you want to build modern cloud-native apps or if you want to modernize your existing legacy applications. In this post, we’ll go through what they are and how you can learn more to advance to the next level.
What are containers?
Containers effectively virtualize the host operating system (or kernel) and isolate an application’s dependencies from other containers running on the same machine. Before containers, if you had multiple applications deployed on the same virtual machine (VM), any changes to shared dependencies could cause strange things to happen—so the tendency was to have one application per virtual machine.
The solution of one application per VM solved the isolation problem for conflicting dependencies, but it wasted a lot of resources (CPU and memory). This is because a VM runs not only your application but also a full operating system that needs resources too, so less would be available for your application to use.
Containers solve this problem with two pieces: a container engine and a container image, which is a package of an application and its dependencies. The container engine runs applications in containers, isolating them from other applications running on the host machine. This removes the need to run a separate operating system for each application, allowing for higher resource utilization and lower costs.
If you want to learn more about containers, watch this short video on why you should care about containers .
What is Docker?
Docker was first released in 2013 and is responsible for revolutionizing container technology by providing a toolset to easily create container images of applications. The underlying concept has been around longer than Docker’s technology, but it was not easy to do until Docker came out with its cohesive set of tools to accomplish it. Docker consists of a few components: a container runtime (called dockerd), a container image builder (BuildKit), and a CLI that is used to work with the builder, containers, and the engine (called docker).
Docker images vs. Docker containers
A Docker image is a template; a Docker container is a running instance of that template.
To create an image with your application’s source code, you specify a list of commands in a special text file named Dockerfile. The docker builder takes this file and packages it into an image. Once you have an image, you push it to a container registry—a central repository for versioning your images.
When you want to run a Docker image, you either build it or pull it from a registry. Docker Hub is a well-known public registry, but there are also private registries, such as Azure Container Registry, that let you keep your application images private.
If you want a hands-on example, this is a great resource: Deploy Python using Docker containers.
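To pull from a private registry such as Azure Container Registry, a Pod references a registry credential. The sketch below assumes a secret named `acr-credentials` created beforehand (for example with `kubectl create secret docker-registry`); the registry hostname and image name are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  imagePullSecrets:
  - name: acr-credentials      # docker-registry secret holding the login details
  containers:
  - name: app
    image: myregistry.azurecr.io/voting-app:1.0   # illustrative private image
```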
What is Kubernetes?
Kubernetes is an open-source container management platform that unifies a cluster of machines into a single pool of compute resources. With Kubernetes, you organize your applications in groups of containers, which it runs using the Docker engine, taking care to keep your application running in the state you request.
Kubernetes provides the following:
- Compute scheduling—It considers the resource needs of your containers to find the right place to run them automatically.
- Self-healing—If a container crashes, a new one is created to replace it.
- Horizontal scaling—By observing CPU or custom metrics, Kubernetes can add and remove instances as needed.
- Volume management—It manages the persistent storage used by your applications.
- Service discovery & load balancing—IP addresses, DNS, and multiple instances are load-balanced.
- Automated rollouts & rollbacks—During updates, the health of your new instances is monitored; if a failure occurs, Kubernetes can roll back to the previous version automatically.
- Secret & configuration management—It manages application configuration and secrets.
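As an illustration of the last point, the sketch below (all names and values are invented) injects a ConfigMap entry as an environment variable and mounts a Secret as a file, keeping configuration and credentials out of the container image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  api-key: "not-a-real-key"    # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: LOG_LEVEL          # injected from the ConfigMap above
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    volumeMounts:
    - name: secrets
      mountPath: /etc/app-secrets   # secret appears as files here
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: app-secret
```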
Kubernetes uses a master/slave communication model where there is at least one master and usually several worker nodes. The master (sometimes called the control plane) has three components and a data store:
- API server —exposes the Kubernetes API for controlling the cluster
- Controller manager —responsible for watching the cluster’s objects and resources and ensuring the desired state is consistent
- Scheduler —responsible for scheduling compute requests on the cluster
- etcd —an open-source distributed key value store used to hold the cluster data
The worker nodes provide the container runtime for your applications and have a few components responsible for communicating with the master and networking on every worker node:
- Kubelet —responsible for communicating to the master and ensuring the containers are running on the node
- Kube-proxy —enables the cluster to forward traffic to executing containers
- Docker (container runtime) —provides the runtime environment for containers
The master and workers are the platform that run your applications. In order to get your applications running on the cluster, you need to interact with the API server and work with the Kubernetes object model.
To run an application on Kubernetes, you need to communicate with the API server using the object model. The objects are usually expressed in .yaml or .json format; kubectl is the command-line interface used to interact with the API.
The most common objects are:
- Pod —a group of one or more containers and metadata
- Service —works with a set of pods and directs traffic to them
- Deployment —ensures the desired state and scale are maintained
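These objects connect through labels. As a hedged sketch, the Service below forwards traffic to any Pod carrying the label `app: vote`—for example, the Pods created by a Deployment whose template sets that label (the name, ports, and label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vote-service          # hypothetical name
spec:
  selector:
    app: vote                 # matches Pods labeled app=vote
  ports:
  - port: 80                  # port the Service exposes inside the cluster
    targetPort: 8080          # port the containers listen on
  type: LoadBalancer          # ask the cloud provider for an external IP
```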
To find out more about how your applications work on Kubernetes, watch this short video by Brendan Burns on How Kubernetes works .
Here is a great walkthrough that uses a Python voting application and a Redis cache to help you get started with the Kubernetes concepts.
Containers are the foundation of modern applications. Docker provides the toolset to easily create container images of your applications, and Kubernetes gives you the platform to run it all.
Now that you know the basic pieces of the puzzle and have a better idea of what containers, Docker, and Kubernetes are all about, you can learn more at Kubernetes Learning Path.
Top 10 Kubernetes KBs! Insights from August 2023’s Top KBs for Tanzu Kubernetes.
In this blog post, we’re delving deep into the world of Kubernetes troubleshooting by exploring insights from the most popular Knowledge Base (KB) articles in August 2023. These articles represent the hottest topics in Kubernetes problem-solving, reflecting the challenges faced by administrators, DevOps engineers, and developers in real-world scenarios.
Whether you’re a Kubernetes newcomer or an experienced pro, these KB articles offer invaluable solutions to some of the most commonly encountered issues. From container runtime quagmires to network complications and vSphere integrations, the challenges in Kubernetes are diverse and ever-evolving.
Join us as we take a tour through the most popular Kubernetes KBs of August 2023, revealing the critical insights and solutions that can help you navigate the complex landscape of Kubernetes troubleshooting.
- Troubleshooting cURL “error 60: SSL certificate problem: unable to get local issuer certificate” in vSphere Integrated Containers
This knowledge base article addresses the common error message “cURL: (60) SSL certificate problem: unable to get local issuer certificate” encountered when attempting to connect to an external server from a container using the curl command. It explains the causes and risks associated with this error, highlighting the importance of secure connections. The article provides a resolution, including steps to re-download the curl CA bundle or load a self-signed, known-trusted certificate to resolve the issue. Additionally, it offers related information and commands to identify untrusted certificates.
- Persistent volumes cannot attach to a new node if previous node is deleted
This article addresses an issue related to persistent volumes in vSphere Container Storage Interface (CSI) driver. Due to a race condition between detaching and deleting volume operations, CNS (Cloud Native Storage) volumes may not detach from nodes when those nodes are deleted. This issue is particularly relevant during upgrades and when stateful workloads are using persistent volumes.
The symptoms of this problem include worker nodes being stuck during the upgrade process, with CSI controllers attempting to detach volumes from non-existent nodes. This leads to stateful workloads being unable to create containers or getting stuck in an init state.
The resolution involves applying a workaround. You need to check the current status of pods and nodes, identify outdated volume attachments, and delete the attachments for nodes that are no longer part of the cluster. This workaround allows new nodes to successfully mount the persistent volumes.
For detailed instructions and commands, please refer to the full article in the knowledge base.
- Updating private image repository CA certificate to existing clusters.
This knowledge base article guides users in updating a custom Certificate Authority (CA) certificate for private image repositories on Tanzu Kubernetes Grid (TKG) clusters. The article provides an overview of the process, including node configuration, Kapp Controller and TKR Controller updates, and handling the CA certificate for workload clusters.
- Calico-node pods and kube-proxy may fail intermittently on Photon 3
This knowledge base article addresses intermittent failures in Calico-node pods and kube-proxy on Photon 3 within Kubernetes clusters. These failures result in readiness check errors, affecting pod networking. The root cause is identified as an issue in the Linux kernel in versions prior to 5.2-rc2. A kernel patch will be backported to a future Photon 3 release to resolve this problem. Meanwhile, a workaround is provided to disable the bpfilter module using a script. Additionally, node reboots can temporarily alleviate the issue.
- Pod running on the containerd runtime in unprivileged mode gets a “bind port: permission denied” error
This knowledge base article addresses a scenario where pods, initially running successfully on the Docker runtime, encounter issues when migrated to the containerd runtime. After the migration, these pods may enter a CrashLoopBackOff state due to a “permission denied” error when attempting to bind to ports. The solution involves adjusting the security context by adding a specific sysctl configuration, net.ipv4.ip_unprivileged_port_start, to allow these pods to bind to lower-numbered ports, such as port 80. An example configuration for an Nginx pod running in unprivileged mode on port 80 is provided. This adjustment enables the pods to function correctly with the containerd runtime while still binding to privileged ports.
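Based on the fix this article describes, the security-context change looks roughly like the following (the Pod name and image are illustrative, not taken from the KB): it lowers the first unprivileged port to 80 so the container can bind it without extra privileges.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-unprivileged     # hypothetical name
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_unprivileged_port_start
      value: "80"              # ports >= 80 no longer require CAP_NET_BIND_SERVICE
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80        # binds successfully even without privileges
```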
- vCenter task “Download plug-in” keeps failing for ‘VMware TKG plugin’ with status code 502
This knowledge base article addresses an issue where the vCenter task “Download Plug-in” fails consistently for the ‘VMware TKG plugin’ with a status code 502 error. This error also affects the vCenter UI’s ability to access the TKG Service configuration under Cluster > Configure > TKG Service, resulting in a “Bad Gateway 502” error.
- Pulling images using proxy from external registries fail
This knowledge base article addresses issues with pulling container images using a proxy from external registries, specifically focusing on the registry.k8s.io registry. When attempting to run Conformance or Lite tests from Tanzu Mission Control (TMC), pods may fail to start due to image pull errors, even though connectivity through the proxy appears to be functional. The error messages indicate issues with image authentication and domain resolution.
- WCP Supervisor Cluster stuck in “Removing”
This knowledge base article addresses a situation where a VMware vCenter Server Supervisor Cluster becomes stuck in a “Removing” state. The symptoms include Supervisor Cluster VMs disappearing from the vSphere inventory and errors in WCP and EAM service logs. The cause of this issue is related to vCLS (vSphere Cluster Services) VMs interfering with the EAM (ESX Agent Manager) service, preventing the removal process from completing successfully.
- Pods fail to attach or detach volumes
This knowledge base article addresses an issue in a multi-cluster Tanzu Kubernetes Grid Integrated Edition (TKGI) environment using the vSphere CSI driver. Pods fail to attach or detach volumes, leading to errors and timeouts. The root cause is the shared cluster ID among multiple clusters, causing synchronization conflicts in vCenter. The resolution involves assigning a unique cluster ID to each vSphere CSI driver deployment. A workaround is provided for clusters with shared IDs, allowing for cluster ID changes and volume re-registration. This issue is specific to TKGI 1.10 and lower, with integration of vSphere Cloud Native Storage (CNS) in TKGI 1.11 and higher.
- Docker fails to start with “failed to dial “/run/containerd/containerd.sock”” in TKGI
This knowledge base article addresses the issue where Docker and kubelet fail to start on TKGi (Tanzu Kubernetes Grid Integrated Edition) nodes. The error message “failed to dial ‘/run/containerd/containerd.sock’: context deadline exceeded” is logged in the docker.stderr.log file. The cause of this issue is another Kubernetes workload in the cluster mounting the /run/containerd/containerd.sock directory, preventing Docker from mounting and starting up. The article provides information on the resolution for specific TKGi versions and offers a workaround involving the removal of the directory on the affected Worker node.
Our curated solutions are available 24/7 through VMware KB Articles. Make sure to also check our other resources:
- Knowledge Base : So many articles. So many topics. Find what you need.
- Product Documentation : Explore a full range of technical documentation including manuals, release notes, and more.
- Tech Zone : Go from zero to hero with the latest VMware technical resources.
- Technical Papers : Access content written by VMware technical experts.
- VMware Blogs : Benefit from the experience of industry professionals from around the VMware community.
- Compatibility Guides : Get information regarding supported and compatible hardware, software, and guest and host operating systems.
- Product Support Centers : All available support information grouped by product.
- Community Forums : Connect with users across the globe.
- Support Best Practices : Learn about best practices to help you get the most out of your support experience.
- SnS Extension Estimator : Determine whether you will receive a support contract extension when upgrading your license.
Azure Kubernetes Service (AKS)
AKS allows you to quickly deploy a production ready Kubernetes cluster in Azure. Learn how to use AKS with these quickstarts, tutorials, and samples.
About Azure Kubernetes Service (AKS)
- What is AKS?
- Secure your cluster using pod security policies (Deprecated)
- Automatically upgrade node images
- Azure Linux container host for AKS
- Vertical Pod Autoscaler (Preview)
- Workload identity
- Use Confidential Virtual Machines
- AKS GitHub Actions
- Enable Azure resources to access AKS clusters using Trusted Access (Preview)
- Kubernetes core concepts for AKS
- Clusters and workloads
- Access and identity
- Introduction to Azure Kubernetes Service
- Introduction to containers on Azure
- Build and store container images with Azure Container Registry
Deploy an AKS cluster in 5 minutes
- Azure PowerShell
- Azure Portal
- Resource Manager template
Deploy an application in 5 minutes
- Develop with Helm
- Develop with Dapr
- Use Draft and the DevX extension for Visual Studio Code
- Use Automated Deployments
- Use Bridge to Kubernetes with Visual Studio Code
- Use Bridge to Kubernetes with Visual Studio
- Run Java Open Liberty
- Run Java WebLogic Server
- Baseline architecture
- Baseline for microservices
- Baseline for PCI-DSS 3.2.1
- Baseline for multiregion
- Day-2 operations guide
- Best practices for cluster operators and developers
- Other AKS solutions
Configure your cluster for Windows containers
- Create a Windows Server container using the Azure CLI
- Create a Windows Server container using the Azure PowerShell
- Windows Server node security
- Security baseline
- Upgrade from Windows Server 2019 to 2022
- Create Dockerfiles for Windows Server containers
- Optimize Dockerfiles for Windows Server containers
- Use HostProcess containers
- Enable network policies
- Use Azure disks CSI drivers
- Use Azure files CSI drivers
- Connect to Windows Server nodes over RDP
- Connect to Windows Server nodes over SSH
- Windows Server containers FAQ
Deploy, manage, and update applications
- 1. Prepare an application for AKS
- 2. Deploy and use Azure Container Registry
- 3. Deploy an AKS cluster
- 4. Run your application
- 5. Scale applications
- 6. Update an application
- 7. Upgrade Kubernetes in AKS
Extend the capabilities of your cluster
- Istio add-on
- Dapr cluster extension
- Cluster extensions
- GitHub Actions for AKS
- Troubleshooting Guides
- Troubleshoot create operations
- Troubleshoot common issues
Learn Kubernetes Basics
This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.
Using the interactive tutorials, you can learn to:
- Deploy a containerized application on a cluster.
- Scale the deployment.
- Update the containerized application with a new software version.
- Debug the containerized application.
What can Kubernetes do for you?
With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.
Kubernetes Basics Modules
1. Create a Kubernetes cluster
2. Deploy an app
3. Explore your app
4. Expose your app publicly
5. Scale up your app
6. Update your app