DevOps with Kubernetes

Understanding Kubernetes

Kubernetes is a platform for managing containers across multiple hosts. It provides many management features for container-oriented applications, such as auto-scaling, rolling deployments, and compute resource and storage management. Like containers, it's designed to run anywhere: on bare metal, in our data center, in the public cloud, or even in a hybrid cloud.

Kubernetes fulfills most of the operational needs of application containers. Its highlights include the following:

  • Container deployment
  • Persistent storage
  • Container health monitoring
  • Compute resource management
  • Auto-scaling
  • High availability by cluster federation

With Kubernetes, we can manage containerized applications easily. For example, by creating a Deployment, we can roll out, update, or roll back selected containers (Chapter 9, Continuous Delivery) with just a single command, as the sketch below illustrates. Containers are considered ephemeral. If we only have one host, we could mount host volumes into containers to preserve data. In the cluster world, however, a container might be scheduled to run on any host in the cluster. How do we mount a volume without knowing which host the container will end up on? Kubernetes volumes and persistent volumes were introduced to solve this problem (Chapter 4, Managing Stateful Workloads).
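As a quick illustration, a minimal Deployment manifest might look like the following sketch. The names my-app and web and the image tag nginx:1.25 are placeholders, not values used elsewhere in this book:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  replicas: 3                 # keep three copies of the pod running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.25     # placeholder image

With this saved as deployment.yaml, kubectl apply -f deployment.yaml rolls it out, kubectl set image deployment/my-app web=nginx:1.26 performs a rolling update to a new image, and kubectl rollout undo deployment/my-app rolls back to the previous revision.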

The lifetime of containers might be short; they may be killed or restarted at any time, for example when they exceed their resource limits. How do we ensure our services are always on and served by a certain number of containers? A Deployment in Kubernetes ensures that a specified number of groups of containers (also known as pods) are up and running. Kubernetes also supports liveness probes to help you define and monitor your application's health. For better resource management, we can define the maximum capacity of Kubernetes nodes and the resource limits for each pod. The Kubernetes scheduler will then select a node that fulfills the resource criteria to run the containers. We'll learn about this further in Chapter 8, Resource Management and Scaling. Kubernetes also provides an optional Horizontal Pod Autoscaler, which we can use to scale pods horizontally based on CPU or custom metrics. Kubernetes is also designed for high availability (HA): we can create multiple master nodes to prevent single points of failure.
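To make this concrete, the following sketch shows a pod with a liveness probe and resource requests and limits, followed by a Horizontal Pod Autoscaler that scales the my-app Deployment from the earlier sketch on CPU utilization. The image, probe path and port, thresholds, and API versions are illustrative assumptions and may need adjusting for your cluster version:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: my-app
spec:
  containers:
  - name: web
    image: nginx:1.25            # placeholder image
    resources:
      requests:                  # what the scheduler reserves on a node
        cpu: 100m
        memory: 128Mi
      limits:                    # the container is throttled or killed beyond this
        cpu: 250m
        memory: 256Mi
    livenessProbe:               # the kubelet restarts the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
---
apiVersion: autoscaling/v2       # older clusters may need autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU utilization exceeds 70%

Here the requests tell the scheduler how much capacity to reserve when it picks a node, while the limits cap what the container may consume; the autoscaler then adds or removes pods between the configured minimum and maximum based on observed CPU utilization.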