Kubernetes Tutorial: Learn the Basics and Get Started

It’s been over three years since Google open-sourced the Kubernetes project. Even so, you might still be wondering what Kubernetes is and how you can get started using it. Well, you’re in the right place! In this post, I’m going to explain the basics you need to know to get started with Kubernetes. And I won’t just be throwing concepts at you—I’ll give you real code examples that will help you get a better idea of why you might need to use Kubernetes if you’re thinking about using containers.


Getting Started With Hands-On Exercises

Before we dive into the core concepts of Kubernetes, you’re going to want to brew a pot of coffee.

Now, let’s get started by setting up your local environment. I’m not sure about you, but I learn more quickly by doing than by just reading or memorizing concepts. Kubernetes has a significant learning curve, and the ecosystem that surrounds this popular container-orchestration system can feel overwhelming. But once you have a solid knowledge base, you’ll see that Kubernetes is simply a lot of pieces that you put together to create something cool.

If you don’t want to go through the process of installing Kubernetes right away, there’s a playground you can use called Play with Kubernetes (PWK). I’d recommend the PWK approach only if installing Kubernetes turns out to be complicated on your current computer. There are a lot of options for installing and using Kubernetes, but I’ll focus only on the local environment.

Installing the Kubernetes Command Line

Before installing Kubernetes, you need to install the command-line interface (CLI) tool called kubectl (pronounced “cube-cuttle”). Kubectl is the CLI you’ll use to interact with any Kubernetes cluster, not just a local one. In fact, if you operate a remote cluster and never install Kubernetes locally, kubectl is the only tool you’ll need.

Installing kubectl is very simple, and the instructions vary depending on your OS. It’s supported on Linux, Windows, and macOS; you can follow the detailed instructions on the official docs page. Run the following command to verify the installation:

kubectl version

Now continue with the local installation of Kubernetes.

Installing Kubernetes Locally

You start by installing Docker. Kubernetes does support other container runtimes, but I think using Docker is the easiest way to learn Kubernetes.

Depending on your OS, I’d recommend a specific install method:

  • On Linux, the preferred method is to install minikube. You’ll need a hypervisor such as VirtualBox or KVM.
  • On Windows or Mac, the preferred method is to install the latest version of Docker Desktop. You might need to enable Kubernetes in its settings if you’re already running the newest version.

The installation process might take a while, so take this time to grab another cup of coffee.

Verify that Kubernetes is running with the following command:

kubectl get svc

If you don’t get an error, that’s it! You now have Kubernetes running locally. If you want a graphical interface, you can install the Kubernetes dashboard very quickly. In this guide I’ll only use the CLI, but the GUI is handy when you want a visual overview of all the components.

Kubernetes Core Concepts

You interact with Kubernetes through its API (typically via the kubectl CLI) to set up the desired state of your applications. That means you tell Kubernetes how your application should run, and Kubernetes makes sure the application stays in the state you specified—even if there’s a failure in the cluster, or the load increases and resources need to scale out.

You set the desired state by creating objects in Kubernetes. We’ll take a look at some of the core objects you need to deploy a resilient application with zero downtime. You define these objects in YAML format, and that’s what I’ll use to practice the concepts.

Pods

A pod is the smallest compute unit you create in Kubernetes, and it’s how you group one or more containers. A container should have only one responsibility; the reason a pod can hold more than one container is performance and co-scheduling. Containers in a pod share the same networking and the same storage, and they’re scheduled on the same host.

For example, a web app could be one container that writes its logs to a storage volume, and a second container in the same pod could collect those log files from the same volume. That way, each container keeps a single responsibility. You can think of a pod as something like a virtual machine—just remember that it isn’t one.
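As a sketch of that sidecar idea, here’s a hypothetical two-container pod. The image names and paths are placeholders, but the pattern—two containers sharing an emptyDir volume—is the standard way to implement it:

```yaml
# Hypothetical sidecar pod; image names and paths are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-logger
  labels:
    app: webapp
spec:
  volumes:
  - name: logs              # shared scratch space; its lifetime is the pod's
    emptyDir: {}
  containers:
  - name: webapp
    image: example/webapp:1.0          # writes its logs under /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-collector
    image: example/log-collector:1.0   # reads the same files from the shared volume
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
```

Because both containers mount the same volume and share the pod’s network, the collector sees the web app’s log files without either container taking on a second responsibility.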

You create a pod object with a YAML file definition named pod.yaml, like the following:

apiVersion: v1
kind: Pod
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: christianhxc/helloworld:1.0
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 50m
      limits:
        cpu: 100m

Now you create the pod by running the following command:

kubectl apply -f pod.yaml

You then verify that the pod is running with this command:

$ kubectl get pods
NAME         READY     STATUS    RESTARTS   AGE
helloworld   1/1       Running   0          41s

If a container inside the pod crashes, the kubelet will restart it automatically, because a pod’s restart policy defaults to Always. But a standalone pod like this one isn’t resilient: if you delete the pod, or the node it runs on fails, Kubernetes won’t recreate it. For that, you need the next object.

Deployments

Pods are mortal, and that means your application is not resilient on its own. A deployment object is how you give immortality to pods. You define how many pods should be running at all times, the scaling policies, and the policy for zero-downtime deployments. If a pod dies, Kubernetes will spin up a new one, because it continually verifies that the current state matches the object’s definition.

Let’s create a deployment.yaml file to say that we want three pods running at all times, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: christianhxc/helloworld:1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 50m
          limits:
            cpu: 100m

Now let’s tell Kubernetes that we want to create the deployment object by running the following command:

$ kubectl apply -f deployment.yaml
deployment.apps/helloworld created

Verify that the deployment exists by running the following command:

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
helloworld   3         3         3            3           40s

You’ll see that you now have three new pods running, plus the one we created previously.

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
helloworld                    1/1       Running   0          8h
helloworld-75d8567b94-7lmvc   1/1       Running   0          24s
helloworld-75d8567b94-n7spd   1/1       Running   0          24s
helloworld-75d8567b94-z9tjq   1/1       Running   0          24s

If you want to try killing a pod, run the command “kubectl delete pod helloworld-75d8567b94-7lmvc” (using one of the pod names from your own output). You’ll see how Kubernetes creates a new pod so that the current state matches the deployment definition. Every time you need to update the container image, you’ll change the deployment.yaml file and run the “apply” command again. Kubernetes will then create new pods with the updated container image version, and only once they’re running will the pods with the previous version be terminated. You can configure in much more detail how you want this update to happen, but for now the default behavior works for learning purposes.
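If you do want to tune that update behavior, the deployment spec accepts a strategy section. Here’s a minimal sketch of what that could look like (the numbers are illustrative, not recommendations):

```yaml
# Fragment of a deployment spec; merge into deployment.yaml under "spec".
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # allow at most one extra pod above the desired count
      maxUnavailable: 0   # never drop below the desired count during an update
```

With settings like these, Kubernetes replaces pods one at a time, always keeping the full replica count serving traffic.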

Services

At the moment, you can’t access the pods through a web browser to verify that they’re working—you need to expose them, and the recommended way to do that is with a service object. Pods in the cluster can already talk to each other through their internal IP addresses, but pods are dynamic and their IPs change every time they’re recreated, so you can’t rely on an IP. That’s why the solution is a service.

A service, then, acts as a load balancer for a set of pods. And if Kubernetes is integrated with a cloud provider, a service can provision the provider’s native load balancer for you.

The most straightforward way to define your services is as the following service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: helloworld

You might have noticed that we’ve been using labels and selectors—that’s how a service knows which pods to send requests to. In the objects you’ve created, the chosen label is “app: helloworld” in the pod’s metadata. It doesn’t matter how a pod was created: as long as it has that label, it will be “registered” with the service.

Let’s run the following command to create the service:

$ kubectl apply -f service.yaml
service/helloworld created

To verify that the service is running, and to get the IP address to test it, run the following command:

$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
helloworld   LoadBalancer   10.105.139.72   localhost     80:30299/TCP   21s
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        12h

Use the values given for “EXTERNAL-IP” and “PORT” to test it. In this case, the URL should be http://localhost/api/values for the container image we’re using; you might see different values depending on where you’re running Kubernetes. If for some reason port 80 is taken, you can change the port in the service.yaml file and run the “apply” command again.
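As a sketch of that port change, you could expose the service on a different port while still targeting port 80 inside the container (8080 here is just an illustrative choice):

```yaml
# Fragment of service.yaml; only the "ports" entry changes.
spec:
  type: LoadBalancer
  ports:
  - port: 8080        # port the service listens on externally
    targetPort: 80    # port the container actually exposes
  selector:
    app: helloworld
```

After another “kubectl apply -f service.yaml”, the app would be reachable on port 8080 instead of 80.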

And that’s it! Just by using pods, deployments, and services, you can deploy a resilient stateless app in your local environment.

What’s Next?

The next step is to deploy the same app to another environment, such as the cloud. The process and flow are the same, at least for a simple app like the one I used in these examples.

Even though this guide is a little more advanced than a typical “hello world” app, the app is still straightforward. But that’s not usually the reality in today’s complex and distributed world. I just scratched the surface—Kubernetes has a lot more concepts. (For example, I didn’t talk about other core concepts like volumes, namespaces, ingress, configmap, secrets, jobs, daemonset, and statefulset.) But the concepts I explained and demonstrated here should be a good foundation for your Kubernetes journey.

If you want to continue learning, the official documentation is a pretty good place to start.

Get your hands-on experience by practicing; it’s the best way you’ll learn Kubernetes.

This post was written by Christian Meléndez. Christian is a technologist who started as a software developer and has more recently become a cloud architect focused on implementing continuous delivery pipelines with applications in several flavors, including .NET, Node.js, and Java, often using Docker containers.
