Journey into Kubernetes - Deployments

Now that our namespaces and resource quotas are set, it's time to finally start putting some microservices into the cluster.

There are two main ways to get a service up into the cluster: a Pod or a Deployment. In YAML, you can define a pod, tell it what image to run, and go! This spins up a single pod in the cluster. As I mentioned in a previous article, pods can contain more than one container, but generally they don't. Pods have a finite lifetime, and when a pod restarts, everything inside it is lost. Any data you want to persist therefore can't live inside the pod itself.
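
To make that concrete, here's a sketch of a minimal bare-pod definition. The name and image are hypothetical placeholders, anticipating the fancyapi image we'll deploy below:

apiVersion: v1
kind: Pod
metadata:
  name: fancyapi
  labels:
    app: fancyapi
spec:
  containers:
    - name: fancyapi
      image: coolcontainerregistry.azurecr.io/fancyapi:v1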

There's a lot more to learn about pods, but for the sake of not getting bogged down in details, let's just understand pods as container instances. Great, let's move on to Deployments.

What is a Deployment?

An easy way to understand a deployment is as the definition of a microservice. A deployment is a wrapper around a pod (or pods) with additional functionality. You can define how many replicas of a pod you always want up, and you can control the whole set of pods through the deployment instead of managing each pod individually. For example, with a deployment that defines 6 replicas of a pod, you don't have to shut down each pod one by one to stop them... you just delete the deployment, and it stops all those replicas for you. You can scale the replicas up or down (as shown below), and more! Check the link in resources below for a LOT more information about deployments.
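
To give you a taste of controlling a whole set of pods through their deployment, scaling is a one-liner (this uses the deployment and integration namespace we're about to create below):

kubectl scale deployment/fancyapi-deployment --replicas=4 -n integration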

Let's create a deployment! Call it 'fancyapi-deployment.yaml':

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fancyapi-deployment
  namespace: integration
  labels:
    app: fancyapi
spec:
  replicas: 2
  template:
    metadata:
      name: fancyapi
      labels:
        app: fancyapi
    spec:
      containers:
        - name: fancyapi
          image: ...
          imagePullPolicy: Always
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Integration
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 512Mi
      restartPolicy: Always
  selector:
    matchLabels:
      app: fancyapi

There's a lot going on here. Let's take a look piece by piece.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fancyapi-deployment
  labels:
    app: fancyapi

This should be pretty standard by now. We're creating a Deployment named fancyapi-deployment. Theoretically the -deployment suffix is redundant, but when I run kubectl get deployments and see the list, I like the reassurance that I'm looking at deployments and not something else. Personal preference, really. We also add a label named app with the value fancyapi to the deployment.

Next, replicas.

spec:
  replicas: 2

This tells K8s that we always want 2 instances of the pod running -- no more, no fewer; the deployment continuously works to maintain that count. When you add this key, K8s creates a ReplicaSet in the background to do the counting. We'll come back to it later.
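
Once the deployment is applied (we'll do that at the end of this article), you can see the generated ReplicaSet for yourself:

kubectl get replicasets -n integration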

Under the template map is where we define what every pod replica should look like. You're already familiar with the metadata, so let's skip right down to the spec.

spec:
  containers:
    - name: fancyapi
      image: ...
      imagePullPolicy: Always
      env:
        - name: ASPNETCORE_ENVIRONMENT
          value: Integration
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
        limits:
          cpu: 1000m
          memory: 512Mi
  restartPolicy: Always

The containers property is plural because, as I mentioned in a previous article, you can host multiple containers in one pod. In our case, it's a list with a single item.

Much like everything else in the K8s world, a container needs a name. The image needs to be a URI pointing at your Azure Container Registry (ACR). Let's say, for example, that your registry is called CoolContainerRegistry and that your image is called fancyapi with a tag v1. You'd point at ACR like this:

image: coolcontainerregistry.azurecr.io/fancyapi:v1

You'll need to find the URI of your registry in the Azure Portal (or you can find it using the Azure CLI):

az acr list | sls "loginServer"
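
Note that sls is the PowerShell alias for Select-String. If you're not in PowerShell, the Azure CLI's built-in JMESPath query does the filtering for you without any piping:

az acr list --query "[].loginServer" -o tsv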

The imagePullPolicy: Always setting tells K8s to pull the image every time a container starts. This ensures that when you restart your pods after a new release, the new image gets pulled even though the tag hasn't changed.

There are multiple strategies for deploying containers to pods. This deployment uses just one of them: reusing the same tag (v1) for every release. However, you may want to tag every release with a new version (like v1.16.3.879). That makes your image look like fancyapi:v1.16.3.879, and to ensure your pods run the latest version, you'd use the kubectl set image command, shown below. Or, more likely, you have releases tagged with :int and :prod like in the scenario we've been building. In any case, you have options.
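
As a sketch, a versioned rollout with kubectl set image would look like this, using the hypothetical registry and version number from above:

kubectl set image deployment/fancyapi-deployment fancyapi=coolcontainerregistry.azurecr.io/fancyapi:v1.16.3.879 -n integration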

Next, we have the env list. This is a list of maps, each containing the name of an environment variable and its value. You can add any environment variables you want your container to have right here. In the case of ASP.NET Core, you may want to inject the environment name. We're hardcoding it here, but there's a neat trick where you can use the Pod's own metadata properties as environment variable values.

For example, if your pod is hosted in a namespace named production (namespace names must be lowercase DNS labels, so Production isn't allowed), you could inject the variable like this:

env:
  - name: ASPNETCORE_ENVIRONMENT
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

This would make ASPNETCORE_ENVIRONMENT=production and thus enable you to use production-specific appsettings. One caveat: ASP.NET Core's environment checks are case-insensitive, but the appsettings.{Environment}.json file lookup can be case-sensitive on Linux, so you'd want to name the file appsettings.production.json to match.
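
Once your pod is up, you can sanity-check what actually got injected. Something like this should work, assuming the integration namespace and that your image includes printenv:

kubectl exec -n integration deploy/fancyapi-deployment -- printenv ASPNETCORE_ENVIRONMENT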

Right then. Next, we have the resources map with requests and limits child maps. If you recall the Resources discussion from an earlier article, these values are set per container, and therefore apply to every replica.

Requests are what the pod is guaranteed to get (provided the namespace's quota isn't exhausted), and limits are the maximum the pod is allowed to consume. These values are extremely important to set! It will take some time to determine how much CPU and memory your service actually uses, but once you have rough numbers and set them, the cluster will manage everything for you. If your traffic somehow explodes, your entire cluster won't get hosed -- only the pod under load is affected: CPU usage above the limit gets throttled, and a container exceeding its memory limit is killed and restarted. That can mean 502s for calling users, but it's better than a total outage.
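
To get those rough numbers, one handy option is watching live usage via the metrics API (this assumes metrics-server is running in your cluster, which AKS ships with by default):

kubectl top pods -n integration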

As an aside, when you have your microservices all nice and cozy in the cluster and you're ready to load test, check out artillery.io. I use this Node.js library to hammer my APIs and collect results. Very handy tool.
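
A minimal Artillery scenario looks roughly like this (the target URL, duration, arrival rate, and endpoint are placeholders); save it as loadtest.yaml and run it with artillery run loadtest.yaml:

config:
  target: "http://your-api-host"
  phases:
    - duration: 60
      arrivalRate: 20
scenarios:
  - flow:
      - get:
          url: "/health"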

Okay, lastly, we have restartPolicy: Always. This tells the pod to restart its containers whenever they exit, for any reason: the pod running out of resources, a critical boot failure inside the container, one of the containers in the pod dying, and even positive events like a container gracefully exiting. The pod restarts the container and stays in the 'Running' state. There are other policies, OnFailure and Never, but note that for Deployments, Always is actually the only value allowed. The other two are meant for Jobs and CronJobs, where a pod is supposed to run to completion rather than keep rerunning itself after it's finished: OnFailure retries a failed container, while Never leaves it alone either way... unless you want it to keep going.
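
For contrast, here's a sketch of a Job that runs once and is then left alone (the name, image, and command are made up purely for illustration):

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
  namespace: integration
spec:
  template:
    spec:
      containers:
        - name: one-off-task
          image: busybox
          command: ["sh", "-c", "echo done"]
      restartPolicy: Never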

Great, almost done!

Last section is the selector:

selector:
  matchLabels:
    app: fancyapi

Here we have defined a label selector. Notice that the selector sits at the same level as the template map: it applies to the deployment itself and is not part of the pod template. app: fancyapi is the label we added to the pod template, and the selector is how the deployment knows which pods it owns, so matchLabels must match the template's labels. If you look at the other two keys on this level, replicas and template, you can deduce that the deployment basically says: I want two replicas of the pod template labeled app: fancyapi. It's a bit confusing, isn't it? Leave it to Google to overcomplicate a concept and then not document it well.
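
You can use that same label selector from the command line to list exactly the pods this deployment owns:

kubectl get pods -n integration -l app=fancyapi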

Anyway, you now have your deployment YAML. Go ahead and apply it:

kubectl apply -f fancyapi-deployment.yaml

Let's see it in our cluster:

kubectl get deployments -n integration

We can also see that K8s spun up two pods for us, because we requested 2 replicas:

kubectl get pods -n integration

Hurrah! But wait... you have an API running now, but how do you access it from the outside? That's coming up in the next article!


Resources:

Kubernetes documentation, Deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
