My CKAD Notes

POD

- Create an NGINX Pod

kubectl run nginx --image=nginx

- Generate POD Manifest YAML file (-o yaml). Don't create it(--dry-run)

kubectl run nginx --image=nginx --dry-run=client -o yaml

To list the pods along with their labels

kubectl get pods --show-labels

DEPLOYMENT

- Create a deployment

kubectl create deployment --image=nginx nginx

- Generate Deployment YAML file (-o yaml). Don't create it(--dry-run)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml

NOTE

In older versions of kubectl, kubectl create deployment did not have a --replicas option. You could first create the deployment and then scale it using the kubectl scale command. (Newer kubectl versions support --replicas directly.)

Save it to a file - (If you need to modify or add some other details)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml

We can then update the YAML file with the replicas or any other field before creating the deployment.

SERVICE

- Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379

kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors)

or

kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml

(This will not use the pod's labels as selectors; instead it will assume the selector to be app=redis. You cannot pass in a selector as an option, so it does not work well if your pod has a different label set. Generate the file and modify the selectors before creating the service.)

- Create a Service named nginx-service of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes

kubectl expose pod nginx --port=80 --name nginx-service --type=NodePort --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port in manually before creating the service with the pod.)

EXAMPLES

Deploy a pod named nginx-pod using the nginx:alpine image

kubectl run nginx-pod --image=nginx:alpine

Deploy a redis pod using the redis:alpine image with the labels set to tier=db.

kubectl run redis --image=redis:alpine --labels=tier=db

Create a service redis-service to expose the redis application within the cluster on port 6379.

kubectl expose pod redis --name=redis-service --port=6379

Create a deployment named webapp using the image kodekloud/webapp-color with 3 replicas

kubectl create deployment webapp --image=kodekloud/webapp-color

kubectl scale deployment webapp --replicas=3

Create a new pod called custom-nginx using the nginx image and expose it on container port 8080

kubectl run custom-nginx --image=nginx --port=8080

Create a new namespace called dev-ns.

kubectl create namespace dev-ns

Create a new deployment called redis-deploy in the dev-ns namespace with the redis image. It should have 2 replicas.

kubectl create deployment redis-deploy -n dev-ns --image=redis

kubectl scale deployment redis-deploy -n dev-ns --replicas=2

Create a pod called httpd using the image httpd:alpine in the default namespace. Next, create a service of type ClusterIP by the same name (httpd). The target port for the service should be 80.

kubectl run httpd --image=httpd:alpine

kubectl expose pod httpd --port=80


Edit PODs and Deployments

Edit a POD

Remember, we CANNOT edit specifications of an existing POD other than the below.

spec.containers[*].image

spec.initContainers[*].image

spec.activeDeadlineSeconds

spec.tolerations

For example we cannot edit the environment variables, service accounts, resource limits of a running pod. But if you really want to, you have 2 options:

Run the kubectl edit pod <pod name> command. This will open the pod specification in an editor (vi editor). Then edit the required properties. When you try to save it, you will be denied. This is because you are attempting to edit a field on the pod that is not editable.

A copy of the file with your changes is saved as a yaml file in a temporary location.

We can then delete the existing pod by running the command: kubectl delete pod <pod name>

Then create a new pod with changes using the temporary file kubectl create -f <newly created yaml file with changes>

like kubectl create -f /tmp/kubectl-edit-ccvrq.yaml

The second option is to extract the pod definition in YAML format to a file using the command kubectl get pod <pod name> -o yaml > my-new-pod.yaml

Then make the changes to the exported file using an editor (vi editor). Save the changes vi my-new-pod.yaml

Then delete the existing pod kubectl delete pod <pod-name>

Then create a new pod with the edited file kubectl create -f my-new-pod.yaml

Edit Deployments

With Deployments we can easily edit any field/property of the POD template. Since the pod template is a child of the deployment specification, with every change the deployment will automatically delete and create a new pod with the new changes. So if we are asked to edit a property of a POD that is part of a deployment, we may do that simply by running the command

kubectl edit deployment my-deployment

configMaps

When we have a lot of pod definition files, it will be difficult to manage the environment data stored within the various files. We can take this information out of the POD definition file and manage it centrally using ConfigMaps.

Config maps are used to pass configuration data in the form of key value pairs in kubernetes.

When a POD is created, inject the config map into the pod. The key value pairs are then available as environment variables for the application hosted inside the container in the POD.

There are two phases involved in configuring configMaps. 

* Create configMap.

* Inject into POD.


There are two ways of creating a configMap:

configmap

APP_COLOR: blue

APP_MODE: prod

Imperative way - kubectl create configmap

without using configmap definition file.


Declarative way - kubectl create -f <filename>

using configmap definition file.
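
For example, a minimal sketch (the configmap name app-config is an assumption for illustration; the data matches the key value pairs above):

kubectl create configmap app-config --from-literal=APP_COLOR=blue --from-literal=APP_MODE=prod

configmap-definition file

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod

kubectl create -f configmap-definition.yaml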


kubectl get configmaps

To list all the available configmaps

kubectl describe configmaps

This lists the configuration data.

ConfigMap in Pods

We have a pod-definition file; to inject the environment variables, add a new property to the container called envFrom. The envFrom property is a list, so we can pass as many configMaps as required.

Each item in the list corresponds to a configMap item. Specify the name of the configMap we created earlier.
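
A minimal sketch of injecting the configMap as environment variables (the pod and configMap names are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
    - name: simple-webapp-color
      image: simple-webapp-color
      envFrom:
        # each list item refers to a configMap by name
        - configMapRef:
            name: app-config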




For example, hardcoding the host, username and passwords in the code is not a good idea.

One option is to move this data to a configMap. A configMap stores configuration data in plain text format, so it is fine to move the hostname and username there, but it is definitely not the right place for passwords.


Secrets

Secrets are used to store sensitive information.

Secrets are similar to configMaps except that they are stored in an encoded (base64) format.


There are two steps involved in working with secrets.

1. Create a secret

2. Inject into a POD.


Create a secret

There are two ways to create secrets.

* Imperative way - without using a secret definition file.


* Declarative way - using a secret definition file.


We need to base64-encode passwords and other sensitive values when supplying them in the definition file.
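
A minimal sketch, with illustrative names and values (app-secret and DB_Password are assumptions):

Imperative:

kubectl create secret generic app-secret --from-literal=DB_Password=mypassword

To base64-encode a value for a definition file:

echo -n 'mypassword' | base64
# bXlwYXNzd29yZA==

secret-definition file

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Password: bXlwYXNzd29yZA==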

kubectl get secrets
To view the secrets available

kubectl describe secrets
To view more information about the newly created secret. This shows the attributes of the newly created secret, but hides the values.

To view the values of secrets
kubectl get secret <secret name> -o yaml

If we want to decode the encoded values: echo -n '<encoded value>' | base64 --decode

Inject into a POD
We inject secrets as environment variables in the pod definition file as below
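
A minimal sketch (the pod and secret names are assumptions matching the examples above):

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
    - name: simple-webapp-color
      image: simple-webapp-color
      envFrom:
        # injects all keys of the secret as environment variables
        - secretRef:
            name: app-secret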

Docker Security
Let's assume a host with docker installed on it. This host has a set of its own processes running, such as a number of operating system processes, the docker daemon, an ssh server etc. We will now run an ubuntu docker container that runs a process that sleeps for an hour: docker run ubuntu sleep 3600.
We have learnt that unlike virtual machines, containers are not completely isolated from their host. The container and host share the same kernel. Containers are isolated using namespaces in Linux. The host has a namespace and the containers have their own namespaces. All the processes run by the containers are in fact run on the host itself, but in their own namespaces. As far as the docker container is concerned, it is in its own namespace and it can see its own processes only. It cannot see anything outside of it.
When we list the processes from within the docker container, we can see only the sleep process with process ID 1.
For the docker host, all processes of its own as well as those in the child namespaces are visible as just more processes in the system. So when we list the processes on the host, we see a list of processes including the sleep command, but with a different process ID. This is because processes can have different process IDs in different namespaces, and this is how docker isolates containers within a system.
This is called Process Isolation.

Security - Users
The docker host has a set of users, a root user as well as a number of non-root users. By default, docker runs processes within containers as the root user.
Both within the container and outside the container on the host, the process is run as the root user.
If we do not want the process within the container to run as the root user, we may set the user using the --user option within the docker run command and specify the new user ID.
docker run --user=1000 ubuntu sleep 3600
We will see that the process now runs with the new user ID. Another way to enforce user security is to define the user in the docker image itself at the time of creation.

For Example:

Dockerfile

FROM ubuntu
USER 1000

docker build -t my-ubuntu-image .
docker run my-ubuntu-image sleep 3600
This process will be run with the userId 1000.

Docker implements a set of security features that limit the abilities of the root user within the container. The root user within the container isn't really like the root user on the host.
Docker uses Linux capabilities to implement this. The root user is the most powerful user on the system; the root user can literally do anything, and so can a process run by the root user. It has unrestricted access to the system, such as modifying files and permissions on files and so on.

We can control and limit what capabilities are made available to the user. By default, docker runs a container with a limited set of capabilities so that the processes running within the container do not have privileges such as rebooting the host or performing operations that can disrupt the host or other containers running on the same host.
If we want to override this behaviour and provide additional privileges beyond what is available, use the --cap-add option in the docker run command.
docker run --cap-add MAC_ADMIN ubuntu
We can drop privileges as well using the --cap-drop option.
docker run --cap-drop KILL ubuntu
If we want to run the container with all privileges available, use the --privileged flag
docker run --privileged ubuntu

Kubernetes Security Contexts
AUTHENTICATIONS






We need to give the kubeconfig file path to run commands against a specific cluster; otherwise the default kubeconfig is used:
kubectl get pods --kubeconfig=/path/to/admin.conf

API GROUPS

kubectl api-resources -o wide
This lists the resources and their API groups, which helps when writing YAML to create roles.

AUTHORIZATIONS
To see the authorization mode, check the --authorization-mode flag on the kube-apiserver pod:
kubectl describe pod kube-apiserver-controlplane -n kube-system

RBAC ROLE BASED ACCESS CONTROL




kubectl create role developer --verb=create --verb=list --verb=delete --resource=pods


kubectl create rolebinding dev-user-binding --role=developer --user=dev-user
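
The declarative equivalent, as a sketch (names match the commands above):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-binding
subjects:
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io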

CLUSTER ROLES



This command gives how many ClusterRoles exist on the cluster; the important point is piping to wc -l to count the lines.
kubectl get clusterroles --no-headers  | wc -l

kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods

kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --group=group1

Admission Controller




For example, the namespace blue does not exist, but an admission controller can create it automatically.
To see the enabled and disabled admission plugins, use this command:
ps -ef | grep kube-apiserver | grep admission-plugins

 

create tls secret for webhook : 
kubectl -n webhook-demo create secret tls webhook-server-tls \
    --cert "/root/keys/webhook-server-tls.crt" \
    --key "/root/keys/webhook-server-tls.key"

kubectl proxy --port=8001 &
This command lets us use curl to check API details, for example:
curl localhost:8001/apis/authorization.k8s.io
As a result, it returns the authorization.k8s.io API group details:
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "authorization.k8s.io",
  "versions": [
    {
      "groupVersion": "authorization.k8s.io/v1",
      "version": "v1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "authorization.k8s.io/v1",
    "version": "v1"
  }
}

Custom Resource Definition (CRD)


Set the dataField length to 2 and access permission should be true:


kind: Global
apiVersion: traffic.controller/v1
metadata:
  name: datacenter
spec:
  dataField: 2
  access: true
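
A sketch of the CustomResourceDefinition that could back this resource (the group, kind and schema are assumptions inferred from the example above):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: globals.traffic.controller
spec:
  group: traffic.controller
  scope: Namespaced
  names:
    kind: Global
    singular: global
    plural: globals
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                dataField:
                  type: integer
                access:
                  type: boolean
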
Container Security

When we run a docker container, we have the option to define a set of security standards such as the ID of the user used to run the container, the Linux capabilities that can be added to or removed from the container, etc. These can be configured in kubernetes as well.

In kubernetes containers are encapsulated in PODs. We may choose to configure the security settings at a container level or at a POD level. If we configure it at POD level, the settings will carry over to all the containers within the POD. If we configure it at both container and POD, the settings on the container will overwrite the settings on the POD.

To set the security at POD level.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  securityContext:
    # Setting the user
    runAsUser: 1000
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "3600"]
To set the security at container level, move the securityContext section under the container specification as below.

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "3600"]
      securityContext:
        # Setting the user
        runAsUser: 1000
        capabilities:
          add: ["MAC_ADMIN"]
To add capabilities, use the capabilities option and specify a list of capabilities to add to the container.

"Capabilities are only supported at the container level and not at the POD level."

The concept of service accounts is linked to other security related concepts in kubernetes such as authentication, authorization, role based access controls etc.

There are two types of accounts in the kubernetes.

User Account
The user account is used by humans.
A user account could be for an administrator accessing the cluster to perform administrative tasks, or a developer accessing the cluster to deploy applications etc.

Service Account
The service account is used by machines.
A service account could be an account used by an application to interact with the kubernetes cluster.

For example: An automated build tool like jenkins uses service accounts to deploy applications on the kubernetes cluster.

To create a service account
kubectl create serviceaccount <account name>

To view all the service accounts kubectl get serviceaccount

To describe the service account
kubectl describe serviceaccount <account name>
When a service account is created, it also creates a token automatically.

The service account token is what must be used by the external application while authenticating to the kubernetes API. The token, however, is stored as a secret object. The secret object is linked to the service account.


This token can then be used as an authentication bearer token while making REST call to the kubernetes API.

For Example: Using curl, We can use bearer token as authorization header while making REST call to the kubernetes API.

Thus, we can create a service account, assign the right permissions using the role based access control mechanism, export the service account token and use it to configure a third party application to authenticate to the kubernetes API.

If the third party application is hosted on kubernetes cluster itself, the whole process of exporting service account token and configuring the third party application to use it can be made simple by automatically mounting service token secret as a volume inside the pod hosting the third party application. That way, the token to access the kubernetes API is already placed inside the POD and can be easily read by the application.

For every namespace in kubernetes, a service account named default is automatically created. Each namespace has its own default service account.
Whenever a POD is created, the default service account and its token are automatically mounted to that POD as a volume mount.


The default service account is very much restricted, it only has permissions to run basic kubernetes queries.
If we want to use different service account(which we created), modify the pod-definition file to include a service account field and specify the name of the service account.

"We cannot edit service account of the existimg POD, we must delete and recreate the POD. However, in case of deployment we can edit the service account of the POD, as any changes th POD file automatically trigger the new rollout for the deployment. The deployment will take care of deleting and recreating PODs with the right service account."

Kubernetes automatically mounts the default service account if we haven't explicitly specified one. We may choose not to mount a service account token automatically by setting automountServiceAccountToken to false under the spec section as below.
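
A minimal sketch (the service account name dashboard-sa is an assumption for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: my-kubernetes-dashboard
spec:
  containers:
    - name: my-kubernetes-dashboard
      image: my-kubernetes-dashboard
  # use a specific service account instead of the default
  serviceAccountName: dashboard-sa
  # or, to skip mounting the default service account token:
  # automountServiceAccountToken: false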

Let us consider a 3-node kubernetes cluster, each node has a set of CPU, memory and disk resources available. A POD consumes a set of resources. Whenever a POD is placed on a node, it consumes resources available to that node.
Kubernetes scheduler decides to which node the POD goes to. The scheduler takes into consideration, the amount of resources required by a POD and those available on the nodes.
If the node does not have sufficient resources, the scheduler avoids placing the POD on that node and instead places the POD on one where sufficient resources are available.
If there are no sufficient resources available on any of the nodes, kubernetes holds back scheduling the POD; we will see the POD in a Pending state, and in the events we can see the reason as Insufficient cpu.

Resource Requirements for POD
By default, kubernetes assumes that a POD or a container inside the POD requires 0.5 CPU and 256 mebibytes (Mi) of memory. This is known as the resource request for a container, i.e., the minimum CPU or memory requested by the container. When the scheduler tries to place a POD on a node, it uses these numbers to identify a node which has sufficient resources available.

If we know that our application needs more than these, we can modify these values by specifying them in the POD or Deployment definition files.

In the simple POD definition file, add a section called resources, under which add requests and specify new values of CPU and memory.
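
A minimal sketch of the requests section (values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
    - name: simple-webapp-color
      image: simple-webapp-color
      resources:
        requests:
          memory: "1Gi"
          cpu: 1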

Resources - CPU
1 count of CPU - 1 block of CPU.
Blocks are used for illustration purpose only, it doesn't have to be in the increment of 0.5. We can specify any value as low as 0.1.
0.1 CPU can also be expressed as 100 m (m stands for milli). We can go as low as 1m but not lower than that.

1 count of CPU is equivalent to

  • 1 vCPU in AWS or
  • 1 core in GCP or
  • 1 core in Azure or
  • 1 Hyperthread
We could request a higher number of CPUs for the container, provided the nodes have sufficient resources.

Similarly, with memory, we can specify 256Mi, or specify the same value in bytes as 268435456, or use other suffixes such as G, M, K or Gi, Mi, Ki.

In Docker world, a docker container has no limit to the resources it can consume on a node.
Let's say a container starts with 1 vCPU on a node; it can go up and consume as much resource as it requires, starving the native processes on the node or other containers of resources. However, we can set a limit for the resource usage on these PODs by adding a limits section under the resources section in the pod-definition file (specifying new memory and CPU values) as below.
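
Continuing the sketch above, with an illustrative limits section added under resources:

      resources:
        requests:
          memory: "1Gi"
          cpu: 1
        limits:
          memory: "2Gi"
          cpu: 2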

When the POD is created, kubernetes sets new limits for the container.
The limits and requests are set for each container within the POD.

When a POD tries to exceed the resources beyond its specified limit:
In case of CPU, kubernetes THROTTLES the CPU so that it does not go beyond the specified limit. A container cannot use more CPU resources than its limit.
In case of memory, a container CAN use more memory than its limit. So, if a POD constantly tries to consume more memory than its limit, the POD gets terminated.

Taints and tolerations have nothing to do with security or intrusion on the cluster.

Taints and Tolerations are used to set restrictions on what pods can be scheduled on a node.

When the PODS are created, kubernetes scheduler tries to place these pods on the available worker nodes.
As of now, there are no restrictions or limitations and so scheduler places the PODs across all of the nodes to balance them out equally.

Let us assume we have dedicated resources on node1 for a particular use case or application, so we would like only those PODs that belong to this application to be placed on node1.

First, we prevent all PODs from placing on node1 by placing a taint on the node, lets say it as blue.
By default, PODs won't have any tolerations, i.e., unless specified otherwise none of the PODs can tolerate any taint. So, in this case none of the PODs can be placed on node1 as none of them can tolerate the taint blue, i.e., no unwanted PODs can be placed on node1.
Now, we need to enable certain PODs to be placed on this node. For this, we must specify which PODs are tolerant for this particular taint. In our case, we would like to allow only POD D to be placed on this node. So, we add a toleration to POD D. POD D is now tolerant to blue. Now, when scheduler tries to place POD D on node1, it goes through. Node1 can only accept PODs that are tolerant to blue taint.

!!! Note " Taints are set on nodes and Tolerations are set on PODs "

Taints - Node
kubectl taint nodes node-name key=value:taint-effect
To taint a node , specify the node name to taint followed by a taint itself which is a key value pair.
taint-effect defines what would happen to the PODs if they do not tolerate the taint.

There are three taint effects.
* NoSchedule -- PODs will not be scheduled on the node.
* PreferNoSchedule -- The system will try to avoid placing a POD on the node, but that is not guaranteed.
* NoExecute -- New PODs will not be scheduled on the node and existing pods on the node will be evicted if they do not tolerate the taint. These PODs may have been scheduled on the node before the taint was applied.

kubectl taint nodes node1 app=blue:NoSchedule

Tolerations - PODs
To add a toleration to the POD, pull up the pod-definition file, and under the spec section add a tolerations section; in this section add all the values that were specified while placing the taint on the node.

kubectl taint nodes node1 app=blue:NoSchedule

pod-definition file

apiVersion: v1
kind: Pod
metadata: 
  name: myapp-pod
spec:
  containers:
    - name: nginx-container
      image: nginx
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"
When the PODs are now created or updated with the new tolerations, they are either not scheduled on node1 or evicted from the existing node, depending on the effect set.

Taints and Tolerations are only meant to restrict nodes from accepting certain PODs. They do not guarantee that a POD with a toleration will only be placed on the tainted node. Since there won't be any taints on other nodes, the POD with the toleration can very well be placed on other nodes.

!!! Note " Taints and Tolerations does not tell the POD to go to a particular node. Instead, it tells the node to only accept PODs with certain Tolerations."

Master Node
The scheduler does not schedule any PODs on the master node.
When the kubernetes cluster is first set up, a taint is set up on the master node automatically that prevents any PODs from being scheduled on this node.
We can modify this if required. However, the best practice is to not deploy application workloads on master server.

To view the taints on the master node
kubectl describe node <nodename> | grep Taint

Let us start with a simple example,
We have 3 node cluster of which 2 are smaller nodes with lower hardware resources and one of them is a larger node configured with higher resources.
We have different kinds of workloads running in the cluster, we would like to dedicate the data processing workloads that require higher horse power to the larger node as that is the only node that will not run out of resources, in case job demands extra resources.
However, in the current default setup any POD can go to any node. Therefore, the POD that requires higher horse power may end up placed on a smaller node.

To solve this, we can set limitation on PODs, so that they only run on particular nodes.

There are two ways to do this.
* Node Selectors
* Node Affinity

Node Selectors
pod-definition file

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: data-processor
    image: data-processor
  # To limit this pod to run on larger node
  nodeSelector:
    size: Large
    # Large is a label assigned to the node; the scheduler uses these labels to match and identify the right nodes to place the pods on
We need to label the node before using the labels in nodeSelector of Pod file.

To label the node
kubectl label nodes <node-name> <label-key>=<label-value>
kubectl label nodes node-1 size=Large

Now we can create the pod using pod-definition file
kubectl create -f pod-definition.yaml

Now when the pod is now created, it is placed on larger node as desired.

Limitations
We used a single label and selector to achieve our goals, but what if we have some complex requirement.
For Example: Place the pod on a large or medium node, or something like place the pod on any node that is not small. We cannot achieve this using node selectors. For this, we use Node Affinity.

Node Affinity
The primary purpose of Node Affinity feature is to ensure that pods are hosted on particular nodes.
The Node Affinity feature provides us with advanced capabilities to limit pod placement on specific nodes.

With great power, comes great complexity. So, the pod-definition file with Node Affinity looks as below.
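
A minimal sketch of an equivalent nodeAffinity section (this uses the standard affinity API and the size=Large label from the nodeSelector example):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: data-processor
      image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: size
                operator: In
                values:
                  - Large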

Both the Node Selector and the Node Affinity definitions do the same thing here, that is placing the pod on the large node.

For a large or medium node, add Medium to the list of values under the In operator.

For nodes that are not small, use the NotIn operator with the value Small.

If we did not set the size label on the small nodes, we can just use the Exists operator as below

key: size
operator: Exists
When the pods are created, these rules are considered and the pods are placed onto the right nodes. If the node affinity could not match a node with the given expression, that is, if we don't have any node with the label size, then this is handled by the node affinity type.

Node Affinity Types
  • requiredDuringSchedulingIgnoredDuringExecution
  • preferredDuringSchedulingIgnoredDuringExecution


Let us assume,
we have three nodes and three pods in three colors Blue, Red and Green as below.

The ultimate aim is to place the blue pod on the blue node, the red pod on the red node and the green pod on the green node.
We have other pods in the cluster as well as other nodes.
We do not want any other pods to be placed on our color nodes, nor do we want our pods to be placed on their nodes.

Using Taints and Tolerations
We apply a taint to the nodes marking them with their colors blue, red and green, and set tolerations on the pods to tolerate the respective colors.
When the pods are now created, the nodes ensure they only accept the pods with the right toleration.

However, taints and tolerations do not guarantee that our pods will only prefer these nodes, so any of our pods can end up on other nodes which do not have any taint set.

Using Node Affinity
With Node Affinity, we first label the nodes with their respective colors blue, red and green. We then set node selectors on the pods, to tie the pods to the nodes. As such, the pods end up on the right nodes.

However, this does not guarantee that other pods are not placed on these nodes. In this case, there is a chance that one of the other pods may end up on our nodes as below.

Using Taints/Tolerations and Node Affinity
A combination of Taints/Tolerations and Node Affinity rules can be used together to completely dedicate nodes to specific pods.

We first use Taints and Tolerations to prevent other pods from being placed on our nodes and then we use Node Affinity to prevent our pods from being placed on other nodes.

The idea of decoupling a large monolith application into sub components known as microservices enables us to develop and deploy a set of small, independent and reusable services. This architecture then helps us scale up or down as well as modify each service as required, as opposed to modifying the entire application.

However at times, we may need two services to work together, such as a web server and a logging service. We need one agent instance per web server instance, paired together. We don't want to merge and bloat the code of the two services as each of them targets different functionality; we would still like them to be developed and deployed separately.
We only need these two functionalities to work together. We need one agent per webserver instance paired together that can scale up and down together.

That is why we have multi-container pods that share the same life cycle (created together and destroyed together), share the same network space (they can refer to each other as localhost) and have access to the same storage volumes.
This way, we do not have to establish volume sharing or services between the pods to enable communication between them.

To Create a multi-container pod, add the new container to the definition file.

pod-definition file

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
  labels:
    name: simple-webapp
spec:
  containers:
  - name: simple-webapp
    image: simple-webapp
    ports:
    - containerPort: 8080

  - name: log-agent
    image: log-agent
There are three common design patterns for multi-container pods.

  • Sidecar
  • Ambassador
  • Adapter
Example for sidecar:
Deploying a logging agent alongside a webserver to collect logs and forward them to a central log server.

Say we have multiple applications generating logs in different formats; it would be hard to process the various log formats on the central logging server. Before sending the logs to the central server, we would like to convert the logs to a common format. For this we deploy the Adapter container.
Adapter container processes the logs before sending it to the central server.

Our application communicates with different database instances at different stages of development: a local database for development, one for testing and one for production. We must make sure to modify this connectivity in our application code depending on the environment we are deploying our application to.
We may choose to outsource this logic to a separate container within the pod, so that our application can always refer to a database at localhost and the new container will proxy that request to the right database. This is known as an ambassador container. These are the different design patterns in multi-container pods.

Implementation of these patterns using the pod-definition file is the same, that is just having multiple containers in the pod-definition file.

Readiness Probes
POD LifeCycle
A POD has a POD status and conditions.

A POD status tells us where a POD is in its lifecycle.
When a POD is creating, it will be in a pending state, this is when the scheduler tries to figure out, where to place the POD.
If the scheduler cannot find a node to place the POD, it remains in the pending state.
To find out, why it's stuck in a pending state run
kubectl describe pod <podname>
Once the POD is scheduled, it goes to the ContainerCreating status, where the images needed for the application are pulled and the containers start.

Once all the containers in a POD starts, it goes into a running state. It continues to be in running state until the program gets completed successfully or it is terminated.

The status of the POD can be seen under kubectl get pods. The POD status only gives us the high level summary of the POD.

Conditions complement the POD status.
It is an array of True or False values, that tell us the state of a POD.
When a POD is scheduled on a node, the PodScheduled condition is set to True, and when the POD is initialised, the Initialized condition is set to True.
When all the containers in the POD are ready, the containersReady is set to True and finally the POD itself is considered to be ready.
To see the status the POD conditions..
kubectl describe pod <pod-name> -- look under Conditions.
We also see the ready state of the POD in the output of kubectl get pods.
The containersReady or Ready condition indicates that the containers inside the pod are ready to accept the user traffic.

Normally a server takes some seconds to initialize and be ready to serve users. During this wait period, the pod continues to show that the container is ready to serve. In this case, the status of the container is wrong. By default, kubernetes assumes that as soon as the container is created, it is ready to serve user traffic.
This makes users hit a pod that is not yet running a live application.

We need a way to tie the ready condition to the actual state of the application inside the container.

To check the application is ready or not, we can set up of different kinds of tests

  • In case of webservers - HTTP test - /api/ready
  • In case of database - TCP test - particular TCP socket is listening or not
  • custom script in the container that executes successfully when the container is ready.
In the above cases, only when the test passes will the container be set to the ready state. These tests can be configured in the pod-definition file as below.

For a minimum warmup of, say, 10 seconds we can add the initialDelaySeconds option. To specify how often to probe, we can use the periodSeconds option. By default, if the application is not ready after 3 attempts the probe will stop.
If we want to make more attempts use failureThreshold option.
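
A minimal sketch of a readiness probe (the path and timings are illustrative; tcpSocket or exec probes follow the same pattern):

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
spec:
  containers:
    - name: simple-webapp
      image: simple-webapp
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /api/ready
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
        failureThreshold: 8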

Liveness Probes
When we run an image of nginx using docker, it starts to serve users. For some reason, the web server crashes and the nginx process exits. The container exits as well.
Since docker is not an orchestration engine, the container stays dead and denies service to users until we manually create another nginx container.

In kubernetes orchestration, when we run the same application with kubernetes, every time the application crashes, kubernetes makes an attempt to restart the container to restore service to users. (If we observe, the number of restarts in the kubectl get pods output increases.)

If the application is not working (because of coding issues) but the container continues to stay alive, i.e., the container is up but users hitting the container are not served, then the container needs to be restarted, or destroyed and a new container brought up. Here comes the liveness probe.

A liveness probe can be configured on the container to periodically test whether the application within the container is actually healthy. If the test fails, the container is considered unhealthy and is destroyed and recreated.
As a developer, we need to define what it means for an application to be healthy.

  • In case of a webapplication - when an api server is up and running.
  • In case of a database - a particular TCP socket is listening.
  • Simply execute a command to perform a test.
A liveness probe is configured in the pod-definition file in the same way as a readiness probe.
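
A minimal sketch of the section added under the container spec (path and values are illustrative):

      livenessProbe:
        httpGet:
          path: /api/healthy
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5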


In docker,
docker run -d <imagename>
To view the logs
docker logs -f <containerId>
-f option helps us to stream the logs live

In Kubernetes,
kubectl create -f <yaml-filename>
To view the logs
kubectl logs -f <podname>
Use -f to stream the logs live.
These logs are specific to the container running inside the pod.

In case of multiple container pod, we must explicitly specify name of the container in the command.
kubectl logs -f <podname> <containername>

To monitor resource consumption in kubernetes. For example, to monitor node level metrics such as the number of nodes in the cluster, how many are healthy, and performance metrics such as CPU, memory and disk utilization, as well as pod level metrics such as the number of pods and performance metrics such as CPU and memory utilization.
We need a solution that monitors these metrics, store them and provide analytics around this data.

Kubernetes does not come with a full featured monitoring solution, however there are a number of open source solutions available such as metrics server, Prometheus, Elastic Stack, Data Dog, dynatrace.

Heapster was one of the original projects that enabled monitoring and analysis features for kubernetes.
However, Heapster is now deprecated and a slimmed down version was formed known as the Metric Server.

The Metrics server receives metrics from each of the kubernetes nodes and pods, aggregates them and stores them in memory.
The metrics server is only an in-memory monitoring solution and does not store the metrics on disk. Therefore, we cannot see historical performance data. For this, we must rely on advanced monitoring solutions such as Prometheus, Elastic Stack, Data Dog, dynatrace.

Kubernetes runs an agent on each node known as the kubelet, which is responsible for receiving instructions from the kubernetes API server and running pods on the node.
The kubelet also contains a sub component known as cAdvisor or Container Advisor.

The cAdvisor is responsible for retrieving performance metrics from pods and exposing them through the kubelet API to make the metrics available to the metrics server.

Deploy the metrics server
In minikube - minikube addons enable metrics-server
In other environments - git clone https://github.com/kubernetes-incubator/metrics-server
then kubectl create -f deploy/1.8+/
This command deploys the roles, services and other components that enable the metrics server.

Cluster performance can be viewed
kubectl top node
This provides the CPU and memory consumption of each of the nodes.

kubectl top pod
To view performance metrics of pods.

Labels and selectors are standard methods to group things together.

labels are properties attached to each item.
selectors help to filter the items based on labels.

We create a lot of different types of objects in kubernetes such as pods, services, replicasets, deployments etc,.
For kubernetes, all of these are different objects. Over time, we may end up having hundreds or thousands of objects in our cluster. In this case, we need a way to filter and view different objects by different categories, like viewing objects by type, by application or by functionality. We can group and select objects using labels and selectors.

For each object, attach labels as per need like app, function etc. Then while selecting, specify a condition to filter specific objects like app=app1.

In pod-definition file, under metadata add labels as key value format. We can add as many labels as needed.

Once the pod is created, to select the pod with the labels
kubectl get pods --selector <labels>
kubectl get pods --selector app=app1

Kubernetes objects use labels and selectors internally to connect different objects together.

While labels and selectors are used to group and select objects, annotations are used to record other details for informative purposes.
For Example, tool details like name, version, build version etc,. or contact details mail_id, phone number etc, that may be used for some kind of integration purpose.

Rollout and Versioning
When we first create a deployment, it triggers a rollout, a new rollout creates a new deployment revision. Let's call it revision 1.
In future, when the application is upgraded i.e., container version is updated with a new image, a new rollout is triggered and a new deployment revision is created named revision2.
This helps us keep track of the changes made to our deployment and enables us to roll back to a previous revision of the deployment if necessary.

To see the status of the rollout
kubectl rollout status deployment/<deployment-name>

To see the revisions and history of rollout
kubectl rollout history deployment/<deployment-name>
This will show the revisions and history of our deployment

Deployment Strategy
There are two types of deployment strategies.
* When we have multiple replicas of a web application instance deployed, one way to upgrade these to a newer version is to destroy all of them and then create newer versions of the application instances.
The problem with this is that for the period after the older version is down and before the newer version is up, the application is down and inaccessible to users. This strategy is called the Recreate strategy, and it is not the default deployment strategy.

In the second strategy, we don't destroy all the instances at once. Instead, we take down the older version and bring up the newer version one by one. This way the application never goes down and the upgrade is seamless. This is called the rolling update strategy. If we do not specify a strategy while creating a deployment, by default it assumes the deployment strategy to be rolling update.
Rolling update is the default deployment strategy.
Updating the deployment covers things like updating the container version, updating labels or updating the number of replicas. We can modify these in the deployment definition file. Once we make the necessary changes we run
kubectl apply -f <deployment definition filename> to apply the changes.
A new rollout is triggered and a new revision of the deployment is created.

To update the image of the application, we can use
kubectl set image deployment <deploymentname> <containername>=<newimage>
If we do it this way, the configuration in the definition file will be out of date, so we need to be careful when we use the same definition file for future changes.

To see the detailed information regarding deployments
kubectl describe deployment <deploymentname>
We will notice, when the recreate strategy is used, the old replicas are scaled down to 0 first and then the new replicas are scaled up.
However, when the rolling update strategy is used, the old replicaset is scaled down one at a time, simultaneously scaling up the new replicaset one at a time.

Upgrades
When a new deployment is created, say to deploy 5 replicas, it first creates a replicaset automatically, which in turn creates the number of pods required to meet the number of replicas.
When we upgrade our application, the kubernetes deployment object creates a new replicaset under the hoods and starts deploying the containers there. At the same time, taking down the pods from the old replicaset following a rolling update strategy.
This can be seen using kubectl get replicasets

Rollback
For Example, once we upgrade our application, we realize something is wrong with the new version of the build we used to upgrade. So, we would like to rollback our update.
To rollback to the previous revision or to undo a change
kubectl rollout undo deployment <deploymentname>
The deployment then destroys the pods in the new replicaset and brings the older ones up in the old replicaset. The application is then back to the older version.

Commands

We can check the status of each revision individually by using the --revision flag
kubectl rollout history deployment <deploymentname> --revision=<number>

Normally, the change-cause field will be empty in the rollout history.
We can use the --record flag to save the command used to create/update a deployment against the revision number. kubectl set image deployment <deploymentname> <containername>=<newimage> --record

There are different types of workloads that a container can serve.
Workloads such as web servers continue to run for a long period of time, until manually taken down. There are other kinds of workloads such as batch processing, analytics or reporting, that are meant to carry out a specific task and then finish.
Example: Performing a computation, processing an image, performing an analytic task, sending an email etc,. These are meant for short period of time.

Let us first see how these workloads work in docker.
In Docker,
docker run ubuntu expr 3+2 (performs a simple math operation)
In this case, the docker container comes up, performs the simple operation, prints the output and then exits.
When we run the docker ps -a command, we see the container in an Exited state.

In Kubernetes,


Though the container computes output and exits, kubernetes continues to restart the container and bring it up again. This continues to happen until a threshold is reached.

Kubernetes wants our applications to live forever. The default behaviour of pods is to attempt to restart the container to keep it running.
This behaviour is defined by the property restartPolicy set on the pod, which is by default set to Always. Therefore, the pod always recreates the container when it exits. We can overwrite this behaviour by setting this property to Never or OnFailure. Thus, kubernetes does not restart the container once the job is finished.

pod-definition file

apiVersion: v1
kind: Pod
metadata: 
  name: math-pod
spec:
  containers: 
    - name: math-add
      image: ubuntu
      command: ['expr', '3', '+', '2']

  restartPolicy: Never

While replicaset is used to make sure a specified number of pods are running at all time, a job is used to run a set of pods to perform a given task to completion.

We create a job using a definition file.

job-definition file

apiVersion: batch/v1
kind: Job
metadata:
  name: math-add-job
spec:
  template:
  # pod-definition specification
    spec:
      containers:
        - name: math-add
          image: ubuntu
          command: ['expr', '2', '+', '3']
      restartPolicy: Never

To create a job kubectl create -f job-definition.yaml

To view the jobs
kubectl get jobs

The standard output of the container can be seen in the container logs
kubectl logs <podname>

To delete the job
kubectl delete job <jobname>


Job with multiple pods

To run multiple pods we set a value for completions under the job specification.


By default, the pods are created one after the other. The second pod is created only after the first is finished.

If the pod fails, the job tries to create new pods until it has three successful completions, and that completes the job.


Instead of getting the pods created sequentially, we can get them created in parallel. For this add the property called parallelism to the job specification. We set it to 3, to create 3 pods in parallel


The job first creates 3 pods at once, two of which complete successfully. Now we only need one more; the job is intelligent enough to create one pod at a time until we get a total of 3 successful completions.
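
A minimal sketch of the job spec with these options (values are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: math-add-job
spec:
  completions: 3
  parallelism: 3
  template:
    spec:
      containers:
        - name: math-add
          image: ubuntu
          command: ['expr', '2', '+', '3']
      restartPolicy: Never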

CronJobs

A CronJob is a job that can be scheduled. We can schedule and run a CronJob Periodically. Example: A job to generate a report and send an email.

To create cronjob
kubectl create -f <cronjob-definition-file>

To view the available cronjobs kubectl get cronjob

cronjob-definition file

apiVersion: batch/v1beta1
kind: CronJob
metadata: 
  name: reporting-cron-job
spec:
  # schedule option takes cron like format string, where it takes the time when the job is to be run
  schedule: "*/1 * * * *"
  jobTemplate:
    # Job spec
    spec: 
      completions: 3
      parallelism: 3
      template:
        spec:
          containers: 
            - name: reporting-tool
              image: reporting-tool
          restartPolicy: Never

Kubernetes services enable communication between various components within and outside of the application.
Kubernetes services help us connect applications together and with users.

For Example: Our application has groups of pods serving various functions, such as a group serving the front end to users, another group running backend processes and a third group connecting to an external database.

It is services that enable connectivity between these group of pods. Services enable the front end application made available to end users, it helps in communicating backend and frontend pods and helps in connecting to external database.
Thus, services enable loose coupling between micro-services in our application.

External communication

We deployed a pod having web application running on it. How do we as an external user access the web page.

Services

This type of service is called a NodePort Service, as the service listens on a port on the node and forwards requests to the POD.

Service Types

NodePort

Nodeport Service makes internal POD accessible on a port on the node.

This service helps us by mapping a port on the POD to a port on the node.


The port on the pod where the actual web server is running is 80; it is referred to as the targetPort, as this is the port to which the service forwards requests.

The second port is the port on the service itself. It is simply referred to as the port.
These terms are from the viewpoint of the service.

Finally, we have the port on the node itself, which we use to access the webserver externally; this is known as the nodePort. The nodePort can only be in the valid range, i.e., 30000 to 32767.

service-definition.yaml

apiVersion: v1
kind: Service
metadata: 
  name: myapp-service

spec:
  type: NodePort
  ports:
    - targetPort: 80 # assumed to be the same as port if not specified
      port: 80  # the only mandatory field
      nodePort: 30008 # if not provided, a free port in the valid range is assigned automatically
  selector:
  # to identify the pod
    app: myapp
    type: front-end

To create a service
kubectl create -f service-definition.yaml

To get all services
kubectl get services

To access the webservice curl http://<NodeIp>:<nodePort>
For Example: curl http://192.168.1.2:30008

When we have multiple PODs running our web application, they will have the same labels and when we create a service, it will consider all the pods with the particular label as target pods. We don't have to do any additional configuration to access multiple instances of a POD.
To balance the load across the multiple pods, it uses a random algorithm. Thus, service acts as a built-in load balancer to distribute load across different pods.

When the pods are distributed across multiple nodes, in this case a web application on PODs on separate nodes in the cluster.
When we create a service without any additional configuration, kubernetes automatically creates a service that spans all the nodes in the cluster and maps the target port to the same nodePort on all the nodes in the cluster.
This way, we can access the application using the IP of any node in the cluster and using the same port number.

In any case, whether it can be a single pod on the single node, multiple pods on the single node or multiple pods on the multiple nodes, the service will be created exactly the same.
When PODs are removed or added, the service gets automatically updated, making it highly flexible and adaptive. Once created, we won't typically have to make any additional configuration changes.

ClusterIP

In this case, the service creates a virtual IP inside the cluster to enable communication between sets of servers, such as a set of front end servers and a set of backend servers.

A service created for the backend pods will help to group all the backend pods together and provide a single interface for other pods to access the service. The requests are forwarded to one of the pods under the service.


service-definition.yaml

apiVersion: v1
kind: Service
metadata: 
  name: back-end
spec:
  type: ClusterIP
  # ClusterIP is the default type; if we don't specify it, the service automatically assumes it to be ClusterIP.
  ports:
   - targetPort: 80 # where backend is exposed
     port: 80 # where the service is exposed
  selector:
  # to link the service to set of pods
    app: myapp
    type: back-end 

To create a service
kubectl create -f service-definition.yaml

To get all services
kubectl get services

This service can be accessed by other pods using IP address or service name.

LoadBalancer

This service provisions a load balancer for our application in supported cloud providers.
For Example: To distribute load across the web servers in front end tier.

Ingress helps users to access the application using a single externally accessible url that we can configure to route to different services within the cluster based on the url path, at the same time implements SSL security.

Ingress is a layer 7 load balancer built into the kubernetes cluster that can be configured using native kubernetes primitives just like any other object in kubernetes.

Even with ingress, we still need to expose it to make it accessible outside the cluster; we still have to publish it either as a NodePort or with a cloud native load balancer. But this is just a one time configuration.

With Ingress, we first deploy a supported solution, which can be nginx or HAproxy, and then specify a set of rules to configure Ingress.
The solution we deploy is called an Ingress Controller and the set of rules we configure are called Ingress Resources.
Ingress Resources are created using definition files as we created for pod, deployment etc,.

Ingress Controller

A kubernetes cluster does not come with Ingress Controller by default. We must deploy one.
There are a number of solutions available for Ingress. A few of them are GCE (Google's layer 7 HTTP Load Balancer), nginx, HAproxy, Contour, traefik and Istio. Out of these, GCE and nginx are currently supported and maintained by kubernetes.

These Ingress Controllers are not just a load balancer or nginx server. The load balancer components are just a part of it. The Ingress Controllers have additional intelligence built into them to monitor the kubernetes cluster for new ingress resources and configure the nginx server accordingly.

An nginx Ingress Controller is deployed as just another deployment in kubernetes.


Ingress Resources

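
A minimal sketch of an Ingress resource (service names and paths are assumptions for illustration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
    - http:
        paths:
          - path: /wear
            pathType: Prefix
            backend:
              service:
                name: wear-service
                port:
                  number: 80
          - path: /watch
            pathType: Prefix
            backend:
              service:
                name: watch-service
                port:
                  number: 80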

Traffic

For Example, we have a web server serving frontend to users, app server serving APIs and a database server.
The user sends a request to the webserver at port 80; the webserver then sends a request to the API server at port 5000 in the backend. The API server then fetches data from the database server at port 3306 and sends the data back to the user.


We have two types of Traffic here.
For a webserver, the incoming traffic from the users is an ingress traffic and the outgoing request to the appserver is egress traffic.
While defining ingress and egress, we are just looking at the direction in which the traffic originated. The response back to the user does not really matter.
Similarly, in case of the backend API server, it receives ingress traffic at port 5000 and has egress traffic to port 3306 on the database server.
From the database server's perspective, it receives ingress traffic on port 3306 from the API server.

To list the rules to get this working, we will have:

* An Ingress rule to accept HTTP traffic on port 80 on the webserver.
* An Egress rule to allow traffic from the webserver to port 5000 on the API server.
* An Ingress rule to accept traffic on port 5000 on the API server.
* An Egress rule to allow traffic to port 3306 on the database server.
* Finally, an Ingress rule on the database server to accept traffic on port 3306.


Network Security

One of the prerequisites for networking in kubernetes is that, whatever solution we implement, the PODs should be able to communicate with each other without having to configure any additional settings like routes.

Kubernetes is configured by default with an All Allow rule that allows traffic from any POD to any other POD or service within the cluster.

In the earlier example, for each component in the application we deploy a POD: one for the frontend server, one for the API server and one for the database.
We create services to enable communication between the PODs as well as to the end user. Based on the All Allow policy, all PODs can communicate with each other in the kubernetes cluster.

Suppose we do not want the frontend server to be able to communicate with the database server directly, for example because security teams and audits require us to prevent that from happening.

That is where we would implement a Network Policy to allow traffic to the DB server only from the API server.

A Network Policy is another object in the Kubernetes namespace, just like PODs and services. We link a network policy to one or more PODs and define rules within it. In this case, we can say: only allow Ingress traffic from the API pod on port 3306.
Once this policy is created, it blocks all other traffic to the pod and only allows the traffic that matches the specified rule.
This is only applicable to the POD on which the network policy is applied.

To apply or link a network policy to a pod, we use labels and selectors. We label the pod, use the same labels in the podSelector field of the network policy, and then build our rules.

Under policyTypes, we can specify whether the policy applies to Ingress, Egress or both.

policy

Network-policy
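
A minimal sketch of such a policy, assuming the DB pod is labelled role=db and the API pod role=api (both labels are assumptions for illustration):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db            # assumed label on the DB pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api       # assumed label on the API pod
    ports:
    - protocol: TCP
      port: 3306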

To create a network policy
kubectl create -f <policy-definition file>

To get the network policies
kubectl get networkpolicy

Note:
Network policies are enforced by the network solution implemented on the Kubernetes cluster. Not all network solutions support network policies.

A few of them that are supported are:

  • kube-router
  • Romana
  • Calico
  • Weave-net

If we use Flannel as the networking solution, it does not support network policies.

Even in a cluster configured with a solution that does not support network policies, we can still create the policies; they just will not be enforced. We will not get an error message saying that the network solution does not support network policies.

Volumes in Docker

Docker Containers are meant to be transient in nature i.e., they are meant to last only for a short period of time.
They are called upon when required to process data and destroyed once finished.
The same is true for the data within the container. The data is destroyed along with the container.

To persist data processed by containers, we attach a volume to the containers when they are created. The data processed by the container is now placed in this volume, thereby retaining it permanently.
Even if the container is deleted, the data generated or processed by it remains.

Volumes in Kubernetes

Just as in Docker, the PODs created in Kubernetes are transient in nature. When a POD created to process data is deleted, the data processed by it gets deleted as well. To avoid this, we attach a volume to the POD.
The data generated by the POD is now stored in the volume and even after the POD is deleted, the data remains.

Implementation

We have a single-node Kubernetes cluster. We create a simple POD that generates a random number between 1 and 100 and writes it to a file at /opt/number.out. The POD then gets deleted, along with the random number.

To retain the number generated by the POD, we create a volume, and a volume needs storage.
When we create a volume, we can choose to configure its storage in different ways.
For now, we simply configure it to use a directory on the host.
This way, any files created in the volume will be stored in the data directory on the node.

Once the volume is created, to access it from a container we mount the volume to a directory inside the container.
We use the volumeMounts field in each container to mount the data volume to the /opt directory within the container.

The random number will now be written to /opt inside the container, which happens to be on the data volume, which is in fact the data directory on the host.

volumes

When the POD gets deleted, the file with the random number still lives on the host.
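
A sketch of such a pod definition, assuming an alpine image, a /data host directory and the shuf command to generate the number (all assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 1-100 -n 1 >> /opt/number.out"]   # writes the random number to the mounted volume
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /data          # assumed directory on the host
      type: Directory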

Volume Storage options

We can use the hostPath option to configure a directory on the host as storage space for the volume. This works fine on a single-node cluster.
It is not recommended on a multi-node cluster, because the pods would see different data on each node.

Volume storage on AWS

volume-AWS
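
For multi-node clusters we can instead back the volume with external storage such as AWS EBS. A sketch of the volumes section in that case (the volume ID is a placeholder):

volumes:
- name: data-volume
  awsElasticBlockStore:
    volumeID: <volume-id>     # placeholder for an existing EBS volume ID
    fsType: ext4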

Persistent Volumes

When we created volumes so far, we configured them within the pod definition file.
All of the configuration information required to set up storage for the volume goes within the pod definition file.

In a large environment, with many users deploying many PODs, the users would have to configure storage every time for each POD.

It is better to manage storage centrally: an administrator creates a large pool of storage, and users carve out pieces from it as required.
This is where persistent volumes can help us.

A Persistent Volume is a cluster-wide pool of storage volumes configured by an administrator, to be used by users deploying applications on the cluster.
The users can now select storage from this pool using persistent volume claims.

per-volume

accessModes defines how a volume should be mounted on the host, whether in read-only mode, read-write mode, etc.

The supported values are

  • ReadOnlyMany
  • ReadWriteOnce
  • ReadWriteMany

Under capacity, we specify the amount of storage to be reserved for this persistent volume.
Next comes the volume type; we use hostPath for this example.
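
Putting these fields together, a PersistentVolume definition might look like the following sketch (the name, size and path are assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi             # assumed size of the pool piece
  hostPath:
    path: /tmp/data          # assumed host directory; only suitable for single-node setups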

Persistent Volume Claims

We create a Persistent Volume Claim to request storage from this pool and make it available to a pod.

Persistent Volumes and Persistent Volume Claims are two separate objects in the kubernetes namespace.

An administrator creates a set of persistent volumes and a user creates persistent volume claims to use storage.
Once the persistent volume claims are created, Kubernetes binds the persistent volumes to the claims based on the requests and the properties set on the volumes.
Every persistent volume claim is bound to a single persistent volume. During the binding process, Kubernetes tries to find a persistent volume that has sufficient capacity as requested by the claim and that matches any other requested properties such as Access Modes, Volume Modes, Storage Class, etc.

If there are multiple matches for a single claim, and you would like to select a particular volume, we can use labels and selectors to bind to the right volumes.

There is a one to one relationship between claims and volumes. So, no other claims can utilize the remaining capacity in the volume.

If there are no volumes available, the persistent volume claim will remain in a Pending state until newer volumes are made available to the cluster. Once newer volumes are available, the claim is automatically bound to the newly available volume.

Creation

pvc
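
A claim definition along these lines (the name and requested size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi         # assumed request; must fit within an available PV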

To get the persistentVolumeClaim
kubectl get pvc

To delete the persistentVolumeClaim
kubectl delete persistentvolumeclaim <claimname>

We can choose what should happen to the persistent volume once the PVC gets deleted.
By default, persistentVolumeReclaimPolicy is set to Retain, i.e. the persistent volume is retained until it is manually deleted by the administrator. It is not available for use by any other claim.
With the Delete option, the PV is deleted as soon as the PVC gets deleted.
With the Recycle option, the data in the persistent volume is scrubbed before making it available to other claims.

Persistent volumes and persistent volume claims

In order to bind a PVC to a PV, the accessModes and storageClassName of both should match.

PV:

spec:
 hostPath:  
   path:    
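
Once the claim is bound, a pod can use it by referencing the claim name in its volumes section. A sketch, assuming the claim name myclaim and an nginx container (both assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html    # assumed mount point inside the container
      name: web-vol
  volumes:
  - name: web-vol
    persistentVolumeClaim:
      claimName: myclaim                  # assumed claim created earlier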

PODS

labels

show all the labels of pods
kubectl get pods --show-labels

Change the labels of pod 'nginx' to be app=v2
kubectl label pod nginx app=v2 --overwrite

Get the label 'app' for the pods
kubectl get pods --label-columns=app

Get only the 'app=v2' pods
kubectl get pods --selector=app=v2

Remove the 'app' label from the nginx pod
kubectl label pod nginx app-

Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'
First add the label to the node
kubectl label node <nodename> accelerator=nvidia-tesla-p100
Then use the 'nodeSelector' property in the Pod YAML; under nodeSelector, give the label (see the sketch below).

To know where to write nodeSelector in the yaml file.
kubectl explain po.spec
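
A sketch of the relevant part of the pod definition (the pod name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: gpu-pod
    image: nginx
  nodeSelector:
    accelerator: nvidia-tesla-p100    # must match the label added to the node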

Annotate pod nginx with "description='my description'" value
kubectl annotate pod nginx description='my description'

check the annotations for pod nginx
kubectl describe pod nginx | grep -i 'annotations'

remove the annotation
kubectl annotate pod nginx description-

check how the deployment rollout is going
kubectl rollout status deploy <deploymentname>

check the rollout history
kubectl rollout history deploy <deploymentname>

undo the latest rollout
kubectl rollout undo deploy <deploymentname>

Return the deployment to the second revision (number 2)
kubectl rollout undo deploy <deploymentname> --to-revision=2

Check the details of the fourth revision
kubectl rollout history deploy <deploymentname> --revision=4

Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%
kubectl autoscale deploy <deploymentname> --min=5 --max=10 --cpu-percent=80

pause the rollout of the deployment
kubectl rollout pause deploy <deploymentname>

Create a Horizontal Pod Autoscaler for deployment nginx that maintains between 1 and 10 replicas of the Pods, targeting CPU utilization at 80%
kubectl autoscale deploy nginx --min=1 --max=10 --cpu-percent=80

Delete the deployment and the horizontal pod autoscaler you created
kubectl delete deploy nginx
kubectl delete hpa nginx
or
kubectl delete deploy/nginx hpa/nginx

Create a job with image perl
kubectl create job pi --image=perl

Create a job with image perl that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"
kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'

Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute
kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10; done'
Then add activeDeadlineSeconds: 30 under the Job spec section in the YAML file and create the job (see the sketch below).
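
A sketch of the resulting Job definition with the deadline added (generated fields trimmed for brevity):

apiVersion: batch/v1
kind: Job
metadata:
  name: busybox
spec:
  activeDeadlineSeconds: 30      # Kubernetes terminates the job if it runs longer than 30 seconds
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "while true; do echo hello; sleep 10; done"]
      restartPolicy: Never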

configmaps

from literals
kubectl create cm map1 --from-literal=var1=val1

from file
echo -e 'var1=val1\nvar2=val2' > config.txt
kubectl create cm map2 --from-file=config.txt

from env file
echo -e 'var1=val1\nvar2=val2' > config.env
kubectl create cm map3 --from-env-file=config.env

from a file, giving the key special
kubectl create cm map4 --from-file=special=config.txt

Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'
kubectl create cm options --from-literal=var5=val5
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
Add

env:
- name: option
  valueFrom:
    configMapKeyRef:
      name: options
      key: var5

under spec.containers

Secrets

Create a secret with the values password=mypass
kubectl create secret generic mysecret --from-literal=password=mypass

from file
kubectl create secret generic mysecret --from-file=<filename>

To decode the value of the secret
echo <base64-encoded value from the data field> | base64 --decode
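
To consume the secret inside a pod, one option is to expose a key as an environment variable via secretKeyRef. A sketch to place under spec.containers (the variable name is an assumption):

env:
- name: PASSWORD                 # assumed env variable name inside the container
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: password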
