
[CKA] Udemy - Mock Exam 3 Walkthrough

작은소행성 2023. 8. 5. 22:57

Q1. Create a new service account with the name pvviewer. Grant this ServiceAccount access to list all PersistentVolumes in the cluster by creating an appropriate cluster role called pvviewer-role and ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image: redis and serviceAccount: pvviewer in the default namespace.

  • ServiceAccount: pvviewer
  • ClusterRole: pvviewer-role
  • ClusterRoleBinding: pvviewer-role-binding
  • Pod: pvviewer
  • Pod configured to use ServiceAccount pvviewer ?

 

A1. 

Pods authenticate to the API Server using ServiceAccounts. If a serviceAccount name is not specified, the default service account for the namespace is used when the pod is created.

Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
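As a quick illustration (the pod name here is only a placeholder), the service account attached to a pod can be read from its spec:

kubectl get pod <pod-name> -o jsonpath='{.spec.serviceAccountName}'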

 

Now create the service account pvviewer, then the ClusterRole and the ClusterRoleBinding:

## create a service account pvviewer
kubectl create serviceaccount pvviewer

## To create a clusterrole:
kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list

## To create a clusterrolebinding
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer

## Check the details of the ClusterRoleBinding
k describe clusterrolebinding pvviewer-role-binding

The solution manifest file to create a new pod called pvviewer is as follows:

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvviewer
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  # Add service account name
  serviceAccountName: pvviewer
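
To apply the manifest and confirm the permissions, something like the following can be used (pvviewer.yaml is an assumed file name; the auth check is optional and not graded):

kubectl apply -f pvviewer.yaml

## Optional: confirm the service account can list PersistentVolumes
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer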

 

 

Q2. List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips.

Answer should be in the format: InternalIP of controlplane<space>InternalIP of node01 (in a single line)

A2.

 

Check the nodes and their addresses with the wide output first:

k get no -o wide

One way to locate the InternalIP in the JSON output is to grep for it:

k get nodes -o json | jq | grep -i internalip

Another is to list all JSON paths and narrow them down:

k get no -o json | jq -c 'paths'

k get no -o json | jq -c 'paths' | grep type

k get no -o json | jq -c 'paths' | grep type | grep -v conditions

Then build up the jsonpath expression step by step:

k get no -o jsonpath='{.items}'

k get no -o jsonpath='{.items[0].status.addresses}' | jq

k get no -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")]}' | jq

k get no -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'

Change items[0] to items[*] to cover every node:

k get no -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'

To cross-check, inspect the raw JSON of a single node:

k get no node01 -o json | jq | grep -i InternalIP -B 100

Finally, save the result to the required file:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
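
As an optional cross-check, the same single-line result can also be produced with jq alone, and the saved file can be inspected:

kubectl get nodes -o json | jq -r '[.items[].status.addresses[] | select(.type=="InternalIP").address] | join(" ")'

cat /root/CKA/node_ips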

 

 

 

 

Q3. Create a pod called multi-pod with two containers.
Container 1, name: alpha, image: nginx
Container 2: name: beta, image: busybox, command: sleep 4800

Environment Variables:
container 1:
name: alpha

Container 2:
name: beta

  • Pod Name: multi-pod
  • Container 1: alpha
  • Container 2: beta
  • Container beta commands set correctly?
  • Container 1 Environment Value Set
  • Container 2 Environment Value Set

 

A3.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: "alpha"
  - image: busybox
    name: beta
    command: ["sleep","4800"]
    env:
    - name: name
      value: "beta"
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
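
Once the pod is running, the environment variable (called name in both containers) can be checked per container, for example:

kubectl exec multi-pod -c alpha -- env | grep name
kubectl exec multi-pod -c beta -- env | grep name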

 

 

 

 

Q4. Create a Pod called non-root-pod , image: redis:alpine

runAsUser: 1000
fsGroup: 2000

  • Pod non-root-pod fsGroup configured
  • Pod non-root-pod runAsUser configured

 

A4. 

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: non-root-pod
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
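
One way to verify the security context is to run id inside the container; it should report uid 1000, with the fsGroup appearing as a supplementary group:

kubectl exec non-root-pod -- id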

 

 

 

Q5. We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it.
Create NetworkPolicy, by the name ingress-to-nptest that allows incoming connections to the service over port 80.

Important: Don't delete any current objects deployed

 

  • Important: Don't Alter Existing Objects!
  • NetworkPolicy: Applied to All sources (Incoming traffic from all pods)?
  • NetworkPolicy: Correct Port?
  • NetworkPolicy: Applied to correct Pod?

 

A5. 

The solution manifest file to create a NetworkPolicy called ingress-to-nptest is as follows:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80

 

 

 

Q6. Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image: redis:alpine with toleration to be scheduled on node01.

key: env_type, value: production, operator: Equal and effect: NoSchedule

 

  • Key = env_type
  • Value = production
  • Effect = NoSchedule
  • pod 'dev-redis' (no tolerations) is not scheduled on node01?
  • Create a pod 'prod-redis' to run on node01

 

A6. 

To add taints on the node01 worker node:

kubectl taint node node01 env_type=production:NoSchedule
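
To confirm the taint was applied:

kubectl describe node node01 | grep -i taint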

Now, deploy the dev-redis pod and confirm that workloads without a matching toleration are not scheduled on the node01 worker node.

kubectl run dev-redis --image=redis:alpine


To view which node the recently deployed pod was scheduled on:

kubectl get pods -o wide


The solution manifest file to deploy a new pod called prod-redis with a toleration, so that it can be scheduled on the node01 worker node:

---
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
  - name: prod-redis
    image: redis:alpine
  tolerations:
  - effect: NoSchedule
    key: env_type
    operator: Equal
    value: production
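
Apply the manifest (prod-redis.yaml is an assumed file name):

kubectl apply -f prod-redis.yaml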

 

To view only the prod-redis pod:

kubectl get pods -o wide | grep prod-redis

 

 

 

Q7. Create a pod called hr-pod in hr namespace belonging to the production environment and frontend tier .
image: redis:alpine

Use appropriate labels and create all the required objects if they do not already exist in the system.

  • hr-pod labeled with environment production?
  • hr-pod labeled with tier frontend?

 

A7.

## Create the hr namespace if it does not already exist
$ k create ns hr

---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: hr-pod
    environment: production
    tier: frontend
  name: hr-pod
  namespace: hr
spec:
  containers:
  - image: redis:alpine
    name: hr-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}


$ k describe po hr-pod -n hr
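
Alternatively, the same pod could be created imperatively with the --labels flag, which is equivalent to the manifest above:

$ k run hr-pod --image=redis:alpine -n hr --labels=environment=production,tier=frontend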

 

 

 

 

Q8. A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

Fix /root/CKA/super.kubeconfig

 

A8. 

Verify that the host and port for the kube-apiserver are correct.

Open super.kubeconfig in the vi editor.

Change the port from 9999 to 6443 and run the command below to verify:

kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
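
If you prefer not to edit the file by hand, and assuming 9999 appears only in the server address, a one-line fix would be:

sed -i 's/9999/6443/' /root/CKA/super.kubeconfig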

 

 

 

Q9. We have created a new deployment called nginx-deploy. Scale the deployment to 3 replicas. Have the replicas increased? Troubleshoot the issue and fix it.

deployment has 3 replicas

 

A9. 

Use the command kubectl scale to increase the replica count to 3.

kubectl scale deploy nginx-deploy --replicas=3

 

The kube-controller-manager is responsible for scaling the pods of a ReplicaSet. If you inspect the control plane components in the kube-system namespace, you will see that the controller-manager pod is not running:

kubectl get pods -n kube-system
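
To see why, inspect the command section of the controller-manager's static pod manifest (assuming the default manifest path used later in this answer):

grep -A5 "command:" /etc/kubernetes/manifests/kube-controller-manager.yaml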

The command configured in the controller-manager's static pod manifest is incorrect (the binary name is misspelled).
After fixing the values in the file, wait for the controller-manager pod to restart.

Alternatively, you can run a sed command to fix all occurrences at once:

sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' /etc/kubernetes/manifests/kube-controller-manager.yaml

This will fix the issues in controller-manager yaml file.

Finally, inspect the deployment using the command below:

kubectl get deploy

 

 

 

 
