root@node01:~# apparmor_parser /etc/apparmor.d/nginx_apparmor
AppArmor parser error for /etc/apparmor.d/nginx_apparmor at line 2: Found unexpected character: '-'
Certifications expire 36 months from the date that the Program certification requirements are met by a candidate.
Certified Kubernetes Security Specialist (CKS)
The following tools and resources are allowed during the exam as long as they are used by candidates to work independently on exam tasks (i.e. not used for 3rd party assistance or research) and are accessed from within the Linux server terminal on which the Exam is delivered. During the exam, candidates may:
review the Exam content instructions that are presented in the command line terminal.
review documentation installed by the distribution (i.e. /usr/share and its subdirectories)
use the search function provided on https://kubernetes.io/docs/; however, they may only open search results that have a domain matching the sites listed below
use the browser within the VM to access the following documentation:
Each task on this exam must be completed on a designated cluster/configuration context.
Sixteen clusters comprise the exam environment, one for each task. Each cluster is made up of one master node and one worker node.
An infobox at the start of each task provides you with the cluster name/context and the hostname of the master and worker node.
You can switch the cluster/configuration context using a command such as the following:
kubectl config use-context <cluster/context name>
Nodes making up each cluster can be reached via ssh, using a command such as the following:
ssh <nodename>
You have elevated privileges on any node by default, so there is no need to assume elevated privileges.
You must return to the base node (hostname cli) after completing each task.
Nested ssh is not supported.
You can use kubectl and the appropriate context to work on any cluster from the base node. When connected to a cluster member via ssh, you will only be able to work on that particular cluster via kubectl.
For your convenience, all environments, in other words, the base system and the cluster nodes, have the following additional command-line tools pre-installed and pre-configured:
kubectl with k alias and Bash autocompletion
yq and jq for YAML/JSON processing
tmux for terminal multiplexing
curl and wget for testing web services
man and man pages for further documentation
Further instructions for connecting to cluster nodes will be provided in the appropriate tasks
The CKS environment is currently running etcd v3.5
The CKS environment is currently running Kubernetes v1.26
The CKS exam environment will be aligned with the most recent K8s minor version within approximately 4 to 8 weeks of the K8s release date.
Additional notes for CKS (beyond CKA and CKAD):
Pod Security Policies (PSP) were removed from Kubernetes in v1.25.
# Less familiar
## Print a Pod's status
kubectl -n default describe pod pod1 | grep -i status:
kubectl -n default get pod pod1 -o jsonpath="{.status.phase}"

## Check a Pod for errors
kubectl describe pod podname | grep -i error
... Error: ImagePullBackOff

## A fast way to get an overview of the ReplicaSets of a Deployment and their images:
kubectl -n neptune get rs -o wide | grep deployname
NAME         DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR
deployname   3         3         3       9m6s   httpd        httpd:alpine   app=wonderful
## Create a Job
kubectl -n neptune create job neb-new-job --image=busybox:1.31.0 $do > /opt/course/3/job.yaml -- sh -c "sleep 2 && echo done"

## If a Secret belongs to a ServiceAccount, it'll have the annotation kubernetes.io/service-account.name
kubectl get secrets -oyaml | grep annotations -A 1 # shows secrets with first annotation

## Logs
kubectl logs podname > /opt/test.log

## Decode base64
base64 -d filename

## Check a Service connection using a temporary Pod
## k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://svcname.namespace:svcport
kubectl run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://svcname.namespace:80

## Check that both PV and PVC have the status Bound:
k -n earth get pv,pvc
NAME                                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS   REASON   AGE
persistentvolume/earth-project-earthflower-pv   2Gi        RWO            Retain           Bound    earth/earth-project-earthflower-pvc                           8m4s
NAME                                                  STATUS   VOLUME                         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/earth-project-earthflower-pvc   Bound    earth-project-earthflower-pv   2Gi        RWO                           7m38s
## Confirm that the Deployment's Pod mounts the PVC correctly:
k describe pod project-earthflower-586758cc49-hb87f -n earth | grep -A2 Mount:
    Mounts:
      /tmp/project-data from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jj2t2 (ro)
## Verify everything using kubectl auth can-i
kubectl auth can-i create deployments --as system:serviceaccount:app-team1:cicd-token -n app-team1 # YES

# Commonly used
## Create a Pod
kubectl run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml
kubectl get pod sat-003 -o yaml > 7-sat-003.yaml # export
kubectl delete pod pod1 --force --grace-period=0

## Create a Service
kubectl expose deployment d1 --name=<svc-name> --port=<svc-port> --target-port=<pod-port> --type=<type>
kubectl expose pod <pod-name> --name=<svc-name> --port=<svc-port> --target-port=<pod-port> --type=<type>

# CKS
## Create a Secret
kubectl create secret generic db-credentials --from-literal db-password=passwd

## Modify a Pod YAML into a Deployment YAML
### put the Pod's metadata: and spec: into the Deployment's template: section, as in the sketch below:
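A minimal sketch of that conversion, assuming a hypothetical Pod named pod1 with label app: pod1; the Pod's metadata: and spec: become the Deployment's template::

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod1                # hypothetical; reuse the Pod's name or pick your own
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1             # must match the template labels below
  template:
    metadata:               # <-- the Pod's metadata: goes here
      labels:
        app: pod1
    spec:                   # <-- the Pod's spec: goes here, unchanged
      containers:
      - name: pod1
        image: httpd:2.4.41-alpine
EOF
```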
## To verify that the token hasn't been mounted, run the following commands:
kubectl -n one exec -it pod-name -- mount | grep serviceaccount
kubectl -n one exec -it pod-name -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
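For context, a minimal sketch of how the mount is disabled in the first place, assuming a hypothetical Pod pod-name in namespace one; automountServiceAccountToken can be set on the Pod or on the ServiceAccount:

```bash
cat <<'EOF' | kubectl -n one apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  automountServiceAccountToken: false   # prevents the SA token from being mounted
  containers:
  - name: main
    image: nginx:alpine
EOF
```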
AppArmor is enabled on the cluster’s worker node. An AppArmor profile is prepared, but not enforced yet. You may use your browser to open one additional tab to access the AppArmor documentation.
On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor. Edit the prepared manifest file located at /home/candidate/KSSH00401/nginx-deploy.yaml to apply the AppArmor profile. Finally, apply the manifest file and create the Pod specified in it.
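A sketch of the usual flow; the profile name nginx-profile-1 is an assumption (read /etc/apparmor.d/nginx_apparmor for the real profile name), and the annotation form matches the v1.26 environment described above:

```bash
# On the worker node: load the profile into the kernel in enforce mode
ssh <worker-node>
apparmor_parser -q /etc/apparmor.d/nginx_apparmor
aa-status | grep nginx          # verify the profile is loaded
exit

# In /home/candidate/KSSH00401/nginx-deploy.yaml, under the Pod template's metadata, add:
#   annotations:
#     container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/nginx-profile-1
kubectl apply -f /home/candidate/KSSH00401/nginx-deploy.yaml
```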
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7  Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL)
1.2.8  Ensure that the --authorization-mode argument includes Node (FAIL)
1.2.9  Ensure that the --authorization-mode argument includes RBAC (FAIL)
1.2.18 Ensure that the --insecure-bind-address argument is not set (FAIL)
1.2.19 Ensure that the --insecure-port argument is set to 0 (FAIL)
Fix all of the following violations that were found against the kubelet:
4.2.1 Ensure that the --anonymous-auth argument is set to false (FAIL)
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL)
Note: use Webhook authn/authz where possible.
Fix all of the following violations that were found against etcd:
4.2.1 Ensure that the --client-cert-auth argument is set to true (FAIL)
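A sketch of the corresponding fixes, assuming a kubeadm cluster with static-Pod manifests under /etc/kubernetes/manifests and the kubelet config at /var/lib/kubelet/config.yaml:

```bash
# kube-apiserver: edit /etc/kubernetes/manifests/kube-apiserver.yaml
#   - --authorization-mode=Node,RBAC       # not AlwaysAllow; includes Node and RBAC
#   remove any --insecure-bind-address flag and set --insecure-port=0 (or remove it)

# etcd: edit /etc/kubernetes/manifests/etcd.yaml
#   - --client-cert-auth=true

# kubelet: edit /var/lib/kubelet/config.yaml
#   authentication:
#     anonymous:
#       enabled: false
#     webhook:
#       enabled: true
#   authorization:
#     mode: Webhook
systemctl restart kubelet   # static Pods restart on manifest change; the kubelet needs an explicit restart
```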
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace kamino. Look for images with High or Critical severity vulnerabilities, and delete the Pods that use those images.
Trivy is pre-installed on the cluster’s master node only; it is not available on the base system or the worker nodes. You’ll have to connect to the cluster’s master node to use Trivy.
### Delete the Pods that use the vulnerable images (if a Pod is owned by a controller, delete the controller instead)
kubectl delete pod -n kamino <pod-name>
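A sketch of the scan itself on the master node; the jsonpath lists each Pod with its images, and checking each image's vulnerability table is one convenient way to surface HIGH/CRITICAL findings:

```bash
kubectl -n kamino get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'

# For each image listed above:
trivy image --severity HIGH,CRITICAL <image>   # non-empty vulnerability table => delete that Pod
```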
Question 4 - Sysdig & Falco
You may use your browser to open one additional tab to access Sysdig's documentation or Falco's documentation.
Task
Use runtime detection tools to detect anomalous processes spawning and executing frequently in the single container belonging to Pod redis. Two tools are available to use:
sysdig
falco
The tools are pre-installed on the cluster's worker node only; they are not available on the base system or the master node. Using the tool of your choice (including any non pre-installed tool), analyse the container's behavior for at least 30 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /opt/KSR00101/incidents/summary, containing the detected incidents, one per line, in the following format:
### vim /etc/falco/falco_rules.yaml
# Container is supposed to be immutable. Package management should be done in building the image.
- rule: Launch Package Management Process in Container
  desc: Package management process ran inside container
  condition: >
    spawned_process
    and container
    and user.name != "_apt"
    and package_mgmt_procs
    and not package_mgmt_ancestor_procs
    and not user_known_package_manager_in_container
  output: >
    Package management process launched in container
    %evt.time,%user.uid,%proc.name
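Alternatively, a sketch of the sysdig approach on the worker node; the container ID lookup via crictl and the -p output fields follow the incident format requested above:

```bash
# Find the ID of the single container in Pod redis
crictl ps | grep redis

# Capture newly spawned/executed processes in that container for 30 seconds
sysdig -M 30 -p '%evt.time,%user.uid,%proc.name' \
  container.id=<container-id> and evt.type=execve > /opt/KSR00101/incidents/summary
```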
kubectl get rolebinding -n qa -o wide
kubectl get clusterrolebinding -o wide   # ClusterRoleBindings are not namespaced
kubectl delete -n qa serviceaccount contentsa
Question 6 - 2022 exam (v1.20): Pod Security Policy (PodSecurityPolicy)
This question no longer appears on the 2023 exam; it has been replaced by Pod Security Standards.
Context 6
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task 6
Create a new PodSecurityPolicy named restrict-policy, which prevents the creation of privileged Pods.
Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy restrict-policy. Create a new ServiceAccount named psp-denial-sa in the existing namespace staging.
Finally, create a new clusterRoleBinding named dany-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created serviceAccount psp-denial-sa.
### (5) Enable PodSecurityPolicy (edit the apiserver config on the control-plane node)
vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
systemctl restart kubelet
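A sketch of the remaining objects; the PSP spec below contains only the minimal fields needed for a valid policy that blocks privileged Pods:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-policy
spec:
  privileged: false          # the actual requirement: no privileged Pods
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
EOF

kubectl create clusterrole restrict-access-role \
  --verb=use --resource=podsecuritypolicies --resource-name=restrict-policy
kubectl -n staging create serviceaccount psp-denial-sa
kubectl create clusterrolebinding dany-access-bind \
  --clusterrole=restrict-access-role --serviceaccount=staging:psp-denial-sa
```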
Question 6 - 2023 exam (v1.26): Pod Security Standard
Task weight: 8%
Use context: kubectl config use-context workload-prod
There is Deployment container-host-hacker in Namespace team-red which mounts /run/containerd as a hostPath volume on the Node where it’s running. This means that the Pod can access various data about other containers running on the same Node.
To prevent this configure Namespace team-red to enforce the baseline Pod Security Standard. Once completed, delete the Pod of the Deployment mentioned above.
Check the ReplicaSet events and write the event/log lines containing the reason why the Pod isn’t recreated into /opt/course/4/logs.
Answer
Making Namespaces use Pod Security Standards works via labels. We can simply edit it:
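The edit amounts to adding the enforce label to the Namespace, for example:

```bash
kubectl label namespace team-red pod-security.kubernetes.io/enforce=baseline
```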
This should already be enough for the default Pod Security Admission Controller to pick up on that change. Let's test it and delete the Pod to see if it'll be recreated; it should fail!
➜ k -n team-red get pod
NAME                                    READY   STATUS    RESTARTS   AGE
container-host-hacker-dbf989777-wm8fc   1/1     Running   0          115s

➜ k -n team-red delete pod container-host-hacker-dbf989777-wm8fc
pod "container-host-hacker-dbf989777-wm8fc" deleted

➜ k -n team-red get pod
No resources found in team-red namespace.
Usually the ReplicaSet of a Deployment would recreate the Pod if deleted, here we see this doesn’t happen. Let’s check why:
➜ k -n team-red get rs
NAME                              DESIRED   CURRENT   READY   AGE
container-host-hacker-dbf989777   1         0         0       5m25s

➜ k -n team-red describe rs container-host-hacker-dbf989777
Name:         container-host-hacker-dbf989777
Namespace:    team-red
...
Events:
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  ...
  Warning  FailedCreate  2m41s                 replicaset-controller  Error creating: pods "container-host-hacker-dbf989777-bjwgv" is forbidden: violates PodSecurity "baseline:latest": hostPath volumes (volume "containerdata")
  Warning  FailedCreate  2m2s (x9 over 2m40s)  replicaset-controller  (combined from similar events): Error creating: pods "container-host-hacker-dbf989777-kjfpn" is forbidden: violates PodSecurity "baseline:latest": hostPath volumes (volume "containerdata")
There we go! Finally we write the reason into the requested file so that Mr Scoring will be happy too!
# /opt/course/4/logs
Warning  FailedCreate  2m2s (x9 over 2m40s)  replicaset-controller  (combined from similar events): Error creating: pods "container-host-hacker-dbf989777-kjfpn" is forbidden: violates PodSecurity "baseline:latest": hostPath volumes (volume "containerdata")
Pod Security Standards can give a great base level of security! But when one finds themselves wanting to adjust levels like baseline or restricted more deeply… this isn't possible, and 3rd-party solutions like OPA could be looked at.
Question 7 - NetworkPolicy - default-deny
Context
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task
Create a new default-deny NetworkPolicy named denynetwork in the namespace development for all traffic of type Ingress. The new NetworkPolicy must deny all ingress traffic in the namespace development.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace development. You can find a skeleton manifest file at /cks/15/p1.yaml
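A sketch of the finished policy, extending the skeleton at /cks/15/p1.yaml:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}        # empty selector: applies to all Pods in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so all ingress is denied
EOF
```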
Create a NetworkPolicy named pod-restriction to restrict access to Pod products-service running in namespace dev-team. Only allow the following Pods to connect to Pod products-service:
Pods in the namespace qa
Pods with label environment: testing, in any namespace
Make sure to apply the NetworkPolicy. You can find a skeleton manifest file at /cks/6/p1.yaml
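A sketch of the policy, assuming you first look up the labels on Pod products-service; the matchLabels under podSelector below is a placeholder for whatever that lookup returns:

```bash
kubectl -n dev-team get pod products-service --show-labels

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-restriction
  namespace: dev-team
spec:
  podSelector:
    matchLabels:
      app: products-service                 # placeholder: use the Pod's real labels
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: qa   # Pods in the namespace qa
    - namespaceSelector: {}                 # any namespace...
      podSelector:
        matchLabels:
          environment: testing              # ...for Pods labeled environment: testing
EOF
```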
A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task
An existing Pod named web-pod is already running in namespace db. Edit the existing Role bound to the Pod's ServiceAccount service-account-web so that it only allows the get operation, and only on resources of type services.
Create a new RoleBinding named role-2-binding that binds the newly created Role to the Pod's ServiceAccount. Note: do not delete the existing RoleBinding.
An existing Pod named web-pod is running in the namespace db. Edit the existing Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing list operations, only on resources of type Endpoints.
Create a new Role named role-2 in the namespace db, which only allows performing update operations, only on resources of type persistentvolumeclaims.
Create a new RoleBinding named role-2-binding binding the newly created Role to the Pod's ServiceAccount.
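A sketch of the imperative commands for the second variant, using the names given in the task text:

```bash
# Edit the existing Role bound to sa-dev-1 so it only allows: list endpoints
kubectl -n db edit role <existing-role-bound-to-sa-dev-1>

kubectl -n db create role role-2 --verb=update --resource=persistentvolumeclaims
kubectl -n db create rolebinding role-2-binding --role=role-2 --serviceaccount=db:sa-dev-1

# Verify
kubectl -n db auth can-i update persistentvolumeclaims --as system:serviceaccount:db:sa-dev-1
```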
Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
logs are stored at /var/log/kubernetes/audit-logs.txt
log files are retained for 10 days
at maximum, a number of 2 audit log files are retained. A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml; it only specifies what not to log. The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
namespaces changes at RequestResponse level
the request body of pods changes in the namespace front-apps
configMap and secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level.
Don't forget to apply the modified policy: /etc/kubernetes/logpolicy/sample-policy.yaml
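A sketch of the policy additions and the matching kube-apiserver flags; the file paths come from the task, and the apiserver manifest is assumed to already mount the needed hostPath directories (as it usually does in this exam):

```bash
# Append to the rules: list in /etc/kubernetes/logpolicy/sample-policy.yaml:
#   - level: RequestResponse
#     resources:
#     - group: ""
#       resources: ["namespaces"]
#   - level: Request
#     resources:
#     - group: ""
#       resources: ["pods"]
#     namespaces: ["front-apps"]
#   - level: Metadata
#     resources:
#     - group: ""
#       resources: ["configmaps", "secrets"]
#   - level: Metadata        # catch-all rule

# Then in /etc/kubernetes/manifests/kube-apiserver.yaml:
#   - --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml
#   - --audit-log-path=/var/log/kubernetes/audit-logs.txt
#   - --audit-log-maxage=10
#   - --audit-log-maxbackup=2
```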
Retrieve the content of the existing secret named db1-test in the istio-system namespace. Store the username field in a file named /home/candidate/user.txt, and the password field in a file named /home/candidate/pass.txt.
You must create both files; they don’t exist yet. Do not use/modify the created files in the following steps, create new temporary files if needed.
Create a new secret named db2-test in the istio-system namespace, with the following
### Verify the Pod
kubectl get pods -n istio-system dev-pod
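A sketch of the retrieval and creation steps; the db2-test literals are placeholders, since the task text above is truncated before the required values:

```bash
# Retrieve and decode db1-test
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.username}' | base64 -d > /home/candidate/user.txt
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.password}' | base64 -d > /home/candidate/pass.txt

# Create db2-test (placeholder values)
kubectl -n istio-system create secret generic db2-test \
  --from-literal=username=<username> --from-literal=password=<password>
```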
Question 12 - Dockerfile and Deployment security hardening
Task
Analyze and edit the given Dockerfile (based on the ubuntu:16.04 image) at /cks/docker/Dockerfile, fixing two instructions in the file that are prominent security/best-practice issues.
Analyze and edit the given manifest file /cks/docker/deployment.yaml, fixing two fields in the file that are prominent security/best-practice issues.
Don’t add or remove configuration settings; only modify the existing configuration settings, so that two configuration settings each are no longer security/best-practice concerns.
Should you need an unprivileged user for any of the tasks, use user nobody with user id 65535.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dev
  name: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev
  template:
    metadata:
      labels:
        app: dev
    spec:
      containers:
      - image: mysql
        name: mysql
        securityContext: {'capabilities':{'add':['NET_ADMIN'],'drop':['all']},'privileged': False,'readOnlyRootFilesystem': True, 'runAsUser': 65535}
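The exact offending lines depend on the given files, but the fixes usually expected here look like the following sketch (all before/after pairs are assumptions):

```bash
# /cks/docker/Dockerfile - typical fixes:
#   FROM ubuntu:latest              ->  FROM ubuntu:16.04   # pin the base image named in the task
#   USER root                       ->  USER nobody         # per the task: user nobody, uid 65535

# /cks/docker/deployment.yaml - typical fixes:
#   'privileged': True              ->  'privileged': False
#   'readOnlyRootFilesystem': False ->  'readOnlyRootFilesystem': True
```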
Question 13 - admission-controllers - ImagePolicyWebhook
Context
A container image scanner is set up on the cluster, but it’s not yet fully integrated into the cluster’s configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images.
You have to complete the entire task on the cluster’s master node, where all services and files have been prepared and placed. Given an incomplete configuration in directory /etc/kubernetes/epconfig and a functional container image scanner with HTTPS endpoint https://acme.local:8082/image_policy:
Enable the necessary plugins to create an image policy.
Validate the control configuration and change it to an implicit deny.
Edit the configuration to point to the provided HTTPS endpoint correctly.
Finally, test if the configuration is working by trying to deploy the vulnerable resource /cks/1/web1.yaml.
You can find the container image scanner's log file at /var/log/imagepolicy/acme.log.
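A sketch of the wiring, assuming the conventional file names inside /etc/kubernetes/epconfig (admission_configuration.json and kubeconfig.yaml are assumptions; check the actual directory):

```bash
# /etc/kubernetes/epconfig/admission_configuration.json - implicit deny:
#   "defaultAllow": false
#
# /etc/kubernetes/epconfig/kubeconfig.yaml - point to the scanner:
#   clusters:
#   - cluster:
#       certificate-authority: /etc/kubernetes/epconfig/server.crt
#       server: https://acme.local:8082/image_policy
#
# /etc/kubernetes/manifests/kube-apiserver.yaml:
#   - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
#   - --admission-control-config-file=/etc/kubernetes/epconfig/admission_configuration.json

# Once the apiserver comes back, the vulnerable resource should be rejected:
kubectl apply -f /cks/1/web1.yaml
```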
Context: it is best-practice to design containers to be stateless and immutable
Task
Inspect Pods running in namespace production and delete any Pod that is either not stateless or not immutable.
Use the following strict interpretation of stateless and immutable:
Pods being able to store data inside containers must be treated as not stateless. You don't have to worry about whether data is actually already stored inside containers or not.
Pod being configured to be privileged in any way must be treated as potentially not stateless and not immutable.
### Check running Pods in the namespace production
kubectl get pods -n production | grep running -i
### Find privileged Pods
kubectl get pods -n production -oyaml | grep -i "privileged: true"
### Find Pods with volumes
# jq processes JSON input: it applies the given filter to its JSON text input and produces the filter's results as JSON on standard output.
kubectl get pods -n production -o jsonpath='{.items[*].spec.volumes}' | jq
### Delete every Pod found above that is privileged or has a volume
kubectl delete pods -n production <pod-name>
Question 15 - gVisor/RuntimeClass
Context
This cluster uses containerd as CRI runtime. Containerd’s default runtime handler is runc . Containerd has been prepared to support an additional runtime handler , runsc (gVisor).
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc. Update all Pods in the namespace server to run on gVisor, unless they are already running on a non-default runtime handler. You can find a skeleton manifest file at /cks/13/rc.yaml
Using the existing runtime handler named runsc, create a RuntimeClass named untrusted. Update all Pods in namespace server to run on gVisor. You can find a skeleton manifest at /cks/gVisor/rc.yaml
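A sketch of the RuntimeClass and the Pod update; existing Pods have to be recreated (e.g. by editing their owning Deployments) for runtimeClassName to take effect:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc           # the prepared gVisor handler
EOF

# Add to each Pod spec (or Deployment Pod template) in namespace server:
#   spec:
#     runtimeClassName: untrusted
kubectl -n server get pods -o yaml | grep runtimeClassName   # verify afterwards
```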
There may be many impediments to setting up this Kubernetes cluster successfully, due to network conditions or misconfigurations, but the issues above can be solved step by step. Finally, the node(s) are Ready as follows:
$ kubectl get node
NAME          STATUS   ROLES           AGE   VERSION
kube-master   Ready    control-plane   21m   v1.28.3
Exam tooling
alias
alias k=kubectl # will already be pre-configured
export do="--dry-run=client -o yaml" # k create deploy nginx --image=nginx $do
export now="--force --grace-period 0" # k delete pod x $now
kubectl get pods -o json
kubectl get pods -o=jsonpath='{@}'
kubectl get pods -o=jsonpath='{.items[0]}'
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}"
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
kubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\.io/hostname}'
yq
examples
# Read a value
yq '.a.b[0].c' file.yaml

# Pipe from STDIN
yq '.a.b[0].c' < file.yaml

# Update a yaml file, in place
yq -i '.a.b[0].c = "cool"' file.yaml

# Find and update an item in an array
yq '(.[] | select(.name == "foo") | .address) = "12 cat st"'
jq
tr
truncate
crictl
cut
awk
Common usage
Assemble a command and execute it
kubectl get svc | awk '{cmd="kubectl get svc "$1" -oyaml";system(cmd)}'
Egress: outbound connections from a Pod, non-isolated by default. If a NetworkPolicy selects this Pod and is of Egress type, then only the outbound connections mentioned in it are allowed. If many NetworkPolicies select the same Pod, all connections mentioned in any of them are allowed. Additive.
Ingress: inbound connections to a Pod, non-isolated by default. The effects are the same as for Egress: only connections mentioned by a NetworkPolicy can connect to this Pod successfully. Examples:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  # Indicates which pods this NetworkPolicy will apply to, selecting by pod's label
  # podSelector: {} indicates this NetworkPolicy applies to all pods in the default ns.
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  # Defines which pods can connect to this pod.
  ingress:
  # when both `from` and `ports` rules are satisfied, the connection is allowed
  - from:
    # 1. IP CIDR: connections from pods whose IP is in this CIDR are allowed to connect
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    # 2. Namespace: connections from pods whose namespace has the following labels are allowed to connect
    - namespaceSelector:
        matchLabels:
          project: myproject
    # 3. Pod: connections from pods which have the following labels are allowed to connect
    - podSelector:
        matchLabels:
          role: frontend
    # Based on `from`: if the target port of the connection is 6379 and the protocol is TCP, it is allowed.
    ports:
    - protocol: TCP
      port: 6379
  # Defines which pods this pod can connect to
  # when both `to` and `ports` rules are satisfied, the connection is allowed
  egress:
  - to:
    # 1. Connections from this pod can go to this CIDR
    - ipBlock:
        cidr: 10.0.0.0/24
    # Based on `to`: if the target port and protocol of the connection are 5978 and TCP, it is allowed.
    ports:
    - protocol: TCP
      port: 5978
The parameters of to and from are the same, as follows (irrelevant information is omitted):
$ kubectl explain networkpolicy.spec.ingress
from <[]NetworkPolicyPeer>
ports <[]NetworkPolicyPort>
$ kubectl explain networkpolicy.spec.egress
to <[]NetworkPolicyPeer>
ports <[]NetworkPolicyPort>
If the policy doesn't work as expected, check the kube-apiserver logs as below to make sure the policy was loaded successfully, since kube-apiserver seems to fall back to a default audit policy when it fails to load the AuditPolicy passed via its parameters. Logs are as below:
W0122 16:00:29.139016 1 reader.go:81] Audit policy contains errors, falling back to lenient decoding: strict decoding error: unknown field "rules[0].resources[0].resource"
The Pod Security Standards define three different policies to broadly cover the security spectrum. These policies are cumulative and range from highly-permissive to highly-restrictive. This guide outlines the requirements of each policy.
enforce
Policy violations will cause the pod to be rejected.
audit
Policy violations will trigger the addition of an audit annotation to the event recorded in the audit log, but are otherwise allowed.
warn
Policy violations will trigger a user-facing warning, but are otherwise allowed.
Usage
Label the namespace
# The per-mode level label indicates which policy level to apply for the mode. # # MODE must be one of `enforce`, `audit`, or `warn`. # LEVEL must be one of `privileged`, `baseline`, or `restricted`. pod-security.kubernetes.io/<MODE>:<LEVEL>
# Optional: per-mode version label that can be used to pin the policy to the # version that shipped with a given Kubernetes minor version (for example v1.29). # # MODE must be one of `enforce`, `audit`, or `warn`. # VERSION must be a valid Kubernetes minor version, or `latest`. pod-security.kubernetes.io/<MODE>-version:<VERSION>
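For example, to enforce baseline on a namespace while auditing against restricted (my-ns is a placeholder):

```bash
kubectl label namespace my-ns \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted
```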
Set one of the fields above to false to prevent auto-injection for a pod.
Restrict access to Secrets: set the annotation kubernetes.io/enforce-mountable-secrets to true on a ServiceAccount; then only Secrets listed in the secrets field of that ServiceAccount are allowed to be used by its Pods, e.g. as a secret volume, via envFrom, or as imagePullSecrets.
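A minimal sketch, assuming a hypothetical ServiceAccount demo-sa that may only mount allowed-secret:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
secrets:
- name: allowed-secret    # only this Secret may be used by Pods running as demo-sa
EOF
```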
How to use a ServiceAccount to connect to the apiserver? Reference:
# Simple way in a kubernetes cluster created by kubeadm
$ kubectl apply \
  -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
The contents consist of the following topics:
master
etcd
controlplane
node
policies
Each topic starts with a list of the checked items and their status, followed by a list of remediations for the FAIL or WARN items. You can fix those issues by following the given instructions. At the end comes a summary for the topic.
Here is an output example for the topic master:
[WARN] 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
[WARN] 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
== Remediations master ==
1.1.9 Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
1.1.10 Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root <path/to/cni/files>
k get pod -A -ojsonpath="{range .items[*]}{.spec['initContainers','containers'][*].image} {.metadata.name} {'#'} {end}" | sed 's|#|\n|g' | sed 's|^ ||g' | sed 's| $||g' | awk '{cmd="echo "$2"; trivy -q image "$1" --severity HIGH,CRITICAL | grep Total";system(cmd)}'
Category: I/O
-------------
spy_file    Echo any read/write made by any process to all files. Optionally, you can provide the name of one file to only intercept reads/writes to that file.
This chisel intercepts all reads and writes to all files. Instead of all files, you can limit interception to one file.
Args:
[string] read_or_write - Specify 'R' to capture only read events; 'W' to capture only write events; 'RW' to capture read and write events. By default both read and write events are captured.
[string] spy_on_file_name - The name of the file which the chisel should spy on for all read and write activity.

$ sysdig -c spy_file "RW /root/spy_file_test.txt"
23:53:25.592303985 date(112109) W 32B /root/spy_file_test.txt Thu Jan 25 11:53:25 PM HKT 2024
23:53:43.333152845 cat(112206) R 32B /root/spy_file_test.txt Thu Jan 25 11:53:25 PM HKT 2024
23:53:43.333166670 cat(112206) R 0B /root/spy_file_test.txt NULL 23:53:51.856062624 date(112270) W 32B /root/spy_file_test.txt Thu Jan 25 11:53:51 PM HKT 2024
23:53:56.965894638 cat(112307) R 64B /root/spy_file_test.txt Thu Jan 25 11:53:25 PM HKT 2024 Thu Jan 25 11:53:51 PM HKT 2024
23:53:56.965902094 cat(112307) R 0B /root/spy_file_test.txt NULL
Usage
Save events to a file
sysdig -w test.scap
Read events from a file while analyzing them (with chisels)
sysdig -r test.scap -c httptop
Specify the format to be used when printing the events
-p, --print=   Specify the format to be used when printing the events.
               With -pc or -pcontainer, a container-friendly format will be used.
               With -pk or -pkubernetes, a kubernetes-friendly format will be used.
               With -pm or -pmesos, a mesos-friendly format will be used.
               See the examples section below for more info.
sysdig -r test.scap -c httptop -pc
Specify the number of events Sysdig should capture by passing it the -n flag. Once Sysdig captures the specified number of events, it’ll automatically exit:
sysdig -n 5000 -w test.scap
Use the -C flag to configure Sysdig so that it breaks the capture into smaller files of a specified size. The following example continuously saves events to files < 10MB:
sysdig -C 10 -w test.scap
Specify the maximum number of files Sysdig should keep with the -W flag. For example, you can combine the -C and -W flags like so:
sysdig -C 10 -W 4 -w test.scap
You can analyze the processes running in the WordPress container with:
ubuntu@primary:~$ sysdig -l | grep "^container."
container.id                 The truncated container ID (first 12 characters), e.g. 3ad7b26ded6d is extracted from the
container.full_id            The full container ID, e.g.
container.name               The container name. In instances of userspace container engine lookup delays, this field
container.image              The container image name (e.g. falcosecurity/falco:latest for docker). In instances of
container.image.id           The container image id (e.g. 6f7e2741b66b). In instances of userspace container engine
container.type               The container type, e.g. docker, cri-o, containerd etc.
container.privileged         'true' for containers running as privileged, 'false' otherwise. In instances of userspace
container.mounts             A space-separated list of mount information. Each item in the list has the format
container.mount              (ARG_REQUIRED) Information about a single mount, specified by number (e.g.
container.mount.source       (ARG_REQUIRED) The mount source, specified by number (e.g. container.mount.source[0]) or
container.mount.dest         (ARG_REQUIRED) The mount destination, specified by number (e.g. container.mount.dest[0])
container.mount.mode         (ARG_REQUIRED) The mount mode, specified by number (e.g. container.mount.mode[0]) or
container.mount.rdwr         (ARG_REQUIRED) The mount rdwr value, specified by number (e.g. container.mount.rdwr[0])
container.mount.propagation  (ARG_REQUIRED) The mount propagation value, specified by number (e.g.
container.image.repository   The container image repository (e.g. falcosecurity/falco). In instances of userspace
container.image.tag          The container image tag (e.g. stable, latest). In instances of userspace container engine
container.image.digest       The container image registry digest (e.g.
container.healthcheck        The container's health check. Will be the null value ("N/A") if no healthcheck
container.liveness_probe     The container's liveness probe. Will be the null value ("N/A") if no liveness probe
container.readiness_probe    The container's readiness probe. Will be the null value ("N/A") if no readiness probe
container.start_ts           Container start as epoch timestamp in nanoseconds based on proc.pidns_init_start_ts and
container.duration           Number of nanoseconds since container.start_ts.
container.ip                 The container's / pod's primary ip address as retrieved from the container engine. Only
container.cni.json           The container's / pod's CNI result field from the respective pod status info. It contains
ubuntu@primary:~$ sysdig -l | grep "^k8s."
k8s.ns.name              The Kubernetes namespace name. This field is extracted from the container runtime socket
k8s.pod.name             The Kubernetes pod name. This field is extracted from the container runtime socket
k8s.pod.id               [LEGACY] The Kubernetes pod UID, e.g. 3e41dc6b-08a8-44db-bc2a-3724b18ab19a. This legacy
k8s.pod.uid              The Kubernetes pod UID, e.g. 3e41dc6b-08a8-44db-bc2a-3724b18ab19a. Note that the pod UID
k8s.pod.sandbox_id       The truncated Kubernetes pod sandbox ID (first 12 characters), e.g 63060edc2d3a. The
k8s.pod.full_sandbox_id  The full Kubernetes pod / sandbox ID, e.g
k8s.pod.label            (ARG_REQUIRED) The Kubernetes pod label. The label can be accessed either with the
k8s.pod.labels           The Kubernetes pod comma-separated key/value labels. E.g. 'foo1:bar1,foo2:bar2'. This
k8s.pod.ip               The Kubernetes pod ip, same as container.ip field as each container in a pod shares the
k8s.pod.cni.json         The Kubernetes pod CNI result field from the respective pod status info, same as
$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp0s3:
      addresses:
      - 192.168.56.3/24
      nameservers:
        addresses:
        - 114.114.114.114
        search: []
      routes:
      - to: default
        via: 192.168.56.2
  version: 2
After the configuration change, you can see the default route is now 192.168.56.2:
$ ip route
default via 192.168.56.2 dev enp0s3 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.3
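Note that netplan edits only take effect once applied; a typical sequence is:

```bash
sudo netplan try     # test the new config, with automatic rollback on timeout
sudo netplan apply   # apply permanently
ip route             # verify the default route
```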