Getting Started

Learn how to quickly install and start using Tetragon.

1 - Quick Kubernetes Install

Discover and experiment with Tetragon in a Kubernetes environment

Create a cluster

If you don’t have a Kubernetes Cluster yet, you can use the instructions below to create a Kubernetes cluster locally or using a managed Kubernetes service:

The following commands create a single node Kubernetes cluster using Google Kubernetes Engine. See Installing Google Cloud SDK for instructions on how to install gcloud and prepare your account.

export NAME="$(whoami)-$RANDOM"
export ZONE="us-west2-a"
gcloud container clusters create "${NAME}" --zone ${ZONE} --num-nodes=1
gcloud container clusters get-credentials "${NAME}" --zone ${ZONE}

The following commands create a single node Kubernetes cluster using Azure Kubernetes Service. See Azure Cloud CLI for instructions on how to install az and prepare your account.

export NAME="$(whoami)-$RANDOM"
export AZURE_RESOURCE_GROUP="${NAME}-group"
az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
az aks create --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"
az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

The following commands create a Kubernetes cluster with eksctl using Amazon Elastic Kubernetes Service. See eksctl installation for instructions on how to install eksctl and prepare your account.

export NAME="$(whoami)-$RANDOM"
eksctl create cluster --name "${NAME}"

The following commands create a local Kubernetes cluster using kind. Tetragon’s correct operation depends on access to the host /proc filesystem, so the following steps configure kind and Tetragon accordingly when using a Linux system.

cat <<EOF > kind-config.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /proc
        containerPath: /procHost
EOF
kind create cluster --config kind-config.yaml
EXTRA_HELM_FLAGS=(--set tetragon.hostProcPath=/procHost) # flags for helm install
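
Whichever environment you chose, you can confirm that kubectl points at the new cluster before installing Tetragon; this is a standard Kubernetes check, not specific to Tetragon:

kubectl get nodes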

Deploy Tetragon

To install and deploy Tetragon, run the following commands:

helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon ${EXTRA_HELM_FLAGS[@]} cilium/tetragon -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w
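
Once the rollout completes, an optional sanity check (not part of the steps above) is to confirm the DaemonSet is fully available and ask the agent for its version:

kubectl get daemonset -n kube-system tetragon
kubectl exec -n kube-system ds/tetragon -c tetragon -- tetra version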

By default, Tetragon will filter kube-system events to reduce noise in the event logs. See concepts and advanced configuration to configure these parameters.
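
Many of Tetragon’s defaults can be tuned through Helm values. A quick way to browse what is available before diving into those pages is to dump the chart’s default values (standard Helm usage; no Tetragon-specific option names are assumed here):

helm show values cilium/tetragon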

Deploy demo application

To explore Tetragon it is helpful to have a sample workload. Here we use Cilium’s demo application, but any workload would work equally well:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

Before going forward, verify that all pods are up and running - it might take several seconds for some pods to satisfy all the dependencies:

kubectl get pods

The output should be similar to this:

NAME                         READY   STATUS    RESTARTS   AGE
deathstar-6c94dcc57b-7pr8c   1/1     Running   0          10s
deathstar-6c94dcc57b-px2vw   1/1     Running   0          10s
tiefighter                   1/1     Running   0          10s
xwing                        1/1     Running   0          10s

What’s Next

Check for execution events.

2 - Quick Local Docker Install

Discover and experiment with Tetragon on your local Linux host

Start Tetragon

The easiest way to start experimenting with Tetragon is to run it via Docker using the released container images.

docker run --name tetragon-container --rm --pull always \
    --pid=host --cgroupns=host --privileged             \
    -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf    \
    quay.io/cilium/tetragon-ci:latest

This will start Tetragon in a privileged container. Privileges are required to load and attach BPF programs. See the Installation section for more details.
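
To confirm the agent came up, you can check the container from a second terminal and query its version (a simple sanity check using the container name given above):

docker ps --filter name=tetragon-container
docker exec tetragon-container tetra version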

What’s Next

Check for execution events.

3 - Execution Monitoring

Execution traces with Tetragon

At the core of Tetragon is the tracking of all executions in Kubernetes clusters, virtual machines, and bare-metal systems. This creates the foundation that allows Tetragon to attribute all system behavior back to a specific binary and its associated metadata (container, Pod, Node, and cluster).

Observe Tetragon Execution Events

Tetragon exposes the execution trace as JSON logs and over a gRPC stream. The user can then observe all executions in the system.

The following commands can be used to observe exec events: the first is for a Kubernetes install, the second for the Docker install.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
docker exec tetragon-container tetra getevents -o compact

This will print a compact form of the exec events. As an example, run the following with the demo application (again, the kubectl variant for Kubernetes and the plain curl for the local Docker install).

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'
curl https://ebpf.io/applications/#tetragon

The CLI will print a compact form of the event to the terminal similar to the following output.

πŸš€ process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
πŸš€ process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
πŸ’₯ exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

The compact exec event contains the event type, the pod name, the binary, and the args. The exit event also includes the return code; in the case of curl above, it is 60.

For the complete exec event in JSON format, remove the -o compact option.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents --pods xwing
docker exec tetragon-container tetra getevents

This will include many more details about the binary and the event. A full example of the above curl is shown below. In a Kubernetes environment this will include the Kubernetes metadata, including the Pod, Container, Namespaces, and Labels, among other useful metadata.

Process execution event

{
  "process_exec": {
    "process": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4Njc0MzIxMzczOjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/usr/bin/curl",
      "arguments": "https://ebpf.io/applications/#tetragon",
      "flags": "execve rootcwd",
      "start_time": "2023-10-06T22:03:57.700327580Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "tid": 52699
    },
    "parent": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "arguments": "-c \"curl https://ebpf.io/applications/#tetragon\"",
      "flags": "execve rootcwd clone",
      "start_time": "2023-10-06T22:03:57.696889812Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjQ1MjQ1ODM5OjUyNjg5",
      "tid": 52699
    }
  },
  "node_name": "gke-john-632-default-pool-7041cac0-9s95",
  "time": "2023-10-06T22:03:57.700326678Z"
}
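
Because each event is a single JSON object, the stream is easy to post-process with standard tools. As a minimal sketch (assuming jq is installed on the machine where you run kubectl, and using the field names from the event above), the following prints just the binary and arguments of each exec event:

kubectl exec -n kube-system ds/tetragon -c tetragon -- tetra getevents --pods xwing | \
  jq -r 'select(.process_exec != null) | .process_exec.process | "\(.binary) \(.arguments // "")"'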

What’s next

Execution events are the most basic events Tetragon can produce. To see how to use tracing policies to enable file monitoring, see the File Access Monitoring quickstart. To see a network policy, check the Network Monitoring quickstart.

4 - File Access Monitoring

File access traces with Tetragon

Tracing Policies can be added to Tetragon through YAML configuration files that extend Tetragon’s base execution tracing capabilities. These policies perform filtering in the kernel to ensure only interesting events are published to user space by the BPF programs running in the kernel. This keeps overhead low even on busy systems.

The following extends the example from Execution Monitoring with a policy to monitor sensitive files on Linux. The policy used is file_monitoring.yaml; it can be reviewed and extended as needed. The files monitored here serve as a good base set.

To apply the policy, Kubernetes uses a CRD that can be applied with kubectl. The Docker install uses the same YAML configuration as Kubernetes, but loads it from a file on disk mounted into the container.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/file_monitoring.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon-ci:latest
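
Optionally, confirm the policy was loaded. In Kubernetes the policy is a TracingPolicy custom resource and can be listed with kubectl; for the Docker case, recent tetra builds provide a tracingpolicy subcommand (treat this as a hedged check, since the exact subcommand may differ between versions):

kubectl get tracingpolicies
docker exec tetragon-container tetra tracingpolicy list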

With the policy applied we can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
docker exec tetragon-container tetra getevents -o compact

Then reading a sensitive file:

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
cat /etc/shadow

This will generate a read event (Docker events will omit Kubernetes metadata),

πŸš€ process default/xwing /bin/bash -c "cat /etc/shadow"
πŸš€ process default/xwing /bin/cat /etc/shadow
πŸ“š read    default/xwing /bin/cat /etc/shadow
πŸ’₯ exit    default/xwing /bin/cat /etc/shadow 0

Attempts to write in sensitive directories will similarly create write events. For example, attempting to write to /etc:

kubectl exec -ti xwing -- bash -c 'echo foo >> /etc/bar'
echo foo >> /etc/bar

This results in the following output in the tetra CLI.

πŸš€ process default/xwing /bin/bash -c "echo foo >>  /etc/bar"
πŸ“ write   default/xwing /bin/bash /etc/bar
πŸ“ write   default/xwing /bin/bash /etc/bar
πŸ’₯ exit    default/xwing /bin/bash -c "echo foo >>  /etc/bar" 0

What’s next

To explore tracing policies for networking, try the Network Monitoring quickstart. To dive into the details of policies and events, please see the Concepts section.

5 - Network Monitoring

Network access traces with Tetragon

This adds a network policy on top of execution and file tracing already deployed in the quick start. In this case we monitor all network traffic outside the Kubernetes pod CIDR and service CIDR.

Kubernetes Cluster Network Access Monitoring

First we must find the pod CIDR and service CIDR in use. The pod IP CIDR can be found relatively easily in many cases.

export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`

The service CIDR can then be fetched depending on the environment: the first command below is for GKE and requires the ZONE and NAME environment variables from the install steps; the second works for a kind cluster.

export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')
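
Before substituting these values into the policy, it is worth double-checking that both variables are set, since envsubst will quietly substitute empty strings for unset variables:

echo "PODCIDR=${PODCIDR} SERVICECIDR=${SERVICECIDR}"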

Now we apply a policy that uses the pod CIDR and service CIDR as filters, so that cluster-local traffic is filtered out and only connections leaving the cluster generate events. To apply the policy:

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster.yaml
envsubst < network_egress_cluster.yaml | kubectl apply -f -
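
If you want to inspect the fully substituted policy before applying it, you can render it to stdout or do a client-side dry run first (standard kubectl behavior, nothing Tetragon-specific):

envsubst < network_egress_cluster.yaml | kubectl apply --dry-run=client -f -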

With the policy applied we can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing --processes curl

Then execute a curl command in the xwing pod to reach a site outside the cluster.

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'

A connect will be observed in the tetra shell:

πŸš€ process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
πŸ”Œ connect default/xwing /usr/bin/curl tcp 10.32.0.19:33978 -> 104.198.14.52:443
πŸ’₯ exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

We can confirm that the in-kernel BPF filters are not producing events for in-cluster traffic by issuing a curl to one of our services and noting that there is no connect event.

kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'

The output should be similar to:

Ship landed

And, as expected, no new events appear:

πŸš€ process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
πŸ”Œ connect default/xwing /usr/bin/curl tcp 10.32.0.19:33978 -> 104.198.14.52:443
πŸ’₯ exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

Docker/Baremetal Network Access Monitoring

This example also works for local Docker users, although there is no single CIDR that fits every environment. Here we set both filters to the loopback address so that local traffic to 127.0.0.1 is filtered out of the event log, which is enough to demonstrate the mechanism.

Set the environment variables to the local loopback CIDR.

export PODCIDR="127.0.0.1/32"
export SERVICECIDR="127.0.0.1/32"

To create the policy,

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster.yaml
envsubst < network_egress_cluster.yaml > network_egress_cluster_subst.yaml
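
As a quick check that the substitution worked, the loopback CIDR should now appear in the generated file (this assumes the template references the PODCIDR and SERVICECIDR variables used above):

grep -F "127.0.0.1/32" network_egress_cluster_subst.yaml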

Start Tetragon with the new policy:

docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/network_egress_cluster_subst.yaml:/etc/tetragon/tetragon.tp.d/network_egress_cluster_subst.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon-ci:latest

Attach to the Tetragon output:

docker exec tetragon-container tetra getevents -o compact

Now remote TCP connections will be logged while connections to local IPs are filtered out. Execute a curl to generate a remote TCP connect.

curl https://ebpf.io/applications/#tetragon

Produces the following output:

πŸš€ process  /usr/bin/curl https://ebpf.io/applications/#tetragon
πŸ”Œ connect  /usr/bin/curl tcp 192.168.1.190:36124 -> 104.198.14.52:443
πŸ’₯ exit     /usr/bin/curl https://ebpf.io/applications/#tetragon 0

What’s Next

So far we have installed Tetragon and shown a couple of policies to monitor sensitive files and audit network connections leaving our own cluster and node. Both cases highlight the value of in-kernel filtering. Another benefit of in-kernel filtering is that we can add enforcement to the policies, to not only alert but also block the operation in the kernel and/or kill the application attempting it.

To learn more about the policies and events Tetragon can implement, review the Concepts section.

6 - Policy Enforcement

Policy Enforcement

This adds network and file policy enforcement on top of the execution, file tracing, and network policies already deployed in the quick start. In this use case we use a namespace filter to limit the scope of the enforcement policy to the default namespace, where we installed the demo application in the Quick Kubernetes Install.

This highlights two important concepts of Tetragon. First, in-kernel filtering provides a key performance improvement by limiting the events sent from kernel to user space, and it also allows policies to be enforced in the kernel. By issuing a SIGKILL to the process at this point, the application is stopped from continuing to run. If the operation is triggered through a syscall, this means the application will not return from the syscall and will be terminated.

Second, by including Kubernetes filters, such as namespaces and labels, we can scope a policy to targeted namespaces and pods. This is critical for effective policy segmentation.

For implementation details see the Enforcement concept section.

Kubernetes Enforcement

This section is laid out as follows:

  • A guide to promote the network observation policy, which observes all network traffic egressing the cluster, into an enforcement policy.
  • A guide to promote the file access monitoring policy into one that blocks write and read operations on sensitive files.

Block TCP Connect outside Cluster

First we will deploy the Network Monitoring policy with enforcement on. For this case the policy is written to apply only against the default namespace, which limits the scope of the policy for the getting started guide.

Ensure we have the proper Pod CIDRs

export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`

and Service CIDRs configured.

export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')

Then we can apply the egress cluster enforcement policy

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster_enforce.yaml
envsubst < network_egress_cluster_enforce.yaml | kubectl apply -n default -f -

With the enforcement policy applied we can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing

And once again execute a curl command in the xwing pod:

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'

The command returns an error code because the egress TCP connect is blocked, as shown here.

command terminated with exit code 137

Exit code 137 indicates the process was terminated by a SIGKILL (128 + 9). Connections inside the cluster will work as expected:

kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'

The tetra CLI will print the curl and annotate the process that was issued a SIGKILL. The successful internal connect is filtered and will not be shown.

πŸš€ process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
πŸš€ process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
πŸ”Œ connect default/xwing /usr/bin/curl tcp 10.32.0.28:45200 -> 104.198.14.52:443
πŸ’₯ exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon SIGKILL
πŸš€ process default/xwing /bin/bash -c "curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing"
πŸš€ process default/xwing /usr/bin/curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

Enforce File Access Monitoring

The following extends the example from File Access Monitoring with enforcement to ensure sensitive files are not read. The policy used is file_monitoring_enforce.yaml; it can be reviewed and extended as needed. The only difference between the observation policy and the enforcement policy is the addition of an action block that sends a SIGKILL to the application and returns an error on the operation.

To apply the policy:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/file_monitoring_enforce.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring_enforce.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon-ci:latest

With the policy applied we can attach tetra to observe events again,

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
docker exec tetragon-container tetra getevents -o compact

Then reading a sensitive file,

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
cat /etc/shadow

The command will fail with an error code because this is one of our sensitive files,

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'

The output should be similar to:

command terminated with exit code 137

This will generate a read event (Docker events will omit Kubernetes metadata),

πŸš€ process default/xwing /bin/bash -c "cat /etc/shadow"
πŸš€ process default/xwing /bin/cat /etc/shadow
πŸ“š read    default/xwing /bin/cat /etc/shadow
πŸ“š read    default/xwing /bin/cat /etc/shadow
πŸ“š read    default/xwing /bin/cat /etc/shadow
πŸ’₯ exit    default/xwing /bin/cat /etc/shadow SIGKILL

Writes and reads to files not part of the enforced file policy will not be impacted.

πŸš€ process default/xwing /bin/bash -c "echo foo >> bar; cat bar"
πŸš€ process default/xwing /bin/cat bar
πŸ’₯ exit    default/xwing /bin/cat bar 0
πŸ’₯ exit    default/xwing /bin/bash -c "echo foo >> bar; cat bar" 0

What’s next

This completes the quick start guides. At this point we should be able to observe execution traces in a Kubernetes cluster and extend the base deployment of Tetragon with policies to observe and enforce different aspects of a Kubernetes system.

The rest of the docs provide further documentation about installation and using policies. Some useful links:

To explore the details of writing and implementing policies, the Concepts section is a good jumping-off point. For installation into production environments we recommend reviewing Advanced Installations. Finally, the Use Cases section covers different uses and deployment concerns related to Tetragon.