Tarian – Antivirus for Kubernetes
We want to maintain this as an open-source project to fight against attacks on our favorite Kubernetes ecosystem.
Prerequisites
A Kubernetes cluster that supports running Falco
Install
Tarian integrates with Falco by subscribing to Falco alerts via the gRPC API. Falco supports running its gRPC API with mandatory mutual TLS (mTLS), so we first need to prepare the certificates.
Prepare Namespaces
kubectl create namespace tarian-system
kubectl create namespace falco

Prepare Certificate for mTLS
With Cert Manager
You can set up certificates manually and save them as secrets accessible from the Falco and Tarian pods. For convenience, you can use Cert Manager to manage the certs.
- Install Cert Manager by following this guide https://cert-manager.io/docs/installation/
- Wait for cert manager pods to be ready
kubectl wait --for=condition=ready pods --all -n cert-manager --timeout=3m
- Set up the certs
A. If you don’t have an existing cluster issuer, you can create one using a self-signed issuer
Save this to tarian-falco-certs.yaml, then run kubectl apply -f tarian-falco-certs.yaml.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: selfsigned-issuer
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: root-ca
namespace: cert-manager
spec:
isCA: true
commonName: root-ca
secretName: root-secret
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: selfsigned-issuer
kind: ClusterIssuer
group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: ca-issuer
spec:
ca:
secretName: root-secret
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: falco-grpc-server
namespace: falco
spec:
isCA: false
commonName: falco-grpc
dnsNames:
- falco-grpc.falco.svc
- falco-grpc
secretName: falco-grpc-server-cert
usages:
- server auth
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: ca-issuer
kind: ClusterIssuer
group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: falco-integration-cert
namespace: tarian-system
spec:
isCA: false
commonName: tarian-falco-integration
dnsNames:
- tarian-falco-integration
usages:
- client auth
secretName: tarian-falco-integration
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: ca-issuer
kind: ClusterIssuer
  group: cert-manager.io

B. If you have an existing cluster issuer
Save this to tarian-falco-certs.yaml, then run kubectl apply -f tarian-falco-certs.yaml.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: falco-grpc-server
namespace: falco
spec:
isCA: false
commonName: falco-grpc
dnsNames:
- falco-grpc.falco.svc
- falco-grpc
secretName: falco-grpc-server-cert
usages:
- server auth
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: your-issuer # change this to yours
kind: ClusterIssuer
group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: falco-integration-cert
namespace: tarian-system
spec:
isCA: false
commonName: tarian-falco-integration
dnsNames:
- tarian-falco-integration
usages:
- client auth
secretName: tarian-falco-integration
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: your-issuer # change this to yours
kind: ClusterIssuer
  group: cert-manager.io

Setup certificates manually
If you have another way to set up the certificates, that works too. Create Kubernetes secrets containing those certificates. The following steps expect the secrets to be named:
- tarian-falco-integration in namespace tarian-system
- falco-grpc-server-cert in namespace falco
For mTLS to work, those certificates need to be signed by the same CA.
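Whichever path you took, you can verify that everything is in place before continuing. A quick check (the certificate resources only exist on the Cert Manager path; for manually created certs, just check the secrets):

kubectl get certificate -n falco
kubectl get certificate -n tarian-system
kubectl get secret falco-grpc-server-cert -n falco
kubectl get secret tarian-falco-integration -n tarian-system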
Install Falco with custom rules from Tarian
Save this to falco-values.yaml:
extraVolumes:
- name: grpc-cert
secret:
secretName: falco-grpc-server-cert
extraVolumeMounts:
- name: grpc-cert
mountPath: /etc/falco/grpc-cert
falco:
grpc:
enabled: true
unixSocketPath: ""
threadiness: 1
listenPort: 5060
privateKey: /etc/falco/grpc-cert/tls.key
certChain: /etc/falco/grpc-cert/tls.crt
rootCerts: /etc/falco/grpc-cert/ca.crt
grpcOutput:
    enabled: true

Then install Falco using Helm:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm upgrade -i falco falcosecurity/falco -n falco -f falco-values.yaml \
  --set-file customRules."tarian_rules.yaml"=https://raw.githubusercontent.com/kube-tarian/tarian/main/dev/falco/tarian_rules.yaml
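After the install, you can check that the Falco pods come up, mirroring the wait used for cert-manager above:

kubectl get pods -n falco
kubectl wait --for=condition=ready pods --all -n falco --timeout=3m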
Setup a Postgresql Database

You can use a managed database service from your cloud provider, or run one yourself in the cluster. For example, to install the DB in the cluster, run:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install tarian-postgresql bitnami/postgresql -n tarian-system \
  --set postgresqlUsername=postgres \
  --set postgresqlPassword=tarian \
  --set postgresqlDatabase=tarian
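You can wait for the database pod to become ready before the next step (the label below is the standard one used by the Bitnami chart):

kubectl wait --for=condition=ready pods -l app.kubernetes.io/name=postgresql -n tarian-system --timeout=3m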
Install tarian

- Install tarian using Helm
helm repo add tarian https://kube-tarian.github.io/tarian
helm repo update
helm upgrade -i tarian-server tarian/tarian-server --devel -n tarian-system
helm upgrade -i tarian-cluster-agent tarian/tarian-cluster-agent --devel -n tarian-system
- Wait for all the pods to be ready
kubectl wait --for=condition=ready pod --all -n tarian-system
- Run database migration to create the required tables
kubectl exec -ti deploy/tarian-server -n tarian-system -- ./tarian-server db migrate
- Verify
After the above steps, you should see Falco alerts in tarianctl get events (see the following Usage section).
Configuration
See helm chart values for
- tarian-server
- tarian-cluster-agent
Cloud / Vendor specific configuration
Private GKE cluster
A private GKE cluster by default creates firewall rules that restrict master-to-node communication to ports 443 and 10250. To inject the tarian-pod-agent container, Tarian uses a mutating admission webhook whose server listens on port 9443, so we need to create a new firewall rule that allows ingress from the master IP address range to the nodes on TCP port 9443.

For more details, see the GKE docs on this topic: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules
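A sketch of such a rule with gcloud; the network name, node tag, and master CIDR below are placeholders you need to replace with your cluster's values:

gcloud compute firewall-rules create allow-master-to-tarian-webhook \
  --network <your-cluster-network> \
  --source-ranges <your-master-ipv4-cidr> \
  --target-tags <your-node-tag> \
  --direction INGRESS \
  --allow tcp:9443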
Usage
Use tarianctl to control tarian-server
- Download tarianctl from the GitHub release page
- Extract the file and copy tarianctl to your PATH directory
- Expose tarian-server to your machine, through Ingress or port-forward. For this example, we’ll use port-forward:
kubectl port-forward svc/tarian-server -n tarian-system 41051:80
- Configure the server address with an env var:
export TARIAN_SERVER_ADDRESS=localhost:41051
To see violation events
tarianctl get events
Add a process constraint
tarianctl add constraint --name nginx --namespace default \
  --match-labels run=nginx \
  --allowed-processes=pause,tarian-pod-agent,nginx

tarianctl get constraints
Add a file constraint
tarianctl add constraint --name nginx-files --namespace default \
  --match-labels run=nginx \
  --allowed-file-sha256sums=/usr/share/nginx/html/index.html=38ffd4972ae513a0c79a8be4573403edcd709f0f572105362b08ff50cf6de521

tarianctl get constraints
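If you need the checksum of a file from a running container, you can compute it in place, for example (assuming sha256sum is available in the image):

kubectl exec -ti nginx -c nginx -- sha256sum /usr/share/nginx/html/index.html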
Run tarian agent in a pod
After the constraints are created, we inject the tarian-pod-agent into a pod by adding an annotation:
metadata:
annotations:
    pod-agent.k8s.tarian.dev/threat-scan: "true"

A pod with this annotation will have an additional container injected (tarian-pod-agent). The tarian-pod-agent container continuously verifies the runtime environment against the registered constraints. Any violation is reported and accessible with tarianctl get events.
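For illustration, a minimal pod that matches the nginx constraints above and opts in to scanning might look like this (a sketch, not a manifest from the Tarian repo):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx  # matches the constraint's --match-labels
  annotations:
    pod-agent.k8s.tarian.dev/threat-scan: "true"
spec:
  containers:
    - name: nginx
      image: nginx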
Demo: Try a pod that violates the constraints
kubectl apply -f https://raw.githubusercontent.com/kube-tarian/tarian/main/dev/config/monitored-pod/configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kube-tarian/tarian/main/dev/config/monitored-pod/pod.yaml
# wait for it to become ready
kubectl wait --for=condition=ready pod nginx
# simulate unknown process runs
kubectl exec -ti nginx -c nginx -- sleep 15
# you should see it reported in tarian
tarianctl get events
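To clean up the demo resources afterwards:

kubectl delete -f https://raw.githubusercontent.com/kube-tarian/tarian/main/dev/config/monitored-pod/pod.yaml
kubectl delete -f https://raw.githubusercontent.com/kube-tarian/tarian/main/dev/config/monitored-pod/configmap.yaml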
Alert Manager Integration

Tarian comes with Prometheus Alertmanager by default. If you want to use another Alertmanager instance:

helm install tarian-server tarian/tarian-server --devel \
  --set server.alert.alertManagerAddress=http://alertmanager.monitoring.svc:9093 \
  --set alertManager.install=false \
  -n tarian-system

To disable alerting, set the alertManagerAddress value to empty.
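For example, a sketch of disabling alerting entirely:

helm upgrade -i tarian-server tarian/tarian-server --devel -n tarian-system \
  --set server.alert.alertManagerAddress="" \
  --set alertManager.install=false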
Troubleshooting
See docs/troubleshooting.md
Automatic Constraint Registration
When tarian-pod-agent runs in registration mode, instead of reporting unknown processes and files as violations, it automatically registers them as new constraints. This saves you from having to register constraints manually.
To enable constraint registration, the cluster-agent needs to be configured:
helm install tarian-cluster-agent tarian/tarian-cluster-agent --devel -n tarian-system \
  --set clusterAgent.enableAddConstraint=true

Then annotate the pods that should run in registration mode:

metadata:
annotations:
# register both processes and file checksums
pod-agent.k8s.tarian.dev/register: "processes,files"
# ignore specific paths from automatic registration
    pod-agent.k8s.tarian.dev/register-file-ignore-paths: "/usr/share/nginx/**/*.txt"

Automatic constraint registration can also be done in a dev/staging cluster, so that there are fewer changes in production.
Other supported annotations
metadata:
annotations:
# specify how often tarian-pod-agent should verify file checksum
    pod-agent.k8s.tarian.dev/file-validation-interval: "1m"

Securing tarian-server with TLS
To secure tarian-server with TLS, create a secret containing the TLS certificate. You can create the secret manually or with Cert Manager. Once you have the secret, pass its name to the helm chart value:
helm upgrade -i tarian-server tarian/tarian-server --devel -n tarian-system \
  --set server.tlsSecretName=tarian-server-tls
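If you create the secret manually rather than with Cert Manager, a sketch from an existing certificate and key pair (the file names are placeholders):

kubectl create secret tls tarian-server-tls -n tarian-system \
  --cert=server.crt --key=server.key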
Contributing

See docs/contributing.md