How to Secure Kubernetes Workloads with Network Policies using Civo and Cilium


Deploying and managing containerized applications at scale requires a secure environment throughout the container lifecycle: the runtime changes constantly, and those changes affect both the applications and the APIs that connect them to other applications and services.

Want to tighten security for your Kubernetes cluster and applications? This article shows how Civo and Cilium work together to secure your workloads with network policies.

What is Civo?

Civo is a cloud computing company that provides a developer-focused platform for deploying, managing, and scaling Kubernetes clusters and other cloud-based services. It offers a simplified approach to Kubernetes deployment, aiming to streamline the process for developers and businesses.

What is Cilium?

Cilium is an open-source software project designed to provide networking, observability, and security solutions in modern containerized environments, specifically within Kubernetes clusters. It acts as a CNI (Container Network Interface) plugin, serving as a powerful networking and security middleware connecting the application layer and the underlying network infrastructure.

What are Kubernetes Network Policies?

Imagine your Kubernetes cluster as a bustling city. Pods are like buildings, and network policies are like traffic rules on the streets.

These rules control:

  • Who can go where: You can allow some pods to visit certain "buildings" (services or other pods) while restricting others.
  • What they can do there: You can define what kind of interactions pods can have, like reading data from a database or sending messages to another pod.
  • How they can travel: You can specify which "streets" they can use (protocols, ports).

Kubernetes network policies define how pods can communicate securely with each other and various network services and control incoming and outgoing network traffic to and from the cluster.

How do Civo and Cilium work together?

Cilium and Civo work together to provide a comprehensive and efficient way to secure your Kubernetes workloads with network policies. Civo simplifies deployment and management, while Cilium offers powerful enforcement and advanced security features. With both working in sync, you can ensure your Kubernetes clusters and applications are well protected.


Prerequisites

A Civo account. You can create one here.

The Civo CLI. You can install and set it up using this guide.

How to deploy and configure a Kubernetes Network Policy Controller like Cilium using Civo

Step 1: Setting up a Civo Kubernetes cluster

To get started, ensure you have a running Civo Kubernetes cluster. You can create one using the Civo CLI or the Civo web interface. Ensure that your kubectl is configured properly to connect with your Civo Kubernetes cluster.
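If you prefer the CLI, a cluster can be created and connected to in a few commands. The cluster name, node count, and `--cni-plugin` flag below are examples; verify the flags available in your CLI version with `civo kubernetes create --help`:

```shell
# Create a three-node cluster with Cilium as the CNI
# (name, node count, and --cni-plugin value are example assumptions)
civo kubernetes create demo-cluster --nodes 3 --cni-plugin cilium --wait

# Download the kubeconfig and merge it into ~/.kube/config
civo kubernetes config demo-cluster --save

# Confirm kubectl can reach the new cluster
kubectl get nodes
```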

Step 2: Install Cilium on your Civo Kubernetes cluster

a. Authenticate the Civo CLI with your API token using the following command:

civo apikey save

You will be asked to give a name to your API key and also input your API key to be saved.
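The same step can be scripted for non-interactive use. The key name and token below are placeholders, and the exact subcommands may vary slightly by CLI version:

```shell
# Save an API key non-interactively (name and token are placeholders)
civo apikey save my-key YOUR_API_TOKEN

# Make it the active key and verify it is selected
civo apikey current my-key
civo apikey list
```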

b. Use the Civo CLI or web console to start creating a new Kubernetes cluster

c. Go to the "Advanced options" during cluster configuration

d. Under "Networking", select "Cilium" from the CNI dropdown menu

e. Complete the other cluster details as per your requirements

f. Launch the cluster

This will automatically deploy Cilium as the CNI for networking and NetworkPolicy enforcement in the new cluster.

The manual installation steps using Helm are only required if you already have an existing cluster running a different CNI and want to migrate to Cilium.

For new clusters, choosing the Cilium CNI during creation is simpler and avoids the need to install it manually afterwards.
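For reference, installing Cilium on an existing cluster with Helm looks roughly like the sketch below. Chart values, and whether a live CNI migration is safe for your workloads, depend on your environment, so treat this as an outline rather than a drop-in procedure:

```shell
# Add the official Cilium Helm repository
helm repo add cilium https://helm.cilium.io/
helm repo update

# Install Cilium into the kube-system namespace
helm install cilium cilium/cilium --namespace kube-system

# Existing pods keep their old networking until restarted,
# so restart your workloads after Cilium is up
```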

Step 3: Verify Cilium Installation

Once the installation is complete, make sure Cilium is running properly:

kubectl get pods -n kube-system -l k8s-app=cilium

The output should be like this:

         NAME           READY   STATUS    RESTARTS   AGE
         cilium-fs8hx   1/1     Running   0          3h
         cilium-mhdb2   1/1     Running   0          3h

This indicates two Cilium pods are running, named "cilium-fs8hx" and "cilium-mhdb2". Both have a ready status of 1/1 meaning they are healthy and have restarted 0 times.

Ensure that all Cilium pods are in the "Running" state. Also check that the cluster nodes are Ready:

           kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{" "}{end}'

The expected output would be a list of "True" or "False" values, one for each node's Ready condition status.

         True False True True

This indicates there are 4 nodes, with the first, third and fourth nodes being Ready (True), and the second node not Ready (False).

Step 4: Configure Network Policies

To enforce network security rules, Cilium uses the standard Kubernetes NetworkPolicy API. To manage traffic between pods, you define NetworkPolicies that select pods by the "labels" in their metadata (via podSelector fields).

Here is an example of a NetworkPolicy manifest that permits ingress to frontend pods only from backend pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend

where the key parts are:

  • The podSelector selects pods with the label app: frontend
  • The ingress rule allows traffic from pods with the label app: backend

This matches all pods with the label app: frontend and only allows them to receive traffic from pods labelled app: backend.

Apply the above manifest to the cluster by saving it to a file (such as civo-user-provider.yaml):

kubectl apply -f civo-user-provider.yaml

You can create additional NetworkPolicies to suit your requirements.
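A common starting point is a default-deny policy that blocks all ingress to every pod in a namespace; you then allow only the traffic you need with targeted policies like the one above. A minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules listed, so all ingress is denied
```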

Step 5: Verify network policies

You can test the network connectivity across pods to make sure that NetworkPolicies are being enforced by Cilium.

For instance, if you have a frontend and a backend pod, you can attempt to access a service running in the backend pod from the frontend pod. If the NetworkPolicy is set up properly, access should be permitted; otherwise, the connection will be dropped.

Below are the frontend and backend of a simple web application created with Node.js and Express.js: the frontend app listens on port 4000 and requests the backend API on port 3000.

Frontend (Client-side)

const express = require('express');
const axios = require('axios');

const app = express();

app.get('/', async (req, res) => {
  try {
    // Forward the backend's response to the client
    const response = await axios.get('http://localhost:3000/api');
    res.send(response.data);
  } catch (error) {
    res.status(500).send('Error connecting to backend');
  }
});

app.listen(4000, () => {
  console.log('Frontend server listening on port 4000');
});

Backend (Server-side)

const express = require('express');

const app = express();

app.get('/api', (req, res) => {
  res.json({message: 'Hello from backend!'});
});

app.listen(3000, () => {
  console.log('Backend server listening on port 3000');
});

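One way to exercise the policy from inside the cluster is to curl the backend from another pod. The deployment name (frontend) and service name (backend) below are assumptions based on the example labels; adjust them to match your setup:

```shell
# Should succeed: the example policy does not restrict frontend egress
kubectl exec deploy/frontend -- curl -s --max-time 5 http://backend:3000/api

# From an unrelated pod; should time out if you add a policy
# restricting ingress to the backend pods
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s --max-time 5 http://backend:3000/api
```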
Creating a NetworkPolicy resource has no effect unless a controller, such as a CNI plugin like Cilium, enforces it. For comprehensive instructions on implementing network policies in your specific environment, refer to the documentation and guidelines supplied by your network plugin provider.

Don't forget to modify these network policies to match your unique security needs and your Kubernetes cluster's design. Ensure your Kubernetes cluster can enforce network policies and is configured to do so.


Keep in mind that applying network policies to secure Kubernetes workloads is a continuous process. To respond to emerging threats and preserve the security of your cluster and applications, you must constantly evaluate and adjust your policies.

Aside from this, you can explore additional Cilium features beyond what is covered in this basic guide, such as load balancing, observability, and network security. Refer to the Cilium documentation to learn more about these features and how to configure them.