How To Install CRI-O on Ubuntu For Kubernetes (Guide) 2024

Are you looking for a guide on how to install CRI-O on Ubuntu for Kubernetes? Then yes, you are at the right place.

I'll walk you through container orchestration and explain why I chose CRI-O as a robust alternative to Docker. The setup also doubles as a beginner-friendly Kubernetes tutorial.

As Kubernetes transitions away from Docker, CRI-O stands out as a lightweight and efficient container engine, aligning seamlessly with OCI standards. 

CRI-O can run containers directly within Kubernetes and supports the standard OCI image formats, including images built with Docker, which simplifies the containerized environment.

Prerequisite

To Install CRI-O on Ubuntu For Kubernetes, make sure you have the following:

  • Ubuntu 22.04 Server: This guide is based on the Ubuntu Server with the hostname “server-ubuntu” and the server’s IP address set to “192.168.5.10“.
  • Non-root User with Admin Privileges: Ensure you have a user account with administrative privileges to execute commands effectively.
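Before starting, it's worth confirming both prerequisites from the shell. A small pre-flight sketch (the helper function and sample file are my own illustration, not part of the guide); on a real server you would point `os_version` at `/etc/os-release` and run `sudo -v` to confirm admin privileges:

```shell
# os_version prints "ID VERSION_ID" from an os-release-style file.
os_version() {
  # Source the file in a subshell so its variables don't leak out.
  ( . "$1" && printf '%s %s\n' "$ID" "$VERSION_ID" )
}

# On the server you would run: os_version /etc/os-release
# Demo against a sample file so this snippet is self-contained:
cat > /tmp/os-release.sample <<'EOF'
ID=ubuntu
VERSION_ID="22.04"
EOF
os_version /tmp/os-release.sample   # prints: ubuntu 22.04
```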

Read on to know more.

Installing CRI-O Container Runtime

Let me break down how to install the CRI-O container runtime. First, I set environment variables for my operating system and the desired CRI-O version.

This step is important because the repository URLs below are built from these variables; if they're wrong, apt will point at the wrong packages.

Now, let’s get into installing it:

export OS=xUbuntu_22.04

export CRIO_VERSION=1.24

Next, I add the CRI-O repository for my Ubuntu 22.04:

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
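As a quick sanity check, you can expand the versioned entry by hand. With the variables set above, it should resolve to the v1.24 repository for xUbuntu_22.04:

```shell
# Rebuild the versioned repo line from the variables (sanity check only).
OS=xUbuntu_22.04
CRIO_VERSION=1.24
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /"
# prints: deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.24/xUbuntu_22.04/ /
```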

Then, I add the GPG keys for both repositories (note that "apt-key" is deprecated but still functional on Ubuntu 22.04):

curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/Release.key | sudo apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add -

After that, I update the package index:

sudo apt update

I check the CRI-O package version:

sudo apt info cri-o

Now, I’m ready to install the CRI-O container runtime:

sudo apt install cri-o cri-o-runc

Once the installation is complete, I start and enable the CRI-O service:

sudo systemctl start crio

sudo systemctl enable crio

Finally, I check and verify the status of the CRI-O service:

sudo systemctl status crio

This process installs CRI-O v1.24 on Ubuntu 22.04 from the openSUSE Kubic third-party repository.

After installation, the CRI-O service will be running, set to start automatically whenever my system boots up.

Installing CNI (Container Network Interface) Plugin

To set up the CNI (Container Network Interface) plugin for the CRI-O container runtime, follow these steps:

Install the CNI plugin from the official Ubuntu repository using the apt command:

sudo apt install containernetworking-plugins

After installation, edit the CRI-O configuration file “/etc/crio/crio.conf” using the following command:

sudo nano /etc/crio/crio.conf

In the “[crio.network]” section, uncomment the “network_dir” and “plugin_dirs” options. Add the CNI plugin directory “/usr/lib/cni/” to the “plugin_dirs” option. Save and close the file.

[crio.network]
network_dir = "/etc/cni/net.d/"
plugin_dirs = [
    "/opt/cni/bin/",
    "/usr/lib/cni/",
]
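To double-check the edit without reopening the editor, you can grep for the keys you changed. A small sketch follows; it writes a sample of the expected section to /tmp so the snippet is self-contained, but on the server you would grep /etc/crio/crio.conf instead:

```shell
# Illustrative copy of the expected [crio.network] section.
cat > /tmp/crio.conf.sample <<'EOF'
[crio.network]
network_dir = "/etc/cni/net.d/"
plugin_dirs = [
    "/opt/cni/bin/",
    "/usr/lib/cni/",
]
EOF

# On the server: sudo grep -E 'network_dir|plugin_dirs|/usr/lib/cni' /etc/crio/crio.conf
grep -c '/usr/lib/cni/' /tmp/crio.conf.sample   # prints: 1
```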

Remove the default bridge CNI configuration:

sudo rm -f /etc/cni/net.d/100-crio-bridge.conf

Download the new bridge CNI configuration to “/etc/cni/net.d/11-crio-ipv4-bridge.conf,” which enables only IPv4 for Pods and containers:

sudo curl -fsSLo /etc/cni/net.d/11-crio-ipv4-bridge.conf https://raw.githubusercontent.com/cri-o/cri-o/main/contrib/cni/11-crio-ipv4-bridge.conf
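For orientation, a bridge CNI configuration of this kind looks roughly like the following. This is an illustrative sketch, not the authoritative file, so always check the downloaded file itself. The host-local IPAM subnet is where Pod IP addresses in the 10.85.0.x range come from:

```json
{
    "cniVersion": "1.0.0",
    "name": "crio",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
    }
}
```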

Restart the CRI-O service to apply the new CNI plugin settings:

sudo systemctl restart crio

Check and verify the CRI-O service status:

sudo systemctl status crio

Ensure that the CRI-O service is running with the new bridge CNI configuration.

These steps configure and integrate the CNI plugin, allowing you to set up networking for containers and Pods in your CRI-O container runtime.

Similarly, you can also learn how to change the Nginx index.html in Kubernetes with a ConfigMap.

Installing CRI-Tools Package

With the CRI-O container runtime installed and running with the correct CNI plugin, I'll now install the "cri-tools" package.

It provides "crictl", a versatile command-line utility for interacting with the CRI-O container runtime.

To add this package, I’ll execute the apt command:

sudo apt install cri-tools

Once the installation wraps up, I’ll use the “crictl” command to inspect the current runtime version, confirming that I’m utilizing CRI-O v1.24:

crictl version

Following that, I’ll employ the “crictl” command again to assess the status of the Container Runtime and the CNI Network Plugin. I expect to see “RuntimeReady” for the CRI-O Container Runtime and “NetworkReady” for the CNI Plugin:

crictl info

As a handy tip, I'll generate Bash auto-completion for the "crictl" command (writing to /etc/bash_completion.d/ requires root):

crictl completion | sudo tee /etc/bash_completion.d/crictl > /dev/null

source /etc/bash_completion.d/crictl

This makes command auto-completion available in my Bash shell. Now, when I type "crictl " and press TAB twice, I can conveniently explore all available subcommands:

crictl <TAB><TAB>

This step enhances the usability of “crictl” and streamlines my interactions with the CRI-O container runtime.

Creating Pod and Container Using crictl

Now that I have cri-tools installed on my system, let’s delve into creating a Pod and container using the “crictl” command. 

In this hands-on example, I’ll initiate the process by establishing a Pod for an Nginx container.

Firstly, I’ll create a new directory named “~/demo” with the command:

mkdir ~/demo/

Next, I’ll generate a new JSON configuration file to define the Pod sandbox for the container. The command below creates a file with essential metadata:

cat <<EOF | tee ~/demo/sandbox_nginx.json
{
    "metadata": {
        "name": "nginx-sandbox",
        "namespace": "default",
        "attempt": 1,
        "uid": "hdishd83djaidwnduwk28bcsb"
    },
    "linux": {},
    "log_directory": "/tmp"
}
EOF

To run the Pod sandbox, I’ll utilize the following “crictl” command:

sudo crictl runp ~/demo/sandbox_nginx.json

Checking the running Pods will confirm the creation of the “nginx-sandbox” with its associated Pod ID:

sudo crictl pods

To delve into the details of the Pod, I'll use the "crictl inspectp" command, replacing the Pod ID with the one shown in the previous output. In this instance, the "nginx-sandbox" Pod has the IP address "10.85.0.3":

sudo crictl inspectp --output table 7b0618800e251
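If you want the Pod IP in a variable for later use (for example, for the curl test at the end), you can parse it out of the JSON output. A sketch follows; the `extract_pod_ip` helper is my own, it assumes the `"ip": "x.x.x.x"` field layout of inspectp's JSON output, and it's demoed against a canned fragment so the snippet is self-contained:

```shell
# extract_pod_ip: pull the first "ip": "x.x.x.x" value out of inspectp JSON.
extract_pod_ip() {
  sed -n 's/.*"ip": *"\([0-9.]*\)".*/\1/p' | head -n1
}

# On the server: POD_IP=$(sudo crictl inspectp --output json <pod_id> | extract_pod_ip)
# Demo against a canned fragment:
printf '{"status":{"network":{"ip": "10.85.0.3"}}}\n' | extract_pod_ip
# prints: 10.85.0.3
```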

With the Pod sandbox in place, I'll proceed to create the Nginx container. First, I'll pull the Nginx image and verify that it's available:

sudo crictl pull nginx

sudo crictl images

Then I'll define the container in a new JSON configuration file:

cat <<EOF | tee ~/demo/container_nginx.json
{
  "metadata": {
      "name": "nginx"
    },
  "image":{
      "image": "nginx"
    },
  "log_path":"nginx.0.log",
  "linux": {}
}
EOF

To add the container to the Pod sandbox, I'll run "crictl create" with my Pod ID, then start the container using the container ID that "crictl create" prints:

sudo crictl create 7b0618800e251 ~/demo/container_nginx.json ~/demo/sandbox_nginx.json

sudo crictl start <container_id>
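Hardcoded IDs are easy to get wrong, so the whole flow can be scripted by capturing each ID from the previous command's output. A sketch of this idea (the `run_nginx_pod` function is my own wrapper, not part of crictl; the snippet only defines it, and you would call it on a server where crictl and the JSON files from this guide exist):

```shell
# run_nginx_pod: create the sandbox, pull the image, then create and start
# the container, capturing each ID instead of copy-pasting it.
run_nginx_pod() {
  pod_id=$(sudo crictl runp ~/demo/sandbox_nginx.json) || return 1
  sudo crictl pull nginx || return 1
  ctr_id=$(sudo crictl create "$pod_id" ~/demo/container_nginx.json ~/demo/sandbox_nginx.json) || return 1
  sudo crictl start "$ctr_id" > /dev/null || return 1
  printf 'pod=%s container=%s\n' "$pod_id" "$ctr_id"
}
```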

Verifying the running container status will confirm the Nginx container is “Running” inside the specified Pod:

sudo crictl ps

Finally, I can access the Nginx container via the IP address of the "nginx-sandbox" Pod using the curl command:

curl 10.85.0.3

This interactive process allows me to dynamically create and manage Pods and containers using the powerful “crictl” command.

Verify CRI-O Installation

I've successfully installed cri-tools to manage Pods and containers on my system. To ensure that the CRI-O installation is correct, I'll follow these steps:

Firstly, I’ll execute the command to check the CRI-O version:

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

The output should provide details like Version, RuntimeName, RuntimeVersion, and RuntimeApiVersion. For instance:

  • Version: 0.1.0
  • RuntimeName: cri-o
  • RuntimeVersion: 1.24.x (matching the installed CRI-O version)
  • RuntimeApiVersion: v1alpha2

Now, I’ll verify if CRI-O is ready for deploying Pods and containers by running:

sudo crictl info

The output shows the runtime status conditions; ensure that both "RuntimeReady" and "NetworkReady" are marked as true.

This indicates that CRI-O is ready to manage my containerized workloads.

Conclusion: How To Install CRI-O on Ubuntu For Kubernetes

Hopefully, this guide on how to install CRI-O on Ubuntu for Kubernetes has shown you how to complete the installation and configuration of the CRI-O container runtime with the CNI plugin on an Ubuntu 22.04 server.

This sets the stage for using CRI-O as the container runtime for my Kubernetes cluster, where it handles container management and lets me check the cluster's health status.

Throughout the process, I’ve gained familiarity with the fundamental usage of the “crictl” command, enabling me to effortlessly create Pods and containers within the CRI-O Container Runtime. 

Along the way, I've also improved my Python scripting for DevOps, which helps me automate tasks and keep the infrastructure running smoothly.

With this setup, I’m well-prepared to leverage the capabilities of Kubernetes for efficient container orchestration in my environment.

Furthermore, I've spent a lot of time learning how to build Docker images, which is crucial for working smoothly with Kubernetes and organizing containers in my setup.

Ben Kelly

Ben Kelly is a hands-on cloud solution architect based in Florida with over 10 years of IT experience. He uses Ansible Technology to help innovative Automation DevOps, Cloud Engineers, System Administrators, and IT Professionals succeed in automating more tasks every day.
