Chainguard OS on Raspberry Pi

Learning Lab for November 2025 on the new release of Chainguard OS for the Raspberry Pi. Learn how to get started!

The November 2025 Learning Lab with Erika Heidi covers the release of Chainguard OS for the Raspberry Pi, showing how Chainguard OS has evolved to power new environments.

Sections

  • 0:46 Presentation Starts
  • 2:55 How We Got Here: Wolfi and Chainguard OS
  • 5:47 Presenting Chainguard OS for the Raspberry Pi
  • 8:42 How to Set Up your Raspberry Pi with Chainguard OS
  • 11:23 Grype scan on the Raspberry images
  • 14:47 Demo Overview: Guardcraft Pi
  • 18:10 Grype scan on the Guardcraft image
  • 20:59 Live Demo: Minecraft server on the Raspberry Pi
  • 24:14 Demo Overview: Open Source LLM server
  • 30:51 Grype scan on the wolfi-llama image
  • 32:35 Live Demo: Open Source Llama.cpp Server
  • 35:03 Live image description with Qwen3-VL and Llama.cpp
  • 39:18 Live Demo: ALT description CLI tool using Llama.cpp server API
  • 41:24 What’s Next
  • 42:22 New Compliance Features for Chainguard VMs
  • 44:20 Chainguard VMs Roadmap
  • 45:40 Announcing the next Learning Labs

Quickstart

To get started with Chainguard OS on the Raspberry Pi, and to be able to run the demos in this presentation, you’ll need:

  • A Raspberry Pi 5
  • Power source for the Raspberry Pi 5
  • MicroSD card (and reader)
  • Ethernet connection
  • Micro-HDMI to HDMI cable (to connect your Pi to a display)
  • USB keyboard

You also need to download the Chainguard Raspberry Pi Docker image by filling in the request form.

Creating the startup disk

Unpack the image contents:

gunzip rpi-generic-docker-arm64-*.raw.gz

Write the image to the disk. This assumes your microSD card shows up as /dev/sda; double-check the device name first, since dd will overwrite whatever it points to:

sudo dd if=rpi-generic-docker-arm64-*.raw of=/dev/sda bs=1M
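Because writing to the wrong device is destructive, it can help to understand the copy step before pointing dd at real hardware. Here is a safe sketch of the same dd form, run against throwaway temp files standing in for the image and /dev/sda, with a cmp check confirming the copy is byte-for-byte identical:

```shell
# Safe sketch: same dd invocation shape as above, but the source and
# target are temp files standing in for the .raw image and /dev/sda.
IMG=$(mktemp)                                # stands in for the unpacked .raw image
DEV=$(mktemp)                                # stands in for /dev/sda
dd if=/dev/urandom of="$IMG" bs=1M count=4 2>/dev/null   # fake 4 MiB "image"
dd if="$IMG" of="$DEV" bs=1M 2>/dev/null                 # the actual copy step
cmp -s "$IMG" "$DEV" && echo "write verified"            # byte-for-byte check
```

On the real device, run sync after dd finishes so all buffers are flushed before you remove the card.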

After the disk is ready, insert the card into the Pi and connect the board to the power source, Ethernet cable, micro-HDMI cable, and keyboard. You can log in with user linky and password linky.

Then, run ip addr to find the Pi's local network IP address so you can connect to it via SSH from another computer.

Demo 1: Guardcraft Minecraft Server on Raspberry Pi

In the first demo, Erika demonstrates how to build a minimal Minecraft Java server with Chainguard Containers and run it on a Raspberry Pi with Chainguard OS.

From the Raspberry Pi, clone the Guardcraft repository:

git clone https://github.com/chainguard-demo/guardcraft-server.git && cd guardcraft-server

Build the image:

docker build . -t guardcraft-server

Then, run the Minecraft server with:

docker-compose up

This starts a Minecraft Java server using default settings configured via environment variables in the docker-compose.yaml file. You can connect from any compatible client on your local network.
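For context, a compose file for a setup like this typically maps the Minecraft port and passes configuration through environment variables. The sketch below is illustrative only; the service and variable names are assumptions, and the actual docker-compose.yaml in the guardcraft-server repository is the source of truth:

```yaml
# Illustrative sketch -- the service and variable names here are
# assumptions, not the actual guardcraft-server compose file.
services:
  minecraft:
    image: guardcraft-server
    ports:
      - "25565:25565"   # default Minecraft Java Edition port
    environment:
      - EULA=true       # hypothetical variable: accept the Minecraft EULA
```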

Demo 2: Llama.cpp LLM Server on Raspberry Pi

In the second demo, Erika shows how to build a Llama.cpp container image using Chainguard Containers, and how to run the Llama.cpp server with vision-capable LLMs to generate rich ALT image descriptions on the Raspberry Pi with Chainguard OS.

Step 1: Building the wolfi-llama image

From the Raspberry Pi, clone the Wolfi-llama repository:

git clone https://github.com/erikaheidi/wolfi-llama.git && cd wolfi-llama

Next, run the command to build the wolfi-llama container image. This step compiles Llama.cpp from source, which may take several minutes to complete.

docker build . -t wolfi-llama

Step 2: Downloading LLMs into the Pi

For this demo, we’re using Qwen3-VL, an open source LLM with vision capabilities. We picked the 2B-Instruct version because it runs well on the Raspberry Pi.

Access the models directory in the repository. This is where the model files should be stored so they can be shared with the container while the server is running:

cd models/

Download the LLM model from Hugging Face. Note the -o flag: with -O, curl would name the file after everything following the last slash in the URL, including the ?download=true query string:

curl -L -o Qwen3-VL-2B-Instruct-Q8_0.gguf "https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-GGUF/resolve/main/Qwen3-VL-2B-Instruct-Q8_0.gguf?download=true"

Next, download the mmproj (multimodal projector) file for that model, which is required for the model’s vision features:

curl -L -o mmproj-F32.gguf "https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-GGUF/resolve/main/mmproj-F32.gguf?download=true"

When both downloads are complete, you can run the server.
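Before starting the server, it’s worth checking that both GGUF files exist and are non-empty inside models/. A minimal sketch of that check, demonstrated here in a temporary directory with a placeholder file (run the two check lines from your real models/ directory instead):

```shell
# Sketch: confirm each model file is present and non-empty ([ -s ]).
# Demonstrated in a temp dir with a tiny placeholder standing in for
# the real multi-gigabyte model download.
cd "$(mktemp -d)"
printf 'placeholder' > Qwen3-VL-2B-Instruct-Q8_0.gguf
check() { [ -s "$1" ] && echo "OK: $1" || echo "missing or empty: $1"; }
check Qwen3-VL-2B-Instruct-Q8_0.gguf
check mmproj-F32.gguf
```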

Step 3: Running the Llama server

The docker-compose.yaml file includes a custom command directive with all the options required to run the server using the models you just downloaded. These values are hardcoded so you don’t need to type a long docker run command every time you want to get the server up and running. For reference, here is the equivalent command that docker-compose runs for you:

docker run --rm --device /dev/dri/card1 --device /dev/dri/renderD128 \
 -v ${PWD}/models:/models -p 8000:8000 wolfi-llama:latest --no-mmap --no-warmup \
 -m /models/Qwen3-VL-2B-Instruct-Q8_0.gguf --mmproj /models/mmproj-F32.gguf \
 --port 8000 --host 0.0.0.0 -n 512 \
 --temp 0.7 \
 --top-p 0.8 \
 --top-k 20 \
 --presence-penalty 1.5

To start the server, run:

docker-compose up

When the server is up and running, you can access the chatbot interface by pointing your browser to the Raspberry Pi’s IP address on your local network, on port 8000.
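The web UI is not the only way in: llama.cpp’s server also exposes an OpenAI-compatible HTTP API, which is what the ALT description CLI tool in the demo builds on. A minimal sketch, where PI_ADDRESS is a placeholder you would set to your Pi’s IP address (it falls back to localhost here, so without a running server the request simply reports failure):

```shell
# Sketch: query the llama.cpp server's OpenAI-compatible chat endpoint.
# PI_ADDRESS is a placeholder for the Raspberry Pi's IP address.
RESPONSE=$(curl -s "http://${PI_ADDRESS:-127.0.0.1}:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello in one sentence."}]}' \
  || echo "server not reachable")
echo "$RESPONSE"
```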


Last updated: 2025-11-21 12:30