How to Create a Camelot-OS Project

A practical step-by-step guide for embedded developers

Camelot-OS provides a secure, microkernel-based operating system foundation for embedded devices. To build your own project with Camelot-OS, the recommended starting point is the sample-project template, which illustrates how to define and assemble a complete build including the kernel, runtime, and applications.

This guide shows you how to bootstrap a new Camelot-OS project, configure it, download dependencies, build your firmware, and customize your application.

Quickstart

Get and build the project

Start by downloading the sample project and entering its directory:

git clone https://github.com/camelot-os/sample-project
cd sample-project

Then pull the Docker image that holds all the dependencies required to build the firmware:

docker pull ghcr.io/camelot-os/camelot-builder:latest

Now you can build the firmware using the docker image directly:

docker run -it \
    -v "$PWD":/workspace \
    -w /workspace \
    ghcr.io/camelot-os/camelot-builder:latest \
    /bin/bash -c 'barbican download && barbican setup && ninja -C output/build || ninja -C output/build'

The firmware is then generated in the output/build/firmware.hex file.

If you wish to inspect the memory layout used by the firmware, see the output/build/camelot_private/layout.json file.
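The exact schema of layout.json is not covered here, but since it is plain JSON you can pretty-print it for inspection. A minimal Python sketch, using a purely hypothetical excerpt in place of the real file:

```python
import json

# Hypothetical excerpt; the real file lives at output/build/camelot_private/layout.json
# and its actual schema may differ.
raw = '{"tasks_code": {"base": "0x0800d000"}, "tasks_ram": {"base": "0x20008000"}}'

layout = json.loads(raw)
print(json.dumps(layout, indent=2, sort_keys=True))
```

On a real build, point json.loads (or json.load with an open file) at the generated layout.json instead of the inline string.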

Booting the firmware

Now that the project is built, you can directly use the firmware.hex file that has been generated in the output/build directory to flash your device.

You can use your favorite OCD tool, but here is an example flashing method using pyocd:

$ pip install pyocd
$ pyocd pack update
[...]
Downloading descriptors (001/001)
$ pyocd list
  #   Probe/Board       Unique ID                  Target            
---------------------------------------------------------------------
  0   STLINK-V3         004300483232510239353236   ✖︎ stm32u5a5zjtxq  
      NUCLEO-U5A5ZJ-Q
$ pyocd pack install stm32u5a5zjtxq
Downloading packs (press Control-C to cancel):
    Keil.STM32U5xx_DFP.3.0.0

Now that pyocd has found the board and its ID, it can be started as a GDB server agent, which listens locally on tcp/3333:

pyocd gdbserver

Once started, any Arm-compatible GDB, such as the gdb-multiarch package shipped by most distributions, can be used from the project root directory:

gdb-multiarch

In gdb, the following command sequence can be used:

set arch arm
target remote localhost:3333
monitor reset halt
exec-file output/build/firmware.hex
load

Once the serial client is connected, usually on ttyACM0 when using a Nucleo board, the continue command can be launched so that the board boots. The firmware serial output is then accessible using the usual 115200/8N1 serial port configuration. Note that the firmware uses Unix line endings, meaning that the serial client has to emulate the carriage return; this is a standard setting in most serial clients.

The expected output is the following:

hello this is idle!
yielding for scheduler...
Sending message: Short msg
IPC received, calculating SHA256
SHA256: 0x7c3f950bfee666ebc2db4fb114a0b2b3fbada2310c99396a1a2a276df0c946ef
Sending message: This is a bit longer message
IPC received, calculating SHA256
SHA256: 0xad6400bb3ae05c354263ddd15535645cce1979d037180d5dce08f9fe641d9716
Sending message: yet another message to send via IPC
IPC received, calculating SHA256
SHA256: 0xaeb53aa119c5c4297f0d0bf3d60b23b2849c1b0064336ffb0522c45433f7ff28
Sending message: Tiny
IPC received, calculating SHA256
SHA256: 0x63b9415f88cad4d6cc7bbc8cb74652e598b5e9376623fa313cde5bb2d48716ac
Sending message: Medium length message for testing
IPC received, calculating SHA256
SHA256: 0x44c5788ec1b8b47e02ba0dc6a542efb5e35c64611db7d2f48ec6d9d5d40c1750
Sending message: another basic medium message to send
IPC received, calculating SHA256
SHA256: 0x40c50b0b73225288da676ad70f601f3d2a45ba148c57c425aa866976343b1c41
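
The hashes above can be reproduced on the host for a quick sanity check. Assuming the receiving task hashes the raw ASCII bytes of each message exactly as printed (an assumption based on the log, not a documented guarantee), a minimal Python sketch looks like:

```python
import hashlib

# Messages as printed by the demo firmware's serial output
messages = [
    "Short msg",
    "This is a bit longer message",
    "yet another message to send via IPC",
]

for msg in messages:
    # Hash the raw ASCII bytes, matching what the receiving task is assumed to do
    digest = hashlib.sha256(msg.encode("ascii")).hexdigest()
    print(f"SHA256: 0x{digest}")
```

If the printed digests match the firmware output, the IPC payloads arrived unmodified.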

Build without Docker

Building without Docker requires installing all build-time dependencies yourself.

System Packages

These are installed via your distribution's package manager before the Python/Rust tools. The package names below are for Debian and Ubuntu; they may vary depending on your distribution:

  • git — needed to clone project repositories

  • ca-certificates — for verifying HTTPS downloads

  • python3, python3-pip, python3-venv, python3-setuptools, python3-wheel — required for Python environments and Barbican

  • device-tree-compiler — used to compile device tree sources

  • g++, gcc — required by native components of some Python packages or parts of build

  • curl — used to fetch archives and toolchains

  • dirmngr, xz-utils, pkg-config — utilities for signature checking, extracting arm toolchain, and pkg-config probing

  • srecord — installed to support embedded image creation (common in ARM builds)

These toolchain packages provide the basics so that:

  • Python and Barbican can run

  • native code and build files can be compiled

  • sources can be fetched and extracted

Python Packages

Barbican is a Python tool and relies on several dependencies. Install it and its companions via:

python3 -m pip install --break-system-packages --no-cache-dir \
    meson==1.10.0 \
    dunamai \
    camelot-barbican

This brings in:

  • meson (1.10.0) — The build system generator used by Barbican-generated builds

  • dunamai — Version tagging and semantic version utilities

  • camelot-barbican — The main tool for orchestrating project build, download, setup and integration

Behind the scenes, camelot-barbican itself pulls in Python modules such as:

  • Jinja2 for templates

  • ninja for executing build jobs

  • jsonschema and other helpers

  • GitPython for Git operations

  • device tree and DTS tooling
    (see the PyPI page for dependent packages)

ARM GNU Toolchain

You also need to install the Arm GNU Toolchain and make it available under /opt/arm/arm-none-eabi. It is used to compile ARMv8-M code. The toolchain is distributed as a downloaded .tar.xz archive that should be verified against its SHA256 checksum before extraction.

You can download the GNU toolchain from https://developer.arm.com/downloads/-/arm-gnu-toolchain-downloads.
Older versions of the toolchain also work (tested starting with v10). If you install the toolchain in a path other than /opt/arm/arm-none-eabi, you need to fix cm33-arm-none-eabi-gcc.ini accordingly by updating the following line:

cross_toolchain = '/opt/arm/arm-none-eabi/'
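
If you script your environment setup, this cross-file update can be automated. The sketch below is illustrative only: the file name and the cross_toolchain key come from above, while the surrounding file is assumed to follow Meson's INI-style cross-file format:

```python
import re

def set_cross_toolchain(cross_file_text: str, new_path: str) -> str:
    """Rewrite the cross_toolchain entry of a Meson-style cross file."""
    return re.sub(
        r"^cross_toolchain\s*=\s*'.*'$",
        f"cross_toolchain = '{new_path}'",
        cross_file_text,
        flags=re.MULTILINE,
    )

# Example with a minimal fragment of the cross file
fragment = "cross_toolchain = '/opt/arm/arm-none-eabi/'\n"
print(set_cross_toolchain(fragment, "/usr/local/arm/arm-none-eabi/"))
```

In practice you would read the real cm33-arm-none-eabi-gcc.ini, apply the function, and write it back.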

Required Rust Tooling

Camelot-OS supports Rust-based userspace tasks and delivers a runtime in both C and Rust. As a consequence, in order to compile a Camelot-OS project, you must provide a fully functional embedded Rust toolchain with cross-compilation support.

The tested Rust stable release is 1.86.0, including both rustfmt and clippy. You also need the thumbv8m.main-none-eabi target to be installed.
Finally, you need cargo-index, which delivers a standalone Cargo registry used by embedded Rust applications so that they can consume the Camelot-OS crates, starting with the kernel userspace API.

First, install the corresponding Rust release and target using the rustup tool:

curl https://sh.rustup.rs -sSf | sh -s -- -y \
    --default-toolchain 1.86.0 \
    --profile minimal \
    && rustup default 1.86.0 \
    && rustup target add thumbv8m.main-none-eabi

Then install clippy and rustfmt:

rustup component add clippy --toolchain 1.86.0
rustup component add rustfmt --toolchain 1.86.0

Finally, install cargo-index:

cargo +1.86.0 install cargo-index

You're done! You can now execute the very same commands as those run inside the Docker container.

More about projects

Inside the project root directory, you place the main configuration that describes your kernel, runtime, and applications. The following directories exist and are sufficient for the sample project build:

.
├── configs
│   ├── hello
│   │   └── hello.config
│   ├── hello_c
│   │   └── hello_c.config
│   ├── sentry
│   │   └── nucleo_u5a5.config
│   └── shield
│       └── shield.config
├── dts
│   └── sample.dts
├── project.toml
└── README.md

In a Camelot-OS project, the following elements are required and are under the project integrator's responsibility:

  • For each component (application, kernel, runtime, ...), its Kconfig configuration, i.e. the .config file resulting from the defconfig command. Each config file needs to be generated inside the corresponding component source tree before being stored in the project directory, using either the defconfig or menuconfig tool for automatic or manual configuration. Once installed, you only need to modify these configurations if you upgrade the corresponding component or want to update its configuration
  • A project-wide device tree that matches the target hardware configuration. This file defines how the project maps to the target board and, as such, is under the responsibility of the project integrator
  • The project configuration file, project.toml, which declares each component and how it is associated with the above configurations

Note that in the project device tree file, you need to declare where you want the applications to be mapped. You only need to declare a large-enough memory area; the kernel memory placement tool will automatically position each declared application into that area in compliance with the memory protection unit constraints of your target architecture.

Memory zones are reserved memories that need to be tagged with the sentry,memory-pool compatible string, and are named tasks_code for the NVM memory area and tasks_ram for the SRAM memory area.

Such a memory area declaration, as you can see in the sample.dts file, looks like this on typical STM32-based target boards:

    reserved-memory {
        /* NVM memory area dedicated to application code mapping */
        tasks_code: memory@0800d000 {
            reg = <0x0800d000 0x200000>;
            compatible = "sentry,memory-pool";
        };
        /* SRAM memory area dedicated to application runtime memory (stack, bss, etc.) */
        tasks_ram: memory@20008000 {
            reg = <0x20008000 0x280000>;
            compatible = "sentry,memory-pool";
        };
    };
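
The automatic placement described above can be approximated on the host. The sketch below is illustrative only (the real placement is done by the kernel build tooling, and the application sizes are hypothetical); it assumes the ARMv8-M (PMSAv8) MPU's 32-byte region alignment granularity:

```python
def align_up(addr: int, align: int = 32) -> int:
    """Round addr up to the next multiple of align (PMSAv8 granularity)."""
    return (addr + align - 1) & ~(align - 1)

def place(pool_base: int, pool_size: int, app_sizes: dict) -> dict:
    """Greedily place each application in the pool, 32-byte aligned."""
    cursor = pool_base
    layout = {}
    for name, size in app_sizes.items():
        start = align_up(cursor)
        end = start + size
        assert end <= pool_base + pool_size, f"{name} overflows the pool"
        layout[name] = (start, size)
        cursor = end
    return layout

# Hypothetical application footprints, placed in the tasks_code pool above
layout = place(0x0800d000, 0x200000, {"hello.elf": 0x5F20, "sample-c-app.elf": 0x3A10})
for name, (start, size) in layout.items():
    print(f"{name}: 0x{start:08x} (+0x{size:x})")
```

The takeaway is that the pool only needs to be large enough for the aligned sum of the application footprints; individual placement is not your concern.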

About project.toml

At the core of a Camelot-OS project is a project.toml file that declares what the project contains — including kernel sources, a device tree, and application definitions.

Here’s an example configuration:

 
name = 'Dynamics demo Project'
license = 'Apache-2.0'
license_file = ['LICENSE.txt']
dts = 'dts/sample.dts'
crossfile = 'cm33-none-eabi-gcc.ini'
version = 'v0.0.1'

[kernel]
scm.git.uri = 'https://github.com/camelot-os/sentry-kernel.git'
scm.git.revision = 'main'
config = 'configs/sentry/nucleo_u5a5.config'

[runtime]
scm.git.uri = 'https://github.com/camelot-os/shield.git'
scm.git.revision = 'main'
config = 'configs/shield/shield.config'

[application.hello]
scm.git.uri = 'https://github.com/camelot-os/sample-rust-app.git'
scm.git.revision = 'main'
config = 'configs/hello/hello.config'
build.backend = 'cargo'
depends = []
provides = ['hello.elf']

[application.sample_c_app]
scm.git.uri = 'https://github.com/camelot-os/sample-c-app.git'
scm.git.revision = 'main'
config = 'configs/hello_c/hello_c.config'
build.backend = 'meson'
depends = []
provides = ['sample-c-app.elf']

This file describes:

  • the kernel repository and configuration to use

  • the runtime (system services) sources

  • two simple hello-world applications that demonstrate a minimal usage of Camelot-OS

Each component is declared with a Git URL, revision, and build configuration.

A little more about the barbican commands

Download Sources

Once the project is cloned, use Barbican to fetch all required sources into your project:

barbican download 

This command will:

  • clone the kernel repo

  • fetch the runtime sources

  • pull down application code

Now you have a local workspace ready to build.

All component sources are deployed into a structured output directory, which can be easily removed if you need to reset the project.

Setup the Build

Based on the downloaded components, Barbican can generate all the build infrastructure:

barbican setup 

This processes your project.toml and creates Meson build definitions for the kernel, runtime, and applications. It ensures that build rules, device tree integration, and configuration files are all wired together.

This step generates a build.ninja file in the output/build directory.

Compile the Project

Compiling the project is as easy as calling ninja once:

ninja -C output/build

This will compile:

  • the kernel (Sentry microkernel)

  • the Shield runtime component (the equivalent of GNU/Linux's libc, for both C and Rust runtimes)

  • your application(s)

Artefacts such as hello.elf or firmware images will be placed under the build output folder and can be used, if needed, as symbol files to debug each project component using, for example, gdb. A global firmware file named firmware.hex is also produced.

Caution: If you get errors when running ninja where rustc says it can't find the thumbv8m.main target, just rerun ninja; rustc will automatically pick up the proper target.