Booting up your Freestanding Rust Kernel

This blog post is about writing a basic freestanding operating system in Rust. I chose Rust because it offers memory safety guarantees while still giving the low-level control needed for OS development.

You can find my own take on creating a kernel in this GitHub repository. I also created a separate repository made specifically for this blog.

Disclaimer: While I’m passionate about OS development, I’m still learning myself. This blog began as a project for my Computer Science master’s degree, but I enjoyed it so much that I expanded it considerably.

Introduction: Understanding the Basics

Before diving into code, let’s align on some key concepts:

An operating system serves as the interface between computer hardware and applications. It manages resources and provides services that programs rely on to function.

When we say freestanding, we mean our code doesn’t depend on any underlying OS features or libraries. We’re building something that runs on “bare metal” - directly on the hardware.

For this tutorial, we’ll target the x86 (i386) architecture. While x86_64 is more modern, x86 is simpler for beginners and has more accessible documentation. The principles we’ll learn apply to both architectures, but x86 involves fewer configuration challenges.

By the end of this tutorial series, you’ll have created a basic kernel that:

  • Boots successfully on actual hardware
  • Executes Rust code in a freestanding environment
  • Provides the foundation for building more complex OS features

Let’s begin!

Initiate Sequence

First, we will create a directory that will hold the entire OS.

Here’s the directory structure we’ll use:

.
├── .cargo
│   └── config.toml     # Cargo configuration for cross-compilation
├── Makefile            # Simplifies building and running our kernel
├── rustfmt.toml        # Ensures consistent code formatting
├── rust-toolchain.toml # Specifies our Rust version
├── shell.nix           # Creates a reproducible development environment
└── src
    ├── arch            # Architecture-specific code
    │   └── x86
    │       ├── boot.asm       # Assembly boot code
    │       ├── x86.json       # Target specification for x86
    │       └── x86.ld         # Linker script for memory layout
    └── kernel          # Rust kernel implementation
        ├── Cargo.toml        # Rust package configuration
        ├── grub.cfg          # Bootloader configuration
        └── src
            ├── bin.rs         # Kernel entry point
            ├── build.rs       # Pre-compilation tasks
            └── kernel.rs      # Core implementation

Each file serves a specific purpose in our kernel development workflow:

  • The configuration files (.cargo/config.toml, rustfmt.toml, etc.) set up our development environment
  • The architecture-specific code (src/arch/x86/) handles hardware interaction
  • The kernel code (src/kernel/) contains our Rust implementation

Let’s start by creating these directories and files one by one.

Development Environment: Nix Shell

nix-shell provides a reproducible development environment. This ensures that no matter which computer you use, you get the same setup as on any other machine. nix-shell also makes cross-compiling easy: declare the cross toolchain once & it works like a charm. It is a great rabbit hole to get into.

If you haven’t already, you’ll need to install Nix on your system.

Let’s create our shell.nix file:

{
  pkgs ? import <nixpkgs> { },
}:
let
  crossToolchain = pkgs.pkgsCross.i686-embedded.buildPackages.gcc;
in
pkgs.mkShell {
  buildInputs = with pkgs; [
    crossToolchain  # Cross-compiler for i686 (32-bit x86) embedded targets
    nasm            # Netwide Assembler for x86 architecture assembly
    gdb             # GNU Debugger for stepping through code and finding bugs
    gnumake         # Make build automation tool for compiling the project
    binutils        # Essential tools for manipulating binary files (ld, as, etc.)
    xorriso         # Tool for creating ISO files for booting your kernel
    pkg-config      # Helper tool for compiling code with correct library flags
    rustup          # Rust toolchain installer and version manager
    clippy          # Rust linter for catching common mistakes and anti-patterns
    grub2           # Bootloader for creating bootable disk images
  ];
  shellHook = ''
    echo "Development environment ready!"
  '';
}

Now, run:

nix-shell shell.nix

The first time you run this, it will take a while as Nix downloads and builds the necessary tools, particularly the cross-compiler. Get a cup of tea or coffee in the meantime.

Note: The terminal prompt will change to indicate you’re in the Nix shell. All commands from this point forward should be run inside this environment.

Cargo Configuration: Preparing for Bare Metal

Standard Rust applications assume they’ll run on an existing operating system with access to the standard library. For our kernel, we need to configure Cargo to build for a freestanding environment.

First, create the .cargo directory and add a config.toml file:

mkdir -p .cargo
touch .cargo/config.toml

Now, add the following configuration:

[unstable]
build-std = ["core"]

[build]
target="src/arch/x86/x86.json"

Let’s understand what this does:

  • build-std = ["core"] tells Cargo to compile Rust’s core library from source. The core library provides fundamental types and functions without assuming an operating system.

  • target="src/arch/x86/x86.json" points to our custom target specification (which we’ll create shortly). This tells the Rust compiler how to build for our specific bare-metal environment.

This minimal configuration is sufficient to start building our freestanding kernel.
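
To get a feel for what core alone gives us, here is a minimal sketch (purely illustrative, compiled as a library crate; not part of our kernel) of code that works in a freestanding environment:

#![no_std]

use core::cmp::min;

/// Works without an operating system: `core` provides fundamental types,
/// traits and functions, but no heap, threads, files or println!.
pub fn smaller(a: usize, b: usize) -> usize {
    min(a, b)
}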

Booting the Kernel

When a computer starts, it goes through several stages before our kernel code runs:

  1. The BIOS/UEFI initializes hardware and looks for bootable devices
  2. The bootloader (GRUB in our case) loads our kernel into memory
  3. Our assembly code sets up the initial environment
  4. Finally, we transfer control to our Rust code

Let’s implement each stage of this process.

Target Specification: Defining Our Platform

We need to create a target specification file that tells Rust how to compile for our bare-metal environment.

Create the file src/arch/x86/x86.json with the following content:

{
  "arch": "x86",
  "data-layout": "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128",
  "disable-redzone": true,
  "executables": true,
  "features": "-mmx,-sse,-sse2,-soft-float",
  "linker": "i686-elf-ld",
  "linker-flavor": "ld",
  "llvm-target": "i686-unknown-none",
  "os": "none",
  "panic-strategy": "abort",
  "target-c-int-width": "32",
  "target-endian": "little",
  "target-pointer-width": "32"
}

The key settings here are:

  • disable-redzone: true - The “red zone” is a stack optimization that’s unsafe in kernel code because interrupts can corrupt it
  • panic-strategy: abort - Simply halt execution on panic (we don’t have unwinding support)
  • os: "none" - We’re not building for any existing OS
  • features: "-mmx,-sse,-sse2,-soft-float" - Disables SIMD instructions we don’t need initially

Why We Need a Custom Target

Rust supports many target platforms out of the box, but none exactly match our needs for kernel development. By creating a custom target specification, we can:

  • Disable features we don’t need (like SIMD instructions)
  • Enable features we do need (like certain memory models)
  • Configure how the compiler generates code for our specific environment
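
As a quick aside, once the custom target is in use, we can gate code on the architecture it declares ("arch": "x86" becomes target_arch = "x86" at compile time). A small sketch, not needed for this post (the arch_name function is made up):

#[cfg(target_arch = "x86")]
pub fn arch_name() -> &'static str {
    "x86 (i386)"
}

#[cfg(not(target_arch = "x86"))]
pub fn arch_name() -> &'static str {
    "some other architecture"
}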

Assembly Boot Code: The First Steps

Now, let’s create the assembly code that runs when our kernel first loads. Create a file named boot.asm in the src/arch/x86/ directory. Here we define the multiboot header, which is how GRUB finds and identifies our kernel. Once GRUB has loaded the kernel, execution starts at our _start routine, which sets things up so we can boot properly. This is where we call our main Rust function, which then takes over the hardware.

; Multiboot header constants
MBALIGN  equ 1 << 0             ; Align loaded modules on page boundaries
MEMINFO  equ 1 << 1             ; Provide memory map
MBFLAGS  equ MBALIGN | MEMINFO  ; Combine our flags
MAGIC    equ 0x1BADB002         ; Magic number lets bootloader find the header
CHECKSUM equ -(MAGIC + MBFLAGS) ; Checksum required by multiboot standard

; First section: Multiboot header
section .multiboot
align 4        ; Header must be 4-byte aligned
dd MAGIC       ; Write the magic number
dd MBFLAGS     ; Write the flags
dd CHECKSUM    ; Write the checksum

; ----------------------------------------------

; Second section: Stack setup
section .bss
align 16       ; Ensure proper alignment for the stack

stack_bottom:
    resb 16384 ; Reserve 16KB for our stack

stack_top:

; ----------------------------------------------

section .text
global _start:function

_start:
    mov esp, stack_top

    ; Call kernel
    extern kernel_main
    call kernel_main

    cli ; Disable interrupts

.hang:
    hlt ; Halt the CPU
    jmp .hang

.end:
global _start.end

I won’t go into too much detail on how it works exactly; if you would like more information, check out the OSDev wiki, which has a great tutorial on how to boot up your kernel.
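
If you are curious why the CHECKSUM line looks the way it does: the Multiboot specification requires that magic, flags & checksum add up to zero modulo 2^32. Here is a tiny hosted Rust program (purely illustrative, not kernel code) that checks that relationship:

// Mirrors the constants from boot.asm; runs on your normal host toolchain.
const MAGIC: u32 = 0x1BAD_B002;
const MBALIGN: u32 = 1 << 0;
const MEMINFO: u32 = 1 << 1;
const MBFLAGS: u32 = MBALIGN | MEMINFO;
const CHECKSUM: u32 = 0u32.wrapping_sub(MAGIC.wrapping_add(MBFLAGS));

fn main() {
    // The bootloader requires magic + flags + checksum == 0 (mod 2^32).
    assert_eq!(MAGIC.wrapping_add(MBFLAGS).wrapping_add(CHECKSUM), 0);
    println!("checksum = {:#010x}", CHECKSUM);
}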

Linker Script: Memory Layout

In the same directory, src/arch/x86/, we will create a file named x86.ld. This is our custom linker script: it tells the linker where each section of our code should be placed in the binary. We want the multiboot header at the very beginning.

ENTRY(_start)

SECTIONS
  {
  /* Start at 2MB */
  . = 2M;

  /* Text section with multiboot header */
  .text BLOCK(4K) : ALIGN(4K)
    {
    KEEP(*(.multiboot))
    *(.text)
  }

  /* Read-only data. */
  .rodata BLOCK(4K) : ALIGN(4K)
    {
    *(.rodata)
  }

  /* Read-write data (initialized) */
  .data BLOCK(4K) : ALIGN(4K)
    {
    *(.data)
  }

  /* Read-write data (uninitialized) and stack */
  .bss BLOCK(4K) : ALIGN(4K)
    {
    *(COMMON)
    *(.bss)
  }
}

The OSDev wiki link I mentioned earlier also has a good explanation of how the linker script works.

Build Script: Compiling Assembly

build.rs is a script that cargo runs before compiling the Rust code. We need it because we have separate assembly files that must be compiled.

use std::{
    env,
    path::Path,
    process::{exit, Command},
};

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap_or_else(|e| {
        eprintln!("{}", e);
        exit(1);
    });

    let path = "../arch/x86/boot.asm";
    let file_stem = Path::new(path)
        .file_stem()
        .and_then(|s| s.to_str())
        .expect("boot.asm has a valid file name");
    let output = format!("{}/{}.o", out_dir, file_stem);
    println!("cargo:warning=Compiling {}", path);

    let status = Command::new("nasm")
        .args(["-f", "elf32", path, "-o", output.as_str()])
        .status()
        .expect("Could not compile NASM correctly");

    if !status.success() {
        eprintln!("NASM compilation failed for {}", path);
        exit(1);
    }

    println!("cargo:rustc-link-arg={}", output);
    println!("cargo:rustc-link-search={}", out_dir);

    println!("cargo:rustc-link-arg=-m");
    println!("cargo:rustc-link-arg=elf_i386");
    println!("cargo:rustc-link-arg=--no-dynamic-linker");
    println!("cargo:rustc-link-arg=-static");
    println!("cargo:rustc-link-arg=-T../arch/x86/x86.ld");

    println!("cargo:rerun-if-changed=../arch/x86/boot.asm");
    println!("cargo:rerun-if-changed=../arch/x86/x86.ld");
}

Let’s go through the file one by one. First things first: compiling the assembly, which we do at the very beginning. I chose to use nasm instead of gas; I find nasm much more readable & it makes more sense to me to write. Feel free to change it to gas if you prefer that.

    let out_dir = env::var("OUT_DIR").unwrap_or_else(|e| {
        eprintln!("{}", e);
        exit(1);
    });

    let path = "../arch/x86/boot.asm";
    let file_stem = Path::new(path)
        .file_stem()
        .and_then(|s| s.to_str())
        .expect("boot.asm has a valid file name");
    let output = format!("{}/{}.o", out_dir, file_stem);
    println!("cargo:warning=Compiling {}", path);

    let status = Command::new("nasm")
        .args(["-f", "elf32", path, "-o", output.as_str()])
        .status()
        .expect("Could not compile NASM correctly");

    if !status.success() {
        eprintln!("NASM compilation failed for {}", path);
        exit(1);
    }
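
For now, boot.asm is our only assembly file. If you later add more, the same nasm step could be wrapped in a small helper that loops over the sources. Here is a rough sketch under the same OUT_DIR & nasm assumptions (the assemble_all helper is hypothetical, not part of this post's build.rs):

use std::{
    env,
    path::Path,
    process::{exit, Command},
};

// Hypothetical helper: assemble every listed .asm file into OUT_DIR and link it.
fn assemble_all(sources: &[&str]) {
    let out_dir = env::var("OUT_DIR").expect("OUT_DIR is set by cargo for build scripts");
    for &path in sources {
        let stem = Path::new(path)
            .file_stem()
            .and_then(|s| s.to_str())
            .expect("assembly file has a valid name");
        let output = format!("{}/{}.o", out_dir, stem);

        let status = Command::new("nasm")
            .args(["-f", "elf32", path, "-o", output.as_str()])
            .status()
            .expect("could not run nasm");
        if !status.success() {
            eprintln!("NASM compilation failed for {}", path);
            exit(1);
        }

        // Pass the object to the linker and re-run the build script when the source changes.
        println!("cargo:rustc-link-arg={}", output);
        println!("cargo:rerun-if-changed={}", path);
    }
}

You would call it from main with something like assemble_all(&["../arch/x86/boot.asm"]);.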

Next, we tell our linker where & what it should link. We give it some additional arguments to ensure everything is linked correctly.

Lastly, we tell cargo which files it should watch to decide whether the build script needs to re-run. Efficient!

 println!("cargo:rustc-link-arg={}", output);
 println!("cargo:rustc-link-search={}", out_dir);

 println!("cargo:rustc-link-arg=-m");
 println!("cargo:rustc-link-arg=elf_i386");
 println!("cargo:rustc-link-arg=--no-dynamic-linker");
 println!("cargo:rustc-link-arg=-static");
 println!("cargo:rustc-link-arg=-T../arch/x86/x86.ld");

 println!("cargo:rerun-if-changed=../arch/x86/boot.asm");
 println!("cargo:rerun-if-changed=../arch/x86/x86.ld");

We can now compile all of these files into one binary, but we are still missing two important source files: kernel.rs & bin.rs. Before writing them, we first need to configure our package.

Configuring the Package

Create src/kernel/Cargo.toml to configure our Rust package:

[package]
name = "kernel-from-scratch"
version = "1.0.0"
edition = "2021"

To compile our kernel, we first turn the assembly files into object files & compile all of the Rust code into a library. We then link everything into one binary with the help of our custom linker script.

To achieve this, we need to tell cargo to compile both a library & a binary in the Cargo.toml file. Let’s add the [lib] section & tell cargo where it can find the root of our library:

[lib]
name = "kernel"
path = "src/kernel.rs"

We are telling cargo here to compile the library with the name kernel & that its root file, which will contain kernel_main, is src/kernel.rs.

Secondly, we need to tell cargo which file the binary is generated from. It points to src/bin.rs:

[[bin]]
name = "kernel-from-scratch"
path = "src/bin.rs"

Lastly, we need to tell cargo how to compile the binaries - one configuration for development & one for release. I’ve enabled optimizations for both of them, which is quite unusual for the development configuration.

The reason I optimize the development build as well is that unoptimized code can show behaviors (such as a much larger, slower binary) that simply aren’t an issue in the release build. For this project we’ll use optimization level 2 for development & level 3 for release:

[profile.dev]
opt-level = 2

[profile.release]
opt-level = 3

We also need to add one more line to the [package] section to tell cargo to run our build script, which compiles the assembly files:

[package]
name = "kernel-from-scratch"
version = "1.0.0"
edition = "2021"
build = "src/build.rs" # <- This line

This is how the complete Cargo.toml file would look:

[package]
name = "kernel-from-scratch"
version = "1.0.0"
edition = "2021"
build = "src/build.rs"

[lib]
name = "kernel"
path = "src/kernel.rs"

[[bin]]
name = "kernel-from-scratch"
path = "src/bin.rs"

[profile.dev]
opt-level = 2

[profile.release]
opt-level = 3

You may edit things as you wish, like adding an authors field or changing the optimization levels.

Now that we have set up our Cargo.toml file, you will notice that it points to three different files: build.rs, kernel.rs & bin.rs. We have already written build.rs, so let’s create the remaining two.

Kernel Code: Minimal Implementation

Now for our actual kernel code. First, create src/kernel/src/bin.rs:

#![no_std]
#![no_main]

extern crate kernel;

This minimal file tells Rust that:

  • We’re not using the standard library (#![no_std])
  • We don’t have a traditional main function (#![no_main])
  • We’re importing our kernel library
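
As a side note, on the 2018 edition & later the same effect can be achieved with a plain use import. A small sketch of an alternative bin.rs (not what we use in this post):

#![no_std]
#![no_main]

// Pull in the `kernel` library only for its side effects (its symbols),
// without importing any names from it.
use kernel as _;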

Finally, create src/kernel/src/kernel.rs:

#![no_std]
#![no_main]

use core::panic::PanicInfo;

#[no_mangle]
pub extern "C" fn kernel_main() -> ! {
    loop {}
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

Let’s break down what is going on in this file.

#[no_mangle] makes sure that rustc doesn’t mangle the function’s name but keeps it exactly as written. Since kernel_main is called externally, we need to keep its name so that our boot.asm can find the function.

With pub extern "C" fn kernel_main() -> !, we tell rustc to use the C calling convention for kernel_main & that it never returns. We don’t want it to return anything, since the kernel should keep running unless told otherwise.

We then enter an infinite loop to ensure it always keeps running. Finally, because we compile without the standard library, we must also provide a #[panic_handler] function ourselves; ours simply loops forever as well.
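
As a small aside, the busy loop works but keeps the CPU spinning at 100%. A possible alternative (just a sketch, not used in this post) is to halt the processor inside the loop with inline assembly:

use core::arch::asm;

// Alternative idle loop: `hlt` parks the CPU instead of busy-spinning.
// Since we haven't enabled interrupts yet, nothing wakes it back up,
// which is exactly the "do nothing forever" behavior we want for now.
fn idle() -> ! {
    loop {
        unsafe {
            asm!("hlt", options(nomem, nostack, preserves_flags));
        }
    }
}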

Compile & Run

Now that we have all our components in place, let’s build and run our minimal kernel:

# Navigate to the kernel directory
cd src/kernel

# Build the kernel
cargo build

This will produce a binary file, but we need to create a bootable ISO to run it.

GRUB Configuration

Create src/kernel/grub.cfg:

set timeout=0
set default=0

menuentry "Rust Kernel" {
    multiboot /boot/kernel.bin
    boot
}

This configuration:

  • Sets the boot timeout to 0 (boots immediately)
  • Creates a single menu entry for our kernel
  • Tells GRUB to load our kernel using the multiboot protocol

Now, let’s create our bootable ISO:

# Create directory structure for ISO
mkdir -p isodir/boot/grub

# Copy our kernel binary and GRUB config
cp target/x86/debug/kernel-from-scratch isodir/boot/kernel.bin
cp grub.cfg isodir/boot/grub/grub.cfg

# Create the bootable ISO
grub-mkrescue -o kernel.iso isodir

Finally, run it with QEMU:

qemu-system-i386 -cdrom kernel.iso

Congratulations! If everything worked correctly, you should see a QEMU window appear. It won’t display anything yet, but that’s expected! Our kernel is running its empty loop, which is exactly what we programmed it to do.

Next Steps

In the next tutorial, we’ll enhance our kernel by implementing basic VGA text output, allowing us to display messages on the screen & adding keyboard support. This will make debugging much easier and give us visible feedback as we develop our OS further.

Exercises

To deepen your understanding of kernel development, here are a few exercises you can try:

Exercise #1 - Set up rustfmt.toml
Create a rustfmt.toml file to define formatting preferences for the project. This ensures consistent code style throughout your kernel.

Exercise #2 - Create a Makefile
Design a Makefile that can:

  • Build the kernel
  • Clean build artifacts
  • Run the kernel in QEMU
  • Generate the ISO image

A good Makefile will significantly streamline your development workflow.

Exercise #3 - Automate ISO Creation
Modify your build process to automatically create the ISO after a successful build, saving you manual steps each time you test changes. Take a look at the cargo book & the Makefile from Exercise #2 for ways to chain extra steps onto a build.

If you get stuck, you can check the example solutions in the repositories linked at the beginning of this guide. However, I recommend trying to implement them yourself first to better understand the kernel development process.