[yoe] Build
Fast tooling and builds. No cross-compiling headaches. Easy to customize/upgrade/debug. One tool for both system engineers and application developers to ship products faster.
[yoe] is a build system (focused on Embedded Linux for now) for teams shipping
modern edge products. Components in Go, Rust, Zig, Python, JS/TS, and C/C++ are
supported. [yoe] releases often and tracks upstream closely. The configuration
language is easily processed by humans and AI. Build on your laptop, on native
hardware, or in cloud CI — one integrated tool, same config, same results.
We took what we learned from many years of maintaining and building products with the Yoe Distribution, started over, and built the tool we always wanted.
Note: Not everything in the documentation has been implemented yet as this project is in the early stages.
Is [yoe] Right for You?
[yoe] is not for everyone. If you are building a mission-critical system that
requires bit-for-bit reproducible builds, long-term release freezes, or
extensive compliance certification, use Yocto —
it is battle-tested for those requirements.
[yoe] is designed for edge systems that behave more like cloud systems — AI
workloads and modern-language applications — and for teams that track upstream
closely and prioritize fast iteration over strict reproducibility. If your
product ships frequent updates, runs containerized services, or depends heavily
on Go/Rust/Python ecosystems, [yoe] may be a better fit.
🚀 Getting Started
Prerequisites: Linux or macOS with Git and Docker installed. Windows users: install WSL2 and use the Linux binary (Linux x86_64/Docker is the most tested configuration). Claude Code is highly recommended, but not required.
# Download the yoe binary (Linux x86_64)
curl -L https://github.com/YoeDistro/yoe/releases/latest/download/yoe-Linux-x86_64 -o yoe
# For other platforms, download from https://github.com/YoeDistro/yoe/releases/latest
chmod +x yoe
mkdir -p ~/bin
mv yoe ~/bin/
# Make sure ~/bin is in your PATH (add to ~/.bashrc or ~/.zshrc if needed)
export PATH="$HOME/bin:$PATH"
# Create a new project
yoe init yoe-test
cd yoe-test
# Start the TUI (see screenshot below)
yoe
# Navigate to the base-image and press 'b' to build
# When build is complete, press 'r' to run (requires `qemu-system-x86_64` installed on your host)
# Log in as root (no password)
# Power off when finished (inside running image)
poweroff
There are also CLI variants of the above commands (build, run, etc.).

dev-image is another included image with a few more packages added for development use.
What just happened:
- `yoe init` created a project with a `PROJECT.star` config and a default x86_64 QEMU machine.
- On first build, `yoe` automatically built a Docker container with the toolchain (gcc, make, etc.) and fetched the default unit modules from GitHub.
- It built ~10 packages from source (busybox, linux kernel, openssl, etc.) inside the container, each isolated in its own bubblewrap sandbox.
- It assembled a bootable disk image from those packages.
- `yoe run` launched the image in QEMU with KVM acceleration.
Everything is in the project directory — no global state, no hidden caches outside the tree.
🔧 Why This Is Possible Now
A decade ago, this combination wasn’t realistic. Several things have changed:
- ARM and RISC-V hardware is fast enough to build natively. Modern ARM boards and cloud instances (AWS Graviton, Hetzner CAX) build at full speed. For development, QEMU user-mode emulation runs ARM containers on x86 — no cross-toolchain needed.
- Modern languages bring their own package managers. Go, Rust, Zig, and Python already handle dependency resolution, reproducible builds, and caching. [yoe] doesn't reinvent any of that — application developers use the same Cargo, Go modules, or pip they already know.
- AI can guide developers through the system. The hardest part of embedded Linux is knowing what to configure and why. [yoe]'s metadata is structured Starlark — queryable, not buried in shell scripts — so an AI assistant can create units, diagnose build failures, and audit security without the developer memorizing the build system's quirks.
🧭 Values
- Be Pragmatic. Leverage what already exists where it makes sense. We don’t have any religion that everything needs to be built from source, or that we all need to build our own toolchains.
- The product developer experience is the top priority. Other solutions are often not optimized for developers building products. This includes application developers as well as system engineers. Clear and concise communication is essential when things go wrong. Unintelligible stack traces are unacceptable.
- Optimized for small teams. [yoe] is a tool for small teams to do big things. Large enterprises are welcome, but not our exclusive focus. There are plenty of enterprise tools (Bazel, Buck2, Maven, etc.); we will use ideas from these tools, but [yoe] aims to be something different.
- Scope is not limited to Embedded Linux. Although Embedded Linux is our current focus, a tool like [yoe] could be used for any problem where you pull a lot of pieces together. At its heart, [yoe] is a tool for building complex systems.
- Track upstream closely. Modern edge systems are more like the cloud than traditional embedded systems — they are connected, updated regularly, and expected to receive security patches throughout their lifetime. [yoe] assumes you will track upstream releases closely rather than freezing on a version for years. Updating a package should be easy and routine, not a high-risk event that requires a dedicated engineering effort.
- Vendor Neutral. [yoe] is a vendor-neutral project and welcomes BSPs and other units from any vendor. The goal is to build an integrated ecosystem like Zephyr.
🤖 Why AI-Native
Embedded Linux is hard not because the concepts are complex, but because there are many concepts that interact in non-obvious ways: toolchain flags, dependency ordering, kernel configuration, package splitting, module composition, image assembly, device trees, bootloaders. Traditional build systems manage this complexity through complexity.
[yoe] takes a different approach: Simplify things as much as possible.
Starlark units are readable by both humans and AI. The dependency graph is
queryable. Build logs are structured. An AI assistant that understands all of
this can:
- Create units from a URL or description — `/new-unit https://github.com/example/myapp`
- Diagnose build failures by reading logs and the dependency graph — `/diagnose openssh`
- Trace why a package is in your image — `/why libssl`
- Simulate changes before building — `/what-if remove networkmanager`
- Audit for CVEs and license compliance — `/cve-check`, `/license-audit`
- Generate machine definitions from board names — `/new-machine "Raspberry Pi 5"`
See AI Skills for the full catalog of AI-driven workflows.
💡 Inspirations
[yoe] draws selectively from existing systems, taking the best ideas from each
while avoiding their respective pain points:
- Yocto — machine abstraction, image composition, module architecture, OTA integration. Leave behind BitBake, sstate, cross-compilation complexity.
- Buildroot — the principle that simpler is better. Leave behind monolithic images and full-rebuild-on-config-change.
- Arch — rolling release, minimal base, PKGBUILD-style simplicity, documentation culture. Leave behind x86-centrism and manual administration.
- Alpine — apk package manager, busybox, minimal footprint, security defaults. Leave behind lack of BSP support.
- Nix — content-addressed caching, declarative configuration, hermetic builds, atomic rollback. Leave behind the Nix language and store-path complexity.
- Google GN — two-phase resolve-then-build model, config propagation through the dependency graph, build introspection commands, label-based target references for composability. Leave behind the C++-specific build model and Ninja generation.
- Bazel — Starlark as a build configuration language, hermetic sandboxed actions, content-addressed action caching, and remote build execution. Leave behind the monorepo bias, JVM runtime, and BUILD-file verbosity that make Bazel heavy for small teams.
See Comparisons for detailed analysis of how [yoe]
relates to each of these (and other) systems, including when you should use them
instead.
⚙️ Design
🏗️ A Single Tool
At its heart, [yoe] is a single tool — one Go binary that handles the entire
build flow, from fetching sources to assembling bootable images. It exposes
three interfaces: AI conversation, an interactive TUI, and a traditional CLI.
All three do the same things; use whichever fits the moment.
The tool handles:
- TUI — run `yoe` with no arguments for an interactive unit list with inline build status, background builds, search, and quick actions (edit, diagnose, clean).
- Build orchestration — invoke language-native build tools in the right order, manage caching, assemble outputs. Multiple images and targets live in a single build tree (like Yocto). No global lock or global resource: concurrent `yoe` invocations run in parallel, which is essential for rapid AI-driven development.
- Machine/distro configuration — define target boards and distribution profiles in Starlark — Python-like, deterministic, sandboxed.
See The yoe Tool for the full CLI reference,
Unit & Configuration Format for the unit and config
spec, and Build Languages for the Starlark rationale.
Why Go: single static binary with no runtime dependencies, fast compilation, excellent cross-compilation support (useful for shipping the tool itself), and a strong standard library for file manipulation, process execution, and networking.
🚫 No Cross Compilation
Instead of maintaining cross-toolchains, [yoe] targets native builds:
- QEMU user-mode emulation — build ARM64 or RISC-V images on any x86_64 workstation. The build runs inside a genuine foreign-arch Docker container, transparently emulated via binfmt_misc. One command to set up (`yoe container binfmt`), then `--machine qemu-arm64` just works. ~5-20x slower than native, but fine for iterating on a few packages.
- Native hardware — build on the target architecture directly (ARM64 dev boards, RISC-V boards).
- Cloud CI — use native architecture runners (e.g., ARM64 GitHub Actions runners, AWS Graviton, Hetzner ARM boxes) for full-speed CI builds.
- Per-unit build environment — each unit runs in its own Docker container with bubblewrap sandboxing. Architecture is determined per unit, not globally, and build dependencies don’t pollute the host or leak between units.
This eliminates an entire class of build issues (sysroot management, host contamination, cross-pkg-config, etc.).
📦 Native Language Package Managers
Each language ecosystem manages its own dependencies:
| Language | Package Manager | Lock File |
|---|---|---|
| Go | Go modules | go.sum |
| Rust | Cargo | Cargo.lock |
| Python | pip / uv | requirements.lock |
| JavaScript | npm / pnpm | package-lock.json |
| Zig | Zig build | build.zig.zon |
[yoe] plays nicely with existing language caching infrastructure so builds are
fast and repeatable without re-downloading the internet.
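As an illustration, a Rust application unit can simply invoke Cargo and stage the result. The sketch below is illustrative only; the `build =` field name and step syntax are assumptions, not the finalized unit format:

# units/myapp.star — illustrative sketch; build-step field and commands are assumptions
unit(
    name = "myapp",
    version = "1.3.0",
    source = "https://github.com/example/myapp/archive/v1.3.0.tar.gz",
    build = [
        "cargo build --release --locked",  # Cargo resolves dependencies from Cargo.lock as usual
        "install -Dm755 target/release/myapp $DESTDIR$PREFIX/bin/myapp",
    ],
)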
🖥️ Kernel and System Image Tooling
While application builds use native language tooling, the system-level pieces still need orchestration:
- Kernel builds — configure, build, and package kernels for target boards.
- Root filesystem assembly — combine built artifacts into a bootable image (ext4, squashfs, etc.).
- Device tree / bootloader management — board-specific configuration.
- OTA / update support — image-based device management (full image updates, OSTree, BDiff) integrated with update frameworks (RAUC, SWUpdate, etc.). Container workloads on the target device are on the roadmap.
This is where [yoe] tooling (written in Go and Starlark) provides value —
similar to what bitbake and wic do in Yocto, but simpler and more
opinionated.
📋 Package Management: apk
[yoe] uses apk
(Alpine Package Keeper) as its package manager. It is important to distinguish
between units and packages — these are separate concepts:
- Units are build-time definitions (Starlark `.star` files in the project tree) that describe how to build software. See Unit & Configuration Format.
- Packages are installable artifacts (`.apk` files) that units produce. They are what gets installed into root filesystem images and onto devices.
This separation means units are a development/CI concern, while packages are a deployment/device concern. You can build packages once and install them on many devices without needing the unit tree. Rebuilding from source is first class but not required — every package is fully traceable to its unit, with no golden images.
Why apk over apt and dnf:
- Speed — apk operations are near-instantaneous. Install, remove, and upgrade are measured in milliseconds, not seconds.
- Simple format — an `.apk` package is a signed tar.gz with a `.PKGINFO` metadata file. No complex archive-in-archive wrapping.
- Small footprint — apk-tools is tiny, appropriate for embedded targets.
- Active development — apk 3.x adds content-addressed storage and atomic transactions, aligning with [yoe]'s Nix-inspired reproducibility goals.
- Works with glibc — apk is not tied to musl; it works with any libc. [yoe] runs its own package repositories, not Alpine's.
- On-device package management — devices can pull updates from a [yoe] package repository, enabling incremental OTA updates (install only changed packages) alongside full image updates.
The [yoe] build tooling invokes units to produce .apk packages, which are
published to a repository. Image assembly then uses apk to install packages
into a root filesystem, just as Alpine does.
🧱 Base System
The base userspace today is busybox on top of a C library (musl today, glibc targeted), with busybox’s built-in init as PID 1:
- C library — the project currently uses musl (inherited from Alpine’s toolchain), with a planned move to glibc for maximum compatibility with pre-built binaries, language runtimes (Go, Rust, Python, Node.js), and third-party libraries.
- busybox — provides the core userspace utilities (sh, coreutils, etc.) and init in a single small binary. Keeps the base image minimal while still giving a functional shell environment for debugging and scripting.
- Init (current: busybox init) — busybox’s built-in init handles PID 1 duties today. systemd will be an option in the future: it is well-understood, has rich service management, and provides integrated journal logging, network management, device management (udev), and container integration. The trade-off is size and complexity.
This combination gives a small but fully functional base system that can run real-world services without surprises.
🔒 Reproducibility
[yoe] targets functional equivalence, not bit-for-bit reproducibility.
Same inputs produce functionally identical outputs — same behavior, same files,
same permissions — but the bytes may differ due to embedded timestamps, archive
member ordering, or compiler non-determinism.
This is a deliberate trade-off:
- Bit-for-bit reproducibility (what Nix aspires to) requires patching upstream build systems to eliminate timestamps (`__DATE__`, `.pyc` mtime), enforce file ordering in archives, and strip or fix build IDs. This is enormous effort — Nix still hasn't fully achieved it after 20 years — and the primary benefit (verifying a binary matches its source by rebuilding) is relevant mainly for high-assurance supply-chain contexts.
- Functional equivalence gets the practical benefits — reliable caching, hermetic builds, provenance tracking — without the patching burden. Bubblewrap isolation prevents host contamination. Content-addressed input hashing — combining hashes of the unit, its source, and its dependencies — ensures cache hits are reliable. Starlark evaluation is deterministic by design. The remaining non-determinism (timestamps, ordering within packages) doesn't affect functionality or caching.
The caching model does not depend on output determinism. Cache keys are computed
from inputs (unit content, source hash, dependency .apk hashes, build
flags), not outputs. If inputs haven’t changed, the cached output is used
regardless of whether a fresh build would produce identical bytes.
📚 Documentation
- AI Skills — AI-driven workflows for unit creation, build debugging, security auditing, and more
- The `yoe` Tool — CLI reference for building, imaging, and flashing
- Unit & Configuration Format — Starlark unit and configuration spec
- Naming and Resolution — how modules, units, and dependencies are named, referenced, and resolved
- File Templates — moving inline file content out of Starlark units into external templates
- Starlark Packaging and Image Assembly — composable Starlark tasks for packaging and image assembly
- Build Dependencies and Caching — containers for host tools, apk sysroot for libraries, language-native package managers for everything else
- Build Environment — bootstrap, host tools, and build isolation
- Build Languages — analysis of Starlark, CUE, Nix, and other embeddable languages for unit definitions
- Development Environments — the no-SDK model, `yoe shell` for interactive dev, and `yoe bundle` for air-gapped distribution
- Testing — testing strategy across Go logic, package QA, image smoke tests, and on-device test runs
- apk Signing — keypair generation, signature verification, and on-device trust
- On-Device Package Management — using `apk` on booted yoe systems to install and upgrade packages
- Feed Server and `yoe deploy` — dev-loop for serving the project apk repo and installing units on running devices
- Containers on yoe Images — design for running Docker / Podman / containerd workloads on yoe-built devices
- libc, init, and the Rootfs Base — the default base of musl, busybox, and OpenRC, and the path to glibc/systemd for edge-AI hardware
- units-alpine — wrapping prebuilt Alpine packages as yoe units
- Comparisons — how [yoe] relates to Yocto, Buildroot, Alpine, Arch, and NixOS
- Roadmap — existing units and what's needed for a complete base system
🤝 Contributing
Contributions are welcome — especially BSPs for new boards and units for new packages. AI-assisted contributions are fine; just make sure the result actually works, and keep PRs small and reviewable.
💚 Sponsors
[yoe] is supported by:
📄 License
[yoe] is licensed under the Apache License 2.0.
The [yoe] Tool
yoe is the single CLI tool that drives all [yoe] workflows — building
packages and images from units, managing caches and source downloads, and
flashing devices. It is a statically-linked Go binary with no runtime
dependencies.
Installation
Prerequisites: Linux or macOS with Git and Docker installed. Windows users: install WSL2 and use the Linux binary (Linux x86_64/Docker is the most tested configuration). Claude Code is highly recommended, but not required.
# Download the yoe binary (Linux x86_64)
curl -L https://github.com/YoeDistro/yoe/releases/latest/download/yoe-Linux-x86_64 -o yoe
# For other platforms, download from https://github.com/YoeDistro/yoe/releases/latest
chmod +x yoe
mkdir -p ~/bin
mv yoe ~/bin/
# Make sure ~/bin is in your PATH (add to ~/.bashrc or ~/.zshrc if needed)
export PATH="$HOME/bin:$PATH"
Since yoe is a Go binary, it cross-compiles trivially — build on your x86
workstation, run on an ARM build server.
Command Overview
yoe Launch the interactive TUI
yoe init Create a new `[yoe]` project
yoe build Build units (packages and images)
yoe shell Open an interactive shell in a unit's build sandbox [planned]
yoe dev Manage source modifications (extract, diff, status)
yoe flash Write an image to a device/SD card
yoe run Run an image in QEMU
yoe serve Serve the project's apk repo over HTTP+mDNS
yoe deploy Build and install a unit on a running yoe device
yoe device Manage repo configuration on a target device
yoe module Manage external modules (fetch, sync, list)
yoe repo Manage the local apk package repository
yoe cache Manage the build cache (local and remote) [planned]
yoe bundle Export/import content-addressed bundles (air-gapped) [planned]
yoe source Download and manage source archives/repos
yoe config View and edit project configuration
yoe desc Describe a unit, package, or target
yoe refs Show reverse dependencies
yoe graph Visualize the dependency DAG
yoe log Show build log (most recent or specific unit)
yoe diagnose Launch Claude Code to diagnose a build failure
yoe clean Remove build artifacts
yoe container Manage the build container (build, binfmt, status)
All commands except init, version, and container run inside an Alpine
build container automatically. The container is built on first use from
containers/Dockerfile.build. See
Build Environment
for details.
Commands
yoe init
Scaffolds a new [yoe] project directory with the standard layout.
yoe init my-project
Creates:
my-project/
├── PROJECT.star
├── machines/
├── units/
├── classes/
└── overlays/
Optionally specify a machine to start with:
yoe init my-project --machine beaglebone-black
yoe build
Builds one or more units. Package units (unit(), autotools(), etc.) produce
.apk packages and publish them to the local repository. Image units
(image()) assemble a root filesystem and produce a disk image. The class
function used in the .star file determines the behavior — the command is the
same for both.
# Build a single package unit
yoe build openssh
# Build multiple units
yoe build openssh zlib openssl
# Build an image unit (assembles rootfs, produces disk image)
yoe build base-image
# Build an image for a specific machine
yoe build base-image --machine raspberrypi4
# Build for ARM64 on an x86_64 host (uses QEMU user-mode emulation)
yoe build base-image --machine qemu-arm64
# Build all units (packages and images)
yoe build --all
# Build all image units for all machines (full matrix)
yoe build --all --class image # planned: --class filter
# Build a unit and all its dependencies
yoe build --with-deps myapp # planned: --with-deps flag
# Rebuild even if the cache is fresh
yoe build --force openssh
# Skip remote cache — only check local cache
yoe build --no-remote-cache openssh # planned: remote cache
# Skip all caches — force build from source
yoe build --no-cache openssh
# Dry run — show what would be built and why
yoe build --dry-run --all
# List available image/machine combinations
yoe build --list-targets # planned
What happens during a build:
Inspired by Google’s GN, yoe build uses a two-phase resolve-then-build
model. The entire dependency graph is resolved and validated before any build
work starts. This catches missing dependencies, cycles, and configuration errors
up front rather than mid-build.
1. Sync modules — fetch or update external modules declared in `PROJECT.star` (skipped if already up to date). See `yoe module sync`.
2. Evaluate Starlark — load and evaluate all `.star` unit files (including those from modules) to produce the set of build targets. Each class function call (`unit()`, `autotools()`, `image()`, etc.) registers a target.
3. Resolve dependencies — topologically sort the build order from declared dependencies. Validate that all referenced units exist and the graph is acyclic. If any errors are found, stop here — no partial builds.
4. Check cache — compute a content hash of the unit + source + build dependencies. If a cached `.apk` with that hash exists (locally or in a remote cache), skip the build.
5. Fetch source — download the source archive or clone the git repo (see `yoe source` below). Sources are cached in `$YOE_CACHE/sources/`.
6. Prepare build environment — set up an isolated build root with only declared build dependencies installed via `apk`. This ensures hermetic builds.
7. Execute build steps — run the build commands defined by the class function in the build root. The environment provides:
   - `$PREFIX` — install prefix (typically `/usr`)
   - `$DESTDIR` — staging directory for installed files
   - `$NPROC` — number of available CPU cores
   - `$ARCH` — target architecture
8. Package — collect files from `$DESTDIR`, generate `.PKGINFO` from the unit metadata, and create the `.apk` archive.
9. Publish — add the `.apk` to the local repository and update the repo index.
For image units (image() class), steps 5-9 are replaced with image
assembly:
1. Sync modules — same as above.
2. Evaluate Starlark — same as above.
3. Resolve dependencies — same as above.
4. Check cache — same as above.
5. Read machine definition — evaluate `machines/<name>.star` for architecture, kernel, bootloader, and partition layout.
6. Create empty rootfs — set up a temporary directory.
7. Install packages — run `apk add --root <rootfs>` with the [yoe] repository to install all declared packages. apk handles dependency resolution.
8. Apply configuration — set hostname, timezone, locale, and enable services per the image unit's configuration (via the active init system — busybox init today, systemd a possible future option).
9. Apply overlays — copy files from `overlays/` into the rootfs.
10. Install kernel + bootloader — build (or fetch from cache) the kernel and bootloader per the machine definition, install into the rootfs/boot partition.
11. Generate disk image — partition the output image per the partition layout and populate each partition.
Output format can be specified with --format:
yoe build base-image --format sdcard # raw disk image with partitions
yoe build base-image --format rootfs # tar.gz of the rootfs only
yoe build base-image --format squashfs # squashfs for read-only roots
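For reference, an image unit that drives the assembly steps above might look roughly like this. The field names are illustrative assumptions (they mirror the assembly steps: declared packages, hostname/service configuration, and overlays); see Unit & Configuration Format for the actual spec.

# units/base-image.star — illustrative sketch; field names are assumptions
image(
    name = "base-image",
    packages = ["busybox", "openssh", "myapp"],  # installed via apk add --root <rootfs>
    hostname = "yoe-device",
    services = ["sshd"],                         # enabled via the active init system
    overlays = ["overlays/base"],                # files copied over the rootfs
)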
yoe flash
Writes a built image to a block device or SD card.
# Flash to SD card (auto-detects the most recent image build)
yoe flash /dev/sdX
# Flash a specific image unit's output
yoe flash base-image /dev/sdX
# Flash for a specific machine
yoe flash base-image --machine beaglebone-black /dev/sdX
# Dry run — show what would happen
yoe flash --dry-run /dev/sdX
Safety: yoe flash requires explicit confirmation before writing and refuses to
write to mounted devices or devices that look like system disks.
yoe run
Launches a built image in QEMU for development and testing. When the host and target architecture match, QEMU uses KVM hardware virtualization for near-native speed. For cross-architecture images (e.g., ARM64 on x86_64), QEMU runs in software emulation mode automatically.
# Run the most recently built image (auto-detects machine/image)
yoe run
# Run a specific image unit
yoe run dev-image --machine qemu-x86_64
# Run an ARM64 image on an x86_64 host (software emulation)
yoe run base-image --machine qemu-arm64
# Forward an extra host port (default qemu machines already forward 2222→22,
# 8080→80, and 8118→8118 — `--port` adds to that list)
yoe run --port 9000:9000
# Allocate more memory
yoe run --memory 2G
# Run with graphical output (default is serial console)
yoe run --display
# Run headless in the background
yoe run --daemon
What happens:
- Detect architecture — read the machine definition to determine the target architecture (x86_64, aarch64, riscv64).
- Select QEMU binary — map to the correct `qemu-system-*` binary.
- Configure machine — for x86_64, use the `q35` machine type with UEFI firmware (OVMF). For aarch64, use `virt` with UEFI (AAVMF). For riscv64, use `virt` with OpenSBI.
- Enable KVM — use hardware virtualization when the host and guest architectures match; otherwise fall back to software emulation.
- Attach image — use the built disk image as a virtio block device.
- Route console — by default, connect the serial console to the terminal (`-nographic`). The guest kernel must have `console=ttyS0` (x86) or `console=ttyAMA0` (aarch64) in its command line.
- Set up networking — use QEMU user-mode networking with port forwarding. The qemu-x86_64 and qemu-arm64 machines forward `2222:22` (SSH), `8080:80`, and `8118:8118` by default, so SSH to the guest works without any extra flags. `--port` adds to that list.
QEMU machine definitions:
Projects can define QEMU-specific machines alongside hardware ones:
# machines/qemu-x86_64.star
machine(
name = "qemu-x86_64",
arch = "x86_64",
kernel = kernel(
unit = "linux-qemu",
cmdline = "console=ttyS0 root=/dev/vda2 rw",
),
qemu = qemu_config(
machine = "q35",
cpu = "host",
memory = "1G",
firmware = "ovmf",
display = "none",
),
)
When yoe run is given a machine with a qemu configuration, it uses those
settings directly. When given a hardware machine without qemu configuration,
it falls back to a reasonable default QEMU configuration for the machine’s
architecture.
yoe serve
Runs an HTTP server rooted at the project’s repo/ tree and advertises it on
mDNS as _yoe-feed._tcp.local. so devices and yoe deploy discover it
automatically.
# Serve at the default port (8765) with mDNS advertisement
yoe serve
# Bind to a specific interface or change the port
yoe serve --bind 192.168.1.10 --port 9000
# Skip mDNS (e.g., inside a container without host networking)
yoe serve --no-mdns
The default port is pinned (8765) so the URL written by yoe device repo add
on a target survives yoe serve restarts. apks and APKINDEX.tar.gz are
already signed by the project key, so plain HTTP transport is fine for
development. See feed-server.md for the full dev-loop guide.
yoe deploy
Builds a unit, exposes the project’s repo as a feed (reusing a running
yoe serve if one is up, otherwise spinning up an ephemeral feed on the same
pinned port), then ssh’s to the device and runs apk add --upgrade <unit>.
Transitive dependencies resolve on the device against the same APKINDEX.tar.gz
production OTA uses.
# Build myapp and install it on dev-pi over the LAN
yoe deploy myapp dev-pi.local
# Deploy to a QEMU vm started with `yoe run` (default 2222→22 forward)
yoe deploy myapp localhost:2222
# Non-root ssh user
yoe deploy myapp pi@dev-pi.local
# Cross-subnet or mDNS-hostile network — advertise an explicit IP
yoe deploy myapp 10.0.5.42 --host-ip 10.0.5.1
The repo file /etc/apk/repositories.d/yoe-dev.list is left in place after
deploy, so the device stays configured to pull from the dev host on any future
apk add from the device. Use yoe device repo remove <host> to tear it down.
Image targets error with a pointer to yoe flash.
yoe device
Configures /etc/apk/repositories.d/ on a target device so apk add from the
device pulls from your dev feed. Useful standalone (without an immediate
yoe deploy) to set up a fresh device, configure several devices for a
multi-device QA bench, or inspect what’s currently configured.
# Auto-discover the running yoe serve on the LAN, configure dev-pi
yoe device repo add dev-pi.local
# Same, plus push the project signing pubkey to /etc/apk/keys/ on the
# target — needed if the device was flashed before the project key existed
yoe device repo add dev-pi.local --push-key
# Configure a QEMU vm started with `yoe run` (default 2222→22 forward)
yoe device repo add localhost:2222
# Explicit feed URL (colleague's serve, or non-mDNS network)
yoe device repo add 192.168.4.30 --feed http://laptop.local:8765/myproj
# Tear down
yoe device repo remove dev-pi.local
# Inspect /etc/apk/repositories and /etc/apk/repositories.d/*.list
yoe device repo list dev-pi.local
After yoe device repo add, run apk update && apk add htop (or any unit your
project builds) directly on the device. yoe deploy writes the same file by
default (yoe-dev.list), so the first deploy doubles as the persistent feed
config.
yoe module
Manages external modules — the Git repositories declared in PROJECT.star that
provide units, classes, and machine definitions.
Status:
`yoe module sync` and `yoe module list` are implemented. `yoe module info`, `yoe module check-updates`, and `yoe module list --tree` (transitive tree output) are planned — the CLI dispatches them today with a "not yet implemented" stub message.
# Fetch/update all modules to the refs declared in PROJECT.star
yoe module sync
# List all modules with status (fetched, local override, version)
yoe module list
# Show the full resolved module tree (including transitive deps from MODULE.star)
yoe module list --tree # planned
# Show details for a specific module
yoe module info @vendor-bsp # planned
# Check for updates — show if upstream has newer tags
yoe module check-updates # planned
What happens during yoe module sync:
- Read PROJECT.star — parse the `modules` list.
- Read MODULE.star from each module — discover transitive dependencies.
- Resolve versions — PROJECT.star versions override transitive deps. If a required transitive dep is missing, error with an actionable message.
- Fetch/update — clone or update each module's Git repo into `$YOE_CACHE/modules/`. Check out the declared ref.
- Verify — confirm that each module's `MODULE.star` (if present) is valid Starlark.
Module caching: Modules are cached in $YOE_CACHE/modules/ as bare Git
repositories with worktree checkouts at the pinned ref. yoe module sync
performs incremental fetches — only downloading new objects.
Automatic sync: yoe build automatically runs module sync if any module is
missing or if PROJECT.star has changed since the last sync. You rarely need to
run yoe module sync manually.
Local overrides: Modules with local = "..." in PROJECT.star skip fetching
entirely and use the local directory. yoe module list shows these as
(local: ../path).
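A module declaration in PROJECT.star might look roughly like this. This is a sketch: the exact function and field names are illustrative assumptions, but the ref pinning and `local = "..."` override behave as described above.

# PROJECT.star — illustrative module declarations (names and fields are assumptions)
modules = [
    module(
        name = "units-core",
        url = "https://github.com/YoeDistro/units-core.git",
        ref = "v1.0.0",            # pinned tag; yoe module sync checks this out
    ),
    module(
        name = "my-local-module",
        local = "../my-module",    # local override: skips fetching entirely
    ),
]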
Example output of yoe module list:
Module Ref Status
@units-core v1.0.0 up to date
@vendor-bsp-imx8 v2.1.0 up to date
└─ @hal-common v1.3.0 up to date (transitive)
└─ @firmware-imx v5.4 up to date (transitive)
@my-local-module main (local: ../my-module)
yoe repo
Manages the local apk package repository.
Status:
`yoe repo list`, `yoe repo info`, and `yoe repo remove` are implemented. `yoe repo push` and `yoe repo pull` (S3-compatible remote repository sync) are planned — there is no S3 backend yet.
# List all packages in the repository
yoe repo list
# Show details of a specific package
yoe repo info openssh
# Remove a package from the repository
yoe repo remove openssh-9.5p1-r0
# Push local repository to a remote (S3-compatible)
yoe repo push # planned
# Pull packages from a remote repository
yoe repo pull # planned
The local repository lives at repo/<project-name>/ within the project
directory. It’s a standard apk-compatible repository — you can point apk on a
running device at it directly.
yoe cache (planned)
Status: Not implemented.
`cmd/yoe/main.go` has no `cache` case in its command switch — invoking `yoe cache` prints "Unknown command". Content addressing and a local build cache exist inside the build executor, but there is no user-facing cache subcommand, no remote/S3 cache, no signing, and no `yoe cache stats/gc/push/pull`. The surface below describes the planned design.
Manages the local and remote build caches.
# Show cache status — local size, remote config, hit rate
yoe cache status
# List cached packages (local)
yoe cache list
# Show what's cached for a specific unit
yoe cache list openssh
# Push locally-built packages to the remote cache
yoe cache push
# Push specific packages
yoe cache push openssh zlib
# Pull packages from the remote cache into local
yoe cache pull
# Remove local cache entries older than retention period
yoe cache gc
# Remove all local cache entries
yoe cache gc --all
# Verify integrity of cached packages (check hashes and signatures)
yoe cache verify
# Show cache hit/miss statistics for the last build
yoe cache stats
Cache push/pull vs. repo push/pull: yoe repo manages the apk package
repository (the repo index that apk consumes during image assembly).
yoe cache manages the build cache (content-addressed build outputs keyed
by input hash). In practice, both store .apk files, but the cache is keyed by
build inputs while the repo is indexed by package name/version. Pushing to the
cache shares build avoidance with CI/team. Pushing to the repo shares
installable packages with devices.
yoe source
Manages source downloads. Sources are cached locally to avoid repeated downloads.
# Download sources for a unit
yoe source fetch openssh
# Download sources for all units
yoe source fetch --all
# List cached sources
yoe source list
# Verify source integrity (check sha256)
yoe source verify
# Clean stale sources
yoe source clean
Sources are stored in $YOE_CACHE/sources/ with content-addressed naming. For
git sources, bare clones are cached and updated incrementally.
yoe config
View and edit project configuration.
# Show current configuration
yoe config show
# Set the default machine
yoe config set defaults.machine raspberrypi4
# Set the default image
yoe config set defaults.image dev
# Show resolved configuration for a build
yoe config resolve --machine beaglebone-black --image base
yoe desc
Describes a unit, showing its resolved configuration, dependencies, build inputs
hash, and package output. Inspired by GN’s gn desc.
# Show full details of a unit
yoe desc openssh
# Example output:
# Unit: openssh
# Version: 9.6p1
# Source: https://cdn.openbsd.org/.../openssh-9.6p1.tar.gz
# Build deps: zlib, openssl
# Runtime deps: zlib, openssl
# Input hash: a3f8c2...
# Cached .apk: yes (openssh-9.6p1-r0.apk)
# Config: CFLAGS=-O2 -march=armv8-a (propagated from machine)
# Show only the resolved config for a unit
yoe desc openssh --config
# Show the build inputs that contribute to the hash
yoe desc openssh --inputs
yoe refs
Shows reverse dependencies — what units or images depend on a given unit.
Inspired by GN’s gn refs.
# What depends on openssl?
yoe refs openssl
# Example output:
# Build deps:
# openssh (build + runtime)
# curl (build + runtime)
# python (build)
# Images:
# base (via openssh, curl)
# dev (via openssh, curl, python)
# Show only direct dependents
yoe refs openssl --direct
# Show the full transitive tree
yoe refs openssl --tree
This is essential for answering “if I update openssl, what needs to rebuild?”
yoe graph
Visualizes the dependency DAG.
# Print the dependency graph as text
yoe graph
# Output DOT format for graphviz
yoe graph --format dot | dot -Tpng -o deps.png
# Show graph for a single unit and its deps
yoe graph openssh
# Show only units that need rebuilding
yoe graph --stale
yoe (no args)
Running yoe with no arguments launches an interactive terminal UI showing all
units with their build status.
`[yoe]` Machine: qemu-x86_64 Image: base-image
NAME CLASS STATUS
→ base-files unit ● cached
busybox unit ● cached
linux unit ▌building...
musl unit ● waiting
ncurses autotools ● cached
openssh unit ● failed
openssl autotools ● cached
util-linux autotools
zlib autotools ● cached
b build e edit d diagnose l log c clean / search q quit
Status indicators
| Indicator | Color | Meaning |
|---|---|---|
| (none) | — | Never built |
| ● cached | dim/gray | Built and cached |
| ● waiting | yellow | Queued, deps building first |
| ▌building... | flashing green | Actively compiling |
| ● failed | red | Last build failed |
When you build a unit, its dependencies appear as “waiting” (yellow), then transition to “building” (flashing green) as the executor reaches them. Multiple deps can flash green simultaneously.
Key bindings (unit list)
| Key | Action |
|---|---|
| b | Build selected unit in background |
| e | Open unit's .star file in $EDITOR |
| d | Launch claude diagnose for the unit |
| l | Open unit's build log in $EDITOR |
| a | Launch claude /new-unit |
| c | Clean selected unit's build artifacts (with confirm) |
| / | Search/filter units by name |
| Enter | Show detail view (build output + log tail) |
| B | Build all units in background |
| C | Clean all build artifacts (with confirm) |
| j/k | Navigate up/down |
| q | Quit |
Detail view
Pressing Enter on a unit shows a split-pane detail view:
- BUILD OUTPUT (top) — executor progress: dependency resolution, cache hits, build status for each dep
- BUILD LOG (bottom) — tail of the unit's `build.log`, updated in real time during a build
| Key | Action |
|---|---|
| Esc | Return to unit list |
| b | Build this unit in background |
| d | Launch claude diagnose |
| l | Open build log in $EDITOR |
Search
Press / to enter search mode. Type to filter — only matching units are shown.
Press Enter to accept the filter, Esc to cancel and show all units.
Builds call build.BuildUnits() directly (in-process, no subprocess). The
executor sends events to the TUI as each unit starts and finishes building.
The TUI is built with Bubble Tea.
yoe log
Shows a build log. With no arguments, shows the most recently modified build log. Specify a unit name to view that unit’s log.
yoe log # show most recent build log
yoe log openssl # show openssl build log
yoe log openssl -e # open openssl build log in $EDITOR
The -e / --edit flag opens the log in your editor (defaults to vi).
yoe diagnose
Launches Claude Code to diagnose a build failure. With no arguments, diagnoses the most recent build failure. Specify a unit name to diagnose that unit.
yoe diagnose # diagnose most recent failure
yoe diagnose util-linux # diagnose util-linux build failure
Requires claude to be in your PATH. Claude Code reads the build log and
iteratively identifies root causes, applies fixes, and rebuilds until the unit
succeeds.
Custom Commands
Projects can define custom commands in commands/*.star that become first-class
yoe subcommands. This is similar to Zephyr’s west extensions but uses
Starlark instead of Python classes.
# commands/deploy.star
command(
name = "deploy",
description = "Deploy image to target device via SSH",
args = [
arg("target", required=True, help="Target device hostname/IP"),
arg("--image", default="base-image", help="Image to deploy"),
arg("--reboot", type="bool", help="Reboot after install"),
],
)
def run(ctx):
img = ctx.args.image
target = ctx.args.target
ctx.log("Deploying", img, "to", target)
ctx.shell("scp", "build/output/" + img + ".img", "root@" + target + ":/tmp/update.img")
ctx.shell("ssh", "root@" + target, "rauc", "install", "/tmp/update.img")
    if ctx.args.reboot:
ctx.shell("ssh", "root@" + target, "reboot")
Usage:
yoe deploy 192.168.1.100 --image production-image --reboot
Custom commands show up alongside built-in commands. If yoe doesn’t recognize
a command, it checks commands/*.star before printing “unknown command”.
The context object provides:
| Method | Description |
|---|---|
| ctx.args.<name> | Parsed command-line arguments |
| ctx.shell(cmd, ...) | Execute a shell command (returns output) |
| ctx.log(msg, ...) | Print a message |
| ctx.project_root | Path to the project root |
Commands from modules:
Vendor BSP modules can ship custom commands (e.g., flash-emmc, enter-dfu)
that become available when the module is added to the project.
Key difference from unit evaluation: Unit .star files are sandboxed — no
I/O, deterministic. Command .star files have full I/O access via ctx.shell()
because they are actions, not build definitions.
yoe dev
Work with unit source code directly. Every unit’s build directory is a git repo
— upstream source is committed with an upstream tag, and existing patches are
applied as commits on top. Local edits are just git commits.
There is no “dev mode” to enter or exit. If the build directory has commits
beyond upstream, yoe build uses them directly instead of re-fetching source.
# After building, edit source in place
yoe build openssh
cd build/openssh/src
vim auth.c
git commit -am "fix auth timeout handling"
# Rebuild uses your local commits
yoe build openssh
# See what you've changed
yoe dev diff openssh
# Extract commits as patch files
yoe dev extract openssh
# Writes patches/openssh/0001-fix-auth-timeout-handling.patch
# Prints updated patches list for your unit
# Check which units have local modifications
yoe dev status
Subcommands:
| Subcommand | Description |
|---|---|
| yoe dev extract <unit> | Run git format-patch upstream..HEAD, write to patches/<unit>/, print updated patches list |
| yoe dev diff <unit> | Show git log upstream..HEAD — your local commits |
| yoe dev status | List all units with commits beyond upstream |
Rebasing on upstream updates:
# Update unit version
$EDITOR units/openssh.star # bump version to 9.7p1
# Rebuild fetches new source, applies patches via rebase
yoe build openssh
# If patches conflict, resolve in the git repo
cd build/openssh/src
git rebase --continue
yoe dev extract openssh # re-extract clean patches
Why this is simpler than Yocto’s devtool:
- No separate workspace — the build directory is the workspace
- No mode to enter/exit — local commits are automatically detected
- No state files — git is the only state
- Extracting patches is `git format-patch` — a command developers already know
- Each patch = one git commit, so the patch series is the git log
yoe shell (planned)
Status: Not implemented. The command below describes the intended interactive entry point into a unit’s build sandbox — the piece that makes the no-SDK model (see Development Environments) complete.
Opens an interactive shell inside the build sandbox for a unit. The shell
attaches to the same container, environment variables, and mounted sysroot that
yoe build uses — but with a TTY and no automatic build steps.
# Shell into the sandbox for a unit (uses the unit's container + default machine)
yoe shell myapp
# For a specific machine (cross-arch via QEMU)
yoe shell myapp --machine raspberrypi4
# Shell without targeting a unit — uses the machine's default toolchain container
yoe shell --machine beaglebone-black
Inside the shell, $SRCDIR, $DESTDIR, $PREFIX, $ARCH, and $NPROC are
set exactly as yoe build would set them, and the unit’s resolved -dev
dependencies are already installed into the sandbox via apk. Exiting the shell
tears down the sandbox — it is not persistent, so probing with apk add <pkg>
for exploration does not pollute subsequent builds.
This replaces the traditional SDK shell (Yocto’s environment-setup-*). See
Development Environments for the full model.
yoe bundle (planned)
Status: Not implemented. The `yoe bundle` subcommand below is the air-gapped distribution story described in Development Environments. Today there is no export/import path, no bundle format, and no signing.
Exports and imports content-addressed bundles — the subset of the build cache, source cache, module checkouts, and container images needed to reproduce a set of targets without network access.
# Export a bundle for a specific image (includes all transitive deps)
yoe bundle export base-image --out bundle-base-v1.0.tar
# Export everything reachable from PROJECT.star
yoe bundle export --all --out bundle-full.tar
# Sign the bundle with the project's cache signing key
yoe bundle export base-image --sign keys/bundle.key --out bundle.tar
# Import on an air-gapped machine (verifies signatures if present)
yoe bundle import bundle-base-v1.0.tar --verify keys/bundle.pub
# Show the contents of a bundle without importing
yoe bundle inspect bundle.tar
A bundle contains built .apks, source archives, module checkouts, and
toolchain container OCI archives — all keyed by content hash. After
yoe bundle import, subsequent yoe build runs resolve everything from the
local cache with no network access required.
yoe clean
Removes build artifacts.
# Remove build intermediates (keep cached packages)
yoe clean
# Remove everything (build dirs, packages, sources)
yoe clean --all
# Remove only packages for a specific unit
yoe clean openssh
Environment Variables
| Variable | Default | Description |
|---|---|---|
| YOE_PROJECT | . (cwd) | Path to the [yoe] project root |
| YOE_CACHE | cache/ | Cache directory for sources, builds, packages |
| YOE_JOBS | nproc | Parallel build jobs |
| YOE_LOG | info | Log level (debug, info, warn, error) |
| YOE_CACHE_SIGNING_KEY | (none) | Path to private key for signing cached packages |
| YOE_NO_REMOTE_CACHE | false | Disable remote cache lookups |
| AWS_ACCESS_KEY_ID | (none) | S3 credentials for remote cache |
| AWS_SECRET_ACCESS_KEY | (none) | S3 credentials for remote cache |
| AWS_ENDPOINT_URL | (none) | S3 endpoint override (for MinIO / non-AWS) |
Dependency Resolution
yoe resolves dependencies at two levels:
- Build-time — unit `deps` entries form a DAG. `yoe build --with-deps` topologically sorts this graph and builds in order, parallelizing where the DAG allows.
- Install-time — unit `runtime_deps` entries are written into the `.apk`'s `.PKGINFO`. When `apk add` runs during image assembly, it pulls in runtime dependencies automatically.
This means:
- Build dependencies are resolved by `yoe` (it knows the unit graph).
- Runtime dependencies are resolved by `apk` (it knows the package graph).
- The unit author declares both; the tools handle the rest.
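As a sketch, the two kinds of dependencies appear as separate fields on a unit. The `deps` and `runtime_deps` field names are the ones used above; the rest of the unit is illustrative.

# units/openssh.star — illustrative dependency declarations
unit(
    name = "openssh",
    version = "9.6p1",
    deps = ["zlib", "openssl"],          # build-time: resolved by yoe from the unit graph
    runtime_deps = ["zlib", "openssl"],  # install-time: written into .PKGINFO, resolved by apk
)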
Config Propagation (planned)
Status: Not implemented. There is no `public_config` field on units, no machine-to-unit CFLAGS/optimization propagation, and no resolved-config view in `yoe desc`. Units today receive architecture via the build environment and nothing else is automatically propagated through the DAG. The section below describes the planned GN-inspired design.
Inspired by GN’s public_configs, machine-level configuration automatically
propagates through the dependency graph. When you build for a specific machine,
settings like architecture flags, optimization level, and kernel headers path
flow to every unit without each unit declaring them:
machine (beaglebone-black)
→ arch = "arm64"
→ CFLAGS = "-O2 -march=armv8-a"
→ KERNEL_HEADERS = "/usr/src/linux-6.6/include"
↓ propagates to
unit (zlib) → builds with arm64 flags
unit (openssl) → builds with arm64 flags
unit (openssh) → builds with arm64 flags + sees kernel headers
Units can also declare public_config settings that propagate to their
dependents. For example, a zlib unit might export its include path so that
openssh (which depends on zlib) automatically gets -I/usr/include without
the unit author specifying it.
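A sketch of that zlib example under the planned design (not implemented today; the value shape of `public_config` is an assumption):

# units/zlib.star — planned public_config sketch (not implemented yet)
unit(
    name = "zlib",
    version = "1.3.1",
    public_config = {
        "CFLAGS": "-I/usr/include",  # propagated to dependents such as openssh
        "LDFLAGS": "-lz",
    },
)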
This is resolved during the graph resolution phase (phase 1) so the full
resolved config for every unit is known before any build starts. Use
yoe desc <unit> --config to inspect the resolved configuration.
Design note: unit-level, not task-level dependencies. Unlike BitBake, which
models dependencies between individual tasks across units (e.g.,
B:do_configure depends on A:do_install), yoe treats each unit as an atomic
unit — unit A depends on unit B means B must be fully built before A starts.
This is a deliberate simplicity trade-off. BitBake’s task-level graph enables
fine-grained parallelism (start fetching C while B is still compiling) and
per-task caching (sstate), but it is also the primary source of Yocto’s
debugging complexity. Unit-level dependencies are easier to reason about, and
the parallelism loss is minor since independent units still build concurrently
across the DAG. Per-unit caching via content-addressed .apk hashes provides
sufficient granularity for fast incremental rebuilds.
Caching Strategy
Builds are cached at multiple levels:
- Source cache — downloaded tarballs and git clones in `$YOE_CACHE/sources/`. Keyed by URL + hash.
- Build cache — content-addressed by hashing the unit, source, and all build dependency `.apk` hashes. If the combined hash matches, the build is skipped and the cached `.apk` is used.
- Package repository — built `.apk` files in the local repo. Once published, packages are available for image assembly and on-device updates.
- Remote cache (planned — optional) — push/pull packages to an S3-compatible store so CI and team members share build results. Not yet implemented: there is no remote cache backend, no S3 integration, and no cache signing today. See the Caching Architecture section for the planned S3 configuration, cache signing, and the multi-level fallback chain.
Cache invalidation is hash-based, not timestamp-based. Changing a unit, updating
a source, or rebuilding a dependency all produce a new hash and trigger a
rebuild. Use yoe build --dry-run to see what would be rebuilt and why, or
yoe cache stats to review hit/miss rates from the last build.
Example Workflow
# Start a new project
yoe init my-product --machine beaglebone-black
# Add a unit for your application
$EDITOR units/myapp.star
# Build everything (packages and images)
yoe build --all
# Flash to an SD card
yoe flash base-image /dev/sdX
# Later, update just your app and rebuild the image
$EDITOR units/myapp.star # bump version
yoe build myapp
yoe build base-image # only myapp's .apk changed, fast rebuild
# Or update the device directly
scp repo/myapp-1.3.0-r0.apk device:/tmp/
ssh device apk add /tmp/myapp-1.3.0-r0.apk
AI-First Tooling for [yoe]
[yoe] is designed as an AI-first build system. While every operation has a
CLI equivalent, the primary interface for many workflows is a conversation with
an AI assistant that understands the build system deeply. This document defines
the skills (AI-driven workflows) that ship with [yoe].
Why AI-First
Embedded Linux development has a steep learning curve — not because the concepts are hard, but because there are many concepts and they interact in non-obvious ways. An AI assistant that understands units, dependencies, machine definitions, build isolation, and packaging can:
- Lower the barrier to entry. A developer can describe what they want in natural language and get working units, machine definitions, and image configurations.
- Reduce debugging time. Build failures in embedded systems often involve subtle interactions between toolchain flags, dependency ordering, and cross-module overrides. An AI that can read the full dependency graph and build logs can diagnose issues faster than manual investigation.
- Automate routine maintenance. Version bumps, security patches, license audits, and dependency updates are tedious but critical. AI skills can automate these with human review.
- Make the build system self-documenting. Instead of reading docs, ask the assistant “how does openssh get into my image?” and get a traced answer through the actual dependency graph.
Skill Categories
Unit Development
/new-unit
Create a new unit from a description or upstream URL. The AI determines the
build system (autotools, cmake, meson, etc.), fetches the source to inspect it,
identifies dependencies, and generates a complete .star file.
/new-unit https://github.com/example/myapp
/new-unit "I need an MQTT broker for IoT devices"
/new-unit "add libcurl with HTTP/2 support"
/update-unit <name>
Bump a unit to the latest upstream version. Checks for new releases, updates the version and sha256, runs a test build, and reports any patch conflicts or dependency changes.
/update-unit openssl
/update-unit --all --dry-run
/audit-unit <name>
Review a unit for common issues: missing runtime dependencies, incorrect license, unnecessary build dependencies, suboptimal configure flags, missing sub-package splits.
/audit-unit openssh
Image & Machine Configuration
/new-machine
Generate a machine definition from a board name or SoC. Looks up kernel defconfig, device trees, bootloader configuration, and QEMU settings (if applicable).
/new-machine beagleplay
/new-machine "Raspberry Pi 5"
/new-machine "custom board with i.MX8M Plus"
/new-image
Design an image unit interactively. Asks about the use case (gateway, HMI,
headless sensor, development), suggests appropriate packages, configures
services, and generates the .star file.
/new-image "industrial gateway with MQTT and OPC-UA"
/new-image "minimal headless sensor node"
/image-size
Analyze an image unit and estimate the installed size. Break down by package, identify the largest contributors, and suggest ways to reduce size (remove debug packages, switch to smaller alternatives, strip unnecessary features).
/image-size base-image
/image-size dev-image --compare base-image
Dependency Analysis
/why <package>
Trace why a package is included in an image. Shows the full dependency chain from image unit to the specific package, including which packages pull it in as a runtime dependency.
/why libssl
/why dbus --image dev-image
/what-if
Simulate the impact of a change without building. “What if I remove networkmanager from the image?” “What if I update glibc to 2.40?”
/what-if remove networkmanager from base-image
/what-if update glibc to 2.40
/what-if add python3 to dev-image
Build Debugging
/diagnose
Analyze a build failure. Reads the build log, identifies the root cause (missing dependency, configure flag issue, patch conflict, toolchain mismatch), and suggests a fix.
/diagnose openssh
/diagnose # diagnose the most recent failure
/build-log <unit>
Summarize a build log — highlight warnings, errors, and anything unusual. Filter out noise (compiler progress, make output) and surface what matters.
/build-log linux
/build-log openssl --warnings-only
Security & Maintenance
/cve-check
Scan units against known CVEs. Reports which packages have outstanding vulnerabilities, their severity, and whether newer upstream versions fix them.
/cve-check
/cve-check openssl
/cve-check --image base-image
/license-audit
Audit all packages in an image for license compliance. Flag incompatible license combinations, missing license declarations, and packages that need special handling (GPL with linking exceptions, etc.).
/license-audit base-image
/license-audit --format spdx
/security-review
Review an image configuration for security issues: services running as root, unnecessary packages, missing hardening flags (ASLR, stack protector, fortify), world-readable sensitive files, default passwords.
/security-review base-image
Module Management
/new-module
Scaffold a new module with MODULE.star, directory structure, and example units.
/new-module vendor-bsp "BSP module for our custom board"
/new-module product "Product-specific units and images"
/module-diff
Compare two versions of a module. Show what units changed, what versions bumped, what new units were added, and what was removed.
/module-diff @units-core v1.0.0 v1.1.0
Development Environment
[yoe] does not ship a separate SDK — yoe itself is the dev environment. See
Development Environments for the full model.
/dev-setup
Guide a developer through getting yoe + Docker installed and their editor
configured for Starlark (syntax highlighting, language server, formatters).
Verify the toolchain works by building a small unit end to end.
/dev-setup
/dev-setup --for rust # also install Rust-native tooling on the workstation
/devshell <unit>
Wrapper over yoe shell — drops into the unit’s build sandbox with the same env
vars, container, and mounted sysroot that yoe build uses. Useful for debugging
configure issues, probing deps, or testing build commands manually.
/devshell openssh
/devshell linux --machine beaglebone-black
Documentation & Learning
/explain <concept>
Explain a [yoe] concept in context. Not just documentation — the AI reads the
project’s actual configuration and explains how the concept applies to this
specific project.
/explain "how does caching work for my project"
/explain "what happens when I run yoe build base-image"
/explain "how do modules compose in my project"
/diff-from-yocto
For developers coming from Yocto, explain how a Yocto concept maps to [yoe].
References the actual Yocto documentation and provides side-by-side comparisons.
/diff-from-yocto bbappend
/diff-from-yocto "MACHINE_FEATURES"
/diff-from-yocto sstate-cache
Implementation Notes
Skills are implemented as Claude Code plugins that ship with the yoe tool.
Each skill:
- Has access to the full project state via yoe desc, yoe refs, yoe graph, and direct Starlark file reading
- Can invoke yoe CLI commands to gather information (build logs, dependency graphs, cache status)
- Can create and modify .star files with the user's approval
- Runs in the context of the current project directory
Skills that modify files (like /new-unit or /update-unit) always show the
proposed changes and ask for confirmation before writing. Skills that only read
and analyze (like /why or /diagnose) run without confirmation.
Unit & Configuration Format
[yoe] uses Starlark — a
deterministic, sandboxed dialect of Python — for all build definitions. Units,
classes, machine definitions, and project configuration are all .star files.
See Build Languages for the rationale behind this choice.
Units vs. Packages
These are distinct concepts in [yoe]:
- Units — .star files in the project tree that describe how to build software. They live in version control and are a development/CI concern.
- Packages — .apk files that units produce. They are installable artifacts published to a repository and consumed by apk during image assembly or on-device updates.
The build flow is: unit → build → .apk package(s) → repository → image / device.
Units are inputs to the build system. Packages are outputs. A developer edits units; a device only ever sees packages.
Sub-packages (planned)
Status: Today [yoe] produces exactly one .apk per unit — internal/artifact/apk.go packages $DESTDIR into a single archive, and the Starlark subpackages = field is not yet parsed. This section describes the intended future model so units and classes can be written with it in mind.
A single unit will be able to produce a small number of .apk packages from one
source build. The goal is targeted — keep runtime images lean — not exhaustive
like Yocto’s auto-split of every recipe into 7+ packages.
The only two splits [yoe] plans to support as subpackages:
| Sub-package | Contents | Why it’s a subpackage |
|---|---|---|
| <name> | Binaries, runtime libs, default conf | The default artifact |
| -dev | Headers, .a, .pc, CMake configs | Never wanted at runtime on a constrained device; needed on build hosts |
| -dbg | Detached DWARF debug info | Installable after a field incident; should not occupy flash on the device |
What is deliberately not a subpackage:
- Docs, man pages, info pages, locale data, examples. Classes strip these from $DESTDIR by default (e.g., autotools removes /usr/share/{doc,man,info,locale,gtk-doc,bash-completion} and /usr/share/*/examples). A unit that genuinely needs man pages on the device can opt out of the strip; most don't.
- -src, -staticdev, -locale-*, -bin/-common style splits. Yocto produces these automatically; [yoe] does not. The cognitive cost (which-of-seven-packages-holds-this-file) and per-unit metadata surface isn't worth it for yoe's target audience.
- Library SONAME splits (libfoo0 separate from foo). Debian splits these to allow multiple ABI versions to coexist; [yoe] is rolling and ships one ABI at a time, so the split is unnecessary.
Rationale. Yocto’s auto-split-everything model exists because recipe authors
cannot be trusted to strip docs/locale/staticdev consistently, so the build
system does it mechanically. That logic doesn’t apply to [yoe]: the class
library is small, AI-written units follow the class, and the image is already
targeting single-digit MB. A rm -rf $DESTDIR/usr/share/{doc,man,…} in the
class does what Yocto’s -doc subpackage does, with one package instead of two.
Planned unit surface:
load("//classes/autotools.star", "autotools")
autotools(
name = "openssl",
version = "3.2.1",
source = "https://www.openssl.org/source/openssl-3.2.1.tar.gz",
deps = ["zlib"],
# Opt in to the two subpackages that matter on constrained devices.
subpackages = ["dev", "dbg"],
)
With no subpackages field, the unit produces a single .apk containing
everything in $DESTDIR after the class’s default strip. That is the expected
case for most units.
Planned split rules:
- -dev claims /usr/include/**, /usr/lib/*.a, /usr/lib/pkgconfig/**, /usr/lib/cmake/**, /usr/share/aclocal/**, /usr/share/pkgconfig/**, and /usr/bin/*-config (e.g., xml2-config).
- -dbg claims /usr/lib/debug/** (produced by running objcopy --only-keep-debug / strip --only-keep-debug on ELF binaries in $DESTDIR before packaging).
- Everything else stays in the main package.
For custom splits (e.g., separating openssh-server from openssh-client
because an image ships one but not both), the plan is to allow explicit file
lists:
autotools(
name = "openssh",
subpackages = ["dev", "dbg"],
extra_subpackages = {
"server": files(
"/usr/sbin/sshd",
"/etc/ssh/sshd_config",
),
"client": files(
"/usr/bin/ssh",
"/usr/bin/scp",
"/usr/bin/sftp",
),
},
)
This path is lower priority; most services can be shipped as one package and enabled/disabled by the image.
In image units (planned consumption):
image(
name = "production-image",
artifacts = [
"openssh",
"networkmanager",
],
)
image(
name = "dev-image",
artifacts = [
"openssh",
"openssh-dev", # headers for on-device development
"gdb",
],
)
Alpine’s apk already supports subpackages natively (Alpine’s openssl APKBUILD
produces openssl, openssl-dev, openssl-dbg, etc.), so the plumbing in apk
is already proven — what [yoe] needs to build is the Starlark surface, the
split engine, and the default strip logic in the shared classes.
Dependency resolution at image time
There are two places dependency information lives in [yoe], and they serve
different phases:
- Unit metadata (deps, runtime_deps in .star files) — drives the build graph. Tells the build executor what order to compile things in and what goes into each unit's sysroot.
- Package metadata (.PKGINFO inside each .apk; aggregated into an APKINDEX) — drives the install graph. Tells apk what to pull in when a package is added to a rootfs.
The unit author writes runtime_deps = [...] once; the build emits those into
.PKGINFO as depend = lines. From that point the package metadata is
authoritative for installation: image assembly invokes
apk add --root <rootfs> -X <local-repo> inside the build container, and
apk-tools resolves the install graph from APKINDEX. The Starlark-side
_resolve_runtime_deps is still used to flatten the artifact list for the build
DAG (so all required apks get built first), but apk-tools owns install-time
ordering, file-conflict detection, and /lib/apk/db/installed population.
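As a hedged sketch (the package name, version, and URL are illustrative), the same dependency names appear once in the unit and then flow into the package metadata:

autotools(
    name = "curl",
    version = "8.5.0",
    source = "https://curl.se/download/curl-8.5.0.tar.gz",
    deps = ["openssl", "zlib"],          # build graph: sysroot contents and compile order
    runtime_deps = ["openssl", "zlib"],  # install graph: emitted as depend = lines in curl's .PKGINFO
)

During image assembly, apk reads those depend = lines from the APKINDEX and pulls openssl and zlib into the rootfs; the Starlark side only ensures their .apk files are built first.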
Why this is the right split:
- Subpackages. When openssl splits into openssl and openssl-dev, the unit graph no longer has a node named openssl-dev. The dep openssl-dev → openssl = ${version} lives only in the generated PKGINFO. A unit-graph walker cannot see it; apk's resolver can.
- provides:/replaces:/conflicts:. apk's metadata supports virtual packages and alternatives (e.g., two SSH implementations both provides = ssh, one replaces the other). A Starlark-only walker would have to re-implement apk's resolver to honor these.
- External repositories compose cleanly. A project that pulls packages from an Alpine aports mirror or a vendor BSP repo has no Starlark unit to walk — only APKINDEX metadata. apk treats yoe-built packages and external-repo packages identically.
- Single source of truth on the device. What the image builder sees is what the on-device apk upgrade sees: same metadata, same resolver.
Why Starlark
- One language — units, classes, machines, and project config are all .star files. No TOML + shell + something-else stack.
- Python-like syntax — most developers can read it immediately.
- Deterministic — no side effects, no mutable global state. Critical for content-addressed caching.
- Sandboxed — units cannot perform arbitrary I/O or network access.
- Go-native — the go.starlark.net library embeds directly in the yoe binary.
- Composable — functions, load(), and **kwargs provide natural composition for modules and overrides.
- Battle-tested — used by Bazel (Google), Buck2 (Meta), and Pants.
Unit Types
Machine Definition (machines/<name>.star)
Describes a target board or platform.
machine(
name = "beaglebone-black",
arch = "arm64",
description = "BeagleBone Black (AM3358)",
kernel = kernel(
repo = "https://github.com/beagleboard/linux.git",
branch = "6.6",
defconfig = "bb.org_defconfig",
device_trees = ["am335x-boneblack.dtb"],
),
bootloader = uboot(
repo = "https://github.com/beagleboard/u-boot.git",
branch = "v2024.01",
defconfig = "am335x_evm_defconfig",
),
)
QEMU machines include emulation configuration:
machine(
name = "qemu-x86_64",
arch = "x86_64",
kernel = kernel(
unit = "linux-qemu",
cmdline = "console=ttyS0 root=/dev/vda2 rw",
),
qemu = qemu_config(
machine = "q35",
cpu = "host",
memory = "1G",
firmware = "ovmf",
display = "none",
),
)
Image Unit (units/<name>.star)
An image is a unit that assembles a root filesystem from packages and produces a
disk image. Image units use the image() class function instead of unit().
They participate in the same DAG, use the same caching, and are built with
yoe build.
load("//classes/image.star", "image")
image(
name = "base-image",
version = "1.0.0",
description = "Minimal bootable system",
# Packages installed into the rootfs.
# The base system (C library + busybox + init) is implicit unless excluded.
artifacts = [
"openssh",
"networkmanager",
"myapp",
"monitoring-agent",
],
hostname = "yoe",
timezone = "UTC",
locale = "en_US.UTF-8",
services = ["sshd", "NetworkManager", "myapp"],
partitions = [
partition(label="boot", type="vfat", size="64M",
contents=["MLO", "u-boot.img", "zImage", "*.dtb"]),
partition(label="rootfs", type="ext4", size="fill", root=True),
],
)
Image Composition and Variants
Image variants use plain Starlark variables and list concatenation — no special inheritance mechanism:
load("//classes/image.star", "image")
BASE_PACKAGES = [
"openssh",
"networkmanager",
"myapp",
"monitoring-agent",
]
BASE_SERVICES = ["sshd", "NetworkManager", "myapp"]
BBB_PARTITIONS = [
partition(label="boot", type="vfat", size="64M",
contents=["MLO", "u-boot.img", "zImage", "*.dtb"]),
partition(label="rootfs", type="ext4", size="fill", root=True),
]
image(
name = "base-image",
version = "1.0.0",
packages = BASE_PACKAGES,
services = BASE_SERVICES,
partitions = BBB_PARTITIONS,
hostname = "yoe",
)
image(
name = "dev-image",
version = "1.0.0",
description = "Development image with debug tools",
packages = BASE_PACKAGES + ["gdb", "strace", "tcpdump", "vim"],
exclude = ["monitoring-agent"],
services = BASE_SERVICES,
partitions = BBB_PARTITIONS,
hostname = "yoe-dev",
)
Conditional packages per machine:
artifacts = ["openssh", "myapp"]
if machine.arch == "arm64":
    artifacts += ["arm64-firmware"]
Package Unit (units/<name>.star)
Describes how to build a system-level package (C/C++ libraries, system daemons,
etc.) and produce an .apk. Uses a class function like autotools(),
cmake(), or the generic unit().
load("//classes/autotools.star", "autotools")
autotools(
name = "openssh",
version = "9.6p1",
description = "OpenSSH client and server",
license = "BSD",
source = "https://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-9.6p1.tar.gz",
sha256 = "...",
configure_args = ["--sysconfdir=/etc/ssh"],
deps = ["zlib", "openssl"],
runtime_deps = ["zlib", "openssl"],
services = ["sshd"],
conffiles = ["/etc/ssh/sshd_config"],
)
Or using the generic unit() for custom build steps:
unit(
name = "openssh",
version = "9.6p1",
source = "https://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-9.6p1.tar.gz",
sha256 = "...",
deps = ["zlib", "openssl"],
runtime_deps = ["zlib", "openssl"],
build = [
"./configure --prefix=$PREFIX --sysconfdir=/etc/ssh",
"make -j$NPROC",
"make DESTDIR=$DESTDIR install",
],
services = ["sshd"],
conffiles = ["/etc/ssh/sshd_config"],
)
Patches
Units can apply patches to upstream source after fetching and before building.
Patches are listed in order and applied with git apply or patch -p1:
unit(
name = "busybox",
version = "1.36.1",
source = "https://busybox.net/downloads/busybox-1.36.1.tar.bz2",
patches = [
"patches/busybox/fix-ash-segfault.patch",
"patches/busybox/add-custom-applet.patch",
],
build = ["make -j$NPROC", "make DESTDIR=$DESTDIR install"],
)
Patch file paths are relative to the project root. Patch contents are included in the unit’s cache hash — changing a patch triggers a rebuild.
Module overrides for patches work through the standard function composition pattern:
# upstream: @units-core/busybox.star
def busybox(extra_patches=[], **overrides):
unit(
name = "busybox",
version = "1.36.1",
source = "https://busybox.net/downloads/busybox-1.36.1.tar.bz2",
patches = [
"patches/busybox/fix-ash-segfault.patch",
] + extra_patches,
build = ["make -j$NPROC", "make DESTDIR=$DESTDIR install"],
**overrides,
)
# vendor module: adds a patch without modifying upstream
load("@units-core//busybox.star", "busybox")
busybox(extra_patches=["patches/vendor-busybox-audit.patch"])
Alternatives to patches:
- Git-based sources — fork the repo, apply changes as commits, point the unit at your branch/tag. Cleaner history, easier to rebase on upstream updates (see the sketch after this list).
- Overlay files — for config file changes on the target, the overlays/ directory is simpler than patching source.
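A minimal sketch of the git-source alternative; the fork URL and tag are hypothetical:

unit(
    name = "busybox",
    version = "1.36.1",
    # Forked repo carrying the fixes as commits; the tag pins the exact tree,
    # so there is no SHA256 to maintain and rebasing on upstream stays simple.
    source = "https://github.com/example/busybox.git",
    tag = "v1.36.1-vendor1",
    build = ["make -j$NPROC", "make DESTDIR=$DESTDIR install"],
)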
Tasks and Per-Task Containers (planned)
Status: task() and unit-level container = are shipped — every built-in class in modules/units-core/classes/ (autotools, cmake, go, container, image) already generates tasks = [task(...)], and the build executor (internal/build/executor.go) runs each task's steps inside the unit's resolved container. The per-task container= override described below is planned: the task struct in Starlark accepts the field, but the executor currently ignores it and uses the unit-level container for every task in the unit. Wire-through is the remaining work.
Units can define named build tasks via task(), each with an optional Docker
container. This replaces the flat build = [...] string list with structured
steps that can each run in different environments.
Container resolution order: task container → package container → bwrap
(default).
# Simple — build list works as before (bwrap, no containers)
autotools(name = "zlib", source = "...", ...)
# Package-level container — all tasks inherit it
go_binary(
name = "myapp",
container = "golang:1.22-alpine",
tasks = [
task("build", run="go build -o $DESTDIR/usr/bin/myapp"),
task("test", run="go test ./..."),
],
)
# Task-level override — codegen uses a different container
unit(
name = "complex-app",
container = "golang:1.22-alpine", # default for all tasks
tasks = [
task("codegen",
container="protoc:latest", # overrides package default
run="protoc --go_out=. api/*.proto"),
task("compile",
run="go build -o $DESTDIR/usr/bin/app"), # inherits golang
task("install",
run="install -D app.service $DESTDIR/usr/lib/systemd/system/"),
],
)
# Mix of container and bwrap in one unit
unit(
name = "hybrid-tool",
tasks = [
task("generate",
container="codegen-tools:latest",
run="generate-code --out src/"),
task("compile", run="make -j$NPROC"), # no container → bwrap
task("install", run="make DESTDIR=$DESTDIR install"),
],
)
The build = [...] field remains for backward compatibility — internally
converted to unnamed tasks without containers. Classes generate tasks:
# classes/autotools.star generates three tasks
def autotools(name, version, source, configure_args=[], **kwargs):
unit(
name=name, version=version, source=source,
tasks = [
task("configure",
run="test -f configure || autoreconf -fi && "
"./configure --prefix=$PREFIX " + " ".join(configure_args)),
task("compile", run="make -j$NPROC"),
task("install", run="make DESTDIR=$DESTDIR install"),
],
**kwargs,
)
Extending a class’s tasks — when a unit passes tasks=[...] to a class
(autotools, cmake, go_binary), the class merges the overrides into its
default task list rather than replacing them entirely. Merge rules:
- Same name → replace at the existing position (the override's steps fully replace the base's; merging steps is not supported).
- New name → append to the end.
- task("name", remove=True) → drop that task from the base list.
# Adds an init-script task without restating the class's default build task.
go_binary(
name = "simpleiot",
...
tasks = [
task("init-script", steps = [
"mkdir -p $DESTDIR/etc/init.d",
install_file("simpleiot.init",
"$DESTDIR/etc/init.d/simpleiot", mode = 0o755),
]),
],
)
Merging is implemented by merge_tasks(base, overrides) in
modules/units-core/classes/tasks.star. Custom classes that want the same
behavior should load("//classes/tasks.star", "merge_tasks") and call it before
passing tasks to unit().
See per-unit containers plan for the full design.
Application Unit (units/<name>.star)
Applications built with language-native build systems use language-specific class functions that delegate to the language toolchain.
load("//classes/go.star", "go_binary")
go_binary(
name = "myapp",
version = "1.2.3",
description = "Edge data collection service",
license = "Apache-2.0",
source = "https://github.com/example/myapp.git",
tag = "v1.2.3",
package = "./cmd/myapp",
services = ["myapp"],
conffiles = ["/etc/myapp/config.toml"],
environment = {"DATA_DIR": "/var/lib/myapp"},
)
Language-specific classes handle the build details — go_binary() sets up
GOMODCACHE, runs go build, and packages the result.
Status: Only go_binary() (in modules/units-core/classes/go.star) is implemented today. Similar classes for Rust (rust_binary()), Zig (zig_binary()), Python (python_unit()), and Node.js (node_unit()) are planned but not yet shipped. Applications in those languages can still be built by using unit() directly with explicit build steps.
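For example, a hedged sketch of a Rust service built with unit() today; the unit name, repository, and container tag are illustrative:

unit(
    name = "edge-agent",
    version = "0.3.0",
    description = "Rust data-forwarding service",
    source = "https://github.com/example/edge-agent.git",
    tag = "v0.3.0",
    container = "rust:1.77-alpine",  # bring the Cargo toolchain via a container
    build = [
        "cargo build --release --locked",
        "install -D target/release/edge-agent $DESTDIR/usr/bin/edge-agent",
    ],
)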
Project Configuration (PROJECT.star)
Top-level configuration that ties everything together.
project(
name = "yoe",
version = "0.1.0",
description = "`[yoe]` embedded Linux distribution",
defaults = defaults(
machine = "qemu-arm64",
image = "base-image",
),
cache = cache(
path = "/var/cache/yoe-ng/build",
remote = [
s3_cache(
name = "team",
bucket = "yoe-cache",
endpoint = "https://minio.internal:9000",
region = "us-east-1",
),
],
retention_days = 90,
signing = "keys/cache.pub",
),
sources = sources(
go_proxy = "https://proxy.golang.org",
),
modules = [
# Module in a subdirectory of a repo — path specifies where MODULE.star is
module("https://github.com/YoeDistro/yoe-ng.git",
ref = "main",
path = "modules/units-core"),
# Module at the root of its own repo
module("git@github.com:vendor/bsp-units.git", ref = "main"),
],
)
Classes
Classes are Starlark functions that define build pipelines for different unit types. They encapsulate the how-to-build logic so that units only declare what to build.
Built-in Classes
These ship with the units-core module (at modules/units-core/classes/) or are planned on the roadmap:
| Class | Status | Description |
|---|---|---|
| unit() | shipped | Generic package — custom build steps as shell |
| autotools() | shipped | configure / make / make install |
| cmake() | shipped | CMake build |
| go_binary() | shipped | Go application |
| container() | shipped | Build a Docker/OCI container image |
| image() | shipped | Root filesystem image assembly |
| meson() | planned | Meson + Ninja build |
| rust_binary() | planned | Rust application (Cargo) |
| zig_binary() | planned | Zig application |
| python_unit() | planned | Python package (pip/uv) |
| node_unit() | planned | Node.js package (npm/pnpm) |
Class Composition
Classes compose through function calls. A unit can use multiple classes, and classes can wrap other classes:
load("//classes/autotools.star", "autotools")
load("//classes/systemd.star", "systemd_service")
# Use both autotools and systemd classes
autotools(
name = "openssh",
version = "9.6p1",
configure_args = ["--sysconfdir=/etc/ssh"],
deps = ["zlib", "openssl"],
)
systemd_service(
name = "openssh",
unit = "sshd.service",
conffiles = ["/etc/ssh/sshd_config"],
)
Or create a combined class:
# classes/systemd_autotools.star
load("//classes/autotools.star", "autotools")
load("//classes/systemd.star", "systemd_service")
def systemd_autotools(name, unit, conffiles=[], **kwargs):
autotools(name=name, **kwargs)
systemd_service(name=name, unit=unit, conffiles=conffiles)
Custom Classes
Projects can define their own classes in classes/ for patterns specific to
their codebase:
# classes/my_go_service.star
load("//classes/go.star", "go_binary")
load("//classes/systemd.star", "systemd_service")
def my_go_service(name, version, source, **kwargs):
"""Standard pattern for our Go microservices."""
go_binary(
name = name,
version = version,
source = source,
**kwargs,
)
systemd_service(
name = name,
unit = name + ".service",
conffiles = ["/etc/" + name + "/config.toml"],
)
Extensibility: Starlark and Go
Starlark is not a standalone language — it runs embedded inside the yoe Go
binary. Every built-in function (unit(), machine(), image(), etc.) is a Go
function registered into the Starlark environment. When Starlark code calls
unit(name="openssh", ...), it executes Go code that has full access to the
host runtime.
This means the system is extensible in two directions:
Go to Starlark (primitives): The yoe binary provides built-in functions
that Starlark code can call. These have capabilities Starlark alone cannot —
filesystem I/O, network access, executing system tools (apk, bwrap, git),
managing the build engine state. Adding a new built-in is a Go function with the
right signature:
// In Go: register a new built-in function
func (e *Engine) fnDeploy(thread *starlark.Thread, fn *starlark.Builtin,
args starlark.Tuple, kwargs []starlark.Tuple) (starlark.Value, error) {
target := kwString(kwargs, "target")
// Full access to Go runtime — HTTP, filesystem, exec, etc.
return starlark.None, nil
}
// Register it in builtins():
"deploy": starlark.NewBuiltin("deploy", e.fnDeploy),
Now any .star file can call deploy(target="production").
Starlark to Starlark (composition): Users define functions in .star files
that compose the Go-provided primitives. Classes, macros, and helpers are just
Starlark functions that call built-in functions:
# classes/my_service.star — user-defined class wrapping Go builtins
def my_service(name, version, **kwargs):
go_binary(name=name, version=version, **kwargs) # calls Go
systemd_service(name=name, unit=name + ".service") # calls Go
The architecture mirrors Bazel: Go provides the primitives (package
creation, image assembly, sandbox execution, cache management), Starlark
provides the composition layer (classes, conditionals, module overrides,
shared variables). Starlark code cannot perform arbitrary I/O — it can only call
the Go functions that yoe explicitly exposes, maintaining the sandboxed,
deterministic evaluation model.
Directory Structure
A typical [yoe] project layout:
my-project/
├── PROJECT.star
├── machines/
│ ├── beaglebone-black.star
│ ├── raspberrypi4.star
│ └── qemu-arm64.star
├── units/
│ ├── base-image.star # image() class
│ ├── dev-image.star # image() class, extends base
│ ├── openssh.star # autotools() class
│ ├── zlib.star
│ ├── openssl.star
│ ├── myapp.star # go_binary() class
│ └── monitoring-agent.star
├── classes/ # reusable build rule functions
├── commands/ # custom yoe subcommands
│ ├── my_go_service.star
│ └── ...
└── overlays/
└── custom-configs/ # files copied directly into rootfs
└── etc/
└── myapp/
└── config.toml
Build Flow
units/*.star (all unit types: package and image)
│
▼
yoe build (evaluate Starlark, resolve DAG, build)
│
├─ unit() ──▶ compile source ──▶ *.apk artifacts ──▶ repository/
│
└─ image() ──▶ apk install deps into rootfs
──▶ apply overlays + config
──▶ partition + format
──▶ disk image (.img / .wic)
Modules
Modules are external Git repositories that provide units, classes, and machine definitions. They are the primary mechanism for reusing and sharing build definitions across projects — BSP vendors ship modules, and product teams compose them.
Declaring Modules in PROJECT.star
project(
name = "my-product",
version = "1.0.0",
modules = [
# Module in a subdirectory of a repo
module("https://github.com/YoeDistro/yoe-ng.git",
ref = "main",
path = "modules/units-core"),
# Module at the root of its own repo
module("git@github.com:vendor/bsp-imx8.git", ref = "v2.1.0"),
],
)
Each module() call declares a Git repository URL and a ref (tag, branch, or
commit SHA). The optional path field specifies a subdirectory within the repo
where MODULE.star lives — this allows a single repo to contain multiple
modules or a module to be part of a larger project. The yoe tool fetches and
caches these repositories, making them available as @module-name in load()
statements. The module name is derived from the last component of path (if
set) or the URL.
Module Manifests (MODULE.star) (planned)
Status: The module_info() Starlark builtin is wired up in internal/starlark/builtins.go and the ModuleInfo struct is populated when a MODULE.star is evaluated, but the module resolver in internal/module/ never reads those declared deps. Transitive module resolution — both the v1 "error on missing" and v2 "auto-fetch" behaviors below — is planned. Today only the top-level modules = [...] list in PROJECT.star is fetched.
Modules can declare their own dependencies via a MODULE.star file in the
repository root. This enables BSP vendors to ship self-contained modules without
requiring users to manually discover transitive dependencies.
# In github.com/vendor/bsp-imx8/MODULE.star
module_info(
name = "vendor-bsp-imx8",
description = "i.MX8 BSP units and machine definitions",
deps = [
module("github.com/vendor/hal-common", ref = "v1.3.0"),
module("github.com/vendor/firmware-imx", ref = "v5.4"),
],
)
Dependency Resolution Rules
Module dependencies follow the Go modules model — the root project has final authority over versions:
- PROJECT.star always wins. If PROJECT.star and a MODULE.star both reference the same repository, the version in PROJECT.star takes precedence. This gives the project owner full control over the dependency tree.
- Transitive deps are checked, not silently fetched (v1). In the initial implementation, yoe reads each module's MODULE.star and errors if a required dependency is missing from PROJECT.star, rather than silently fetching it. The error message tells the user exactly what to add. This is explicit and debuggable.
- Automatic transitive resolution (v2). In a future version, transitive dependencies declared in MODULE.star are fetched automatically when not overridden by PROJECT.star. yoe module list shows the full resolved tree so nothing is hidden.
- Diamond dependencies resolve to the highest version. If two modules depend on different versions of the same repository, yoe selects the higher version (semver comparison) unless PROJECT.star pins a specific version.
Example — v1 behavior (missing transitive dep):
$ yoe build --all
Error: module "vendor-bsp-imx8" requires "github.com/vendor/hal-common" (ref v1.3.0)
but it is not declared in PROJECT.star.
Add this to your PROJECT.star modules list:
module("github.com/vendor/hal-common", ref = "v1.3.0"),
Example — PROJECT.star overriding a transitive version:
# PROJECT.star
modules = [
module("github.com/yoe/units-core", ref = "v1.0.0"),
module("github.com/vendor/bsp-imx8", ref = "v2.1.0"),
# Override the version that bsp-imx8 requests (v1.3.0 → v1.4.0)
module("github.com/vendor/hal-common", ref = "v1.4.0"),
]
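And a hedged sketch of the diamond rule; the second module and the versions are illustrative. Neither entry pins hal-common, so the higher requested version wins once v2 resolution ships:

# PROJECT.star
modules = [
    module("github.com/vendor/bsp-imx8", ref = "v2.1.0"),       # its MODULE.star requests hal-common v1.3.0
    module("github.com/vendor/display-stack", ref = "v0.9.0"),  # its MODULE.star requests hal-common v1.4.0
]
# hal-common resolves to v1.4.0 (higher semver); adding an explicit
# module("github.com/vendor/hal-common", ref = ...) entry would pin it instead.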
Local Module Overrides
During development, you often want to work on a module locally instead of
fetching from Git. The local parameter overrides the remote URL:
modules = [
# Local override — point at a checkout on disk instead of fetching
module("https://github.com/YoeDistro/yoe-ng.git",
local = "../yoe-ng",
path = "modules/units-core"),
# Local override for a standalone module
module("git@github.com:vendor/bsp-imx8.git", local = "../bsp-imx8"),
]
When local is set, yoe uses the local directory directly (no fetch, no ref
checking). If path is also set, it is appended to the local path. This is
equivalent to Go’s replace directive in go.mod.
Label-Based References
Inspired by Bazel’s label system and GN’s //path/to:target, [yoe] uses a
label scheme for referencing units and classes across repositories:
# Local references (within the current project)
load("//classes/autotools.star", "autotools") # from project root
load("//units/openssh.star", "openssh_config") # load shared config
# External references (from modules)
load("@units-core//openssh.star", "openssh")
load("@vendor-bsp//kernel.star", "vendor_kernel")
Module names (@units-core, @vendor-bsp) map to the modules declared in
PROJECT.star. When yoe evaluates units, it fetches and caches external
modules, then resolves all load() references to concrete files.
Module Composition
Modules enable the vendor BSP / product overlay pattern without modifying upstream units:
# Module 1: @units-core/openssh.star — base unit as a function
def openssh(extra_deps=[], extra_configure_args=[], **overrides):
autotools(
name = "openssh",
version = "9.6p1",
deps = ["zlib", "openssl"] + extra_deps,
configure_args = ["--sysconfdir=/etc/ssh"] + extra_configure_args,
**overrides,
)
# Module 2: @vendor-bsp/openssh.star — vendor extends it
load("@units-core//openssh.star", "openssh")
openssh(extra_deps=["vendor-crypto"])
# Module 3: product unit — further customization
load("@vendor-bsp//openssh.star", "openssh")
openssh(extra_configure_args=["--with-pam"])
Each module is explicit about what it modifies and where the base comes from. This is more traceable than Yocto’s bbappend system — you can grep for the function call to find all modifications.
Design Notes
- Starlark over TOML/YAML — pure data formats accumulate escape hatches (conditional deps, shell in strings, inheritance). Starlark makes the implicit explicit while remaining readable for simple cases. See Build Languages for the full analysis.
- Prefer git sources over tarballs — git sources give you upstream history, clean git rebase for patch updates, natural yoe dev workflow (edit, commit, extract patches), and no SHA256 to maintain. Use source = "https://...git" with a tag to pin the version.
- One file per unit — each unit is its own .star file. This keeps diffs clean and makes it easy to add/remove components.
- Units and packages are separate concerns — units are version-controlled build instructions; packages are binary artifacts. This separation enables building once and deploying many times, sharing packages across teams, and on-device incremental updates via apk.
- Classes as functions — build patterns (autotools, cmake, go) are Starlark functions, not a type system. Multiple classes compose through function calls. This is simpler and more flexible than Yocto's class inheritance.
- Unified unit directory — system packages, application packages, and images all live in units/. The class function determines the output: unit()/autotools()/etc. produce .apk files, image() produces disk images. One concept (unit), one directory, one DAG.
- apk for image assembly — image units declare their packages as dependencies. yoe build <image> creates a clean rootfs and runs apk add to populate it from the repository, exactly like Alpine's image builder. This leverages apk's dependency resolution rather than reimplementing it.
Naming and Resolution
How modules, units, and dependencies are named, referenced, and resolved in
[yoe].
See metadata-format.md for the full unit/class/module Starlark API. See build-environment.md for how build isolation and caching work.
Modules
A module is a Git repository (or subdirectory of one) that provides units,
classes, machine definitions, and images. Modules are declared in
PROJECT.star:
project(
name = "my-product",
modules = [
module("https://github.com/YoeDistro/yoe-ng.git",
ref = "main",
path = "modules/units-core"),
module("https://github.com/vendor/bsp-imx8.git",
ref = "v2.1.0"),
],
)
Module name is derived from the path field’s last component if set,
otherwise the URL’s repository name. Examples:
| URL | path | Derived name |
|---|---|---|
| github.com/YoeDistro/yoe-ng.git | modules/units-core | units-core |
| github.com/vendor/bsp-imx8.git | (none) | bsp-imx8 |
Module names are used in load() statements:
load("@units-core//classes/autotools.star", "autotools").
Module directory structure
<module-root>/
MODULE.star # module metadata and dependencies
classes/ # build pattern functions (autotools, cmake, etc.)
units/ # unit definitions (.star files)
machines/ # machine definitions (.star files)
images/ # image definitions (.star files)
Evaluation order
- Phase 1 — PROJECT.star is evaluated. Modules are synced (cloned/fetched).
- Phase 1b — Machine definitions from all modules are evaluated.
- Phase 2 — Units and images from all modules are evaluated. ARCH, MACHINE, MACHINE_CONFIG, and PROVIDES are available as predeclared variables.
Within each phase, modules are evaluated in declaration order. Within a module,
.star files are evaluated in filesystem walk order.
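A minimal, hypothetical sketch of using the phase-2 predeclared variables; the unit, URL, and configure flag are made up:

# Evaluated during phase 2, so ARCH and MACHINE are available.
SIMD_ARGS = ["--enable-neon"] if ARCH == "arm64" else []

autotools(
    name = "sensor-dsp",
    version = "1.4.0",
    source = "https://github.com/example/sensor-dsp/archive/v1.4.0.tar.gz",
    configure_args = SIMD_ARGS,
)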
Units
A unit is a named build definition declared via unit(), image(), or a
class function like autotools() or cmake(). Each unit produces one or more
.apk packages.
Current naming model
Unit names are flat strings with no module namespace. Within a single module
the name must be unique — defining unit(name = "zstd", ...) twice in one
module is an error. Across modules, a same-named unit is a shadow: the
higher-priority unit wins and a notice is emitted on stderr. Priority follows
the project’s module list order (project root > last module > … > first module).
See Unit replacement via name shadowing
for the full rule and use cases.
Dependencies
Units declare two kinds of dependencies:
- deps — build-time. The dependency's output is available in the build sysroot during compilation. Resolved by the yoe DAG.
- runtime_deps — install-time. Recorded in the .apk package metadata and resolved by apk during image assembly or on-device install.
Both reference units by name:
autotools(
name = "curl",
deps = ["openssl", "zlib", "zstd"],
runtime_deps = ["openssl", "zlib", "zstd"],
)
Transitive dependencies
Build-time deps are resolved transitively by the DAG. If curl depends on
openssl and openssl depends on zlib, curl’s build sysroot includes both.
Runtime deps are resolved transitively by apk at install time.
Load references
Starlark load() statements use three forms:
| Form | Resolves to | Example |
|---|---|---|
| @module//path | Named module root | load("@units-core//classes/autotools.star", "autotools") |
| //path | Current module root (context-aware) | load("//classes/cmake.star", "cmake") |
| relative/path | Relative to current file | load("../utils.star", "helper") |
The // form is context-aware: if the file is inside a module, // resolves to
that module’s root. Otherwise it resolves to the project root. This means a unit
in units-core can load("//classes/autotools.star", ...) and it resolves
within units-core, not the project root.
Virtual packages (PROVIDES)
The PROVIDES predeclared variable maps virtual names to concrete unit names.
This allows images to reference abstract capabilities rather than specific
units:
# Machine definition contributes:
machine(
name = "raspberrypi4",
kernel = kernel(unit = "linux-rpi4", provides = "linux"),
)
# Unit can also declare provides — apk-style list of virtual names:
unit(name = "linux-rpi4", provides = ["linux"], ...)
# Image uses the virtual name:
image(name = "base-image", artifacts = ["busybox", "linux", "init"], ...)
# "linux" resolves to "linux-rpi4" via PROVIDES
# "init" resolves to whichever init system the project includes
This pattern extends to any swappable core component. For example, the init system can be abstracted behind a virtual name, with thin configuration modules providing the concrete implementation:
# modules/config-systemd/units/init.star
unit(name = "systemd", ..., provides = ["init"])
# modules/config-busybox-init/units/init.star
unit(name = "busybox-init", ..., provides = ["init"])
The project selects which init system to use by including the appropriate module:
# projects/product-a.star
project(name = "product-a", modules = [
module("...", path = "modules/units-core"),
module("...", path = "modules/config-systemd"),
])
# projects/product-b.star
project(name = "product-b", modules = [
module("...", path = "modules/units-core"),
module("...", path = "modules/config-busybox-init"),
])
Images reference init in their artifacts — they don’t need to know whether the
product uses systemd or busybox init.
PROVIDES is populated in two stages:
- After phase 1 (machines) — kernel.provides entries are added
- After phase 2 (units) — unit provides fields are added
See Collision Detection for scoping and priority rules.
Unit replacement via name shadowing
The simplest way to replace an upstream unit is to define one with the same name in a higher-priority module. The higher-priority unit shadows the upstream — only it is registered in the DAG; the lower-priority unit is discarded with a notice on stderr.
Priority follows declaration order in project(). The project root has the
highest priority overall; among modules, later in the list wins:
project(name = "product", modules = [
module("...", path = "modules/units-alpine"), # lowest priority
module("...", path = "modules/soc-module"), # overrides units-alpine
module("...", path = "modules/som-module"), # highest priority among modules
])
# Project root (units/ in the project directory) overrides all three.
Concrete example — replacing Alpine’s prebuilt musl with a from-source build:
# @units-alpine//units/musl.star
alpine_pkg(name = "musl", version = "1.2.5-r0", ...)
# @my-overrides//units/musl.star (listed after units-alpine)
unit(name = "musl", source = "https://git.musl-libc.org/git/musl",
tag = "v1.2.5", tasks = [...])
Every other unit’s deps = ["musl"] and runtime_deps = ["musl"] resolve to
the winner automatically — there is nothing to change in consumers when an
override happens. The build emits:
notice: unit "musl" from module "my-overrides" shadows the same name from module "units-alpine"
Use shadowing for 1:1 replacement — “my musl instead of yours.” It is the right tool whenever a module wants to swap an upstream unit for a different implementation while keeping consumers unchanged.
Unit replacement via provides
provides is for a different problem: N:1 alternative selection. Several
units in the same project can each satisfy a virtual role, and the project (or
machine) selects which one wins at evaluation time. The canonical case is a
kernel — a single module ships linux-rpi4 and linux-bb, both declaring
provides = ["linux"], and the active machine picks one.
# @units-core//units/kernels.star
unit(name = "linux-rpi4", provides = ["linux"], ...)
unit(name = "linux-bb", provides = ["linux"], ...)
# machines/raspberrypi4.star
machine(name = "rpi4", kernel = kernel(unit = "linux-rpi4", provides = "linux"))
# machines/beaglebone.star
machine(name = "bbb", kernel = kernel(unit = "linux-bb", provides = "linux"))
# Images reference the virtual name; resolution picks the right kernel.
image(name = "base", artifacts = ["busybox", "linux"])
Both kernel units coexist in the namespace — they have distinct real names — and
PROVIDES["linux"] is set per machine. This is something shadowing can’t
express: shadowing requires identical real names, so multiple alternatives can’t
both be present.
The same module-priority rule applies when two modules each contribute a
provides for the same virtual name — the higher-priority module wins, with a
stderr notice. But for the common “override an upstream unit” case, prefer
shadowing: it requires no virtual-name layer, and reading the override file
tells the whole story.
When NOT to use provides
provides is powerful but has a hidden cost: the build cache hashes resolved
deps recursively, so a provides swap forks every transitive consumer into
a machine-specific apk variant. Used carelessly it can turn a clean cross-machine apk repo into hundreds of near-identical packages.
The rule that keeps the apk repo lean:
provides is for leaf artifacts referenced by other units only as runtime_deps — kernel, base-files, init, bootloader. It is not for build-time libraries, and not for runtime alternatives that can be selected at boot.
This means:
- Don’t
providesa build-time library. Swappingopenssl↔libresslviaprovideswould fan out everycurl,openssh,pythonapk per selection. If you need a different crypto library, give it a different name and have consumers reference it explicitly. - Don’t put machine-flavored units in a generic library’s build-time
deps. A library should depend on other libraries, never onlinux,base-files, or any unit that varies by machine — otherwise the library’s apk forks per machine even though its compiled output is identical. - Don’t use
providesfor runtime alternatives. For pairs likemdev(busybox) vseudev,udhcpc(busybox) vsdhcpcd, or busyboxntpdvsntp-client, install both packages and pick which daemon runs at boot from an init script. The init script lives in a config unit (e.g.,network-config) that’s already project- or machine-flavored, so the choice doesn’t propagate into generic library hashes.
In short: keep machine variability at the edges of the DAG (kernel, bootloader, machine config, init scripts). Generic libraries and tools should have one hash regardless of which machine the project targets.
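A hedged sketch of the "different name, explicit consumers" alternative from the first rule above; the unit name and flavor are illustrative:

# A LibreSSL-flavored curl gets its own real name; nothing is swapped via provides.
autotools(
    name = "curl-libressl",
    version = "8.5.0",
    source = "https://curl.se/download/curl-8.5.0.tar.gz",
    deps = ["libressl", "zlib"],
    runtime_deps = ["libressl", "zlib"],
)
# Images that want this flavor list "curl-libressl" in artifacts;
# the generic "curl" unit keeps its single openssl-based hash.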
Shadow files (REPLACES)
When two packages legitimately ship the same file path — most often a real
implementation overriding a busybox stub — the owning package needs to opt into
the shadow with replaces. apk refuses to install a package whose files
conflict with already-installed ones unless the installing package declares it’s
allowed to overwrite the loser.
# util-linux ships real /bin/dmesg, /bin/mount, /bin/umount, /sbin/fsck,
# /sbin/hwclock, /sbin/losetup, /sbin/switch_root, /usr/bin/logger,
# /usr/bin/nsenter, /usr/bin/unshare — all paths busybox also claims.
unit(
name = "util-linux",
...
replaces = ["busybox"],
)
Mechanics worth remembering:
- Direction is per-file: the package that overwrites is the one that declares. If util-linux installs after busybox and overwrites busybox's stubs, util-linux declares replaces = ["busybox"]. Declaring it on busybox would only help if busybox were the one installing later.
- apk install order is set by the dep graph. ncurses precedes busybox in the dev-image not because of the artifact list but because ncurses is a runtime dep of util-linux, less, vim, htop, and procps-ng — apk has to install it first. busybox is a dependency-graph leaf, so it lands later and is the one whose clear/reset overwrite ncurses'. Hence busybox declares replaces = ["ncurses"].
- replaces is not a package fork. The annotation lives on a single generic .apk that every project shares. apk uses it to decide who owns the file in /lib/apk/db/installed, so future operations on either package do the right thing.
When you see a “trying to overwrite X owned by Y” install error, the fix is one of:
- Add replaces = ["Y"] to the unit that owns the overwriting package.
- Stop the duplication at its source — e.g., split a package into a subpackage that doesn't ship the conflicting paths (subpackages are a future apk-compat phase; until then replaces is the lever).
- Disable the offending applet in the loser via runtime config — only if it can be done without forking the unit's build, which is rarely possible for fine-grained busybox knobs.
Keep units generic — resolve variation at runtime
The previous section is one expression of a broader principle: a unit produces one .apk that every project and every machine shares. When two images need different behavior from the same package, the answer is almost never “fork the package.” It’s “resolve the difference at runtime, in a component that’s allowed to vary.”
Concretely, when you reach for a per-project or per-machine variant of a generic unit, prefer instead:
- Init scripts that detect what's installed. S10network checks command -v dhcpcd and falls back to busybox udhcpc when it's missing — one network-config unit, two viable runtimes, no DHCP-client fork (see the sketch after this list).
- Conditional config files in a project- or machine-scoped config unit (e.g., base-files-<project>, network-config). Those units are already flavored, so they're the right place for choices that have to vary.
- replaces: annotations on the unit that owns the shadow. When busybox and ncurses both ship /usr/bin/clear, declaring replaces on one of them lets apk pick a winner without touching either build. Both apks stay generic.
- Runtime alternative selection at boot — install both candidates, start one from an init script.
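A minimal sketch of the first pattern above; the unit contents and script are illustrative, not the actual network-config unit or S10network script:

unit(
    name = "network-config",
    version = "1.0.0",
    runtime_deps = ["busybox"],  # dhcpcd stays optional; images may add it
    build = [
        "mkdir -p $DESTDIR/etc/init.d",
        # One generic script serves every image: prefer dhcpcd when present,
        # fall back to busybox udhcpc otherwise.
        """cat > $DESTDIR/etc/init.d/S10network << 'EOF'
#!/bin/sh
if command -v dhcpcd >/dev/null 2>&1; then
    dhcpcd eth0
else
    udhcpc -i eth0 -b
fi
EOF""",
        "chmod 0755 $DESTDIR/etc/init.d/S10network",
    ],
)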
Reach for build-flag forking only when runtime resolution is genuinely
impossible: kernel defconfig (the kernel binary literally varies by machine),
bootloader target, machine-specific firmware blobs. Everything else — busybox
config knobs, library build flags, optional features — has to stay one .apk for
every consumer.
The cost of forking generic units is real: build cache surface multiplies,
binary reuse across projects breaks, and complexity moves from a few clean
conditionals in one config unit into N parallel build configurations scattered
across the tree. The cost of runtime resolution is a small init script or a
one-line replaces annotation — pay that instead.
Module composition
Modules extend upstream units without modifying them by importing the unit as a callable function:
# @units-core provides openssh as a function with a default name
def openssh(name="openssh", extra_deps=[], **overrides):
autotools(name = name, deps = ["zlib", "openssl"] + extra_deps, **overrides)
openssh() # registers "openssh" — units-core works standalone
# @vendor-bsp extends it with a different name
load("@units-core//units/openssh.star", "openssh")
openssh(name = "openssh-vendor", extra_deps = ["vendor-crypto"])
The downstream unit has a distinct name (openssh-vendor), so there is no
collision with the upstream openssh. Images that need the vendor variant
reference openssh-vendor in their artifacts list. This is explicit and
traceable — grep for the function call to find all extensions. See
metadata-format.md for details.
Collision Detection
Unit name duplicates
Within a single module (or within the project root), defining two units with the same name is a hard error at evaluation time:
unit "zstd" already defined (first defined in module "units-core")
Across modules, a same-named unit is treated as a shadow: the higher-priority unit wins, the lower-priority one is dropped from the unit map, and a notice is emitted to stderr. Priority is project root > last module in the list > … > first module in the list. See Unit replacement via name shadowing.
PROVIDES duplicates
If two units from the same module provide the same virtual name, the build
errors. If two units from different modules provide the same virtual name,
the higher-priority module (later in the module list) wins and a notice is
emitted to stderr. The active set is scoped to the selected machine — units from
unselected machines do not participate. This allows multiple machines to each
provide linux via different kernel units without conflict:
# machine/raspberrypi4.star — only active when this machine is selected
machine(name = "raspberrypi4",
kernel = kernel(unit = "linux-rpi4", provides = "linux"))
# machine/beaglebone.star — only active when this machine is selected
machine(name = "beaglebone",
kernel = kernel(unit = "linux-bb", provides = "linux"))
# base-image.star — "linux" resolves to whichever kernel the selected machine provides
image(name = "base-image", artifacts = ["busybox", "linux", "openssh"])
Images reference provides names directly — no prefix or namespace. The image declares what should be installed; resolution handles where it comes from.
Projects as module scoping
A project defines which modules are active for a build. Only units from included modules participate in the DAG. This is the primary mechanism for controlling which units can override or conflict with each other — if a module isn’t in the project’s module list, its units don’t exist for that build.
This reduces the collision problem: instead of needing replaces or shadow
semantics, a project simply includes only the modules it needs. A vendor module
that provides its own openssh-vendor with provides = ["openssh"] works
cleanly when the project doesn’t include a second module that also provides
openssh.
A single repository may define multiple projects (similar to KAS YAML files in yoe-distro), each selecting a different subset of modules for different products or build configurations:
# projects/dev.star
project(
name = "dev",
modules = [
module("...", path = "modules/units-core"),
module("...", path = "modules/dev-tools"),
],
)
# projects/customer-a.star
project(
name = "customer-a",
modules = [
module("...", path = "modules/units-core"),
module("...", path = "modules/vendor-bsp"),
module("...", path = "modules/customer-a"),
],
)
The --project flag selects a project file:
yoe --project projects/customer-a.star build. It is available on all
subcommands. When omitted, yoe uses PROJECT.star at the repo root.
A default project (PROJECT.star at the repo root) can delegate to another
project using standard Starlark load(). Two cases:
Use a project as-is — load it for the side effect (its project() call
registers the project):
# PROJECT.star
load("projects/customer-a.star")
Extend a project with additional modules — load the exported module list and build on it:
# projects/customer-a.star
MODULES = [
module("...", path = "modules/units-core"),
module("...", path = "modules/vendor-bsp"),
module("...", path = "modules/customer-a"),
]
project(name = "customer-a", modules = MODULES)
# PROJECT.star
load("projects/customer-a.star", "MODULES")
project(
name = "default",
modules = MODULES + [
module("...", path = "modules/dev-tools"),
],
)
This lets a developer run yoe build without specifying --project while
keeping per-product project definitions separate. No new concepts needed —
Starlark’s load() handles composition naturally.
Per-project APK repo
The APK repo is scoped per project. If two projects share a single repo (e.g.,
one uses systemd, the other busybox-init), switching projects would leave stale
packages in the APKINDEX. Since apk resolves runtime dependencies from the
index, it could transitively pull in packages from the wrong project.
Build output is scoped as:
repo/<project>/APKINDEX.tar.gz
Each project gets a clean repo containing only packages from its resolved module and unit set. Individual unit builds are still cached by content hash — if two projects build the same unit with the same inputs, the build runs once and the resulting apk is placed into both project repos.
The build cache handles provides swapouts automatically: each unit’s cache key
includes the hashes of its resolved dependencies (recursively). When init
resolves to systemd in one project but busybox-init in another, any unit
that depends on init gets a different cache key because the resolved
dependency’s hash differs. No special virtual-name logic is needed in the hasher
— it just hashes the resolved unit, not the virtual name string.
File Templates
Move inline file content out of Starlark units into external template files
processed by Go’s text/template. A unified map[string]any context serves as
both the template data and the hash input — one source of truth.
Problem
Units currently embed multi-line file content as heredocs inside shell step strings. This is hard to read, hard to edit, and prevents tools (syntax highlighters, linters) from understanding the embedded content.
Examples of inline content today:
- base-files.star — inittab, rcS, os-release, extlinux.conf
- network-config.star — udhcpc default.script, S10network init script
- image.star — sfdisk partition tables, extlinux install scripts
Design
Template Files
Templates live in a directory named after the unit, alongside the .star file:
modules/units-core/
units/
base/
base-files.star
base-files/ # same name as the unit
inittab.tmpl
rcS
os-release.tmpl
extlinux.conf.tmpl
net/
network-config.star
network-config/
udhcpc-default.script
S10network
simpleiot.star
simpleiot/
simpleiot.init
Files without .tmpl extension are copied verbatim via install_file(). Files
with .tmpl are processed through Go’s text/template via
install_template().
Unit Context (map[string]any)
A single map[string]any is used for both template rendering and hash
computation. The executor auto-populates standard fields, and any extra kwargs
passed to unit() are captured into the same map. No separate vars field —
just add fields directly to the unit:
unit(
name = "my-app",
version = "1.0.0",
port = 8080,
log_level = "info",
debug = True,
...
)
Templates access all fields: {{.port}}, {{.log_level}}, {{.name}}.
Auto-populated fields (injected by the executor, not declared in the unit):
| Key | Source | Example |
|---|---|---|
| name | unit name | "base-files" |
| version | unit version | "1.0.0" |
| release | unit release | 0 |
| arch | target architecture | "x86_64" |
| machine | active machine name | "qemu-x86_64" |
| console | serial console from kernel cmdline | "ttyS0" |
| project | project name | "my-project" |
Unit kwargs override auto-populated fields if there’s a name collision (explicit wins).
Go implementation: registerUnit() captures all unrecognized kwargs into a
map[string]any on the Unit struct. The executor merges auto-populated fields
(lower priority) with unit fields (higher priority) to build the context map.
Classes pass **kwargs through to unit(), so custom fields flow naturally:
autotools(
name = "my-lib",
version = "1.0",
source = "...",
custom_flag = "enabled", # flows through **kwargs to unit()
)
Template Syntax
Go text/template with the unit context map:
# inittab.tmpl
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/bin/mount -t sysfs sys /sys
::sysinit:/bin/hostname -F /etc/hostname
::sysinit:/etc/init.d/rcS
{{.console}}::respawn:/sbin/getty -L {{.console}} 115200 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
# os-release.tmpl
NAME=Yoe
ID=yoe
PRETTY_NAME="Yoe Linux ({{.machine}})"
HOME_URL=https://github.com/YoeDistro/yoe
# config.toml.tmpl (custom vars)
[server]
port = {{.port}}
log_level = "{{.log_level}}"
debug = {{.debug}}
Starlark API
Two new builtins are step-value constructors, not side-effecting calls. They return a value that the build executor recognizes and dispatches when the task runs, in the same step list as shell strings and Starlark callables:
# install_file(src, dest, mode=0o644) -> InstallStep
# Copies src verbatim from the unit's files directory to dest.
# install_template(src, dest, mode=0o644) -> InstallStep
# Renders src through Go text/template with the unit's context map, then
# writes the result to dest.
They are used directly in task(..., steps=[...]), no fn=lambda: wrapper
required:
task("build", steps = [
"mkdir -p $DESTDIR/etc $DESTDIR/etc/init.d $DESTDIR/boot/extlinux",
install_template("inittab.tmpl", "$DESTDIR/etc/inittab"),
install_file("rcS", "$DESTDIR/etc/init.d/rcS", mode = 0o755),
install_template("os-release.tmpl", "$DESTDIR/etc/os-release"),
])
src paths are relative to the calling .star file’s template directory:
<dir(file)>/<basename(file) without .star>/. For a call written in
units/base/base-files.star, "inittab.tmpl" resolves to
units/base/base-files/inittab.tmpl. Paths that escape that directory
("../../etc/passwd") are rejected.
Resolving relative to the call site — not to the resulting unit’s unit() call
site — is what lets a helper function package its templates next to itself and
reuse them across many units. For example, base_files() in
units/base/base-files.star can be called from images/dev-image.star with
name = "base-files-dev"; the install steps it returns still find their
templates in units/base/base-files/, not in images/base-files-dev/.
dest has environment variables ($DESTDIR, $PREFIX, etc.) expanded from the
task’s build environment. Unknown variables expand to the empty string — there
is no fallback to the host process environment, to preserve reproducibility.
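A minimal Go sketch of that expansion, assuming the task's build environment is available as a map[string]string (os.Expand returns "" for keys missing from the map, so nothing leaks in from the host):

// Sketch: expand $VAR / ${VAR} in dest using only the task's build env.
func expandEnv(dest string, buildEnv map[string]string) string {
    return os.Expand(dest, func(key string) string {
        return buildEnv[key] // missing keys expand to ""
    })
}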
Install steps are pure data — install_template(...) can be bound to a name,
stored in a list, or generated from a helper function before being placed in
steps=[...]. They evaluate at unit-load time; execution happens later, in the
executor, when the step is reached.
Example: base-files with templates
Before (inline heredocs):
task("build", steps=[
"mkdir -p $DESTDIR/etc",
"""cat > $DESTDIR/etc/inittab << INITTAB
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/bin/hostname -F /etc/hostname
${CONSOLE}::respawn:/sbin/getty -L ${CONSOLE} 115200 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
INITTAB""",
"""cat > $DESTDIR/etc/os-release << OSRELEASE
NAME=Yoe
ID=yoe
PRETTY_NAME="Yoe Linux ($MACHINE)"
HOME_URL=https://github.com/YoeDistro/yoe
OSRELEASE""",
])
After (external templates):
base-files/inittab.tmpl:
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/bin/mount -t sysfs sys /sys
::sysinit:/bin/hostname -F /etc/hostname
::sysinit:/etc/init.d/rcS
{{.console}}::respawn:/sbin/getty -L {{.console}} 115200 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
base-files/os-release.tmpl:
NAME=Yoe
ID=yoe
PRETTY_NAME="Yoe Linux ({{.machine}})"
HOME_URL=https://github.com/YoeDistro/yoe
base-files/rcS:
#!/bin/sh
for s in /etc/init.d/S*; do
    [ -x "$s" ] && "$s" start
done
unit(
    name = "base-files",
    version = "1.0.0",
    tasks = [
        task("build", steps = [
            "mkdir -p $DESTDIR/etc $DESTDIR/root $DESTDIR/proc $DESTDIR/sys"
            + " $DESTDIR/dev $DESTDIR/tmp $DESTDIR/run"
            + " $DESTDIR/etc/init.d $DESTDIR/boot/extlinux",
            install_template("inittab.tmpl", "$DESTDIR/etc/inittab"),
            install_file("rcS", "$DESTDIR/etc/init.d/rcS", mode = 0o755),
            install_template("os-release.tmpl", "$DESTDIR/etc/os-release"),
            install_template("extlinux.conf.tmpl",
                "$DESTDIR/boot/extlinux/extlinux.conf"),
        ]),
    ],
)
Example: simpleiot init script
simpleiot/simpleiot.init:
#!/bin/sh
case "$1" in
    start) /usr/bin/siot &;;
    stop) killall siot;;
esac
go_binary(
    name = "simpleiot",
    version = "0.18.5",
    services = ["simpleiot"],
    tasks = [
        task("build", steps = [...]),
        task("init-script", steps = [
            "mkdir -p $DESTDIR/etc/init.d",
            install_file("simpleiot.init",
                "$DESTDIR/etc/init.d/simpleiot", mode = 0o755),
        ]),
    ],
)
Example: custom app with extra fields
unit(
    name = "my-app",
    version = "2.0.0",
    port = 8080,
    workers = 4,
    enable_tls = True,
    tasks = [
        task("config", steps = [
            "mkdir -p $DESTDIR/etc/my-app",
            install_template("app.conf.tmpl", "$DESTDIR/etc/my-app/app.conf"),
        ]),
    ],
)
my-app/app.conf.tmpl:
# Generated by Yoe for {{.machine}}
listen_port = {{.port}}
workers = {{.workers}}
{{if .enable_tls}}tls_cert = /etc/ssl/certs/ca-certificates.crt{{end}}
Hashing
The unit context map (map[string]any) is JSON-serialized with sorted keys and
included in the unit hash. This means:
- Changing any unit field changes the hash and triggers a rebuild
- Auto-populated fields (arch, machine) already affect the hash through existing mechanisms, but including them in the context map makes it explicit
- No separate hash logic needed for template fields vs build fields
Additionally, all files in the unit’s files directory
(<DefinedIn>/<unit-name>/) are hashed by content. Changing a template file
changes the hash.
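As a sketch of how this could look in Go (not necessarily how internal/resolve/hash.go is written), relying on the fact that encoding/json emits map keys in sorted order; it uses crypto/sha256, encoding/json, io/fs, os, and path/filepath:

// Sketch only: the context map and the unit's files directory both feed the hash.
func hashUnitInputs(ctxMap map[string]any, filesDir string) ([]byte, error) {
    h := sha256.New()
    ctxJSON, err := json.Marshal(ctxMap) // map keys are serialized in sorted order
    if err != nil {
        return nil, err
    }
    h.Write(ctxJSON)
    err = filepath.WalkDir(filesDir, func(p string, d fs.DirEntry, walkErr error) error {
        if walkErr != nil || d.IsDir() {
            return walkErr
        }
        data, readErr := os.ReadFile(p)
        if readErr != nil {
            return readErr
        }
        h.Write([]byte(p)) // include the path so renaming a template also changes the hash
        h.Write(data)
        return nil
    })
    return h.Sum(nil), err
}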
Path Resolution
Template paths resolve to <DefinedIn>/<unit-name>/<relPath>:
func resolveTemplatePath(unit *Unit, relPath string) string {
    return filepath.Join(unit.DefinedIn, unit.Name, relPath)
}
This matches the existing container convention:
| Unit file | Associated directory |
|---|---|
containers/toolchain-musl.star | containers/toolchain-musl/ |
units/base/base-files.star | units/base/base-files/ |
units/net/network-config.star | units/net/network-config/ |
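The escape rejection described earlier ("../../etc/passwd" is refused) can be layered on top of this resolution. A sketch of one way to check it, not necessarily the real implementation:

// Sketch of the escape check: reject any relPath that resolves outside the
// unit's files directory.
func resolveTemplatePathSafe(unit *Unit, relPath string) (string, error) {
    base := filepath.Join(unit.DefinedIn, unit.Name)
    full := filepath.Join(base, relPath) // Join cleans ".." segments
    if full != base && !strings.HasPrefix(full, base+string(filepath.Separator)) {
        return "", fmt.Errorf("template path %q escapes %s", relPath, base)
    }
    return full, nil
}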
Go Implementation
Install steps are pure data values produced at Starlark evaluation time and executed by the build executor. There is no thread-local wiring, no placeholder builtins, and no "must be called inside a task fn" error path — they are simply a third kind of step alongside shell strings and Starlark callables.
New file: internal/build/templates.go
- BuildTemplateContext — build the per-unit map[string]any from unit identity fields, Extra, and the auto-populated arch/machine/console/project
- doInstallStep — execute a resolved InstallStep against a unit: read from <DefinedIn>/<unit-name>/<src>, render (if template) or copy, write to the expanded dest (sketched below)
- resolveTemplatePath — resolve <DefinedIn>/<unit-name>/<relPath> with escape protection
- expandEnv — expand $DESTDIR etc. in destination paths using the task's build env (no host fallback, for reproducibility)
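A sketch of doInstallStep under the assumptions above; it reuses resolveTemplatePath from the Path Resolution section and the expandEnv helper sketched earlier, and uses bytes, os, path/filepath, and text/template from the stdlib. Exact signatures may differ:

// Sketch only, not the real code.
func doInstallStep(u *Unit, s *InstallStep, ctx map[string]any, env map[string]string) error {
    src := resolveTemplatePath(u, s.Src) // <DefinedIn>/<unit-name>/<src>
    data, err := os.ReadFile(src)
    if err != nil {
        return err
    }
    if s.Kind == "template" {
        tmpl, err := template.New(filepath.Base(src)).Parse(string(data))
        if err != nil {
            return err
        }
        var buf bytes.Buffer
        if err := tmpl.Execute(&buf, ctx); err != nil {
            return err
        }
        data = buf.Bytes()
    }
    dest := expandEnv(s.Dest, env) // build env only; unknown variables become ""
    if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
        return err
    }
    return os.WriteFile(dest, data, s.Mode)
}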
Custom Go template functions (e.g. sizeMB, sfdiskType) are out of scope for
this spec and belong to the starlark-packaging-images work that migrates
image.star partition templates.
Modified: internal/starlark/builtins.go
- Register install_file and install_template as ordinary global builtins that return *InstallStepValue. No placeholder-delegate pattern needed — they have no side effects.
- Capture unrecognized unit() kwargs into Extra map[string]any on the Unit struct.
Modified: internal/starlark/types.go
- New InstallStepValue — a starlark.Value implementation carrying (Kind, Src, Dest, Mode). Frozen on construction; implements Hash so tasks containing install steps are deterministic.
- New InstallStep — a Go-native mirror of the above, referenced by Step (sketched below). Step gains an Install *InstallStep field. Unit gains an Extra map[string]any field. ParseTaskList recognises *InstallStepValue entries in steps=[...] and converts each to Step{Install: &InstallStep{...}}.
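A sketch of the Go-native step data, with field names taken from the description above (the Mode type is an assumption):

// Sketch of the InstallStep mirror referenced by Step.
type InstallStep struct {
    Kind string      // "file" (copy verbatim) or "template" (render via text/template)
    Src  string      // relative to <DefinedIn>/<unit-name>/
    Dest string      // environment variables expanded from the task's build env
    Mode fs.FileMode // default 0o644
}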
Modified: internal/build/executor.go
- Build a per-unit map[string]any template context via BuildTemplateContext.
- The task step loop gains a third case: step.Install != nil → doInstallStep(unit, step.Install, ctxData, env). The Command and Fn cases are unchanged (see the sketch after this list).
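The dispatch could look roughly like this; doCommand and doStarlarkFn are placeholder names for the existing cases, not real function names:

// Sketch of the step loop with the new install case.
for _, step := range task.Steps {
    var err error
    switch {
    case step.Install != nil:
        err = doInstallStep(unit, step.Install, ctxData, env) // new case
    case step.Fn != nil:
        err = doStarlarkFn(step.Fn, env) // unchanged
    default:
        err = doCommand(step.Command, env) // unchanged
    }
    if err != nil {
        return err
    }
}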
Modified: internal/resolve/hash.go
- JSON-serialize the context map (sorted keys) and include in the unit hash.
- Hash contents of all files in the unit’s files directory.
What is NOT needed (vs. an earlier side-effecting design)
- No thread-local TemplateContext key on the build thread
- No SetTemplateContext helper
- No placeholder/delegate builtins in internal/starlark/builtins.go
- No BuildPredeclared entries for install_file / install_template
- No fn=lambda: _install() boilerplate in unit files
What Stays in Go
Template rendering runs on the host (Go executor), not in the container. This keeps template data (machine config, unit metadata) accessible without passing it through environment variables. The rendered files are placed in the build directory, then the container mounts them.
Implementation Order
- Extra field on Unit — capture unrecognized kwargs in registerUnit().
- InstallStepValue + constructors — Starlark value type and the install_file / install_template global builtins. Pure, side-effect-free.
- Step.Install + ParseTaskList dispatch — extend the Go Step type and recognise install-step values inside steps=[...].
- Executor dispatch + doInstallStep — BuildTemplateContext, the executor case for step.Install, and doInstallStep I/O. This step also removes the earlier thread-local wiring (TemplateContext thread key, SetTemplateContext) now that it is dead.
- Hashing — include context-map JSON (sorted keys) and files-directory contents in the unit hash.
- Migrate base-files — inittab, rcS, os-release, extlinux.conf as install steps.
- Migrate network-config — udhcpc script and S10network as install steps.
- Migrate simpleiot — init-script task becomes a one-line install step.
Non-Goals
- Jinja2 or other template engines. Go text/template is in the stdlib and sufficient.
- Template inheritance or includes. Keep templates flat and simple.
- Build-time template rendering in the container. Templates are rendered by the Go executor on the host.
Starlark Packaging and Image Assembly
Move packaging (APK creation, repo management) and image assembly (rootfs population, disk generation) from hardcoded Go into composable Starlark tasks. This makes packaging format a project-level policy choice and image assembly fully customizable per-image.
Status: Spec
Motivation
Today, APK packaging and image assembly are hardcoded in Go:
- internal/artifact/apk.go — creates .apk via Go tar/gzip
- internal/repo/ — publishes .apk and generates APKINDEX
- internal/image/rootfs.go — installs packages, applies config, overlays
- internal/image/disk.go — partitions, formats, installs bootloader
- internal/bootstrap/ — Stage 0/1 orchestration
This means:
- Packaging format is not configurable. Every unit produces an APK. To support deb, rpm, or “no packaging” (direct sysroot install, like Buildroot), you’d need to fork the Go code.
- Image assembly is opaque. The disk layout, filesystem types, bootloader choice, and rootfs population strategy are all buried in Go. Customizing an image (e.g., Home Assistant with Docker on a Raspberry Pi) requires modifying Go internals.
- Classes can’t compose. A unit calls one class function that does everything. There’s no way to say “build with cmake, then package with apk.”
Design
Composable Task Lists
Classes become functions that return task lists, not functions that register units. Units compose tasks by concatenation:
# Simple: class registers the unit directly (convenience)
cmake(name = "zlib", version = "1.3.1")
# Composable: unit assembles tasks from multiple classes
load("//classes/cmake.star", "cmake_tasks")
load("//classes/apk.star", "apk_tasks")
unit(
    name = "zlib",
    version = "1.3.1",
    tasks = cmake_tasks(cmake_args = ["-DBUILD_SHARED_LIBS=ON"]) + apk_tasks(),
)
Each class function (e.g., cmake_tasks(), autotools_tasks()) returns a list
of task(...) entries. The convenience wrappers (cmake(), autotools()) call
unit() internally with the combined tasks.
Project-Level Packaging Policy
The unit() Go builtin auto-appends packaging tasks based on project config:
# PROJECT.star
project(
    name = "my-project",
    packaging = "apk",  # "apk", "deb", "rpm", "none"
    ...
)
// In Go registerUnit():
if u.Class == "unit" && proj.Packaging != "none" {
    u.Tasks = append(u.Tasks, packagingTasks(proj.Packaging))
}
"apk"— append APK creation + repo publish tasks (default)"none"— skip packaging, install destdir directly into sysroot (Buildroot style)
Today, yoe is intentionally apk-only. The package format, repo index, signing model,
and on-device installer are all wired through apk end-to-end (/etc/apk/keys
in the rootfs, apk add at image-assembly time, the apk-tools unit for
on-device OTA). Adding deb or rpm would mean a parallel pipeline for each
without a real use case — yoe targets embedded Linux, not the desktop / server
distros where those formats live.
Units can opt out: unit(..., package = False) skips auto-appended packaging.
Package Metadata
Package metadata uses existing top-level unit fields — no separate struct needed:
unit(
    name = "zlib",
    version = "1.3.1",
    description = "Compression library",
    license = "Zlib",
    runtime_deps = ["musl"],
    tasks = cmake_tasks() + apk_tasks(),
)
apk_tasks() reads description, license, version, and runtime_deps from
the unit to generate .PKGINFO. deb_tasks() would read the same fields to
generate control. The metadata is packaging-format-agnostic and already part
of the unit schema.
Go Builtins for Packaging
APK creation requires tar/gzip/SHA operations that are impractical in pure Starlark. These stay in Go as builtins callable from Starlark tasks:
| Builtin | Purpose |
|---|---|
apk_create(destdir, output, metadata) | Create .apk from destdir |
apk_publish(apk_path, repo_dir) | Copy to repo, regenerate APKINDEX |
hash_file(path, algorithm) | SHA256/SHA1 of a file |
These are thin wrappers around the existing artifact.CreateAPK() and
repo.Publish(). The Starlark task calls the builtin; the builtin does the
heavy lifting:
def apk_tasks():
    return [
        task("package", fn = lambda: apk_create(
            destdir = "${DESTDIR}",
            output = "${OUTPUT}",
        )),
        task("publish", fn = lambda: apk_publish(
            apk = "${OUTPUT}/${NAME}-${VERSION}.apk",
            repo = "${REPO}",
        )),
    ]
Image Assembly in Starlark
The image class becomes a Starlark function that generates tasks for rootfs population and disk image creation:
# classes/image.star
def image(name, artifacts, hostname = "", timezone = "UTC", **kwargs):
    unit(
        name = name,
        unit_class = "image",
        artifacts = artifacts,
        tasks = [
            task("populate", fn = lambda: _populate_rootfs(artifacts)),
            task("configure", fn = lambda: _configure_rootfs(hostname, timezone)),
            task("partition", fn = lambda: _partition_disk()),
            task("assemble", fn = lambda: _assemble_image()),
        ],
        hostname = hostname,
        timezone = timezone,
        **kwargs,
    )
Populate (install packages into rootfs)
Currently installPackages() in Go — extracts .apk files via tar xzf. This is
a shell command per package. The dependency resolution (resolvePackageDeps())
becomes a Starlark builtin or uses the DAG that already resolves deps:
def _populate_rootfs(artifacts):
    rootfs = "${BUILD}/rootfs"
    run("rm -rf " + rootfs)
    run("mkdir -p " + rootfs)
    # apk_install resolves transitive deps and extracts into rootfs
    apk_install(rootfs = rootfs, packages = artifacts, repo = "${REPO}")
apk_install() is a Go builtin that wraps the existing installPackages() +
resolvePackageDeps().
Configure (hostname, timezone, services)
Currently applyConfig() in Go — writes files and creates symlinks. Trivially
expressible as shell commands or Starlark file operations:
def _configure_rootfs(hostname, timezone):
    rootfs = "${BUILD}/rootfs"
    run("echo '{}' > {}/etc/hostname".format(hostname, rootfs))
    run("ln -sf /usr/share/zoneinfo/{} {}/etc/localtime".format(timezone, rootfs))
Partition and Assemble (disk image)
Currently GenerateDiskImage() in Go — shells out to sfdisk, mkfs.ext4,
mkfs.vfat, dd, mcopy, extlinux via RunInContainer(). These are already
shell commands; they map directly to Starlark run(host = True) calls, which run the
command in the build container rather than the per-unit sandbox:
def _partition_disk():
    run("truncate -s {}M ${{BUILD}}/{}.img".format(total_mb, name))
    run("sfdisk ${{BUILD}}/{}.img <<EOF\n...\nEOF".format(name), host = True)

def _assemble_image():
    run("mkfs.ext4 -d ${BUILD}/rootfs ${BUILD}/rootfs.img", host = True)
    run("dd if=${BUILD}/rootfs.img of=${BUILD}/${NAME}.img bs=1M seek=1 conv=notrunc", host = True)
Per-Task Container Selection
To support mixed toolchains (e.g., build containerd with glibc, CLI with Go), tasks can override the unit-level container:
unit(
    name = "docker",
    version = "27.0",
    container = "toolchain-glibc",
    tasks = [
        task("build-containerd", steps = ["make -C containerd"]),
        task("build-cli", container = "toolchain-go",
            steps = ["go build ./cmd/docker"]),
    ],
)
The executor resolves container image per-task, falling back to the unit-level default.
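The fallback itself is a one-liner; a sketch with illustrative field names (Task.Container, Unit.Container):

// Sketch: per-task container resolution with unit-level fallback.
func containerFor(u *Unit, t *Task) string {
    if t.Container != "" {
        return t.Container // per-task override
    }
    return u.Container // unit-level default
}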
Source Fetching
Currently in internal/source/ — almost entirely shell commands (git clone,
git checkout, tar xf, git am). Moves to Starlark naturally:
# Hypothetical — source prep as tasks on the unit
# Today this is implicit; making it explicit is optional
task("fetch", fn = lambda: git_clone(url = SRC_URI, ref = SRC_REV)),
task("patch", fn = lambda: git_am("${PATCHES}/*.patch")),
Source fetching could remain implicit (Go handles it before task execution) or become explicit tasks. The implicit approach is simpler and avoids boilerplate. Recommendation: keep source fetching in Go for now, move later if needed.
Bootstrap
internal/bootstrap/ orchestrates Stage 0 (host toolchain) and Stage 1
(self-hosted rebuild). This is build sequencing that could become a Starlark
“bootstrap” class or remain in Go. Since bootstrap is run rarely and has complex
ordering requirements, recommendation: keep in Go for now.
What Stays in Go
| Component | Reason |
|---|---|
| Build executor (DAG, caching, hashing) | Graph algorithms, concurrency, content-addressed caching |
| APK tar/gzip/hash operations | Crypto and archive formats need Go stdlib |
| Repo index generation | Reads APK internals, writes APKINDEX.tar.gz |
| Source fetching and caching | Complex caching logic, HTTP client, hash verification |
| Bootstrap orchestration | Rarely customized, complex ordering |
| bwrap/container invocation | Security boundary, needs careful Go control |
These are exposed as Starlark builtins (apk_create, apk_install,
apk_publish, hash_file) so Starlark tasks can call them.
What Moves to Starlark
| Component | Current Location | Benefit |
|---|---|---|
| Packaging task composition | Hardcoded in executor | Pluggable packaging formats |
| Image rootfs population | image/rootfs.go | Custom rootfs strategies |
| Image disk generation | image/disk.go | Custom partition layouts, bootloaders |
| Image configuration | image/rootfs.go | Per-image hostname, services, overlays |
| Sysroot assembly | build/sandbox.go | Custom sysroot layouts |
| Per-task container selection | N/A (unit-level only) | Mixed toolchain builds |
Implementation Order
- Composable task lists — refactor classes to return task lists, add convenience wrappers. No Go changes needed.
- Per-task container — add an optional container field to task(); the executor resolves the container per task.
- Packaging builtins — expose apk_create and apk_publish as Starlark builtins. Add a packaging field to the project config. Auto-append packaging tasks in unit().
- Image assembly in Starlark — expose the apk_install builtin. Rewrite the image class as Starlark tasks calling builtins + shell commands.
- packaging = "none" mode — skip APK, install the destdir directly into the sysroot. Enables Buildroot-style builds.
Non-Goals
- Replacing APK with deb/rpm now. The infrastructure supports it, but the immediate goal is making it possible, not implementing every format.
- Moving DAG resolution to Starlark. Graph algorithms and content-addressed caching are Go strengths.
- Moving source fetching to Starlark. The caching and hash verification logic is complex and rarely needs customization.
Build Dependencies and Caching
Traditional embedded build systems maintain a sharp boundary between “building the OS” and “developing applications.” The OS team produces an SDK — a frozen snapshot of the sysroot, toolchain, and headers — and hands it to application developers. From that point on, the two worlds drift: the SDK ages, libraries diverge, and “it works on my machine” becomes “it works with my SDK version.”
[yoe] eliminates this boundary by recognizing that there are distinct kinds of
build dependencies, and they should be managed differently:
- Host tools (compilers, build utilities, code generators) — these come from Docker containers. Every unit can specify its own container, so one team's toolchain requirements don't constrain another. A kernel unit can use a minimal C toolchain container. A Go application can use the official golang:1.23 image. A Rust service can pin a specific Rust nightly.
- Library dependencies (headers, shared libraries your code links against) — these come from a shared sysroot populated by apk packages. Each unit produces an apk package when it builds; that package is either built locally or pulled from a cache (team-level or global). Before a unit builds, its declared dependencies are installed from these packages into the sysroot — the same way apt install libssl-dev populates /usr/include and /usr/lib on a Debian system. Most developers never build OpenSSL themselves; they pull the cached package and get the headers and libraries they need in seconds.
- Language-native dependencies (Go modules, npm packages, Cargo crates, pip packages) — these are managed by the language's own package manager, not the sysroot. A Go unit runs go build and Go fetches its own modules. A Node unit runs npm install. Cargo handles Rust crates. These ecosystems already solve dependency resolution, caching, and reproducibility — [yoe] doesn't reimplement any of that. The container provides the language runtime (Go compiler, Node, rustc), and the language's package manager handles the rest. When a language unit also needs a C library (e.g., a Go package using cgo, or a Rust crate linking against libssl via FFI), that C library comes from the sysroot as usual.
Caching is symmetric at the unit level. Every unit — regardless of language — produces an apk package that is cached and shared across developers, CI, and build machines. Most people never rebuild a unit; they pull the cached apk.
The difference shows up when you do rebuild: a C unit finds its dependencies
already in the sysroot (from other units’ cached apks), while a Rust unit has
Cargo recompile its crate dependencies using its local cache. This is fine — the
person rebuilding a Rust unit is the developer actively working on it, and their
local Cargo cache handles repeat builds. Go builds so fast it does not matter.
Some ecosystems go further: PyPI distributes pre-compiled wheels globally, so
pip install pulls binaries for most packages without compiling anything.
[yoe] doesn’t need to replicate what these ecosystems already provide.
Native builds unlock existing package ecosystems. This is especially clear
with Python. In traditional cross-compilation systems like Yocto or Buildroot,
PyPI wheels are useless — pip runs on the x86_64 host but the target is ARM, so
pre-compiled aarch64 wheels can’t be installed. Instead, every Python package
needs a custom recipe that cross-compiles C extensions against the target
sysroot, effectively reimplementing pip. In [yoe], pip runs inside a
native-arch container (real ARM64 or QEMU-emulated), so pip install numpy just
downloads the aarch64 wheel from PyPI and unpacks it — no compilation, no
custom recipe. The same advantage applies to any language ecosystem that
distributes pre-built binaries by architecture.
Note: for safety- or mission-critical systems there is a risk in consuming packages from a compromised public package index. In those cases [yoe] could force Python packages to be built from source, or verify downloaded binaries against pinned hashes. The point above is about developer convenience — developers should be able to leverage all the conveniences modern language ecosystems provide.
Containers provide the tools to build. The sysroot provides C/C++ libraries
to link against. Language-native package managers handle everything else. For
any given unit, the developer, the system team, and CI all use the same
container — that’s how you stay in sync. A new developer clones the repo, runs
yoe build, and gets working build environments pulled automatically.
Docker containers are already the standard way teams manage development
environments. [yoe] leans into this rather than inventing a parallel universe
of SDKs.
Build Environment
How [yoe] manages host tools, build isolation, and the bootstrap process.
Architecture
[yoe] uses a three-tier build environment:
┌─────────────────────────────────────────────────────┐
│ Tier 0: Host / Alpine Container │
│ Provides: apk-tools, bubblewrap, yoe (Go binary) │
│ libc: doesn't matter (musl or glibc) │
├─────────────────────────────────────────────────────┤
│ Tier 1: `[yoe]` Build Root (chroot/bwrap) │
│ Populated by: apk from `[yoe]`'s package repo │
│ Provides: glibc, gcc, make, cmake, language SDKs │
│ libc: glibc (`[yoe]`'s own packages) │
├─────────────────────────────────────────────────────┤
│ Tier 2: Per-Unit Build Environment │
│ Populated by: apk with only declared build deps │
│ Isolated via bubblewrap │
│ Produces: .apk artifacts │
└─────────────────────────────────────────────────────┘
Tier 0: Bootstrap Module (Automatic Container)
All build operations run inside a Docker/Podman container. The host provides
ONLY the yoe binary and a container runtime. No build tools, no compilers, no
package managers — nothing from the host leaks into builds.
The yoe binary on the host detects that it’s not inside the build container
and re-executes itself inside one automatically. Developers never need to think
about this — they run yoe build and it works.
The only host requirements are:
- The yoe Go binary (statically linked, runs anywhere)
- Docker or Podman
On first use, yoe builds the versioned container image yoe-ng:<version> from
a Dockerfile embedded in the binary itself. The yoe binary copies itself into
the container — no source checkout or Go toolchain is needed on the host.
Subsequent invocations reuse the cached image. When the container version
changes (i.e., a new yoe binary with updated container dependencies), the
image is rebuilt automatically.
How it works:
Host Container (Alpine)
┌─────────────┐ ┌──────────────────────────┐
│ yoe build │ ──docker run──▶ │ yoe build openssh │
│ openssh │ -v $PWD:/project│ (has bwrap, apk, gcc...) │
│ │ -v cache:/cache │ │
│ (no bwrap, │ │ Tier 1: build root │
│ no apk) │ │ Tier 2: per-unit bwrap │
└─────────────┘ └──────────────────────────┘
The yoe CLI always runs on the host. The container is a stateless build worker
invoked only when container-provided tools (gcc, bwrap, mkfs, etc.) are needed.
Most commands (config, desc, refs, graph, source, clean) run
entirely on the host with no container overhead.
# All commands run on the host:
yoe init my-project
yoe version
yoe config show
yoe source fetch
yoe desc openssh
# Build commands invoke the container for compilation:
yoe build openssh # [yoe] container: bwrap ... make -j$(nproc)
# Manage the container image:
yoe container build # rebuild the container image
yoe container binfmt # register QEMU user-mode for cross-arch builds
yoe container status # show container image status
When the container is invoked, it mounts:
- Project directory → /project (read-write)
- Build source/dest → /build/src, /build/destdir (per-unit mounts)
- Sysroot → /build/sysroot (read-only, deps' headers/libraries)
Build output uses --user uid:gid so files created by the container are owned
by the host user, not root.
External Dependencies
Host requirements (the developer’s machine):
| Dependency | Purpose |
|---|---|
yoe binary | Statically linked Go binary |
docker/podman | Run the build container |
That’s it. Everything else is inside the container.
Container-provided tools (installed by containers/Dockerfile.build):
| Tool | Package | Used by | Purpose |
|---|---|---|---|
bwrap | bubblewrap | internal/build/sandbox.go | Per-unit build isolation (namespace sandbox) |
bash | bash | internal/build/sandbox.go | Execute unit build step shell commands |
git | git | internal/source/, dev.go | Clone/fetch repos, manage workspaces, apply/extract patches |
tar | tar | internal/source/workspace.go | Extract .tar.xz archives (.tar.gz/.bz2 handled by Go stdlib) |
nproc | coreutils | internal/build/sandbox.go | Detect CPU count for $NPROC build variable |
uname | coreutils | internal/build/sandbox.go | Detect host architecture for $ARCH variable |
make | make | Unit build steps | C/C++ builds |
gcc | gcc | Unit build steps | C compilation |
g++ | g++ | Unit build steps | C++ compilation |
patch | patch | Fallback for patch application | When git apply is not suitable |
Called indirectly (by user-defined build steps, not by yoe itself):
- Language toolchains (go, cargo, cmake, meson, python3, npm) — installed into the Tier 1 build root as needed
- Any command available in the build sandbox — unit build steps are arbitrary shell commands
- ctx.shell() in custom commands can invoke any host tool
Tier 1: [yoe] Build Root
A glibc-based environment populated from [yoe]’s own package repository. This
is where the actual compilers, toolchains, and language SDKs live.
# yoe creates this automatically during build
apk --root /var/yoe/buildroot \
    --repo https://repo.yoe-ng.org/packages \
    add glibc gcc g++ make cmake go rust
This build root is:
- glibc-based — [yoe]'s own packages, not Alpine's.
- Persistent — created once, updated as needed. Not torn down between builds.
- Architecture-native — on an ARM64 machine, it's an ARM64 build root. No cross-compilation.
- Managed by apk — adding or updating a host tool is just apk add --root ... <tool>.
Tier 2: Per-Unit Isolation
Each unit builds in an isolated environment with only its declared dependencies. This ensures hermetic builds — a unit cannot accidentally depend on a tool it didn’t declare.
# yoe creates a minimal environment for each unit build
bwrap \
    --ro-bind /var/yoe/buildroot / \
    --bind /tmp/build/$RECIPE /build \
    --bind /tmp/destdir/$RECIPE /destdir \
    --dev /dev \
    --proc /proc \
    -- bash -c "$BUILD_STEPS"
Bubblewrap provides:
- Unprivileged isolation — no root or Docker daemon required.
- Read-only base — the build root is mounted read-only; units can’t modify host tools.
- Minimal overhead — bubblewrap is a thin namespace wrapper, not a full container runtime. Build performance is near-native.
- Declared dependencies only — the build environment is assembled from only
the packages listed in the unit’s
deps.
Why Not Docker for Builds?
Docker is used for Tier 0 (the bootstrap) but not for Tier 1/2 (the actual builds). This is deliberate:
| | Docker | bubblewrap + apk |
|---|---|---|
| Requires root/daemon | Yes (dockerd) | No (unprivileged) |
| Startup overhead | ~200ms per container | ~1ms per sandbox |
| Layering granularity | Image layers (coarse) | apk packages (fine) |
| Dependency management | Dockerfile (imperative) | apk (declarative) |
| Nested builds | Docker-in-Docker (fragile) | Just works |
| CI integration | Needs DinD or socket mount | Runs inside any container |
Docker is great for the “zero setup” onboarding story: docker run yoe/builder
and you have a working environment. But for the build system itself, bubblewrap + apk
is simpler, faster, and more granular.
Bootstrap Process
There is a chicken-and-egg problem: [yoe] needs glibc, gcc, and other base
packages in its repository before it can build anything inside a [yoe] chroot.
This is solved with a staged bootstrap, the same approach used by Alpine, Arch,
Gentoo, and every other self-hosting distribution.
Stage 0: Cross-Pollination
Build the initial base packages using an existing distribution’s toolchain.
Alpine’s gcc (or any host gcc) builds the first generation of [yoe] packages.
# Inside Alpine (or any Linux with gcc)
yoe bootstrap stage0
# This builds:
# glibc → glibc-2.39-r0.apk
# binutils → binutils-2.42-r0.apk
# gcc → gcc-14.1-r0.apk
# linux-headers → linux-headers-6.6-r0.apk
# busybox → busybox-1.36-r0.apk
# apk-tools → apk-tools-2.14-r0.apk
# bubblewrap → bubblewrap-0.9-r0.apk
These packages are built with Alpine’s musl-based gcc targeting glibc. The
output is a minimal set of .apk files — enough to create a self-hosting
[yoe] build root.
Stage 1: Self-Hosting
Rebuild the base packages using the Stage 0 packages. Now the [yoe] build root
is building itself.
yoe bootstrap stage1
# Creates a `[yoe]` build root from Stage 0 packages, then rebuilds:
# glibc, gcc, binutils, etc. — now built with `[yoe]`'s own gcc + glibc
After Stage 1, the bootstrap is complete. All packages in the repository were
built by [yoe]’s own toolchain. The Alpine dependency is gone.
Stage 2: Normal Operation
From this point on, all builds use the [yoe] build root. New units build
inside Tier 2 isolated environments. The bootstrap is a one-time cost per
architecture.
# Normal development — no bootstrap needed
yoe build myapp
yoe build base-image
yoe flash base-image /dev/sdX
Pre-Built Bootstrap
For most users, the bootstrap is not needed at all. [yoe] publishes pre-built
base packages for each supported architecture:
- x86_64 — built in CI
- aarch64 — built on ARM64 CI runners
- riscv64 — built on RISC-V hardware or QEMU
A new project pulls these from the [yoe] package repository and starts
building immediately. The bootstrap process is only needed by:
- [yoe] distribution developers maintaining the base packages.
- Users who need to verify the full build chain for compliance/traceability.
- Users targeting a new architecture.
Pseudo-Root via User Namespaces
Image assembly requires root-like operations — setting file ownership to
root:root, creating device nodes, setting setuid bits. Traditionally this is
solved with fakeroot or Yocto’s pseudo, both of which use LD_PRELOAD to
intercept libc calls. These approaches are fragile:
| Approach | Mechanism | Breaks with Go/static bins | Database corruption | Parallel safety |
|---|---|---|---|---|
| fakeroot | LD_PRELOAD | Yes | N/A | Fragile |
| pseudo (Yocto) | LD_PRELOAD + SQLite | Yes | Yes (known issue) | Better |
| User namespaces | Kernel | No | N/A (stateless) | Yes |
[yoe] uses user namespaces (via bubblewrap, already in the stack for build
isolation) for all operations that need pseudo-root access. Inside a user
namespace, the process sees itself as uid 0 and can perform all root-like
filesystem operations — no LD_PRELOAD, no daemon, no database.
How Image Units Use This
# Image assembly inside a user namespace
bwrap --unshare-user --uid 0 --gid 0 \
--bind /tmp/rootfs /rootfs \
--bind /tmp/output /output \
--dev /dev \
--proc /proc \
-- sh -c '
# Install packages — apk sets ownership to root:root
apk --root /rootfs add musl busybox openssh myapp
# Create device nodes
mknod /rootfs/dev/null c 1 3
mknod /rootfs/dev/console c 5 1
# Set permissions
chmod 4755 /rootfs/usr/bin/su
# Generate filesystem image with correct ownership
mksquashfs /rootfs /output/rootfs.squashfs
'
Because this is kernel-native:
- Works with everything — Go binaries, Rust binaries, statically linked tools, anything. No libc interception needed.
- Stateless — no SQLite database to corrupt, no daemon to crash. The kernel tracks ownership within the namespace.
- Fast — namespace creation is ~1ms. No overhead per filesystem operation.
- Already available — bubblewrap is already a Tier 0 dependency for build isolation. No new tools needed.
Disk Image Partitioning
For the final step of creating a partitioned disk image (GPT/MBR with boot and
rootfs partitions), yoe needs a partitioning tool on the host or inside the
build container.
systemd-repart
is a candidate if [yoe] ever ships systemd as part of the base system — its
declarative partition definitions align well with the partition definitions in
image units, it handles GPT/MBR/filesystem creation in one step, and it runs
unprivileged with user namespaces. Today, [yoe] does not use systemd, so
disk image assembly uses the standard sfdisk/mkfs.* tools from the build
container.
The combination is: bubblewrap for rootfs population (installing packages,
setting ownership, creating device nodes) and a partitioning tool (sfdisk +
mkfs.* today, systemd-repart as a future option) for disk image assembly
(partitioning, filesystem creation, writing the final .img).
Reducing Dependence on Docker’s /dev (planned)
Status: Today, yoe uses option 5 below. The mknod /dev/loop0..31 workaround is implemented in modules/units-core/classes/image.star (_install_syslinux) and mirrored in internal/image/disk.go. Options 1–4 are future directions — none are implemented yet.
Installing the bootloader on an x86 image currently runs
losetup/mount/extlinux inside the --privileged build container. This
depends on behavior that varies across container runtimes: Docker’s /dev is a
tmpfs and does not auto-populate /dev/loop* (recent Docker releases tightened
this further, requiring mknod inside the script), while Podman’s
--privileged bind-mounts host /dev and “just works”. The same fragility
surfaces with /dev/kvm, rootless mode, and various CI runners.
Options for decoupling image assembly from container-runtime /dev behavior,
ordered by how cleanly they sidestep the issue:
1. Avoid loop devices entirely (preferred). Build the partition table, populate ext4 with mkfs.ext4 -d (already used), write the MBR and VBR bytes directly, and install ldlinux.sys by splicing bytes into the image — all in pure Go on the host. A Go library like go-diskfs covers partition tables and filesystems; the syslinux VBR layout is well-documented. This is what Buildroot's genimage and Yocto's wic do. It removes losetup, mount, and --privileged from the image-assembly path entirely and aligns with [yoe]'s principles (no intermediate code generation, host runs Go / container runs compilation).
2. Host-side image assembly. Run losetup/mount/mkfs/extlinux on the host instead of in the container. Cleanest implementation, but breaks the "host needs only git + docker + yoe" promise — the host would need util-linux, e2fsprogs, and syslinux.
3. Purpose-built image tools. genimage, wic, diskimage-builder, or guestfish construct disk images in userspace with no loop mounts. Adds a build-time dependency but avoids writing partition/filesystem code.
4. Make the assembly container less Docker-dependent. Prefer Podman (rootful) for image assembly, or drive the step with systemd-nspawn / bubblewrap on the host. Both expose the real /dev and work across runtimes.
5. Pin Docker behavior explicitly (current approach). Keep the existing container flow but pre-create /dev/loop0..31 via mknod before losetup. Still Docker-compatible, no longer dependent on Docker's shifting defaults, but retains the loop/mount/privileged surface.
Direction: move toward option 1 — a Go image assembler — as the long-term answer. This removes a whole class of “works on my machine” failures across Docker versions, kernels, rootless setups, and CI runners, and fits the existing host-runs-Go / container-runs-compilation split.
Build Environment Lifecycle
First time setup (only requires yoe binary + git + docker/podman):
yoe init my-project ← runs on host, no container needed
cd my-project
yoe build --all ← auto-builds container on first run, then builds
Day-to-day development:
$EDITOR units/myapp.star
yoe build myapp ← builds in isolated bwrap sandbox
yoe build base-image ← assembles rootfs with apk
yoe flash base-image /dev/sdX
Adding a host tool:
$EDITOR units/cmake.star ← write a unit for the tool
yoe build cmake ← produces cmake.apk
(cmake is now available as a build dependency for other units)
Updating the base toolchain:
yoe build --force gcc ← rebuild gcc unit
yoe build --all ← rebuild everything against new gcc
Caching Architecture
[yoe] uses a unified, content-addressed object store for both source archives
and built packages. The design is inspired by Nix’s /nix/store and Git’s
object database: immutable blobs keyed by cryptographic hashes, with a
multi-level fallback chain for local and remote storage.
Object Store Layout
All cached artifacts live under $YOE_CACHE (default: cache//):
$YOE_CACHE/
├── objects/
│ ├── sources/
│ │ ├── ab/cd1234...5678.tar.gz # tarball, keyed by content SHA256
│ │ ├── ef/01abcd...9012.tar.xz # another tarball
│ │ └── 34/567890...abcd.git/ # bare git repo, keyed by url#ref hash
│ └── packages/
│ ├── x86_64/
│ │ ├── a1/b2c3d4...e5f6.apk # built .apk, keyed by unit input hash
│ │ └── 78/90abcd...1234.apk
│ └── aarch64/
│ └── ...
├── index/
│ ├── sources.json # URL → content hash mapping
│ └── packages.json # unit name+version → input hash mapping
└── tmp/ # atomic writes land here first
Key design points:
- Two-character prefix directories (like Git) prevent any single directory from accumulating millions of entries (see the sketch after this list).
- Sources are keyed by content hash — the SHA256 of the actual file, which units already declare in their sha256 field. Two different URLs serving identical tarballs share one cache entry.
- Git sources are keyed by sha256(url + "#" + ref) — since a git repo is a directory (not a single file), content-addressing isn't practical. The URL+ref key ensures different tags/branches get separate clones.
- Packages are keyed by unit input hash — the same hash computed by internal/resolve/hash.go from unit fields, source hash, dependency hashes, and architecture. This is the Nix-like property: if the inputs haven't changed, the cached output is valid.
- Index files provide human-readable reverse lookups (hash → name) for debugging and yoe cache list. They are not authoritative — the object store is the source of truth.
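A sketch of the object-path scheme for packages (illustrative helper name; the exact split of the hash into prefix and filename is an assumption based on the layout above):

// Sketch: object-store path for a built package.
func packageObjectPath(cacheDir, arch, inputHash string) string {
    return filepath.Join(cacheDir, "objects", "packages", arch,
        inputHash[:2], inputHash[2:]+".apk")
}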
Build Flow with Cache
yoe build openssh
│
├─ 1. Resolve DAG, compute input hashes for all units
│ (internal/resolve/hash.go — already implemented)
│
├─ 2. For each unit in topological order:
│ │
│ ├─ Check local object store: objects/packages/<arch>/<hash>.apk
│ │ Hit → publish to build/repo/, skip to next unit
│ │
│ ├─ Check remote cache: GET s3://bucket/packages/<arch>/<hash>.apk
│ │ Hit → download to local object store, publish to repo, skip
│ │
│ ├─ Cache miss → need to build:
│ │ │
│ │ ├─ Check source cache: objects/sources/<hash>.<ext>
│ │ │ Hit → extract to build/<unit>/src/
│ │ │ Miss → download, verify SHA256, store in object store
│ │ │
│ │ ├─ Build unit (sandbox or direct)
│ │ │
│ │ ├─ Package output as .apk
│ │ │
│ │ ├─ Store .apk in local object store under input hash
│ │ │
│ │ ├─ Push to remote cache (if configured): PUT s3://bucket/...
│ │ │
│ │ └─ Publish .apk to build/repo/ for image assembly
│ │
│ └─ Next unit
│
└─ 3. Assemble image (if target is an image unit)
The critical property: a cache hit on a package skips the entire build, including source download. This is why CI builds are fast — most packages come from the remote cache, and only the changed unit (plus anything that transitively depends on it) actually builds.
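A sketch of that per-unit lookup chain, with the store and cache operations passed in as plain functions because the real interfaces are not specified here:

// Sketch of the local → remote → build fallback for one unit.
func obtainPackage(
    arch, hash string,
    localLookup func(arch, hash string) (path string, ok bool),
    remoteFetch func(arch, hash string) (path string, err error),
    build func() (path string, err error),
    localPut func(arch, hash, path string),
    remotePush func(arch, hash, path string),
) (string, error) {
    if p, ok := localLookup(arch, hash); ok {
        return p, nil // Level 1: local object store
    }
    if p, err := remoteFetch(arch, hash); err == nil {
        localPut(arch, hash, p) // populate Level 1 for next time
        return p, nil           // Level 2/3: LAN or remote cache
    }
    p, err := build() // miss everywhere: fetch sources, build, package
    if err != nil {
        return "", err
    }
    localPut(arch, hash, p)
    remotePush(arch, hash, p) // no-op if no remote cache is configured
    return p, nil
}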
Cache Key Computation
The cache key for a unit is computed by internal/resolve/hash.go. It is a
SHA256 hash of:
- Unit identity: name, version, class
- Architecture
- Source: URL, SHA256, tag, branch, patches
- Build configuration: build steps, configure args, Go package
- Dependency hashes (transitive): the input hash of every dependency
The transitive dependency hashes are the key property. If glibc is rebuilt
(new version, new patch, new build flags), its hash changes. That propagates to
every package that depends on glibc, which all get new hashes, which all
become cache misses. This is automatic — there are no stale entries, only unused
ones.
For image units, the hash also includes the package list, hostname, timezone, locale, and service list.
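A sketch of the hash computation; the field names, ordering, and the sorted handling of dependency hashes are illustrative assumptions, and internal/resolve/hash.go is authoritative:

// Sketch of the unit input hash (crypto/sha256, encoding/hex, sort).
func unitInputHash(u *Unit, depHashes []string) string {
    h := sha256.New()
    for _, part := range []string{u.Name, u.Version, u.Class, u.Arch,
        u.SourceURL, u.SourceSHA256} {
        h.Write([]byte(part))
        h.Write([]byte{0}) // separator so "ab"+"c" never hashes like "a"+"bc"
    }
    for _, step := range u.BuildSteps {
        h.Write([]byte(step))
        h.Write([]byte{0})
    }
    sorted := append([]string(nil), depHashes...)
    sort.Strings(sorted) // independent of declaration order (an assumption)
    for _, dh := range sorted {
        h.Write([]byte(dh))
    }
    return hex.EncodeToString(h.Sum(nil))
}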
Cache Levels
┌──────────────────────────────────────────────────┐
│ Level 1: Local Object Store │
│ $YOE_CACHE/objects/ │
│ Fastest — no network. Populated by local builds │
├──────────────────────────────────────────────────┤
│ Level 2: LAN / Self-Hosted Cache (optional) │
│ MinIO or S3-compatible on local network │
│ ~1ms latency. Shared across team workstations │
├──────────────────────────────────────────────────┤
│ Level 3: Remote Cache (optional) │
│ AWS S3, GCS, R2, Backblaze B2, etc. │
│ Shared across CI runners and distributed teams │
└──────────────────────────────────────────────────┘
All levels use the same key scheme — the object path is the same locally and remotely. Pushing a local object to S3 is a direct upload of the file under the same key. Pulling is a direct download. No translation or repackaging needed.
Why S3-Compatible Storage
Content-addressed packages are immutable, write-once blobs keyed by their input hash. This maps directly to S3’s key-value object model:
- No coordination — multiple CI runners push/pull concurrently without locking. Two builders producing the same hash write the same content; last writer wins harmlessly.
- Widely available — AWS S3, MinIO (self-hosted), GCS, Cloudflare R2, and Backblaze B2 all speak the same API. No vendor lock-in.
- Built-in lifecycle management — S3 lifecycle policies handle cache eviction (e.g., delete objects not accessed in 90 days). No custom garbage collection needed.
- Right granularity — S3 GET latency (~50-100ms) is negligible at package-level granularity. A cache hit that avoids a 5-minute GCC build is worth 100ms of network overhead.
Self-hosted MinIO is the recommended starting point for teams that want shared caching without cloud dependency. It runs as a single binary, supports the full S3 API, and works in air-gapped environments.
Comparison with Nix and Yocto
| | Nix | Yocto sstate | [yoe] |
|---|---|---|---|
| Cache granularity | Per derivation output | Per task | Per unit |
| Key computation | Full derivation hash | Task hash + signatures | Unit input hash (SHA256) |
| Object size | Closures (can be 1GB+) | Individual task outputs | Single .apk file |
| Remote backend | Cachix, nix-serve, S3 | sstate-mirror (HTTP/S3) | Any S3-compatible |
| Setup complexity | Moderate (Cachix simplifies) | High (mirrors, hashequiv) | Low (just a bucket URL) |
| Sharing model | Binary cache + substituters | sstate mirrors + hashequiv | Push/pull to S3 |
| Source caching | Separate (fixed-output drv) | DL_DIR (by filename) | Unified object store by content |
The key simplification over Yocto: no hash equivalence server, no sstate mirror
configuration, no signing key infrastructure to get started. Point cache.url
at an S3 bucket and it works. Signing is optional and adds one config line.
Language Package Manager Caches
Language-native package managers (Go modules, Cargo crates, npm packages, pip
wheels) have their own download caches. [yoe] shares these across builds:
- Go — GOMODCACHE is set to a shared directory; the Go module proxy (GOPROXY) can point to a local Athens instance or the public proxy.golang.org.
- Rust — CARGO_HOME is shared; a local Panamax mirror can serve as a registry cache.
- Node.js — npm_config_cache is shared; a local Verdaccio instance can proxy the npm registry.
- Python — PIP_CACHE_DIR is shared; a local devpi instance can proxy PyPI.
These caches are not content-addressed by [yoe] — they are managed by the
language toolchains themselves. [yoe] ensures the cache directories persist
across builds and are shared across units that use the same language.
Cache Signing and Verification
Packages pushed to a remote cache are signed with a project-level key. When
pulling from a remote cache, yoe verifies the signature before using the
cached package. This prevents cache poisoning — a compromised cache server
cannot inject malicious packages.
The signing key is configured in PROJECT.star (cache(signing=...)). For CI,
the private key is provided via environment variable; workstations can use a
read-only public key for verification only.
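The signature scheme is not pinned down here; as one possible sketch, assuming a detached ed25519 signature stored next to each cached package (crypto/ed25519, fmt, os):

// Sketch only: verify a cached package before trusting it.
func verifyCachedPackage(pub ed25519.PublicKey, pkgPath, sigPath string) error {
    data, err := os.ReadFile(pkgPath)
    if err != nil {
        return err
    }
    sig, err := os.ReadFile(sigPath)
    if err != nil {
        return err
    }
    if !ed25519.Verify(pub, data, sig) {
        return fmt.Errorf("signature verification failed for %s", pkgPath)
    }
    return nil
}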
Multi-Target Builds
A single [yoe] project can define multiple machines and multiple images,
building any combination from the same source tree. This is similar to Yocto’s
multi-machine/multi-image capability but with simpler mechanics.
How It Works
Machines and images are independent axes. A machine defines what hardware to build for (architecture, kernel, bootloader, partition layout). An image defines what software to include (package list, services, configuration). Any image can be built for any compatible machine.
machines/ images/
├── beaglebone-black.star ├── base-image.star
├── raspberrypi4.star ├── dev-image.star
└── qemu-arm64.star └── production-image.star
Build matrix:
yoe build base-image --machine beaglebone-black
yoe build dev-image --machine beaglebone-black
yoe build production-image --machine raspberrypi4
yoe build --all --type image ← builds all image units for all machines
Package Sharing Across Targets
Because units produce architecture-specific .apk packages that live in a
shared repository, packages built for one machine are reused by any other
machine with the same architecture. Building openssh for the BeagleBone also
satisfies the Raspberry Pi — both are aarch64 and produce identical packages
(same unit, same source, same arch flags → same cache key).
This means a multi-machine project does not rebuild the world for each board. Only machine-specific packages (kernel, bootloader, device trees) are built per-machine. Everything else comes from cache.
Build Output Organization
Build outputs are organized by machine and image:
build/output/
├── beaglebone-black/
│ ├── base/
│ │ └── base-beaglebone-black.img
│ └── dev/
│ └── dev-beaglebone-black.img
├── raspberrypi4/
│ └── production/
│ └── production-raspberrypi4.img
└── repo/
└── aarch64/ ← shared package repo for all aarch64 machines
├── openssh-9.6p1-r0.apk
├── myapp-1.2.3-r0.apk
└── ...
Architecture Isolation
When a project targets multiple architectures (e.g., aarch64 and x86_64),
each architecture gets its own Tier 1 build root and package repository.
Packages from different architectures never mix. The build roots are:
/var/yoe/buildroot/aarch64/ ← aarch64 compilers, libraries
/var/yoe/buildroot/x86_64/ ← x86_64 compilers, libraries
In practice, multi-architecture builds from a single workstation are uncommon
since [yoe] uses native builds. A developer typically builds for the
architecture of their machine. Multi-arch is more relevant in CI, where
different runners handle different architectures and share results via the
remote cache.
Supported Host Architectures
Since [yoe] uses native builds (no cross-compilation), the host architecture
is the target architecture. All three supported architectures have viable
build environments:
| Architecture | Alpine Container | CI Runners | Native Hardware |
|---|---|---|---|
| x86_64 | alpine:latest | GitHub Actions, all CI | Any x86_64 machine |
| aarch64 | alpine:latest (arm64) | GitHub ARM runners, Hetzner CAX | RPi 4/5, ARM servers |
| riscv64 | alpine:edge (riscv64) | Limited | SiFive, StarFive boards |
Cross-Architecture Builds via QEMU User-Mode
Any architecture can be built on any host using QEMU user-mode emulation (binfmt_misc). Yoe builds and runs a genuine foreign-arch Docker container — no cross-compilation toolchain needed:
# One-time setup (persists until reboot)
yoe container binfmt
# Build ARM64 on an x86_64 host
yoe build base-image --machine qemu-arm64
# Run it
yoe run base-image --machine qemu-arm64
Performance is ~5-20x slower than native, which is fine for iterating on individual packages. For full system rebuilds, use native hardware or cloud CI with architecture-matched runners.
Build output is stored under build/<arch>/<unit>/ so multiple architectures
can coexist in the same project tree.
Development Environments (planned)
Status: Nothing in this section is implemented yet. yoe shell and yoe bundle do not exist in cmd/yoe/main.go, and there is no bundle export/import path in the build engine. This section describes the intended model so the no-SDK direction is discoverable.
[yoe] does not ship a separate SDK. The same tool that builds the OS is the
tool application developers use — yoe is small enough (single Go binary +
Docker) that the traditional “OS team hands an SDK to app developers” split
doesn’t need to exist.
This document describes two pieces that make the no-SDK model complete:
- yoe shell — interactive access to the exact sandbox a unit builds in.
- yoe bundle — content-addressed export/import for air-gapped sites and CI pinning.
The No-SDK Model
Traditional embedded systems ship an SDK — a frozen sysroot + cross-toolchain tarball — because the build system is too heavyweight for app developers to run directly. The SDK drifts from the OS it was cut from, “it works on my machine” becomes “it works with my SDK version”, and the OS team spends real effort generating and distributing it.
[yoe] removes that split. An app developer installs yoe and Docker, clones
the project repo, and runs:
yoe build myapp # packages myapp.apk against target libs
yoe shell myapp # drops into the same sandbox for interactive work
yoe build base-image # folds myapp into the device image
The build environment, the dev environment, and CI are all the same yoe-managed container. There is no “SDK version” distinct from “OS version” because there is no SDK artifact.
What makes this work:
- Native arch everywhere. [yoe] does not cross-compile. QEMU user-mode emulation (binfmt_misc) transparently runs the target-arch container on any host, so the app developer's workstation runs the same toolchain the target device will run.
- Per-unit containers. Each unit declares the container it builds in. An app developer opening a shell for myapp gets the container myapp was designed to build in, with the resolved -dev deps already installed via apk — no manual sysroot wrangling.
- Cached packages, not cached environments. Heavy .apk artifacts (qt6-dev, chromium-dev, glibc-dev) live in the build cache, content-addressed by input hash. An app developer pulls them on first build and never rebuilds them unless inputs change. The cache is the SDK's sysroot, decomposed into reusable pieces.
Working on App Code
The no-SDK model gives every developer a uniform toolchain. The other half of the app-developer loop is editing source and seeing the change on a device. Three pieces make that work:
Local-path sources
Units can reference a working tree on disk instead of (or alongside) a git URL:
unit(
    name = "myapp",
    source = path("./"),  # build from this repo's working tree
    class = "go_binary",
    ...
)
path() sources are not cloned. yoe binds the working tree into the build
sandbox so edits land in the next build immediately, without a commit-tag-fetch
cycle.
Fast deploy
yoe deploy <unit> <host> builds the apk for <unit>, exposes the project’s
repo over an HTTP feed (reusing a running yoe serve if one is up), and runs
apk add --upgrade <unit> on the device over SSH. Combined with local-path
sources, the loop is:
edit code → yoe deploy myapp dev-pi → service running on the device
Pull, not push: apk on the device resolves transitive deps from the same
APKINDEX.tar.gz production OTA uses, so adding a runtime dep to a unit doesn’t
require updating any deploy machinery. After the first deploy the device’s
/etc/apk/repositories keeps the dev-feed line in place, so subsequent
apk add calls from the device work too. See feed-server.md.
Watch mode
yoe dev <unit> watches the source tree and rebuilds (and optionally redeploys)
on save. For app projects this is the inner loop; for upstream units, it’s the
patch-and-iterate workflow.
Three workflow shapes
The pieces above support three repo layouts:
Single-repo project. App code and yoe config live in one git repo. Add
PROJECT.star and a unit.star next to the source tree:
my-app/
├── PROJECT.star # references units-core for the base system
├── unit.star # source = path("./")
└── src/...
yoe build && yoe deploy runs from the repo root. Easiest onboarding;
yoe-specific files become part of the project.
Multi-repo (clean app). App stays untouched in its own repo. A separate “system” project references it via a sibling path:
~/projects/
├── my-app/ # plain app repo, no yoe files
└── my-system/
├── PROJECT.star
└── apps/
└── my-app.star # source = path("../../my-app")
The system project is what gets versioned for production. Mirrors how Rust workspaces and mono-repos handle service composition.
In-tree dev of an upstream unit. yoe dev openssh checks out an upstream
unit’s source into a working dir; subsequent builds use that dir until you
commit or revert. Distinct from app dev — this is the “patch upstream and try
it” workflow.
Editor integration
Run language servers and debuggers inside yoe shell (or a devcontainer pointed
at the toolchain image) so they see the same headers, libraries, and target arch
as the build:
- VSCode Remote / Dev Containers attaches naturally.
- Neovim’s
distant.nvimworks the same way. - JetBrains Gateway connects via SSH into the container.
There is no SDK to install, no environment-setup-* to source. The container
the build runs in is the container the LSP runs in.
yoe shell
yoe shell opens an interactive shell inside the build sandbox for a unit —
same container, same environment variables, same mounted sysroot that
yoe build uses, but attached to a TTY instead of running build steps.
# Drop into the sandbox for myapp (uses myapp's unit + machine defaults)
yoe shell myapp
# For a specific machine (e.g., cross-arch via QEMU)
yoe shell myapp --machine raspberrypi4
# Open a shell without targeting a specific unit — useful for quick experiments
yoe shell --machine beaglebone-black
Inside the shell the developer can:
- Edit source in $SRCDIR (live-mounted from build/<arch>/<unit>/src/).
- Run the unit's build commands manually (./configure && make, go build, cargo build) — exactly what yoe build would run.
- Add extra deps interactively with apk add <pkg> for probing; the next yoe shell invocation starts fresh so probes don't pollute the recorded environment.
- Use yoe dev extract <unit> from inside the container to turn local commits into patch files for the unit.
Why this replaces an SDK shell: the SDK shell in Yocto
(environment-setup-*) is a static snapshot of environment variables.
yoe shell is a live attach to the sandbox that would run if you typed
yoe build <unit> right now — it cannot drift from the OS because it is the
OS build environment.
yoe bundle for Air-Gapped Distribution
Some environments cannot reach the internet: regulated sites, long-lifetime
industrial deployments, offline CI runners. For these, [yoe] exports a
bundle — a content-addressed archive containing everything needed to build
the declared targets without network access.
# Export a bundle for a specific image (includes everything transitively needed)
yoe bundle export base-image --out bundle-base-v1.0.tar
# Export everything reachable from PROJECT.star
yoe bundle export --all --out bundle-full.tar
# On the air-gapped machine
yoe bundle import bundle-base-v1.0.tar
yoe build base-image # all hits from cache — no network
A bundle contains:
| Piece | Source | What it’s for |
|---|---|---|
| Built .apks | $YOE_CACHE/build/ | Pre-built packages matching current hash |
| Source archives | $YOE_CACHE/sources/ | Tarballs + git bundles for rebuild-ability |
| Module checkouts | $YOE_CACHE/modules/ | Vendored external modules at their refs |
| Container images | OCI archives | Toolchain / build containers as tarballs |
| Project snapshot | PROJECT.star + units/* | Optional; for bundles that include source |
Everything is keyed by content hash, so importing the same bundle on two machines produces byte-identical build results.
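A quick way to spot-check that claim on a single machine, using only the `yoe bundle export` flags shown above (output file names are illustrative):

yoe bundle export base-image --out bundle-a.tar
yoe bundle export base-image --out bundle-b.tar
sha256sum bundle-a.tar bundle-b.tar   # same project state → identical digests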
Why Bundles Beat an SDK Image for Air-Gapped
A monolithic SDK image is a snapshot of what was convenient to pre-bake. A bundle is a subset of the cache that covers exactly the targets the air-gapped site needs, composed from the same cache layers the OS team already produces.
- Reproducible. Two bundle exports at the same project state produce the same bytes. An SDK image bakes in timestamps and layer ordering.
- Composable. A site that needs two products ships two bundles; shared packages dedupe automatically on import.
- No separate artifact to maintain. CI already produces the cache. A bundle is `yoe bundle export <targets>` — no separate SDK build.
- Targeted. A Go-microservices team gets a bundle with `go`, `glibc-dev`, and the libraries their units link against — not the 4 GB everything-image.
Signed Bundles
Bundles are signed with the project’s cache signing key (same key used for remote cache entries). Import verifies signatures before trusting hashes, so a tampered bundle is rejected rather than silently polluting the cache.
yoe bundle export base-image --sign keys/bundle.key --out bundle.tar
yoe bundle import bundle.tar --verify keys/bundle.pub
Devcontainers / Codespaces
For developers who want a one-click cloud or VS Code setup, point the
devcontainer at the project’s toolchain container — already a regular [yoe]
unit built by container():
{
"image": "registry.example.com/yoe/toolchain-musl:v1.0.0-arm64",
"mounts": ["source=${localWorkspaceFolder},target=/src,type=bind"]
}
CI produces this image by building the container unit and pushing it:
yoe build toolchain-musl --machine raspberrypi4
docker tag yoe/toolchain-musl:...-arm64 registry.example.com/yoe/toolchain-musl:v1.0.0-arm64
docker push registry.example.com/yoe/toolchain-musl:v1.0.0-arm64
The devcontainer isn’t an SDK — it’s the build container for the machine the
team is targeting, promoted to a registry image. The app developer inside the
container still runs yoe build and yoe shell against the project checkout.
What This Replaces
| Yocto concept | [yoe] equivalent |
|---|---|
| populate_sdk / SDK tarball | (nothing) — app devs install yoe directly |
| environment-setup-* shell script | yoe shell |
| populate_sdk_ext extensible SDK | yoe itself (the tool is the extensible SDK) |
| Offline SDK installer | yoe bundle export / yoe bundle import |
| oe-devshell | yoe shell <unit> |
| Cross-toolchain tarball | (not applicable) — [yoe] is native-only |
See Also
- The `yoe` Tool — reference for `yoe shell` and `yoe bundle` flags once implemented.
- Build Environment — the container / bwrap sandbox model that `yoe shell` attaches to.
- Unit & Configuration Format — how per-unit and per-task container selection determines what `yoe shell` drops you into.
Testing (planned)
Status: This document describes the intended shape of yoe’s test story. Today, yoe ships Go unit tests under `internal/*` and a single end-to-end Go test at `internal/build/e2e_test.go` that loads `testdata/e2e-project/` and exercises a dry-run build. There is no `yoe test` subcommand, no on-device test runner, no image smoke-test framework, and no CI workflow that runs tests or builds (the only CI today is markdown formatting via `.github/workflows/doc-check.yaml`). The sections below describe what’s planned; each one calls out what exists today vs. what’s future work.
Goals
Testing in yoe needs to cover six distinct levels, because regressions can hide at any of them:
- Compiler-level (Go): yoe’s own logic — DAG resolution, hash computation, Starlark evaluation, repo indexing.
- Build-time package QA: every built package passes a fixed set of sanity checks (ownership, stripping, RPATH, host-path leaks, missing SONAMEs, etc.). Failures fail the build. Yocto’s equivalent is `INSANE.bbclass`.
- Per-unit functional tests: a unit’s build produces the expected files, services, metadata, runtime deps. Destdir assertions, run inside the build sandbox.
- On-device upstream tests: a unit ships its own `make check` (or `cargo test`, etc.) output as an installable test subpackage; the booted device runs them. Catches ABI / linkage regressions that destdir-level tests miss. Yocto’s equivalent is `ptest`.
- Image-level smoke tests: boot the image (QEMU or real hardware), run assertions over SSH — network up, services running, basic flows work.
- Hardware-in-loop (HIL): image-level tests against a flashed physical device, not just QEMU.
The yoe test command unifies levels 3–6 behind one driver so the same test
spec runs against a destdir, a QEMU image, or a physical device. Build-time QA
(level 2) is always-on and runs as part of every package build, not opt-in.
Today
Go unit tests
Standard go test coverage across internal/:
source envsetup.sh
yoe_test # go test ./...
Notable suites:
- `internal/build/*_test.go` — sandbox, executor, templates, starlark exec.
- `internal/starlark/*_test.go` — loader, builtins, install steps.
- `internal/source/source_test.go` — git/tarball fetchers.
- `internal/repo/*_test.go` — APKINDEX generation, signing.
- `internal/image/rootfs_test.go` — rootfs assembly logic.
End-to-end Go test
internal/build/e2e_test.go loads testdata/e2e-project/ and runs a dry-run
build of dev-image. It validates:
- Project + module load.
- Unit registration (busybox, linux, zlib, base-image, etc.).
- DAG resolution and topological sort.
It does not actually build anything — it stops at the dry-run boundary. A real build inside CI would need a Docker daemon, the toolchain container, and several minutes of compute.
CI
.github/workflows/doc-check.yaml runs prettier --check on **/*.md. There
is no workflow that runs go test, builds yoe, or builds an image.
Build-time Package QA (planned)
Status: Not implemented. Today the only built-in check is apk-level path-conflict detection (a file installed by two packages without an explicit `replaces=` annotation fails image assembly). No checks run against an individual unit’s destdir before packaging.
Every unit’s destdir is sanity-checked before it is packaged into an apk. Failures fail the build. This is the cheapest tier of testing — runs on every build with no opt-in — and catches the most common shipping bugs:
- File ownership and mode: all installed files must be owned by `0:0` (root) with mode that matches the unit’s policy. Setuid binaries must be declared explicitly (no accidental setuid via upstream `make install`).
- ELF binary checks:
  - Stripped (or has separate debug info).
  - No `RPATH`/`RUNPATH` pointing at the build-time sysroot (`/build/sysroot/...` baked into a target binary is the classic bug).
  - All `NEEDED` libraries are satisfied by the unit’s `runtime_deps` (catches a unit linking libfoo without depending on it).
  - Architecture matches the target arch (no x86_64 binary in an arm64 apk because the build slipped to host gcc).
- Path leaks: no absolute paths under `/build/`, `$DESTDIR`, `/tmp/build-*`, or the host build user’s home directory in installed files (binaries, scripts, pkg-config files, libtool `.la` files).
- Conffile sanity: any path declared in `conffiles=` actually exists in the destdir; conffiles outside `/etc/` are flagged.
- License: `license=` is set, and a copy of the upstream license file lands at a known location.
Every check has a known-acceptable escape hatch on the unit (e.g.,
qa_skip = ["rpath"]) so a unit can opt out per-rule with a comment explaining
why, instead of being forced to vendor in workarounds.
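As a rough illustration of what such a pass could look like — a sketch (bash), not yoe’s actual QA implementation, assuming `readelf` and `file` are available in the build container and `$DESTDIR` holds the unit’s install output:

# sketch only: ELF strip/RPATH checks plus a host-path-leak scan over the destdir
fail=0
while IFS= read -r f; do
  file "$f" | grep -q ' ELF ' || continue
  # target binaries must be stripped (or carry separate debug info)
  file "$f" | grep -q 'not stripped' && { echo "QA strip: $f"; fail=1; }
  # RPATH/RUNPATH must not point at the build-time sysroot
  readelf -d "$f" 2>/dev/null | grep -E 'R(UN)?PATH' | grep -q '/build/sysroot' \
    && { echo "QA rpath: $f"; fail=1; }
done < <(find "$DESTDIR" -type f)
# no host build paths baked into any installed file (scripts, .pc, .la, binaries)
grep -rl '/build/' "$DESTDIR" && { echo "QA path-leak in files above"; fail=1; }
exit "$fail"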
yoe test <unit> (planned)
Status: Not implemented.
`cmd/yoe/main.go` has no `test` case in its command dispatch.
Run a unit’s tests against the appropriate target. The driver picks the right
mode based on the unit’s class and the --target flag:
# Unit-level: assert destdir contents after build
yoe test zlib
# Image-level: boot the image in QEMU and run smoke tests
yoe test dev-image
# Hardware-in-loop: SSH into a real device and run tests there
yoe test dev-image --target dev-pi.local
Unit-level tests
A unit declares tests inline:
unit(
name = "zlib",
version = "1.3.1",
...
tests = [
test("install-layout", steps = [
"[ -f $DESTDIR/usr/lib/libz.so.1.3.1 ]",
"[ -L $DESTDIR/usr/lib/libz.so ]",
"$DESTDIR/usr/bin/minigzip --version | grep -q 1.3.1",
]),
],
)
Tests run inside the same per-unit container the build used, against the already-built destdir. Failures are unit-build failures — no separate phase to forget.
On-device upstream tests
Most upstream projects (openssl, zlib, busybox, etc.) ship a real test suite —
make check, cargo test, pytest. Running it against the binary you just
built is the highest-confidence test you can run, because it exercises the
actual ABI / linkage / runtime behavior of the package on the target arch and
libc. Yocto calls this ptest.
A unit can declare an upstream test suite as an installable subpackage:
unit(
name = "openssl",
...
upstream_tests = task("ptest", steps = [
"make TESTS='*' check-only DESTDIR=$DESTDIR/usr/lib/yoe-tests/openssl",
]),
)
yoe build produces a separate openssl-tests-<version>.apk alongside the main
package. On the booted device:
yoe test openssl --on-device dev-pi.local
# → ssh dev-pi.local 'apk add openssl-tests && /usr/lib/yoe-tests/openssl/run.sh'
This catches regressions that destdir assertions cannot:
- A library that built but links against the wrong libc symbol.
- A binary that runs in QEMU user-mode but crashes on real hardware.
- An optimization flag that breaks a corner case the upstream covers.
Test packages stay out of the default image (dev-image does not list them) but
ship in the project’s apk repo so they can be installed on-demand.
Image-level tests
An image declares smoke tests that run against a booted instance:
image(
name = "dev-image",
artifacts = [...],
tests = [
test("boots-and-network", steps = [
"ssh-with-retry root@$TARGET 'true'",
"ssh root@$TARGET 'ip -4 -o addr | grep -v 127.0.0.1'",
"ssh root@$TARGET 'getent hosts github.com'",
]),
test("services-up", steps = [
"ssh root@$TARGET 'pgrep sshd'",
"ssh root@$TARGET 'pgrep dhcpcd'",
]),
],
)
The driver:
- Builds the image (or reuses cache).
- Boots it in QEMU (or attaches over SSH for `--target=<host>`).
- Runs each test step. On failure, captures the serial console + journal.
- Shuts down the image.
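The `ssh-with-retry` step in the image test above assumes a small wait-for-boot helper. A minimal sketch of what the driver could provide — hypothetical, not the actual implementation:

# retry ssh until the freshly booted image accepts connections, then run the command
ssh_with_retry() {
  target="$1"; shift
  for _ in $(seq 1 30); do
    ssh -o ConnectTimeout=2 "$target" "$@" && return 0
    sleep 2
  done
  echo "timed out waiting for $target" >&2
  return 1
}
ssh_with_retry root@$TARGET true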
HIL mode
--target=<host> skips the build/boot phase and runs tests directly against an
already-running device. Useful for testing real hardware without a separate test
harness.
CI Integration (planned)
Status: Not implemented.
Three CI workflows worth adding, in order of cost:
- Go tests — `go test ./...` on every PR. Cheap, catches the bulk of regressions.
- Dry-run image build — `yoe build dev-image --dry-run` on every PR. Catches Starlark-level breakage and unit-graph regressions without needing a real build.
- Full image build + smoke tests — `yoe build dev-image && yoe test dev-image` on a schedule (nightly?) or on `main`. Expensive (Docker, minutes) but catches actual build regressions.
Build History / Regression Tracking (planned)
Status: Not implemented. Yocto’s equivalent is
buildhistory.
Track per-build artifact metadata so a PR can show what changed in
machine-readable form: package sizes, file lists, RDEPENDS, image contents,
kernel config diff. Run as a CI job on main and on PRs; surface notable diffs
as a PR comment (“dev-image grew 4.2 MB”, “openssh.apk’s RDEPENDS gained
libfido2”).
This isn’t testing per se, but it occupies the same regression-detection slot — many regressions show up as “size of X grew unexpectedly” or “Y suddenly depends on Z” before they manifest as a functional failure.
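A sketch of the kind of diff such a CI job could compute from the repo the build already publishes — paths are illustrative, and the real feature would emit structured output rather than raw `diff`:

# sizes.old / index.old are assumed to have been saved from the previous main build
ls -l repo/myproj/x86_64/*.apk | awk '{print $5, $NF}' | sort -k2 > sizes.new
diff -u sizes.old sizes.new || true                      # "dev-image grew 4.2 MB"-style signal
tar -xzOf repo/myproj/x86_64/APKINDEX.tar.gz APKINDEX | grep -E '^(P|V|S|D):' > index.new
diff -u index.old index.new || true                      # version, size, and dependency changes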
Kernel QA (planned)
Status: Not implemented; mentioned as a TODO in containers.md.
For container-host images, run upstream moby/moby’s check-config.sh against
the kernel’s resulting .config to verify the required CONFIG_* options are
set. Failures should fail the build, not warn.
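Until that check is wired in, the same validation can be run by hand against a built config — the script is upstream moby’s; the `.config` path here is illustrative:

curl -sLO https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh
chmod +x check-config.sh
./check-config.sh build/arm64/linux/.config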
Comparison to Yocto
Yocto’s test infrastructure (oeqa) is the closest reference. The mapping:
| Yocto | yoe equivalent |
|---|---|
| oe-selftest / bitbake-selftest | go test ./... (Go unit tests under internal/) |
| INSANE.bbclass / QA_LOG | Build-time package QA (planned) |
| ptest / ptest-runner | yoe test <unit> --on-device (planned) |
| oeqa.runtime / testimage | yoe test <image> (planned) |
| oeqa.sdk / testsdk | (no SDK product; yoe shell is the dev surface) |
| testexport (run on hardware) | yoe test <image> --target <host> (planned) |
| runqemu | yoe run (already shipped) |
| buildhistory | Build history / regression tracking (planned) |
| INHERIT += "create-spdx" | (license tracking lives in unit fields today) |
Where yoe diverges by design:
- No SDK product to test. Yocto’s `testsdk` validates the cross-compiler tarball it produces; yoe ships no such artifact, so the tier doesn’t exist. The `yoe shell` container takes its place; treat shell entry as the SDK validation point.
- One driver, several targets. `yoe test` picks unit / image / HIL mode from flags; Yocto splits into `testimage`, `testexport`, `ptest-runner`, etc., each with its own configuration. Yoe collapses them so the same test spec runs in all three places.
- QA fails the build, not warns. Yocto’s QA is configurable per-rule (warning vs. error vs. skip) and many sites silence rules to keep builds green. Yoe defaults all rules to error and exposes per-unit `qa_skip = [...]` so the opt-out is explicit and grep-able.
See Also
- Build Environment — the container/bwrap sandbox that unit tests run inside.
- Containers — kernel QA discussion.
- Yoe Tool — `yoe test` flags once implemented.
apk Signing
yoe signs every .apk and the APKINDEX.tar.gz at build time with an
RSA-PKCS#1 v1.5 SHA-1 signature, matching what apk-tools 2.x verifies. Booted
systems include the matching public key under /etc/apk/keys/, so on-target
apk add, apk upgrade, and image-time package installation all run without
--allow-untrusted.
What you need to know
- yoe auto-generates a 2048-bit RSA keypair on first build and stores it at `~/.config/yoe/keys/<project>.rsa` (private) and `~/.config/yoe/keys/<project>.rsa.pub` (public).
- The matching public key is published into your local repo under `<projectDir>/repo/<project>/keys/<project>.rsa.pub` and into the rootfs at `/etc/apk/keys/<project>.rsa.pub` (via `base-files`).
- A different signing key per project is the default. Two projects with the same `name` field share keys; use unique project names if that isn’t what you want.
Inspecting the current key
$ yoe key info
Signing key: /home/you/.config/yoe/keys/myproj.rsa
Public key: /home/you/.config/yoe/keys/myproj.rsa.pub
Key name: myproj.rsa.pub
Fingerprint: 1f3a:c2:e0:9d:42:8c:b6...
Use the fingerprint to confirm two systems are talking about the same key without printing the full public key.
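For example, to confirm a device and the dev host hold the same key material — this compares file hashes directly rather than reproducing yoe’s fingerprint format:

sha256sum ~/.config/yoe/keys/myproj.rsa.pub                    # on the dev host
ssh root@dev-pi.local sha256sum /etc/apk/keys/myproj.rsa.pub   # on the device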
Generating a key explicitly
yoe key generate is a no-op when the configured key already exists; if not, it
creates a fresh 2048-bit RSA pair at the configured path. The build pipeline
does the same auto-generation lazily, so most users never need to run this.
$ yoe key generate
Signing key: /home/you/.config/yoe/keys/myproj.rsa
Public key: /home/you/.config/yoe/keys/myproj.rsa.pub
Key name: myproj.rsa.pub
Fingerprint: 1f3a:c2:e0:9d:42:8c:b6...
Pinning a key path explicitly
Override the default by setting signing_key on project() in PROJECT.star:
project(
name = "myproj",
version = "0.1.0",
signing_key = "/secrets/myproj.rsa",
...
)
The configured path is treated the same way as the default — yoe loads it if it exists, generates a new keypair there if it doesn’t.
Key rotation
When you replace a key, every existing rootfs becomes unable to verify new packages until the new public key is shipped. The recommended flow is:
- Generate the new key (`yoe key generate` after deleting the old `~/.config/yoe/keys/<project>.rsa` / `<project>.rsa.pub` pair, or by setting `signing_key` to a fresh path).
- Run `yoe build --force` so every cached apk gets re-signed with the new key. The build cache is content-addressed and doesn’t include the signing key in its hash, so a fresh build after a key swap would otherwise replay cached apks signed with the old key.
- Build a new image so `base-files` carries the new public key.
- Flash or upgrade devices with the new image.
- Once every device is rotated, retire the old key.
Because both keys can coexist under /etc/apk/keys/ on-target, you can also
stage a rollover: drop both .rsa.pub files into the rootfs (e.g., via an
overlay), let devices upgrade onto the new key over a period, and then strip the
old one in a later release.
What’s signed and what isn’t
Signed:
- Every `.apk` produced by `yoe build`. The signature covers the SHA-1 of the gzipped control stream; data integrity flows through the `.PKGINFO` `datahash` field that the control stream carries.
- The per-arch `APKINDEX.tar.gz` regenerated on every publish.
Not signed:
- Bootstrap apks emitted by `yoe bootstrap`. These exist only inside the build container and are never installed on a target.
- Source archives, docker images, intermediate build artifacts. Only the final `.apk` and the index are signed.
On-Device Package Management
apk-tools ships in dev-image and any other image that includes it, so booted
yoe systems can install, upgrade, and inspect packages against the project’s
signed repo using stock Alpine apk commands.
What’s already on the device
After a successful yoe build dev-image && yoe run dev-image:
- `/sbin/apk` — the apk-tools binary.
- `/lib/apk/db/` — the installed-package database, populated at image assembly time via `apk add`.
- `/etc/apk/keys/<keyname>.rsa.pub` — the project’s signing public key, shipped by `base-files`. apk uses it to verify signatures on every `add`/`upgrade`/`update` without any flag-passing on your part.
- `/etc/apk/repositories` — a commented-out template. You override this with your project’s repo URL before doing anything live.
Pointing at a repository
Edit /etc/apk/repositories and add a single line — one repo per line. A few
common shapes:
# Project repo served over HTTPS by an nginx behind your CA
https://repo.example.com/myproj
# Project repo served by a plain HTTP server on the LAN
http://10.0.0.1/repo/myproj
# Local filesystem path (e.g., bind-mounted USB stick or sshfs)
/var/cache/yoe/repo
Then update the index cache:
$ apk update
Yoe-built repos use Alpine’s standard <repo-root>/<arch>/APKINDEX.tar.gz
layout, so apk picks the right arch automatically — point the repositories
file at the root, not at the per-arch subdirectory.
Pointing at a yoe-served feed
For development, run yoe serve on your build host and configure the device
with yoe device repo add <host>. See feed-server.md for the
full dev-loop walkthrough.
Installing and upgrading
Once a repository is wired up:
$ apk add htop # install one package
$ apk add --update vim # refresh index, then install
$ apk upgrade # upgrade everything to the latest available
$ apk del strace # remove a package
$ apk info -vv | head # list installed packages
$ apk verify # re-verify every installed package's hashes
All of these run with signature verification on. If apk reports “BAD signature”
or “untrusted”, the public key under /etc/apk/keys/ doesn’t match the key the
repo’s apks were signed with. See docs/signing.md for the key-rotation flow.
OTA flow (rebuild → publish → upgrade)
The recommended OTA path for yoe-built devices:
- Bump versions. Edit one or more units’ `version =` (or `release =` if just rebuilding the same source) on your dev host.
- Build the new apks. `yoe build <unit>` produces the new `.apk` files in `<projectDir>/repo/<project>/<arch>/` and refreshes `APKINDEX.tar.gz`. Both are signed with the project key.
- Sync to your hosting. Copy the entire `<projectDir>/repo/<project>` subtree to wherever you serve it from — e.g., a static-site bucket, an nginx vhost, or a release server. The on-disk layout is already correct; no transformation needed (see the sketch after this list).
- On-device upgrade. `apk update && apk upgrade`.
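For the sync step, any tool that copies the subtree verbatim works; a typical invocation against the nginx vhost shown in the next section (host and paths are illustrative):

rsync -av --delete repo/myproj/ repo.example.com:/srv/yoe-repos/myproj/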
Hosting the repo over HTTP/HTTPS
Any static file server works. nginx example:
server {
listen 443 ssl;
server_name repo.example.com;
ssl_certificate /etc/letsencrypt/live/repo.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/repo.example.com/privkey.pem;
root /srv/yoe-repos;
autoindex off;
# Tighten cache headers — APKINDEX.tar.gz changes on every publish,
# but individual .apk files are content-addressed by version+release
# and never change once published.
location ~ /APKINDEX\.tar\.gz$ {
add_header Cache-Control "no-cache";
}
location ~ \.apk$ {
add_header Cache-Control "public, max-age=31536000, immutable";
}
}
Drop your project’s repo subtree under /srv/yoe-repos/myproj/ and point
/etc/apk/repositories at https://repo.example.com/myproj.
Constraints worth knowing
- Kernel upgrades need a reboot. apk doesn’t restart anything; a new `linux-*` apk replaces files in `/boot` and the running kernel keeps running until you reboot.
- No automatic rollback. If an upgrade leaves the system unbootable, there’s no built-in A/B rollback in this layer. For atomic-rootfs workflows (RAUC-style A/B partitioning, or btrfs-snapshot rollback), layer them above the apk repo — apk handles the package contents, the rootfs strategy handles atomicity.
- In-place upgrade is non-atomic. apk extracts each package’s files individually. A power loss during `apk upgrade` can leave the rootfs in a half-upgraded state. For deployments where that’s not OK, ship upgrades as full image artifacts via flash/A-B and use the apk repo for development iteration only.
- No remote network at install time during image build. Image assembly runs `apk add --no-network` against the local repo. This is intentional: build artifacts must be reproducible from the project tree alone.
Feed server and yoe deploy
The dev loop for installing in-progress builds onto a running yoe device. Three commands, layered:
- `yoe serve` — long-lived HTTP feed for the project’s apk repo, advertised via mDNS so devices and `yoe deploy` find it without configuration.
- `yoe device repo {add,remove,list}` — configure `/etc/apk/repositories` on a target device so `apk add` from the device pulls from your dev feed.
- `yoe deploy <unit> <host>` — build, ship, and install a unit on a running device in one command. Pulls the unit and all its transitive deps via apk on the device side, so dependency resolution mirrors production OTA.
The model is pull, not push. Every install — image-time, on-device OTA, and
the dev loop — uses the same apk repo, the same APKINDEX.tar.gz, and the same
signing key. Adding a new runtime dep to a unit doesn’t require updating deploy
machinery; apk on the device resolves it.
Trust
apks and APKINDEX are signed by the project key (docs/signing.md). Every yoe
device has the matching public key in /etc/apk/keys/ via base-files. apk
verifies signatures unconditionally, so the HTTP transport is plain — package
integrity is enforced at the package layer, not the network layer.
For production OTA, layer HTTPS via reverse proxy (docs/on-device-apk.md).
Common workflows
One-time setup on a fresh device
A device that was just flashed with an image built by your project needs nothing
— the public key is already in /etc/apk/keys/. Configure the repo:
# Dev host, in your project dir
yoe serve &
# In another terminal — autodiscovers the running serve via mDNS
yoe device repo add dev-pi.local
After this, on the device:
apk update
apk add htop strace gdb # any unit your project builds is now installable
If the device was flashed from someone else’s image (no project key), pass
--push-key:
yoe device repo add dev-pi.local --push-key
Iterating on a single unit
yoe deploy myapp dev-pi.local
Builds myapp, starts an ephemeral feed (or reuses your running yoe serve if
it’s advertising the same project), ssh’s to the device, and runs
apk add --upgrade myapp. Transitive deps are resolved on the device.
A # >>> yoe-dev … # <<< yoe-dev block in /etc/apk/repositories on the
target is left in place after deploy — same block yoe device repo add would
have written. So the first deploy to a fresh device doubles as the persistent
feed config.
Multiple devices on a LAN
Run yoe serve once on the dev host. Each device runs yoe device repo add
once. After that, apk update && apk upgrade on each device picks up new
builds.
Tearing it down
yoe device repo remove dev-pi.local
Strips the # >>> yoe-dev block from /etc/apk/repositories. The device falls
back to whatever else is configured (typically nothing, in dev).
Inspecting the device’s repo config
yoe device repo list dev-pi.local
Cats /etc/apk/repositories, prefixed with the source filename. (Also reads
/etc/apk/repositories.d/*.list if present, though apk-tools 2.x does not read
those itself — they’re informational only.)
Command reference
yoe serve
yoe serve [--port PORT] [--bind ADDR] [--no-mdns] [--service-name NAME]
- `--port` — TCP port. Default `8765`. Pinned (not random) so the URL written by `yoe device repo add` stays valid across `yoe serve` restarts.
- `--bind` — listen address. Default `0.0.0.0` (LAN-visible).
- `--no-mdns` — skip the mDNS advertisement (multicast-hostile networks).
- `--service-name` — mDNS instance name. Default `yoe-<project>`.
yoe device repo add
yoe device repo add <[user@]host[:port]> [--feed URL] [--name NAME]
[--push-key] [--user USER]
- `<[user@]host[:port]>` — ssh destination. Examples: `dev-pi.local`, `pi@dev-pi.local`, `localhost:2222` (QEMU), `pi@dev-pi.local:2200`.
- `--feed URL` — explicit URL. If omitted, browses mDNS for `_yoe-feed._tcp` on the LAN; errors clearly on 0 or >1 matches.
- `--name NAME` — name suffix for the marker block written into `/etc/apk/repositories` (`# >>> yoe-<name>` … `# <<< yoe-<name>`). Default `yoe-dev`.
- `--push-key` — copy the project signing pubkey to `/etc/apk/keys/` on the target before configuring.
- `--user USER` — default ssh user when the target spec has no `user@` prefix. Default `root`. yoe shells out to the user’s `ssh`, so `~/.ssh/config`, ssh-agent, known_hosts, and jump hosts all work.
yoe device repo remove
yoe device repo remove <[user@]host[:port]> [--name NAME] [--user USER]
Idempotent — missing file is success.
yoe device repo list
yoe device repo list <[user@]host[:port]> [--user USER]
yoe deploy
yoe deploy <unit> <[user@]host[:port]> [--user U] [--port P]
[--host-ip IP] [--machine M]
- `<unit>` — must resolve to a non-image unit. Image targets error with a pointer to `yoe flash`.
- `<[user@]host[:port]>` — ssh destination, same syntax as `device repo add`.
- `--port` — feed port (default `8765`, same as `yoe serve`).
- `--host-ip` — advertise this IP to the device instead of `<hostname>.local`. Use when mDNS resolution fails on the device.
- `--machine` — target machine override.
Constraints
- mDNS doesn’t cross subnets. Cross-subnet deploys need `--feed URL` or `--host-ip`.
- A pinned port `8765` collides if something else on the dev host is using it — pass `--port` to `yoe serve` and `yoe deploy` to override.
- The dev host needs avahi / systemd-resolved running for `<hostname>.local` to resolve from the device. Most Linux distros ship this.
- Concurrent deploys against the same project: one runs the ephemeral feed (or reuses `yoe serve`), the other will see the same URL via mDNS reuse. Truly parallel ephemeral feeds for the same project on the same dev host collide on port `8765`.
units-alpine — wrapping prebuilt Alpine packages
units-alpine is a yoe module that wraps prebuilt Alpine Linux .apk files as
yoe units. Where units-core builds packages from upstream source, units in
this module fetch a binary apk from a pinned Alpine release, verify its sha256,
and repack it as a yoe artifact. The unit’s “build” is just extracting the apk
into $DESTDIR.
When to reach for it
The policy yoe follows:
- Yoe builds the easy stuff. Small leaf libraries (`zlib`, `xz`, `expat`, `libffi`, `readline`, `ncurses`, …) and small userland tools (`less`, `htop`, `vim`, `procps-ng`, `iproute2`, …) stay in `units-core` even though Alpine ships them too. Their build is cheap, and keeping them in yoe preserves the option to retarget glibc or a different init system later.
- `units-alpine` ships Alpine-native and hard-to-build packages. Alpine-native means `musl`, `apk-tools`, `alpine-keys`, `alpine-baselayout` — things that only make sense from Alpine. Hard-to-build means packages where Alpine’s expertise (configure flags, security review, codec/license decisions, multi-language coupling) earns its keep: `openssl`, `openssh`, `curl`, eventually `python`, `llvm`, `qt6-qtwebengine`, and similar.
- Keep building from source anything where the build defines the product. Toolchain, kernel, bootloader, `busybox`, init scripts, `base-files` — these are not packages, they are the distribution.
For the broader strategic context — why this rubric exists, where Alpine doesn’t fit (notably edge AI on Jetson), and how yoe expects to handle glibc/systemd targets in the future — see libc-and-init.md.
Alpine release coupling
The Alpine release pinned in classes/alpine_pkg.star
(_ALPINE_RELEASE = "v3.21" at the time of writing) must match the
FROM alpine:<release> line in
@units-core//containers/toolchain-musl/Dockerfile. Both currently point at
v3.21.
The coupling is not aesthetic. Three things tie them together:
- libc ABI. Anything compiled in the toolchain container links against the toolchain’s musl headers and libc. Anything you fetch via `alpine_pkg` was compiled against a specific Alpine release’s musl. Mix versions and you produce images that compile and link cleanly, then crash on first run when the dynamic linker resolves a symbol whose layout has changed.
- Signing keys. Every Alpine release ships with a build-host signing key. Prebuilt apks are signed by that key, and `apk-tools` inside the target image verifies signatures against the keyring baked into the toolchain container at build time. A version skew means the keyring doesn’t recognise the signatures on the packages you’re trying to install.
- Library co-versioning. Many Alpine packages declare `D:so:libfoo.so.N` runtime dependencies pinned to specific minor versions. Pulling `package-A` from one release and `package-B` from another lands you with conflicting `so:` constraints that `apk` will refuse to install.
When bumping the Alpine release, do all three in the same commit:
- Update `FROM alpine:<release>` in `modules/units-core/containers/toolchain-musl/Dockerfile`.
- Update `_ALPINE_RELEASE` in `modules/units-alpine/classes/alpine_pkg.star`.
- Update `version` and `sha256` on every unit in `modules/units-alpine/units/`. The version comes from the new release’s APKINDEX; the sha256 is the SHA-256 of the apk file itself.
Writing a new alpine_pkg unit
load("@units-alpine//classes/alpine_pkg.star", "alpine_pkg")
alpine_pkg(
name = "sqlite-libs",
version = "3.48.0-r4",
license = "blessing",
description = "SQLite shared library (Alpine v3.21)",
runtime_deps = ["musl"],
sha256 = {
"x86_64": "...",
"arm64": "...",
},
)
The version is Alpine’s full pkgver (e.g., 3.48.0-r4), not just the upstream
version. The sha256 dict keys are yoe canonical arches; the class maps them to
Alpine arch tokens (arm64 → aarch64).
To find the version + sha256 for a package:
# 1. Find the version in the APKINDEX:
curl -sLO https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz
tar -xzOf APKINDEX.tar.gz APKINDEX | awk -v RS= '/(^|\n)P:sqlite-libs(\n|$)/ { print; exit }'
# 2. Fetch the apk and sha256 it:
curl -sLO https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/sqlite-libs-3.48.0-r4.apk
sha256sum sqlite-libs-3.48.0-r4.apk
Repeat for each architecture you target.
Dependencies are not auto-imported
Alpine packages declare runtime dependencies via the D: field in APKINDEX. The
alpine_pkg() class does not read or follow those — it requires every
dependency to be listed explicitly in runtime_deps.
This is deliberate. Auto-following Alpine’s dep closure would silently import
dozens of packages (busybox, openrc, ssl-client, …) that yoe either ships from
units-core already or doesn’t want at all. Forcing explicit runtime_deps
keeps the imported surface visible and small. When you add a new alpine_pkg,
look at its D: line in APKINDEX and either declare the corresponding yoe units
in runtime_deps, or, for deps you don’t need on the target image, just leave
them out.
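To see what Alpine itself declares before deciding what to list, reuse the APKINDEX fetched in the earlier example and read the package’s `D:` line:

tar -xzOf APKINDEX.tar.gz APKINDEX \
  | awk -v RS= '/(^|\n)P:sqlite-libs(\n|$)/ { print }' \
  | grep '^D:'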
Override with a from-source unit
Because units in units-alpine use the bare names (musl, sqlite-libs, …),
any later-priority module — including the project itself — can override them by
defining a unit with the same name. See
naming-and-resolution.md.
# PROJECT.star
modules = [
module(..., path = "modules/units-alpine"), # ships musl, sqlite-libs, …
module(..., path = "modules/units-core"), # source-built kernel, busybox, …
module(..., path = "modules/my-overrides"), # last → wins
]
# modules/my-overrides/units/musl.star
unit(name = "musl", source = "https://git.musl-libc.org/git/musl",
tag = "v1.2.5", tasks = [...])
The override unit produces an apk under the same name. Consumers writing
runtime_deps = ["musl"] get the override automatically.
libc, init, and the rootfs base
Yoe today is a musl + OpenRC + Alpine-derived distribution builder. This is a deliberate choice, not an accident, but it is also not a permanent one. This document explains the choice, what it implies for the products yoe can serve, where the boundary lies, and the planned direction for serving products that sit on the other side of that boundary — most notably edge-AI hardware where glibc and systemd are non-negotiable.
What yoe ships today
The default and currently only fully-supported configuration is:
- musl libc. All units build against musl. The build container (`toolchain-musl`) is Alpine-based. The `units-alpine` module pulls prebuilt apks from Alpine, which are themselves musl builds.
- busybox + a curated GNU userland on top. The `replaces` mechanism manages file conflicts where util-linux, coreutils, etc. ship real implementations that shadow busybox applets.
- OpenRC-style init scripts. `network-config` and similar yoe-specific units ship `/etc/init.d/Sxxx` scripts. There is no systemd integration and no plan to add one inside `units-core`.
- apk packaging. All yoe units produce signed `.apk` artifacts. Packages are installed with apk-tools at image-assembly time.
This stack runs cleanly on x86_64, arm64, and (with limitations) riscv64. It boots on QEMU, Raspberry Pi, BeagleBone, and any board where an upstream mainline kernel + a sane bootloader handle the hardware.
Where this stack works well
The musl/OpenRC/Alpine foundation is a fine choice — often the better choice — for products that share these properties:
- The developer controls the entire software stack. Custom apps, language runtimes the project picks, no closed-source vendor binaries in the critical path.
- Footprint, boot time, and simplicity matter. Alpine-derived images are typically half the size of a comparable Ubuntu image and boot in seconds. OpenRC is dramatically simpler than systemd.
- No regulatory dependence on a specific OS baseline. No Adaptive AUTOSAR, no FedRAMP/FIPS profile that names glibc, no telecom CNF spec that assumes RHEL.
- Hardware works with mainline drivers. No SoC vendor blob that was written against a specific Ubuntu LTS.
This covers a lot of real embedded territory: hobbyist SBC products, industrial gateways and edge controllers, networking equipment, custom IoT, industrial sensors, single-purpose appliances. It is a large and underserved market.
Where this stack does not work
Some products genuinely cannot ship on musl + OpenRC. The blockers are not theoretical — they are concrete proprietary binaries or specification requirements that yoe alone cannot work around.
Hard blockers (you must have glibc)
- SoC-vendor binary blobs. NVIDIA Jetson’s CUDA/cuDNN/TensorRT, Qualcomm display and camera HALs, NXP i.MX VPU and ISP blobs, Mali and Vivante GPU drivers. These are glibc-only proprietary binaries shipped by the silicon vendor with no plans to support musl.
- Commercial industrial-control runtimes. Codesys, ISaGRAF, vendor PLC stacks, fieldbus stacks (PROFINET / EtherCAT closed implementations).
- Vendor BSP ecosystems. Yocto BSPs from SoC vendors default to glibc + systemd and assume both throughout.
- Strict standards regimes. Adaptive AUTOSAR, telecom 5G CNF profiles, certain medical-device certifications.
- Enterprise Java app servers. WebSphere, WebLogic, some Oracle middleware — validated only on glibc.
Hard blockers (you must have systemd)
- Applications linking `libsystemd` directly (sd-bus, sd-journal).
- Service hardening directives (`PrivateTmp`, `ProtectSystem`, namespace policy) used as primary architecture rather than a sidecar.
- Container runtimes configured with the systemd cgroup driver — many edge-AI inference deployments fall into this.
- Apps shipping systemd-only `.service` files, where porting to OpenRC means touching every app rather than the OS.
Soft blockers (workable but real)
- musl’s locale and i18n support is intentionally minimal.
- DNS resolver edge cases (musl historically did not do DNS-over-TCP for large responses by default).
- libstdc++ and a handful of glibc-specific extensions (`LD_AUDIT`, `nscd`, certain printf format specifiers, `getaddrinfo` quirks).
- Debug tooling — `gdb`, `perf`, eBPF — has rougher edges on musl.
These are workable individually; in aggregate, on a complex product, they add up.
The case yoe should serve next: edge AI on Jetson
The natural next market for yoe is edge AI on Jetson-class hardware. This is where embedded budget is concentrated through 2026–2030, and it is where the existing tooling story is genuinely poor — NVIDIA’s SDK Manager hands you a stock Ubuntu image, customization is painful and non-reproducible, and meta-tegra (the Yocto path) is heavy and lags the official BSP.
It is also a market that yoe cannot serve in its current configuration, because Jetson forces glibc + systemd:
- CUDA, cuDNN, TensorRT, DeepStream, Triton, Argus, MMAPI — all glibc, all proprietary.
- L4T (Linux for Tegra) is an Ubuntu derivative; NVIDIA’s docs, support, reference designs, and customer projects all assume Ubuntu-shaped systems.
- `nvidia-container-runtime` integrates with Docker/containerd configured against systemd’s cgroup driver.
- Out-of-tree NVIDIA kernel modules must be built against L4T’s kernel tree with NVIDIA’s patches.
There is no clever way around this. A “musl Jetson” is a research project, not a product.
Strategic options
A. Stay where we are
Keep yoe aimed at the non-AI segment. Don’t pursue Jetson. This is the simplest path and the one the existing architecture serves cleanly. It is a smaller market than (C), but a real one.
B. Pivot fully to edge AI
Discard the Alpine-first foundation. Build yoe around Ubuntu/L4T as the default
rootfs source. The alpine_pkg work becomes mostly irrelevant. Different
foundation, different competition (SDK Manager, balenaOS, Foundries.io’s LmP,
meta-tegra), different positioning.
C. Make yoe agnostic about the rootfs base
Keep what we have, add a project-level abstraction that lets each project pick its own rootfs source. The same yoe DAG, dev loop, image assembly, signing, and OTA serve both “minimal Alpine gateway” and “CUDA-enabled Jetson edge AI box.”
This is yoe’s most defensible long-term identity. There is no other tool that
gives you a consistent embedded dev experience across heterogeneous distribution
bases. The work already done on shadowing, unit override, the alpine_pkg
class, and the apk-feed model is the right architecture for this future — the
base-source abstraction sits above it, not in place of it.
(C) is the recommended direction.
Rootfs-base abstraction (planned)
Status: Not implemented. Yoe today only supports the Alpine/musl/OpenRC configuration described in What yoe ships today. The abstraction sketched here is a forward design for serving glibc/systemd products (notably Jetson) without forking the project. No code, Starlark builtin, project field, or class described below exists in the current implementation.
The shape of the abstraction:
project(
name = "edge-ai-camera",
base = ubuntu_l4t(version = "36.4", flavor = "minimal"),
machines = [...],
modules = [
module("...", path = "modules/units-l4t"), # CUDA, TensorRT, DeepStream
module("...", path = "modules/my-app"), # the actual product
],
)
Or for the existing Alpine path:
project(
name = "industrial-gateway",
base = alpine_rootfs(version = "v3.21"),
machines = [...],
modules = [
module("...", path = "modules/units-alpine"),
module("...", path = "modules/units-core"),
],
)
Or for the from-source extreme:
project(
name = "minimal-bootloader-test",
base = yoe_native(), # build everything from source
...
)
A base is a tuple of
(libc, init, filesystem conventions, upstream feed format). The first three
are runtime properties of the target. The fourth is a conversion-time concern
handled by yoe, not something that propagates to the target.
The base provides:
- A starting rootfs. Tarball, deb-bootstrap, apk-bootstrap, or “build it yourself.”
- The libc and init choice. Implied by the base — `ubuntu_l4t` implies glibc + systemd, `alpine_rootfs` implies musl + OpenRC, `yoe_native` implies whatever yoe builds explicitly.
- Filesystem conventions. Multiarch lib paths under Debian-derived bases, flat paths under Alpine, etc.
- The “given” packages. Things the base distribution already ships, that yoe consumes rather than rebuilds (CUDA on Jetson, busybox on Alpine).
- The upstream feed format. apt/deb for Ubuntu/L4T bases, apk for Alpine bases. Yoe converts whatever the upstream uses into apks during fetch (see Package format stays apk regardless of base below). dpkg and apt never run on the target.
What yoe continues to own regardless of base:
- Image assembly: partition layout, bootloader install, signing, OTA.
- The DAG and content-addressed cache.
- The dev loop: `yoe build`, `yoe dev`, `yoe deploy`, `yoe run`, `yoe flash`.
- The unit format and the override/composition model.
- The signed apk feed. Every package on every target is a yoe-signed apk, regardless of where the bits originally came from.
- The on-target installer (apk-tools, glibc-built or musl-built depending on base).
- The TUI and the project orchestration commands.
The bits that vary with the base:
- The toolchain container (`toolchain-musl` for Alpine, `toolchain-glibc-arm64` for Jetson, etc.).
- The init system integration (OpenRC scripts vs systemd unit files).
- The `network-config`-style yoe-defining units (would have a systemd-flavored variant for systemd bases).
- The conversion class invoked when consuming upstream packages (`alpine_pkg`, `deb_pkg`, …).
Package format stays apk regardless of base (planned)
Status: Forward design. Today only `alpine_pkg` exists, and it consumes packages that are already apks — no format conversion is performed. The `deb_pkg` class described below is unimplemented; this section captures the design that the rootfs-base abstraction is expected to follow when Debian-derived bases land.
A core invariant of the rootfs-base abstraction: the on-target package format is apk, always. When yoe consumes packages from an upstream feed that uses a different format (apt/deb, RPM, …), the conversion happens at fetch time and produces a yoe-signed apk. The target image runs apk-tools, not dpkg or rpm.
The wins:
- The dev loop, override model, signed feed, DAG, and cache are identical across bases. A developer working on an Alpine gateway and a developer working on a Jetson box write the same kind of unit, deploy with the same `yoe deploy`, and get the same dev experience.
- Yoe’s signing key is the only key the target trusts. Upstream signing keys (NVIDIA’s apt key, Ubuntu’s keyring) never need to be installed on the target.
- A single installer toolchain on the target — apk-tools — instead of carrying dpkg + apt + their dependencies.
For Debian-derived bases, this implies a deb_pkg class symmetric to
alpine_pkg. Mechanically: ar x the .deb, extract data.tar.{gz,xz,zst},
re-pack the file tree as an apk, translate metadata (Depends: → D:,
Provides: → p:, Replaces: → r:), sign with the project key.
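The extraction half of that is plain binutils/tar work. A hand-run sketch — the package file name is illustrative, and the real `deb_pkg` class would also generate the apk control segment and sign the result:

ar x libfoo_1.0-1_arm64.deb          # → debian-binary, control.tar.*, data.tar.*
mkdir -p destdir
tar -C destdir -xf data.tar.xz       # file tree that becomes the apk's data segment
tar -xOf control.tar.xz ./control | grep '^Depends:'   # input for the D: translation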
Glibc binaries on a glibc base, systemd unit files on a systemd base, multiarch paths on a Debian-conventions base — all of this is handled by the base, not by the format conversion. Once libc + init + conventions match what the upstream package was built for, the binaries inside the package run unchanged regardless of whether they’re delivered as a deb or a yoe-converted apk.
Residual dpkg-userland concerns
The conversion is mechanically straightforward. The non-trivial part is that
many Debian packages ship maintainer scripts that call dpkg-specific userland
tools — update-alternatives, dpkg-divert, debconf — which exist on
Debian/Ubuntu but not on a yoe target. Each has a bounded mitigation:
- `update-alternatives`. Many Ubuntu packages register `/usr/bin/python` → `python3.10`, `/usr/bin/editor` → `vim.basic`, etc. Three viable strategies, in order of preference:
  - Bake at conversion time. Resolve alternatives statically during deb→apk repackaging — pick the priority-winning symlink, embed it as a real symlink in the apk’s data tree. Stateless, deterministic, works for the common case where embedded products don’t switch alternatives at runtime.
  - Ship a tiny `update-alternatives` stub. A few hundred lines of shell that mimics the file format and CLI surface. Required if any package will be installed/upgraded post-deploy via `apk add` and its postinst calls update-alternatives.
  - Translate calls during script conversion. Postinst calls like `update-alternatives --install ...` get rewritten to direct `ln -sf` during conversion.
- `dpkg-divert`. Used to relocate a file shipped by package A so package B can put its own version there. Rare in practice; effectively absent from the L4T set. Defer until a package actually needs it.
- Triggers. Debian’s file-trigger mechanism (`/etc/ld.so.conf.d/` triggers `ldconfig`, `/usr/share/man/` triggers `mandb`, etc.). apk has no equivalent. Run `ldconfig` once at end-of-rootfs-assembly; skip mandb / desktop-database / icon-cache for embedded images, or run them as a post-image step. None affect runtime behaviour.
- `debconf` interactive prompts. Conversion has to pre-answer them. NVIDIA’s debs are mostly non-interactive; the few that aren’t get a per-package preseed declared in the unit.
- `/var/lib/dpkg/` probes. Some scripts test for the dpkg database. If it matters for a specific package, ship a stub dpkg database (an empty directory tree with a `status` file marking everything “installed”). Tiny, one-time work in the rootfs base.
- License redistribution. CUDA / cuDNN / TensorRT / DeepStream EULAs allow inclusion in shipped product images but generally not public mirroring. Yoe’s converted apks are fine for a customer’s private product feed; they should not be hosted on a public mirror. `alpine_pkg` has this concern in principle but Alpine is FOSS-dominant; NVIDIA’s stack is where it actually bites.
- APT mirror semantics. Apt’s repo format (signed `Release` files, `Packages.gz`, version constraints with epochs and tildes) is more complex than Alpine’s flat `APKINDEX`. The conversion class needs to read it correctly. Several mature Go libraries handle this; not novel work.
The kernel-module problem (NVIDIA’s out-of-tree drivers built against L4T’s specific kernel ABI) is orthogonal to package format — it’s a Jetson-target problem, not a deb-vs-apk problem.
Base bootstrap
Yoe does not have a “bootstrap” phase in the debootstrap sense — there is no
separate first stage that builds a minimum environment before normal package
installation can run. The rootfs assembly is a single procedure that works the
same way today on Alpine and would work the same way on a glibc/systemd base
tomorrow:
- `mkdir <rootfs>` — the starting rootfs is an empty directory.
- Create the apk DB skeleton: `mkdir -p <rootfs>/lib/apk/db && touch <rootfs>/lib/apk/db/installed`.
- Drop the project’s signing key into `<rootfs>/etc/apk/keys/`.
- Write `<rootfs>/etc/apk/repositories` pointing at the project’s signed feed (and any auxiliary feeds the base wants to consume directly, if the project opts in).
- `apk add --root <rootfs> --initdb <package list>` — run from inside the toolchain container, against the project’s feed.
That is the whole assembly. Everything in the rootfs lands via apks. The first
packages installed (base-files, musl or libc6, the userland shell,
apk-tools, init system) carry the filesystem skeleton — /etc/passwd,
/etc/group, /dev, /proc mountpoints, default config files — inside their
data segments.
The only things that have to exist before this loop runs are the toolchain container (provides apk-tools as the orchestrator binary) and the project’s signed feed (provides the apks to install).
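Condensed into commands, the loop looks roughly like this when run from inside the toolchain container — a sketch; the paths and the package list are illustrative, not fixed by yoe:

rootfs=build/x86_64/base-image/rootfs
mkdir -p "$rootfs/lib/apk/db" "$rootfs/etc/apk/keys"
touch "$rootfs/lib/apk/db/installed"
cp keys/myproj.rsa.pub "$rootfs/etc/apk/keys/"
echo "/repo/myproj" > "$rootfs/etc/apk/repositories"
apk add --root "$rootfs" --initdb --no-network base-files busybox musl apk-tools openrc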
What varies by base
- The foundation package set. Alpine bases install `base-files`, `busybox`, `musl`, `apk-tools`, OpenRC. A glibc/systemd base installs something like `base-files-systemd`, `libc6`, `bash` (or `busybox-glibc`), `apk-tools-glibc`, `systemd`, `dbus`. Each base declaration enumerates its foundation set.
- The toolchain container. `toolchain-musl` for Alpine bases, a parallel `toolchain-glibc-arm64` (or similar) for glibc bases. The container’s libc and the target’s libc are independent — apk-tools at install time just extracts files, it doesn’t dlopen them.
- The signing key trusted in the rootfs. Always the project key. The upstream signing key (Alpine’s, NVIDIA’s, Ubuntu’s) is used during fetch and verification by the conversion class but never reaches the target.
Two source models for foundation packages
Option A: From-apks (purist, fully reproducible). Every package, including
the essentials, comes from a yoe-built or conversion-class-wrapped apk in the
project’s feed. The starting rootfs is empty; yoe owns the entire chain. For a
glibc/systemd base, this means wrapping libc6, libstdc++6, systemd,
bash, etc. as deb_pkg units. More setup work, total reproducibility.
Option B: From-tarball (pragmatic, vendor-blessed). The project’s base()
declaration points at a vendor-supplied rootfs tarball — NVIDIA’s official L4T
sample rootfs for Jetson, ubuntu-base-<version>.tar.gz for generic Ubuntu, or
alpine-minirootfs-<version>.tar.gz for an Alpine shortcut. Yoe extracts the
tarball as the starting rootfs, then runs apk add --root to overlay
yoe-installed apks on top. apk-tools installs into a non-empty rootfs without
conflict — it owns its own DB and ignores files it didn’t put there, except
where its package contents collide. Faster to set up because the wrapping work
for “every essential package” is replaced by trusting the tarball. Less
reproducible because the tarball is a black box.
For Jetson, most projects will pick Option B — NVIDIA tests the sample rootfs and supports it as the basis of L4T. Option A is the right answer when every byte must be audited, when no vendor tarball exists, or when a project wants the same provenance story across bases.
Why an empty starting rootfs works for any libc
A common confusion: if running glibc binaries requires glibc to be present, how does an empty rootfs get glibc onto itself?
apk-tools at install time is a file extractor, not an executor. It reads
each apk’s data tar and writes the files to the target rootfs; nothing ever
calls into the binaries it’s installing. The apk-tools process doing the work
runs in the toolchain container, where its own libc is whatever the container
provides — musl today, glibc on a glibc-based toolchain container later. When
apk-tools extracts the libc6 package’s data tar into the target rootfs, it
places /lib/aarch64-linux-gnu/libc.so.6 on disk; nothing tries to dlopen it
until the rootfs actually boots.
So the toolchain container’s libc and the target rootfs’s libc are independent.
A Jetson target rootfs (glibc) can be assembled from a toolchain container
that’s still musl-based, and a yoe-built apk-tools-glibc unit can land on the
target as just another package alongside libc6, ready to run on first boot.
The same principle is why on-target apk add after deployment works identically
across bases: by then the rootfs has its own apk-tools binary linked against
its own libc, and the install loop is just “extract files, update DB.”
What changes for yoe-defining units
Today, network-config, base-files, and similar units assume OpenRC-style
/etc/init.d/Sxx scripts. In a base-agnostic future, those units gain a
base-aware code path or get split into init-system-specific variants. The
override model already in yoe (name shadowing, provides for alternative
selection) handles this cleanly: either the init-system-specific units-systemd
module shadows network-config with a systemd version, or network-config
itself detects the active base.
Either pattern works. The decision is local to each unit.
Practical roadmap (planned)
Status: Forward design, not a commitment. The current focus remains finishing the Alpine/musl path described in What yoe ships today and units-alpine.md. The phases below describe the approximate order in which the rootfs-base abstraction would be built, conditional on demand.
- Solidify the Alpine path. Ship enough that yoe is a viable choice for non-AI embedded products today. The same architecture carries forward; this is the foundation that proves the dev-loop and image-assembly value before a second base is introduced.
- Identify the Alpine-coupled seams. Survey `units-core` and the internal Go code for assumptions that won’t survive a non-Alpine base: hardcoded apk-tool invocations, OpenRC-flavored init paths, busybox-shadow logic in `replaces`, the toolchain container’s musl-only Dockerfile. Make these pluggable but defer the rewrite.
- `deb_pkg` class. Symmetric to `alpine_pkg`: fetch a `.deb`, extract `data.tar.{gz,xz,zst}`, repack as a yoe apk with translated metadata, sign with the project key. Resolve `update-alternatives` calls statically at conversion time. Treat the rest of the dpkg-userland concerns (Residual dpkg-userland concerns) as they come up, per-package, in priority order.
- First Jetson prototype. Pick a single Jetson SKU (Orin Nano dev kit is cheapest), get a yoe-assembled image booting with CUDA working end-to-end. Treat it as a learning project — the goal is to discover what abstraction breaks, not to ship Jetson support. Likely outputs: a `toolchain-glibc-arm64` container, a `ubuntu_l4t` rootfs base implementation that uses `deb_pkg` to consume NVIDIA’s apt feed, a systemd-flavored `network-config`, glibc apk-tools on the target.
- Promote the abstraction. With one working Jetson example, generalize the project base configuration so the same yoe codebase serves both Alpine and Jetson cleanly. The `deb_pkg` class earns its keep by being reused across Ubuntu generic, Debian, L4T, and any future Debian-derived base.
- Second base, third base. Once the abstraction is proven on two distinct bases, additional bases (Ubuntu generic, Adelie’s glibc/musl mix, Yocto layers, custom rootfs tarballs) become incremental wraps rather than redesigns.
Decision rubric
Until the rootfs-base abstraction lands, yoe should refuse to chase glibc/systemd compatibility through hacks (gcompat shims, dual-libc images, OpenRC-emulating-systemd compatibility layers). These produce brittle systems that look like they work and then fail at the worst moment. The right answer for a glibc/systemd target today is “yoe is not the right tool yet” — say it explicitly and revisit when the abstraction is real.
For the Alpine path, the rubric stays as established in units-alpine.md:
- Yoe builds the easy stuff (small libraries, small userland tools) to preserve libc-portability.
- `units-alpine` ships Alpine-native (apk-tools, alpine-keys, musl) and hard-to-build packages (when added — openssl, curl, openssh, qtwebengine, python, llvm).
- Project-level shadowing remains the override hook for any individual package the project wants to swap.
Summary
Today: musl + OpenRC + Alpine, serving non-AI embedded well.
Tomorrow (planned): rootfs-base-agnostic, where each project picks the foundation appropriate to its hardware and product. Same yoe experience over Alpine for gateways and over Ubuntu/L4T for Jetson.
Not on the menu: trying to make musl/OpenRC pretend to be glibc/systemd, or trying to make yoe pretend to be a single-base distribution like Alpine itself. Those are projects that have already been tried and have not aged well.
Running Containers on yoe Images (planned)
Status: No container runtime ships in any yoe-built image today. This document captures the design discussion and prerequisites for getting a container runtime (Docker, Podman, or containerd) running on devices built from yoe units. Nothing described here is implemented yet.
Supporting container workloads on yoe-built images is a high-value feature: it is the single biggest thing that turns a minimal embedded Linux into something people actually want to deploy on real devices. This document records what it would take.
Reference Point: Home Assistant OS
Home Assistant OS (HAOS) is the clearest proof that full Docker on embedded devices is viable, and it is a useful reference architecture. Key facts:
- Base: Buildroot (not Yocto)
- Container runtime: full Docker Engine (`dockerd` + `containerd` + `runc`)
- Orchestration: their own “Supervisor” — a privileged container that manages addon containers and talks to the host via D-Bus
- Rootfs: read-only squashfs with A/B partitions for atomic updates (RAUC)
- Data partition: separate ext4/btrfs for `/var/lib/docker` and addon state
- Init: systemd
- Networking: NetworkManager
HAOS images are ~350 MB compressed / ~1 GB installed and run comfortably on a Raspberry Pi 4 with 2 GB RAM. Source and kernel fragments are public at https://github.com/home-assistant/operating-system.
The takeaway: Buildroot-with-Docker has been a proven path for years. Nothing in yoe’s architecture prevents matching or bettering it.
Kernel Requirements
A container-capable kernel needs a specific set of CONFIG options. The upstream
moby/moby repository ships a check-config.sh script that enumerates them and
is worth wiring into the kernel unit’s QA step.
Essentials:
- Namespaces: `PID`, `NET`, `IPC`, `UTS`, `USER`, `MNT`, `CGROUP`
- Cgroups v2 (`CONFIG_CGROUPS`, `CONFIG_MEMCG`, `CONFIG_CPUSETS`, etc.) — modern Docker and containerd assume v2
- Storage driver: `CONFIG_OVERLAY_FS` — without this the engine falls back to the `vfs` driver, which is unusably slow
- Networking: `CONFIG_BRIDGE`, `CONFIG_VETH`, `CONFIG_NETFILTER*`, `CONFIG_NF_NAT`, `CONFIG_NF_TABLES` (or legacy `CONFIG_IP_NF_*`)
- Security: `CONFIG_SECCOMP`, `CONFIG_SECCOMP_FILTER`, `CONFIG_KEYS`
- Misc: `CONFIG_POSIX_MQUEUE`
The plan is to ship a kernel-container-host.cfg config fragment alongside the
kernel unit and add a build-time check that runs check-config.sh against the
resulting .config.
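As a rough illustration of that QA step (paths are placeholders; the exact hook into the kernel unit is not designed yet), the fragment and the checker can be exercised by hand today:

```sh
# Fold the container-host fragment into an existing kernel config
cat kernel-container-host.cfg >> build/linux/.config
make -C build/linux olddefconfig

# Fetch moby's checker and run it against the resulting config
curl -LO https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh
chmod +x check-config.sh
./check-config.sh build/linux/.config
```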
Userspace Prerequisites
Container runtimes pull in userspace tools that yoe does not yet package. Shipping a container-capable image forces the following units from the roadmap to land first:
- `iptables` or `nftables` — Docker refuses to start without one
- `ca-certificates` — required to pull images over TLS
- `util-linux` — container runtimes use `mount` with flags that busybox mount does not handle cleanly
- `kmod` — needed to load `overlay`, `bridge`, and netfilter modules at runtime, unless everything is built into the kernel
- `e2fsprogs` — for formatting a dedicated `/var/lib/docker` partition
This is a nice forcing function: these units are all on the roadmap for other reasons, and shipping a container host is a concrete goal that justifies landing them.
libc and Init System
All mainstream container runtimes — Docker, containerd, runc, Podman, nerdctl — are written in Go and do not meaningfully care about the host libc. Alpine Linux (musl + OpenRC) has shipped full Docker for years; Void (musl + runit) and Chimera (musl + dinit) do the same. yoe currently targets musl, so this is a well-trodden path with even less friction than the glibc equivalent.
Known musl-specific caveats, all survivable:
- musl’s DNS resolver does not honor `/etc/nsswitch.conf` and differs from glibc in edge cases. This affects workloads running in containers, but most container images bring their own libc (Debian, Alpine, distroless), so the host’s libc rarely reaches the workload.
- Prebuilt Go binaries compiled with `CGO_ENABLED=1` against glibc will not run on a musl host. yoe builds everything from source, so this is moot.
None of these runtimes require systemd. Docker ships a SysV-style init script
upstream; Alpine’s packaging supplies OpenRC services for dockerd,
containerd, and Podman. Podman is daemonless and needs no init integration at
all.
Init-system considerations for yoe:
- yoe currently uses busybox init, which is fine for `dev-image` but thin for a container host — no dependency ordering, no supervision, no auto-restart of crashed daemons.
- OpenRC is the natural next step: small, well-supported by Alpine’s packaging, and the path of least resistance for Docker/containerd service scripts.
- s6 or runit are lighter alternatives if supervision is the main need and OpenRC’s dependency machinery feels heavy.
- systemd is possible but a large addition and not required. Adopt only if a downstream workload genuinely needs it.
- cgroups v2 without systemd: mount `cgroup2` at `/sys/fs/cgroup` at boot and configure the kernel cmdline accordingly (see the sketch after this list). containerd and Docker handle this fine; no systemd-specific glue is needed.
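A minimal sketch of the no-systemd cgroups v2 setup, assuming a busybox-init or OpenRC boot script (file placement is illustrative; only the kernel parameter and the mount call are the actual mechanism):

```sh
# Kernel cmdline: hide the legacy v1 hierarchy so the engines see pure cgroups v2
#   cgroup_no_v1=all

# Early in boot, mount the unified hierarchy
mount -t cgroup2 none /sys/fs/cgroup

# containerd/Docker detect v2 by the presence of this file
cat /sys/fs/cgroup/cgroup.controllers
```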
The init choice should be made deliberately before the container-host-image
milestone. OpenRC is the default recommendation unless there is a reason to pick
otherwise.
Runtime Choice
Three credible options, in rough order of embedded-friendliness:
Option 1: containerd + runc + nerdctl
- Smallest footprint (~50–100 MB installed)
- What Kubernetes and K3s use under the hood
- `nerdctl` provides a `docker`-compatible CLI
- Best pick if the device is a workload runner rather than a developer box
- Recommended as the first milestone — smallest surface, proves the concept, leaves room for Docker CE later
Option 2: Podman
- Daemonless, rootless-friendly
- CLI-compatible with `docker`
- Popular in Red Hat ecosystems and increasingly in embedded
- Good middle ground if users expect a `docker`-like UX without the daemon
Option 3: Docker CE
- Largest footprint (~200–300 MB across `dockerd`, `containerd`, `runc`, CLI)
- Maximum ecosystem compatibility — Compose, Swarm, third-party tooling
- What users ask for by name because of familiarity
- Worth adding after containerd is working, if there is demand
Building from Source
Docker’s prebuilt “static” binaries (from download.docker.com/linux/static/)
are not truly static — dockerd, containerd, and runc are linked against
glibc and pull in libseccomp/libdevmapper dynamically on some releases — so
they will not run on a musl-based yoe rootfs. Building from source is the only
serious path.
The toolchain side is already solved: modules/units-core/units/dev/go.star
provides a Go toolchain (currently Go 1.26.2) and classes/go.star gives Go
units a build class. The component breakdown:
- `docker` CLI — pure Go, `CGO_ENABLED=0`, no system-library deps
- `containerd` — mostly pure Go, builds with `CGO_ENABLED=0` for the daemon and `ctr`
- `runc` — effectively requires cgo + `libseccomp` to be useful; without seccomp filtering it is not a serious container runtime, so a `libseccomp` unit must land first
- `dockerd` — optional cgo paths for graphdrivers (devicemapper, btrfs), all avoidable with overlay2 as the default storage driver
- `tini` (docker-init) — small C program, trivial autotools build
So the work is one C library unit (libseccomp), four Go units (runc,
containerd, docker, dockerd), and one trivial autotools unit (tini). The
genuinely hard pieces are the runtime concerns covered elsewhere in this
document — kernel config, init integration, iptables/nftables, the
/var/lib/docker data partition — not the source builds themselves.
Alpine’s aports tree (community/docker, community/containerd,
community/runc) is the obvious reference: those packages are already
musl-native and the APKBUILDs document the exact configure flags, ldflags, and
patches that work in practice.
Building cgo Units (runc, libseccomp consumers)
The pure-Go components (docker CLI, containerd) drop into the existing
go_binary class without ceremony — that class already pulls the upstream
golang:1.24 container and builds with CGO_ENABLED=0. The interesting case is
runc, which needs cgo + a working C compiler + libseccomp headers and
libraries, all in the same build environment.
The Yoe-native answer is to use the existing units/dev/go.star as a build-time
dep rather than introducing a new “Go + GCC” container:
- A unit’s `deps` are installed into the build sysroot before that unit builds. The Go toolchain unit installs to `$PREFIX/lib/go` with `/usr/bin/{go,gofmt}` symlinks, so a unit with `deps = ["go"]` gets `go` on `PATH` at build time.
- The same mechanism lands a `libseccomp` unit’s headers and `.so` in the sysroot, where `pkg-config --cflags --libs libseccomp` finds them.
- The existing `toolchain-musl` container already provides `gcc`, `binutils`, `make`, etc.
So a runc unit is: container = "toolchain-musl",
deps = ["go", "libseccomp"], and a build task that runs the upstream Makefile.
go build invokes gcc from the container, links against libseccomp from the
sysroot, and uses go from the sysroot. One container, three pieces, all native
to the Yoe model.
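What that build task boils down to is roughly the following sketch, assuming `pkg-config` already resolves the sysroot-installed `libseccomp` (the exact build tags Alpine uses are documented in its `community/runc` APKBUILD):

```sh
# Hedged sketch of a runc build task: cgo on, seccomp support from the sysroot
export CGO_ENABLED=1
export CGO_CFLAGS="$(pkg-config --cflags libseccomp)"
export CGO_LDFLAGS="$(pkg-config --libs libseccomp)"

# runc's upstream Makefile drives a plain `go build`; the seccomp build tag
# pulls in the libseccomp cgo bindings
make BUILDTAGS="seccomp"
make install DESTDIR="$DESTDIR"
```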
The one wrinkle: classes/go.star::go_binary currently hardcodes
container = "golang:1.24" and CGO_ENABLED=0, which is fine for pure-Go units
but cannot express the cgo + musl + sysroot-deps combination above. The class
should grow a cgo = True mode that switches the container to toolchain-musl,
drops the CGO_ENABLED=0, and relies on deps for the Go toolchain instead of
the upstream Go image. This same path will be reused by anything else needing
cgo (devmapper, btrfs, AppArmor consumers), so it is worth making first-class
rather than hand-rolling tasks per unit.
Resource Envelope
From HAOS experience and general rules of thumb:
- Storage: ~100 MB (containerd-only) to ~300 MB (Docker CE) for the engine itself, plus whatever images and volumes the workloads need. A dedicated data partition for `/var/lib/containerd` or `/var/lib/docker` is strongly recommended (see the sketch after this list).
- RAM: 256 MB minimum for the daemon to be non-miserable; 512 MB+ for anything real; 2 GB+ for comfortable multi-container workloads.
- Rootfs: writable `/var` (or a writable overlay) is required. A read-only rootfs with a separate writable data partition — HAOS-style — is the right long-term pattern.
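For concreteness, the data-partition pattern on a booted device is nothing more exotic than the following (device node and label are illustrative; `mkfs.ext4` comes from the `e2fsprogs` unit listed above):

```sh
# One-time: format the dedicated data partition
mkfs.ext4 -L container-data /dev/mmcblk0p3

# At boot: mount it where the engine keeps images, layers, and volumes
mkdir -p /var/lib/containerd
mount /dev/mmcblk0p3 /var/lib/containerd

# Or persistently via /etc/fstab:
#   LABEL=container-data  /var/lib/containerd  ext4  defaults  0 2
```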
Suggested Path
- Land the roadmap units `util-linux`, `kmod`, `iptables`/`nftables`, `ca-certificates`, and `e2fsprogs`. These are needed for other reasons too.
- Add a `kernel-container-host.cfg` fragment and wire `check-config.sh` into the kernel unit’s QA step.
- Package `runc`, `containerd`, and `nerdctl` as the first milestone.
- Ship a `container-host-image` alongside `dev-image` that pulls it all together — kernel config, userspace, engine, and a writable data partition.
- Consider Podman and/or Docker CE as follow-on units once the containerd path is solid.
- Longer term: mirror HAOS’s update architecture (A/B partitions, read-only rootfs, signed update bundles). That is where HAOS spent its engineering budget, and it is the real differentiator against ad-hoc Buildroot images.
Why This Matters for yoe
- Enabling Docker on Buildroot is famously fiddly; on Yocto it requires the large `meta-virtualization` layer. yoe can ship a clean, opinionated path that is smaller and more approachable than either.
- A `container-host-image` is a credible, demo-able milestone that proves the machine-portability claims in `docs/metadata-format.md` are real.
- It turns yoe from “a nicer way to build a minimal Linux” into “a reasonable way to build a production-shaped device OS” — a much larger audience.
Comparisons
How [yoe] relates to existing embedded Linux build systems and distributions.
For each, we identify what [yoe] adopts, what it leaves behind, and where it
differs.
vs. Yocto / OpenEmbedded
Yocto is the industry standard for custom embedded Linux. It is extremely capable but carries significant complexity.
What [yoe] adopts from Yocto:
- Machine abstraction — a declarative way to define board-specific configuration (kernel defconfig, device tree, bootloader, partition layout).
- Image units — composable definitions of what goes into a root filesystem image and how it’s laid out on disk.
- Module architecture — the ability to overlay vendor BSP customizations on top of a common base without forking.
- OTA integration — first-class support for update frameworks (RAUC, SWUpdate).
What [yoe] leaves behind:
- BitBake and the task-level dependency graph.
- The unit/bbappend/bbclass metadata system.
- sstate-cache complexity — Yocto’s sstate is per-task and requires careful configuration of mirrors, hash equivalence servers, and signing. [yoe]’s cache is per-unit, stored in S3-compatible object storage, and needs only a bucket URL.
- Cross-compilation toolchains.
- Python as the tooling language.
- No conditional override syntax. Yocto’s override system (`DEPENDS:append:raspberrypi4`, `SRC_URI:remove:aarch64`, etc.) exists because BitBake’s metadata model is variable-based — you set global variables and then layer conditional string operations on top. The result is powerful but notoriously hard to debug (you need `bitbake -e` to see what a variable actually resolved to).
[yoe]’s model is function-based, which covers the same use cases more
explicitly:
| Yocto override | [yoe] equivalent |
|---|---|
| `DEPENDS:append:raspberrypi4` | `if MACHINE == "raspberrypi4": extra_deps = [...]` |
| `SRC_URI:append:aarch64` | `if ARCH == "aarch64": ...` in the unit |
| `PACKAGECONFIG:remove:musl` | Module scoping — musl project doesn’t include that module |
| `FILESEXTRAPATHS:prepend` + `append` | `load()` the upstream function, call with different args |
Starlark has if with predeclared variables (MACHINE, ARCH), and the
function composition pattern handles the “extend from downstream” case. When
machine-specific behavior is needed, it’s right there in the .star file — no
hidden layering of string operations.
Key differences:
| Yocto | [yoe] | |
|---|---|---|
| Build system | BitBake (Python) | yoe (Go) |
| Package format | rpm / deb / ipk | apk |
| Config format | BitBake units (.bb/.bbappend) | Starlark (Python-like) |
| Cross-compilation | Required, central design assumption | None — native builds only |
| Dependency model | Task-level DAG (do_fetch → do_compile → …) | Unit-level DAG (simpler, atomic per-unit) |
| Language ecosystems | Wrapped in units | Native toolchains (go modules, cargo, etc.) |
| Learning curve | Steep — weeks to become productive | Shallow — Starlark (Python-like) |
| Build caching | sstate (per-task, hash-based, complex setup) | Per-unit .apk hashes in S3-compatible cache |
| Multi-image support | Yes — multiple images from one project | Yes — image inheritance + machine matrix |
| On-device updates | Possible but complex (smart image) | Built-in via apk repositories |
When to use Yocto instead: when you need extremely fine-grained control over every component, must support exotic architectures with no native build infrastructure, or are in an organization that already has deep Yocto expertise and tooling invested.
vs. Buildroot
Buildroot is the simplest of the established embedded Linux build systems. It
shares [yoe]’s preference for simplicity.
What [yoe] adopts from Buildroot:
- The principle that simpler is better.
- Minimal base system approach.
What [yoe] leaves behind:
- Kconfig as the configuration interface.
- Make as the build engine.
- The assumption that cross-compilation is required.
- Full-rebuild-on-config-change behavior.
Key differences:
| Buildroot | [yoe] | |
|---|---|---|
| Configuration | Kconfig (menuconfig) | Starlark files |
| Build engine | Make | yoe (Go) |
| Cross-compilation | Required | None — native builds only |
| On-device packages | None — monolithic image only | apk — incremental updates |
| Incremental builds | Limited — config change triggers full rebuild | Content-addressed cache, only rebuild what changed |
| Modern languages | Wraps Go/Rust/etc. in Make, often poorly | Delegates to native toolchains |
| Build caching | ccache at best, no output caching | Content-addressed .apk cache, shareable across CI |
| CI/team sharing | Everyone rebuilds from scratch | Push/pull from shared package repo |
| Composable images | No — single image output | Yes — assemble different images from same packages |
The biggest structural difference is the unit/package split. Buildroot has no concept of installable packages — it builds everything into a monolithic rootfs. This means:
- You can’t update a single component on a deployed device without reflashing.
- You can’t share build outputs between developers or CI runs.
- You can’t compose different images from the same set of built packages.
Caching gap: Buildroot has no output caching at all — every developer and
every CI run rebuilds from source. ccache can help with C/C++ compilation but
doesn’t help with configure steps, language-native builds, or package assembly.
[yoe]’s S3-backed cache means a typical developer build pulls pre-built
packages for everything except the component they’re actively changing.
Multi-image gap: Buildroot produces a single image per configuration. To
build a “dev” variant and a “production” variant, you need separate build
directories with separate configs. With [yoe], both images share the same
package repository — only the package lists differ.
When to use Buildroot instead: when you want the absolute simplest build system for a truly minimal, single-purpose, static embedded system (firmware for a sensor, a network appliance with no field updates). If the device never needs a partial update and the image is small enough to rebuild in minutes, Buildroot’s simplicity is hard to beat.
vs. Alpine Linux
Alpine is the closest existing distribution to what [yoe]’s target runtime
looks like.
What [yoe] adopts from Alpine:
- apk as the package manager — adopted directly. Fast, simple, proven.
- busybox as coreutils — minimal userspace in a single binary.
- Minimal base image size — target single-digit MB base images before application payload.
- Security-conscious defaults — no unnecessary services, no open ports, no setuid binaries unless explicitly required.
- Fast package operations — install/remove measured in milliseconds.
- Minimal install scripts — Alpine packages do little or nothing in postinst. Most ship with no install scripts at all; those that need them typically run a handful of lines (`addgroup`, `adduser`, maybe an `rc-update`). apk supports the full lifecycle (`.pre-install`, `.post-install`, `.pre-upgrade`, `.post-upgrade`, `.pre-deinstall`, `.post-deinstall`, plus triggers), but the culture is to keep them empty. This is a sharp contrast with Debian’s `.deb` maintainer-script tradition — preinst/postinst/prerm/postrm with debconf prompts, alternatives, `dpkg-divert`, and complex migrations — which is exactly what made EmDebian’s busybox replacement effort unsustainable (see Debian section below).
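Illustrative only (the package and service names are made up): an Alpine-style post-install script in its entirety is typically just a service account plus an `rc-update`, nothing else:

```sh
#!/bin/sh
# mydaemon.post-install: hypothetical example of the Alpine convention
addgroup -S mydaemon 2>/dev/null
adduser -S -D -H -G mydaemon -s /sbin/nologin mydaemon 2>/dev/null
rc-update add mydaemon default 2>/dev/null
exit 0
```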
Alpine APKBUILDs are the reference implementation for [yoe] units. When
writing a new unit, the corresponding Alpine APKBUILD is the first place to
look. Alpine has already solved configure flags, build-time dependencies,
patches, and — most importantly — the install-script question (usually: nothing
to do). Following Alpine keeps [yoe] out of the Debian-style postinst trap,
where package install becomes imperative system mutation that’s hard to
reproduce, hard to sandbox, and hard to roll back. If Alpine doesn’t need a
postinst for it, [yoe] shouldn’t either.
What [yoe] leaves behind:
- musl — planning to use glibc instead for maximum compatibility with language runtimes and pre-built binaries ([yoe] currently still inherits musl from Alpine’s toolchain; the move is pending).
- Limited BSP/hardware story — Alpine doesn’t target custom embedded boards.
On the init system: Alpine uses OpenRC. [yoe] currently uses busybox init,
the same as Alpine’s minirootfs default. systemd may become an option in the
future — it’s the pragmatic choice for developer-facing systems with rich
service management, journal logging, and udev — but the project has not
committed to shipping it as part of the base. Today, service management is
whatever busybox init + plain scripts give you.
Key differences:
| Alpine | [yoe] | |
|---|---|---|
| C library | musl | musl today; glibc planned |
| Init system | OpenRC | busybox init today; systemd a future option |
| Target | Containers, small servers | Custom embedded hardware |
| BSP support | Generic x86/ARM images | Per-board machine definitions |
| Image assembly | alpine-make-rootfs | yoe build <image> with machine + partition support |
| Build system | abuild + APKBUILD shell scripts | yoe build + Starlark units |
| Kernel management | Generic kernels | Per-machine kernel config, device trees |
| OTA updates | Standard apk upgrade | apk + full image update + rollback |
When to use Alpine instead: when you’re targeting containers or generic server hardware and don’t need custom BSP, kernel configuration, or image assembly tooling. Alpine is an excellent base for Docker containers and small VMs.
vs. Arch Linux
Arch is a philosophy as much as a distribution. Its commitment to simplicity and
transparency directly influences [yoe]’s design.
What [yoe] adopts from Arch:
- Rolling release model — no big-bang version upgrades; packages update continuously against a single branch.
- Minimal base, user-assembled — ship the smallest useful system and let the integrator compose what they need.
- PKGBUILD-style simplicity — build definitions should be concise, readable shell-like scripts, not complex metadata. [yoe]’s Starlark units aim for similar auditability — simple units read like declarative config.
- Documentation culture — invest in clear, practical docs rather than tribal knowledge.
What [yoe] leaves behind:
- x86-centric assumptions.
- pacman (using apk instead).
- The expectation of interactive manual system administration.
- Lack of reproducibility guarantees.
Key differences:
| Arch | [yoe] | |
|---|---|---|
| Target | Desktop/server, x86-first | Embedded, multi-arch |
| Package manager | pacman | apk |
| Package format | tar.zst + .PKGINFO | apk (tar.gz + .PKGINFO) |
| Build definitions | PKGBUILD (bash) | Starlark units |
| Reproducibility | Not a goal | Content-addressed builds |
| Image assembly | Manual (pacstrap) | Automated (yoe build <image>) |
| Administration | Interactive (hands-on) | Declarative (config-driven) |
When to use Arch instead: when you’re building a desktop or server system for personal use and value having full manual control. Arch’s philosophy works well for power users on general-purpose hardware.
vs. Debian
Debian is the oldest and most conservative general-purpose Linux distribution. Many embedded projects start on Debian (or a derivative like Raspberry Pi OS) before hitting its limits on custom hardware.
What [yoe] adopts from Debian:
- Signed binary package repositories — apt’s approach to package authenticity and repository signing is the model. [yoe]’s apk repositories follow the same principle.
- Policy-driven package conventions — Debian Policy defines where files go, how services are declared, and how packages relate. [yoe] inherits this culture through Alpine’s `abuild` conventions.
- Package metadata as data — control files (or APKBUILDs) are declarative, not imperative install scripts.
- Multi-arch awareness — Debian has long taken non-x86 architectures seriously. [yoe] does too, by design.
What [yoe] leaves behind:
- dpkg/apt in favor of apk — smaller, faster, designed for minimal systems.
- The stable/testing/unstable release model — [yoe] is rolling by default; deployed devices pin to a known-good snapshot of the repo.
- The maintainer-centric model — one maintainer per package, committee-driven policy. [yoe] units are part of the project; whoever changes the build changes the unit.
- debconf and interactive post-install configuration — images are assembled from declarative Starlark, not from prompts during package install.
- Desktop/server default set — Debian’s standard install assumes a huge set of tools are present. [yoe] starts near zero and adds only what’s declared.
- In-place `dist-upgrade` — [yoe] prefers atomic image updates with rollback over mutating a running root filesystem.
Key differences:
| Debian | [yoe] | |
|---|---|---|
| Target | General-purpose server/desktop | Embedded, custom hardware |
| Package manager | apt / dpkg | apk |
| Package format | .deb (ar + tar) | apk (tar.gz + .PKGINFO) |
| Release model | Stable/testing/unstable + LTS | Rolling, pinned snapshots |
| Build definitions | debian/ dir (rules + control) | Starlark units |
| Image assembly | debootstrap / live-build | yoe build <image> |
| BSP support | Generic kernels; no board tooling | Per-board machine definitions |
| Kernel management | Distro-provided kernel packages | Per-machine kernel config + DTs |
| OTA updates | apt upgrade (in-place) | apk + atomic image + rollback |
| Footprint | Standard install ~1 GB+ | Target single-digit MB base |
Debian derivatives (Raspberry Pi OS, Ubuntu, etc.) inherit most of these properties. Teams often start on Raspberry Pi OS and hit three walls: (1) it’s not built from source under their control, (2) it’s difficult to trim below a couple hundred MB, and (3) there’s no clean story for deploying the same software to a custom board.
Minimum footprint
The smallest documented Debian install path is
debootstrap --variant=minbase, which
installs only Essential and Priority: required packages (base-files,
base-passwd, bash, dash, dpkg, apt, libc, perl-base, and a handful of others) —
no systemd, no standard utilities beyond the essential set. In practice minbase
produces a root filesystem in the ~150–250 MB range depending on release and
architecture. A default debootstrap (which also pulls Priority: important,
including systemd) lands closer to 300–500 MB, and a “standard” Debian install
is well over 1 GB.
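Those figures are easy to reproduce (suite and mirror are illustrative):

```sh
# Build a minbase rootfs and measure it
sudo debootstrap --variant=minbase bookworm ./minbase-rootfs http://deb.debian.org/debian
sudo du -sh ./minbase-rootfs   # typically lands in the ~150-250 MB range cited above
```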
Even minbase is one-to-two orders of magnitude larger than a minimal Alpine or
[yoe] base, which can reach single-digit MB before application payload. The
floor is set by the GNU userland itself: glibc + coreutils + perl-base + bash +
dpkg + apt are ~60–80 MB combined before anything application-specific is
installed. Dropping perl-base or coreutils breaks dpkg maintainer scripts (see
Emdebian, below), so this floor is structural, not a tuning problem.
Embedded Debian efforts
EmDebian (2007–2014) was the most serious attempt at a minimal, embedded-focused Debian. It shipped two variants:
- Emdebian Grip — a binary-compatible subset of Debian with a smaller curated package set, still using GNU coreutils and glibc. “Debian, but smaller.”
- Emdebian Crush — a more aggressive variant that replaced GNU coreutils with busybox, dropped optional dependencies (LDAP from curl, etc.), and cross-built packages. Closer in spirit to what [yoe] does with Alpine-style apks.
The project posted an
end-of-life notice on 13 July 2014,
with Emdebian Grip 3.1 (tracking Debian 7 “wheezy”) as the last stable release.
The cited reasons were (1) embedded hardware had moved to expandable storage
where full Debian’s size was no longer painful, and (2) the maintenance burden
of tracking Debian upstream while patching maintainer scripts for a busybox
userland was unsustainable. Crush specifically documented recurring problems
replacing coreutils components with busybox because of .deb postinst scripts —
the exact ecosystem-level incompatibility that any “Debian + busybox” attempt
runs into. Someone has already taken that path to its natural conclusion.
debos is the modern Debian image
builder, created by Sjoerd Simons at Collabora (introduced in 2018, Go
codebase). It is the closest structural analogue to [yoe]’s image assembly in
the Debian ecosystem:
- Written in Go, like `yoe`.
- YAML recipes describe a sequence of actions (debootstrap, apt install, partition, mkfs, bootloader install, overlay files, export as tarball/OSTree/disk image).
- Runs actions without root via a `fakemachine` VM helper — similar intent to [yoe]’s “container as build worker” model.
- Targets ARM embedded boards as a first-class use case.
[yoe] and debos cover overlapping ground. Key differences: debos starts from
existing Debian .debs (inheriting the size and package-model properties
above), while [yoe] builds from source into content-addressed apks; debos
recipes are flat action sequences, while [yoe]’s Starlark units form a
dependency graph with a shared, content-addressed build cache.
aptly is the canonical tool for running a
private, pinned Debian/Ubuntu repository. For teams that do ship Debian-based
devices, aptly plays the role that [yoe]’s S3 package cache plays:
- Mirror remote Debian/Ubuntu repos, partial or full, filtered by component/architecture.
- Take immutable, dated snapshots of a mirror or local repo — fixing package versions at a point in time.
- Publish snapshots as apt-consumable repositories with signed metadata.
- CLI plus REST API for CI integration.
The snapshot model is what gives a Debian-based deployment the reproducibility
[yoe] gets from content-addressed apks — different mechanism, same goal.
Gaia Build System is the most active modern example of a full build system (not just an image builder) layered on Debian. It ships three reference distributions:
- DeimOS — a base Debian-derived reference distro.
- PhobOS — a Torizon-compatible Debian derivative that boots via OSTree, uses Aktualizr for OTA updates, bundles a Docker runtime, and keeps native `apt-get install` available on deployed devices.
- PergamOS — a library of Debian-based container images used as build and application bases.
Architecturally:
- Cookbook model — a Yocto-inspired multi-repo structure where each “cookbook” is a git repo and a `manifest.json` ties them together.
- Container-based builds — each build runs inside a Debian Docker container, matching [yoe]’s “container as build worker” approach.
- Multi-language recipes — the `gaia` core is TypeScript (running on Bun); cookbook logic is a mix of Xonsh (Python-flavored shell), plain shell, and JSON distro definitions. [yoe] consolidates to a single config language (Starlark) for units, machines, and images.
- Targets — Raspberry Pi, NXP i.MX (e.g., iMX95 Verdin EVK via Toradex), and QEMU x86-64/arm64.
Contrast with [yoe]:
- Gaia inherits Debian’s size and package-model properties (huge archive, `.deb` maintainer scripts, ~150 MB+ floor); [yoe] is apk-based and targets single-digit MB bases.
- Gaia’s deployment model is OSTree + Aktualizr (Torizon-compatible); [yoe] uses apk plus atomic image updates with rollback.
- Gaia’s recipe surface is multi-language (TS + Xonsh + Shell + JSON); [yoe] is Starlark end-to-end.
- Both build inside containers, both target custom ARM hardware, both aim for reproducibility through pinned inputs.
When to prefer Gaia: when you specifically want a Debian userland with
apt-get install still functional on the device, and especially when targeting
Toradex/Torizon-adjacent hardware where OSTree-based deployment is already
established.
This doesn’t mean Debian is absent from embedded — it absolutely is present —
but the pattern is “Debian/Ubuntu-on-an-x86-or-Jetson-box,” not “Debian in a
consumer electronics device with a custom SoC.” That second case is where Yocto
and [yoe] live.
When to use Debian instead: when you’re targeting general-purpose hardware where the standard package archive is the product (“I need a server with Postgres, Nginx, and our application”), when long-term security support from a volunteer organization matters more than image size, or when your team already runs Debian in production and wants consistency between infrastructure and edge devices. For early prototyping on a Raspberry Pi before moving to custom hardware, Raspberry Pi OS is often the right starting point.
vs. Ubuntu Core
Ubuntu Core is Canonical’s IoT- and embedded-focused Ubuntu variant. Architecturally it’s a sharp departure from classic Debian/Ubuntu: every component on the device — kernel, board support, base OS, applications — is delivered as a snap package, mounted read-only via squashfs-over-loopback, and updated transactionally with rollback. Ubuntu Core 24 (the current LTS) carries a 12-year support commitment and targets production IoT, edge, and appliance devices.
What [yoe] adopts from Ubuntu Core:
- Immutable root filesystem — the shipping OS is never mutated in place; changes flow through an update mechanism with rollback.
- Gadget-snap-style board config — Ubuntu Core’s gadget snap bundles bootloader assets, partition layout, and device-specific defaults. [yoe]’s machine definitions cover the same ground (kernel config, device tree, partition schema, bootloader choice).
- Model assertion as device identity — UC’s signed model assertion declares exactly which snaps constitute a device. [yoe]’s image + machine Starlark is the structural analogue (which packages + which hardware = which shipping image).
- Atomic updates with rollback — shared goal, different mechanism (snap revisions plus a recovery seed system vs. [yoe]’s apk + atomic image update).
What [yoe] leaves behind:
- Snaps — the squashfs-per-app loopback model. [yoe] uses apk, which installs into a shared FHS root.
- snapd — UC’s always-running daemon mediating confinement, updates, and interfaces. Significant runtime footprint and attack surface.
- Brand store requirement — commercial UC deployments require a Canonical-hosted dedicated snap store to control what runs on devices. This is a commercial gate. [yoe] ships its own signed apk repository with no vendor lock-in.
- Default-strict AppArmor confinement — UC apps run in a sandbox with explicit interfaces. Valuable for general-purpose appliances, often heavyweight for single-purpose embedded where the whole image is already curated.
- Canonical-centric tooling — ubuntu-image, snapcraft, Launchpad, Landscape. [yoe] is self-hostable end to end.
Size: Ubuntu Core’s snap model has a floor
The snap delivery model has a real footprint cost. From Canonical’s own partition-sizing guidance, a minimum Ubuntu Core 24 installation with no additional application snaps lands at approximately 2,493 MiB (~2.5 GiB) of on-disk layout:
| Partition | Minimum size | Purpose |
|---|---|---|
| `system-seed` | 457 MiB | Recovery boot loader plus recovery system snaps |
| `system-save` | 32 MiB | Device identity and recovery data |
| `system-boot` | 160 MiB | Kernel EFI image(s), boot loader state |
| `system-data` | Variable | Writable — snaps, retained revisions, user data |
The 2.5 GiB floor is driven by the snap refresh model: UC keeps
refresh.retain + 1 old revisions of each snap plus a temporary copy during
updates — effectively 4× per-snap storage with the default
refresh.retain = 2. Each “revision” is a full squashfs image, not a delta. The
kernel snap alone is around 52 MiB and is retained four times over.
For comparison:
| Target | Minimum image size |
|---|---|
| Ubuntu Core 24 (no apps) | ~2,500 MiB |
| Debian minbase rootfs | ~150–250 MiB |
| Alpine minimal rootfs | ~5–10 MiB |
| [yoe] base target | Single-digit MiB |
Ubuntu Core is in a different footprint class. For devices with tens of GiB of storage this is irrelevant; for cost-sensitive embedded products with 128–512 MiB of flash it’s disqualifying before any application code is added.
Key differences
| Ubuntu Core | [yoe] | |
|---|---|---|
| Packaging format | Snaps (squashfs, loopback-mounted) | apk (installed into shared rootfs) |
| Root filesystem | Composed read-only snap mounts | Standard FHS, shipped read-only |
| Package daemon | snapd (always running) | apk (run at build + update time only) |
| Board config | Gadget snap | Machine definition (Starlark) |
| Image metadata | Signed model assertion | Image + machine Starlark |
| Updates | Snap revisions + recovery seed system | Atomic image update + rollback |
| Confinement | AppArmor interfaces (default strict) | Standard Linux DAC; sandboxing per app |
| Distribution | Canonical brand store (hosted) | Self-hosted signed apk repository |
| Size floor | ~2.5 GiB | Single-digit MiB |
| Build tool | ubuntu-image, snapcraft | yoe build <image> |
| Recipe language | YAML (snapcraft.yaml, model, gadget) | Starlark |
| LTS | 12 years (Canonical) | N/A — project is pre-1.0 |
When to use Ubuntu Core instead: when you want Canonical’s 12-year LTS commitment, when strict per-app confinement via snaps/AppArmor is a product requirement, when your team already operates a Canonical stack (Landscape for fleet management, brand store for distribution, Anbox Cloud, etc.), or when your device has ample storage (tens of GiB+) and the 2.5 GiB floor is an acceptable trade for the operational simplicity of signed transactional updates.
vs. Avocado OS
Avocado OS is an embedded Linux distribution
announced in April 2025 by
Peridio, a US-based company with roots in the
Elixir/Nerves OTA ecosystem. It is not a new build system — it is a curated
Yocto distro layer
(meta-avocado) plus a
Rust-written CLI (avocado-cli) layered on top of
systemd-sysext/confext semantics. The pitch is “production-grade Linux for
edge AI and physical AI” — heavy focus on NVIDIA Jetson Orin, NXP i.MX 8M Plus,
Rockchip, and Raspberry Pi. The project shipped with paying customers and is
backed by a commercial OTA SaaS
(Peridio Core).
What [yoe] adopts from Avocado OS:
- Ergonomic CLI on top of a build system — Avocado wraps Yocto in a Rust CLI to hide BitBake’s rough edges. [yoe] shares the diagnosis (the underlying tooling needs an ergonomic front door) but reaches a different conclusion: replace BitBake rather than wrap it.
- Immutable rootfs + atomic updates as the deployment model — Avocado uses btrfs + `systemd-sysext` overlays verified with `dm-verity`. [yoe] shares the immutability goal (already drawn from Ubuntu Core and NixOS), though the mechanism is still an open design decision (apk + atomic image, A/B, RAUC, etc.).
- Binary extension feeds for the common case — Avocado bets that most teams consume pre-built extensions rather than customizing the base. [yoe]’s S3-backed apk repository plays the same role: a CI build seeds the cache and most developers never compile from source.
- Live development against the deployed image — Avocado’s NFS-mounted sysext lets a developer iterate on an extension without reflashing. `yoe dev` aims at the same pain point from a different angle (edit a unit’s source git tree, rebuild the apk, push to the device).
What [yoe] leaves behind:
- BitBake / Yocto — Avocado is still BitBake-bound for actual building. Custom hardware support means writing Yocto layers on top. [yoe] replaces the whole engine; see the Yocto section above for why.
- `systemd-sysext` as the runtime composition primitive — sysext is powerful but ties the OS tightly to systemd, dm-verity, and a particular filesystem layout. [yoe] uses apk into a shared FHS rootfs; composition is at build time (image units), not runtime (overlay mounts).
- glibc baseline — Avocado inherits Yocto’s glibc default. [yoe] is musl-first via Alpine.
- Cross-compilation toolchains — Avocado uses Yocto’s standard cross toolchain. [yoe] is native-only.
- Commercial OTA tie-in — Avocado’s business model is “free OS, paid Peridio Core for fleet management and OTA.” [yoe] has no commercial gate; the repository, signing, and update tooling are part of the open project.
- Multi-language tooling stack — Avocado mixes BitBake, Shell, and Rust (`avocado-cli`, `avocadoctl`, `avocado-conn`). [yoe] is Go + Starlark end to end.
Key differences:
| Avocado OS | [yoe] | |
|---|---|---|
| Build engine | Yocto / BitBake (Python) | yoe (Go) |
| Recipe language | BitBake (.bb/.bbappend) | Starlark |
| CLI language | Rust (avocado-cli) | Go (yoe) |
| Cross-compilation | Yes (Yocto default) | None — native builds only |
| C library | glibc | musl |
| Package format | IPK/RPM internally; sysext DDI on device | apk |
| Runtime composition | systemd-sysext overlays + dm-verity | apk into shared FHS rootfs |
| Init system | systemd (required by sysext model) | busybox init today; systemd a future option |
| Filesystem | btrfs root, immutable | ext4 today; immutability planned |
| OTA mechanism | Peridio Core (commercial SaaS) | Self-hosted; mechanism TBD |
| Build caching | Yocto sstate | Content-addressed apk in S3-compatible cache |
| Container model | SDK containers for dev | Container as build worker |
| Hardware focus | Edge AI: Jetson, i.MX, Rockchip, RPi | Generic embedded; RPi/BBB/QEMU first |
| Commercial backing | Peridio (VC-backed) | None — open project |
| Status | Production (April 2025+), paying customers | Pre-1.0 |
Structural distance. Avocado OS and [yoe] agree on the symptoms —
unwrapped Yocto is too sharp, embedded teams need atomic updates with rollback,
most users want to consume binaries rather than rebuild — but disagree on the
cure. Avocado keeps Yocto and bets that systemd-sysext + btrfs + dm-verity is
the modern way to ship and update a device. [yoe] replaces Yocto and bets that
a smaller, single-language, apk-based stack with content-addressed caching is
enough, without taking on the systemd/btrfs/dm-verity dependency.
When to use Avocado OS instead: when you’re shipping edge-AI hardware today
on the platforms Peridio supports (especially NVIDIA Jetson Orin), want a
vendor-backed OTA SaaS rather than running your own update infrastructure, are
comfortable with the systemd + btrfs + dm-verity baseline, and prefer to ride
Yocto’s BSP ecosystem rather than write machine definitions for new silicon. If
you need production deployment now and a paid support relationship is
acceptable, Avocado is several years ahead of [yoe] on maturity.
vs. NixOS / Nix
Nix is the most intellectually ambitious of the systems [yoe] draws from. Its
ideas about reproducibility and declarative configuration are adopted wholesale;
its implementation complexity is not.
What [yoe] adopts from Nix:
- Content-addressed build cache — build outputs keyed by their inputs so identical builds produce cache hits regardless of when or where they run.
- Declarative system configuration — the entire system image is defined by configuration files; rebuilding from that config produces the same result.
- Hermetic builds — builds do not depend on ambient host state; inputs are explicit and pinned.
- Atomic system updates and rollback — deploy new system images atomically with the ability to boot into the previous version.
What [yoe] leaves behind:
- The Nix expression language.
- The `/nix/store` path model and its massive closure sizes.
- The assumption of abundant disk space and bandwidth.
Key differences:
| NixOS | [yoe] | |
|---|---|---|
| Config language | Nix (custom functional language) | Starlark (Python-like) |
| Store model | Content-addressed /nix/store paths | Standard FHS with apk |
| Closure size | Often 1GB+ for simple systems | Target single-digit MB base |
| Target | Desktop, server, CI | Embedded hardware |
| BSP support | Minimal | Per-board machine definitions |
| Package manager | Nix | apk |
| Reproducibility | Bit-for-bit (aspirational) | Content-addressed, functionally equivalent |
| Rollback | Via Nix generations | Planned; mechanism TBD (apk, A/B, RAUC, …) |
| Learning curve | Steep (must learn Nix language) | Shallow (Starlark, Python-like) |
Caching comparison: Nix’s binary cache (Cachix, or self-hosted with
nix-serve) is conceptually similar to [yoe]’s remote cache — both store
content-addressed build outputs in S3-compatible storage. The key differences:
Nix caches closures (a package plus all its transitive runtime dependencies),
which can be very large. [yoe] caches individual .apk packages, which are
smaller and more granular. Nix’s content addressing is based on the full
derivation hash (all inputs); [yoe] uses a similar scheme but at unit
granularity rather than Nix’s per-output granularity.
When to use Nix instead: when you need the strongest possible reproducibility guarantees, are building for desktop/server/CI, and are willing to invest in learning the Nix ecosystem. NixOS is unmatched for declarative system management on general-purpose hardware.
vs. Google GN
GN is not a Linux distribution — it’s a meta-build system used by Chromium and
Fuchsia. But several of its architectural ideas directly influenced [yoe]’s
tooling design.
What [yoe] adopts from GN:
- Two-phase resolve-then-build — GN fully resolves and validates the dependency graph before generating any build files. `yoe build` does the same: resolve the entire unit DAG, check for errors, then build. No partial builds from graph errors discovered mid-way.
- Config propagation — GN’s `public_configs` automatically apply compiler flags to anything that depends on a target. [yoe] propagates machine-level settings (arch flags, optimization, kernel headers) through the unit graph.
- Build introspection — GN provides `gn desc` (what does this target do?) and `gn refs` (what depends on this?). [yoe] provides `yoe desc`, `yoe refs`, and `yoe graph` for the same purpose.
- Label-based references — GN uses `//path/to:target` for unambiguous target identification. [yoe] uses a similar scheme for composable unit references across repositories.
What [yoe] leaves behind:
- Ninja file generation — [yoe]’s unit builds are coarse-grained enough that `yoe` orchestrates directly.
- GN’s custom scripting language — Starlark serves the same purpose for [yoe].
- C/C++ build model specifics — GN is deeply tied to source-file-level dependency tracking, which isn’t relevant for unit-level builds.
Key differences:
| GN | [yoe] | |
|---|---|---|
| Purpose | C/C++ meta-build system | Embedded Linux distribution builder |
| Output | Ninja build files | .apk packages and disk images |
| Config language | GN (custom) | Starlark (Python-like) |
| Dependency granularity | Source file / target | Unit (package) |
| Build execution | Ninja | yoe directly |
| Introspection | gn desc, gn refs | yoe desc, yoe refs, yoe graph |
GN is not an alternative to [yoe] — they solve different problems. But GN’s approach to graph resolution, config propagation, and introspection is a set of well-proven patterns that [yoe] applies to the embedded Linux domain.
Value Proposition and Strategic Positioning
The Core Thesis
Yocto’s model of wrapping every dependency in a unit made sense when C/C++ was the only game in town and there was no dependency management beyond “whatever headers are on the system.” Modern languages have solved this:
- Go: `go.sum` is a cryptographic lock file. Builds are already reproducible.
- Rust: `Cargo.lock` pins every transitive dependency.
- Zig: Hash-pinned dependencies.
- Node/Python: Lock files are standard practice.
Yocto’s response is to re-declare every dependency the language toolchain
already knows about — SRC_URI with checksums for each crate,
LIC_FILES_CHKSUM for each module. This is busywork that duplicates what
Cargo.lock and go.sum already guarantee.
[yoe]’s position: let the language package manager do its job. A Go unit
should declare what to build, not how to resolve every transitive
dependency. Content-addressed caching hashes the output — if inputs haven’t
changed, the output is the same. You get reproducibility without micromanaging
the build.
Where [yoe] Cannot Compete (Yet)
Be honest about the gaps:
Vendor BSP support is Yocto’s real moat. Every major SoC vendor (NXP, TI, Qualcomm, Intel, Renesas, MediaTek) ships Yocto BSP layers and supports them. This is not a technology problem — it’s an ecosystem problem that Linux Foundation backing solves. No amount of technical superiority overcomes “the silicon vendor gives us a Yocto BSP and supports it.”
Package count. Yocto has ~5,000 recipes across oe-core + meta-openembedded,
Buildroot has ~2,800 packages, Alpine has ~36,000, Debian has ~35,000, and
Nixpkgs has ~142,000. [yoe] has dozens. Need curl, dbus, python3, or ffmpeg?
You have to write the unit.
Configuration UX. Buildroot’s make menuconfig is a killer feature —
visual, discoverable, searchable. You can explore what’s available without
reading unit files. [yoe] requires editing Starlark by hand.
Documentation and community. Yocto has comprehensive manuals, Bootlin
training materials, and years of mailing list archives. Buildroot has a
well-maintained manual and active list. Problems are googleable. [yoe] has
design docs and a small team.
Legal compliance tooling. Yocto’s do_populate_lic and Buildroot’s
make legal-info generate license manifests and source archives. This is
required for shipping products in many industries. [yoe] has nothing here yet.
Proven production track record. Thousands of products ship with Yocto.
Buildroot runs on millions of devices. [yoe] is a prototype.
Where [yoe] Can Win
Target audience: Teams building Go/Rust/Zig services for embedded Linux — edge computing, IoT gateways, network appliances. Teams where the application is the product, not the base OS. Teams that want “Alpine + my app on custom hardware” not “custom Linux distro with 200 hand-tuned units.”
These teams currently use Buildroot, hack together Docker-based builds, or cross-compile manually. They would never adopt Yocto because the overhead is absurd for their use case.
First-class modern language support. Go/Rust/Zig unit classes should be
trivial to use. The build system should get out of the way and let go build,
cargo build, and zig build do their jobs. This is where Yocto is most out of
touch.
Custom hardware without desktop distro limitations. Desktop distros (Debian,
Fedora, Alpine) have great package management but no story for custom kernels,
device trees, bootloaders, board-specific firmware, or flash/deploy workflows.
This is the entire reason Yocto and Buildroot exist. [yoe] should provide BSP
tooling (machine definitions, kernel units, yoe flash, yoe run) that is
simpler than Yocto’s but more capable than anything desktop distros offer.
Incremental builds and shared caching. Buildroot rebuilds everything from
scratch. Yocto’s sstate is powerful but complex to set up. [yoe]’s
content-addressed .apk cache in S3-compatible storage is conceptually simpler:
push packages to a bucket, pull them on other machines. CI builds once,
developers reuse the output.
AI-assisted unit generation. If an AI can generate a working Starlark unit from a project URL faster than porting a Yocto unit, the small package count stops mattering. Starlark is far more tractable for AI than BitBake’s metadata format.
The Alpine Linux Precedent
Alpine didn’t supplant Debian — it became the default for containers because it
was radically smaller and simpler for that specific use case. [yoe] doesn’t
need to replace Yocto for automotive or aerospace. It needs to be the obvious
choice for a specific class of embedded product where Yocto is overkill and
Buildroot is too limited.
What to Focus On
- Modern language unit classes — Go, Rust, Zig should be first-class, not afterthoughts. These are the differentiator. A Go developer should go from “I have a binary” to “I have a bootable image on custom hardware” in minutes.
- BSP tooling — machine definitions, kernel/bootloader units, `yoe flash`, `yoe run`. This is what desktop distros lack and what justifies [yoe]’s existence as a build system rather than just another distro.
- Shared build cache — the S3-backed package cache is a major advantage over Buildroot. Make it trivial to set up so teams see the value immediately.
- Size discipline. The summary matrix shows [yoe]’s single-digit-MB base as a structural advantage against Ubuntu Core (~2,500 MB), NixOS (~1,500 MB), and Debian (~150 MB minbase). That floor bloats silently — one “convenient default,” one “might as well include it” at a time. Every new feature, class, and base-system addition should survive an explicit size review. Losing the size story means losing the most defensible position on the matrix.
- Atomic update + rollback story. Ubuntu Core’s pitch is “signed transactional updates with rollback”; Gaia’s is “OSTree + Aktualizr”; Yocto’s is RAUC/SWUpdate. [yoe] needs an equivalent first-class, opinionated, documented update workflow — not a “you can wire this up yourself” footnote. The underlying mechanism is still an open design decision — candidates include apk upgrade with snapshot/rollback, A/B partition swap, RAUC-style bundle updates, and OSTree-style file trees. The commitment is to some well-integrated shippable story, not to any specific mechanism. For any team shipping a product, this is table stakes.
- AI unit generation + Alpine aports conversion. Lean into the AI-native angle: generating a new unit from a project URL should be a conversation, not a manual porting exercise. Also ship a mechanical APKBUILD → Starlark converter — Alpine has ~36,000 ready-to-port APKBUILDs, and a reliable converter closes the package-count gap faster and more predictably than pure AI generation. AI for novel cases, mechanical conversion for the long tail.
- Board support — start with popular, accessible boards (Raspberry Pi, BeagleBone, common QEMU targets). Every board that works out of the box is a potential user who doesn’t need Yocto.
- Don’t chase Yocto’s or Canonical’s tails. Resist adding Yocto-like features (task-level DAGs, unit splitting, bbappend equivalents) to win Yocto users, and equally resist Canonical-style add-ons (brand store, snap-style confinement, a Landscape clone) to win Ubuntu Core users. Both directions lead away from the minimal, single-language, AI-tractable design that is [yoe]’s actual positioning. Make the simple path so good that teams choose [yoe] because it fits their workflow, not because it mimics something they already have.
Rootfs Ownership: How Each Project Handles It
A recurring problem when building an embedded image unprivileged: the installed
rootfs needs files owned by root:root (and sometimes by specific service
users), but the build itself ideally does not run as real root. mkfs.ext4 -d
copies ownership straight out of stat(), so whatever the filesystem says at
image-pack time is what the booted system sees. Every serious build tool has had
to solve this.
There are only three real options, and the industry has converged on them:
1. Real root (sudo). Traditional flow. sudo debootstrap, apk add on an
Alpine host, a container running as root — the simplest approach, but needs
privileges on the build host.
2. fakeroot (LD_PRELOAD). A small library that intercepts chown, stat,
and friends. chown updates an in-memory database instead of the kernel; later
stat calls return the faked ownership. Files on disk stay owned by the build
user, but tar / mkfs.ext4 / dpkg-deb see the virtual ownership and pack
that into the archive or image. Invented by Debian; now standard.
3. User namespaces (unshare -U). Linux kernel feature. Inside the
namespace the build process sees itself as uid 0; subuid/subgid mapping
translates writes back to a range owned by the build user on the host. No
LD_PRELOAD tricks, no real root — but requires subuid configuration on the host
kernel.
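The difference between the second and third option is easy to see from a shell; both snippets below use standard `fakeroot` and util-linux `unshare` invocations, with illustrative file names:

```sh
# Option 2: fakeroot: ownership is virtual, but the archive records it
mkdir -p rootfs/etc
fakeroot sh -c '
  echo "root:x:0:0:root:/root:/bin/sh" > rootfs/etc/passwd
  chown 0:0 rootfs/etc/passwd      # succeeds inside the fakeroot session
  tar -cf rootfs.tar -C rootfs .   # tar records root:root ownership
'
ls -l rootfs/etc/passwd            # on disk, still owned by the build user

# Option 3: user namespace: the process genuinely is uid 0 inside
unshare --map-root-user sh -c 'id -u'   # prints 0, no sudo involved
```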
How specific projects apply these
Alpine Linux — two halves:
- Package build (`abuild`) wraps the whole build in `fakeroot` so the resulting `.apk` tar records `root:root` ownership regardless of who ran abuild.
- Rootfs assembly (`apk add`, `alpine-make-rootfs`) runs as real root on a live system or inside a build chroot.
Debian / Ubuntu — historically real root; modern tooling offers all three:
- Package build — `dpkg-buildpackage` runs under `fakeroot` (`fakeroot debian/rules binary`). This is universal — essentially every `.deb` on the planet has its ownership laundered through fakeroot.
- Rootfs assembly — the original `debootstrap` requires `sudo`. Its successor `mmdebstrap` explicitly exposes the full menu via `--mode=root`, `--mode=fakeroot`, `--mode=fakechroot`, `--mode=unshare` (user namespaces), `--mode=proot`, and `--mode=chrootless`. `--mode=unshare` is the recommended modern unprivileged default.
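For instance, a fully unprivileged Debian rootfs build is one command (suite and output name illustrative):

```sh
# No sudo: user namespaces provide the pseudo-root
mmdebstrap --mode=unshare --variant=minbase bookworm minbase-rootfs.tar
```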
Buildroot — wraps image packaging in plain fakeroot. Works, but fakeroot’s
in-memory database doesn’t persist across process invocations, so Buildroot does
the whole image pack in one fakeroot session.
Yocto / OpenEmbedded — uses pseudo instead of fakeroot. pseudo is an
enhanced fakeroot that persists state to an on-disk SQLite database, so
ownership survives across the many separate steps a Yocto task graph spawns.
This is necessary for OE’s execution model and is one of the reasons Yocto
builds have a heavier tooling footprint than Alpine/Buildroot.
NixOS — builds entirely under a sandboxing daemon (nix-daemon) running as
root; individual builders drop privileges. Image assembly for NixOS system
closures happens inside the daemon’s controlled environment with proper root, so
the ownership problem doesn’t surface the same way.
Google GN / Bazel — out of scope; neither builds Linux rootfs images as a first-class concern.
How [yoe] applies these
- APK build — `internal/artifact/apk.go` normalizes every tar header to `root:root` directly in Go’s `archive/tar` writer. This is the structural equivalent of what Alpine’s `abuild` gets from `fakeroot` and what Debian gets from `dpkg-buildpackage` under fakeroot — just implemented in the build tool rather than via LD_PRELOAD, because Go writes the tar anyway.
- Rootfs assembly (`modules/units-core/classes/image.star`) currently runs inside the Docker build container, which is already privileged. The image class `chown -R 0:0`s the assembled rootfs before `mkfs.ext4 -d`, and chowns `$DESTDIR` back to the host build user at the end so the next build’s host-side cleanup works. This is roughly Alpine’s “run as real root” path, adapted to our docker-with-host-ownership cache model.
- Future direction — the planned move of image assembly to the host via `bwrap --unshare-user --uid 0 --gid 0` (docs/superpowers/plans/host-image-building-bwrap.md) is the user-namespace approach: the same category as `mmdebstrap --mode=unshare`. When it lands, the `chown` dance disappears — bwrap’s namespace provides pseudo-root with host-owned files for free. A rough sketch of both flows follows this list.
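Below is a rough shell sketch of the two assembly flows just described; the 512M size and the `HOST_UID`/`HOST_GID` variables are placeholders, not the actual `image.star` contents:

```sh
# Today: inside the privileged build container
truncate -s 512M rootfs.ext4                 # pre-size the image file
chown -R 0:0 "$DESTDIR"                      # give the assembled tree real root ownership
mkfs.ext4 -F -d "$DESTDIR" rootfs.ext4       # the filesystem records root:root for every file
chown -R "$HOST_UID:$HOST_GID" "$DESTDIR"    # hand the files back to the host build user

# Planned: on the host, pseudo-root via a bwrap user namespace (no real root, no chown dance)
bwrap --unshare-user --uid 0 --gid 0 --ro-bind / / --bind "$PWD" "$PWD" \
  mkfs.ext4 -F -d "$DESTDIR" rootfs.ext4
```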
The short version: we match Alpine’s tar-ownership convention for packages,
we’re currently doing the “real root in a container” move for rootfs assembly,
and we have a documented path to the mmdebstrap --mode=unshare equivalent for
the host.
Summary Matrix
| Feature | Yocto | Buildroot | Alpine | Arch | Debian | UC | NixOS | [yoe] |
|---|---|---|---|---|---|---|---|---|
| Embedded focus | Yes | Yes | Partial | No | No | Yes | No | Yes |
| Simple config | No | Moderate | Moderate | Yes | Moderate | No | No | Yes |
| Native builds | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| On-device packages | Optional | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Content-addressed cache | Partial | No | No | No | No | No | Yes | Yes |
| Remote shared cache | Complex | No | No | No | No | No | Yes | Yes |
| Pre-built package cache | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Declarative images | Yes | Partial | No | No | Partial | Yes | Yes | Yes |
| Multi-image support | Yes | No | No | No | No | Partial | Yes | Yes |
| Image inheritance | Partial | No | No | No | No | No | Yes | Yes |
| Custom BSP support | Yes | Yes | No | No | Minimal | Yes | Minimal | Yes |
| Incremental updates | Complex | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Hermetic builds | Partial | No | No | No | No | Partial | Yes | Yes |
| Fast package ops | N/A | N/A | Yes | Moderate | Moderate | Slow | Slow | Yes |
| Min base image size | ~15 MB | ~5 MB | ~5 MB | ~500 MB | ~150 MB | ~2,500 MB | ~1,500 MB | ~5 MB |
| Packages available | ~5,000 | ~2,800 | ~36,000 | ~15,000 | ~35,000 | ~10,000 | ~142,000 | Dozens |
UC = Ubuntu Core. “Min base image size” is the approximate on-disk footprint of
the smallest practical bootable/usable root filesystem (core-image-minimal for
Yocto, minbase debootstrap for Debian, minirootfs for Alpine, a minimal Ubuntu
Core 24 model with no app snaps, a minimal NixOS closure). Actual sizes vary
with architecture, kernel, and configuration. “Packages available” is the rough
count of ready-to-use packages/recipes in the standard/common repositories;
Yocto counts typical oe-core + meta-openembedded, Arch excludes the ~90,000 AUR
packages, UC counts snaps in the public store — a different delivery model that
is not directly comparable. Sources: project documentation,
repology.org.
Roadmap
About this document: the roadmap is a list of pointers, not a design spec. Each item should be a one-line “we want to do this” with a link to the design doc that owns the detail. Keep design discussion in the relevant `docs/*.md` and link from here. If a topic doesn’t have a design doc yet, leave the entry brief — write the design doc when the work is actually picked up.
Next
- Better hostnames for targets.
- mDNS on target (we have an mdns component — why is it not working?)
- base-files is modified by machine
- machine package feed?
- this needs to be solved before we start building multiple machines in one tree.
- e2e testing
- Save flash device preference in local.star for TUI
- Data partition for rPI targets
- Fill/format data partition
- rPI updater
- Error reading OS version: searching /etc/os-release, got: field VERSION not found
Bugs / Improvements
- `apk help` — hard to use right now.
- Helix prebuilt is glibc-only and won’t run on yoe’s musl rootfs. Needs a cargo-from-source build (or a third-party musl tarball) to actually work.
- modprobe from busybox and kmod are both in the image at different locations.
- kmod: `Error loading shared library liblzma.so.5: No such file or directory` (needed by `/usr/sbin/modprobe`).
- Rename rpi machines to simple rpi names.
Developer Experience
The biggest leverage area: making yoe pleasant for the developer writing apps that run on yoe-built devices, not just for the author of a distro.
Build & Deploy Loop
Goal: app developers work directly in their app’s git repo, not against an extracted SDK. The build container is the SDK. See dev-env.md for the design.
- Local-path unit sources: `source = path("./")` so a unit builds from a working tree without a clone-tag cycle. Foundation for everything below.
- `yoe dev` watch mode — rebuild (and optionally redeploy) on save (see the sketch after this list).
- Language and build-system classes beyond `go_binary`: `rust_binary` (Cargo), `python_unit`, `node_unit`, `meson`, `zig_binary`. See the class table in metadata-format.md.
- App project scaffolding: a `yoe new app --lang go`-style generator that creates a standalone project with `PROJECT.star`, a unit pinning the language, and a happy path.
- Software update — Yoe updater or SWUpdate. Rewrite in Zig?
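A hypothetical day-to-day loop once local-path sources and watch mode land (the app name and SSH target are examples; `yoe dev` and `yoe logs` are still planned):

```sh
cd ~/src/myapp
yoe dev myapp                            # rebuild (and optionally redeploy) on save
yoe deploy myapp pi@dev-pi.local:2200    # push the freshly built apk to a running device
yoe logs myapp -f                        # tail the service log from the host
```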
On-Device App UX
- `yoe svc start|stop|restart|status <unit> <host>` over SSH.
- `yoe logs <unit> -f` — tail service logs from the host.
- Persistent `/data` partition pattern so app state survives image updates.
- Health-check / watchdog conventions readable by both OpenRC and a future container runtime.
Diagnostics
- Profilers: `perf`, `bpftrace`, language-specific (`py-spy`, `delve`).
- Metrics agent: `node_exporter` or similar.
- Crash backtrace shipper: capture coredumps to a known path, optionally upload.
Wireless / Remote
- Wifi setup workflow: `wpa_supplicant` unit + a first-boot configurator.
- Reverse tunnel for remote dev: `yoe tunnel`, or ship `tailscale`/`headscale`.
Hardware Access
- GPIO / I²C / SPI userspace: `libgpiod`, smbus userspace tools.
- Audio: ALSA, PipeWire.
- Camera: `libcamera`.
- GUI stack: minimal Wayland compositor (cage / wlroots) for kiosk apps.
Needed Units
Existing units can be found via `yoe list` or by browsing `modules/units-core/units/`.
Networking and Security
- `nftables` — modern firewall (preferred over legacy iptables). Requires new dep units `libmnl`, `libnftnl`, and `gmp` before it can be written.
- `wpa_supplicant` — wifi.
Diagnostics
- `perf`, `bpftrace`, `py-spy`, `delve`.
- `node_exporter` (or similar metrics agent).
Hardware
- `libgpiod`, smbus userspace tools.
- ALSA, PipeWire.
- `libcamera`.
Container Stack
- `runc`, `containerd`, `nerdctl` — first milestone for on-device containers.
- Follow-on: `podman`, then `docker-ce`.
Nice to Have
- `dbus` — IPC message bus; dependency for many higher-level services. Pulls in expat (already present) plus a service supervisor — non-trivial, defer until a unit needs it.
- `ripgrep`, `fd`.
- `tailscale` (or `headscale`) — remote-dev tunnel.
Container Host on Devices
Ship a container-host-image that runs containerd (later Podman, then Docker
CE) on yoe-built devices. Design and reference architecture in
containers.md.
Init System
Replace busybox init with something supporting dependency ordering and supervision (OpenRC most likely). See containers.md for the discussion of options.
Image Assembly on Host
Move image assembly (mkfs.ext4, bootloader install) from the build container
to the host via bwrap user namespaces. Design in
build-environment.md.
Testing
Today: Go unit tests under internal/* and a single dry-run e2e test. No
on-device tests, no image smoke tests, no build-time package QA, no CI workflow
that runs builds. Design and intended shape in testing.md, which
also compares to Yocto’s oeqa / INSANE.bbclass / ptest / buildhistory.
- Build-time package QA (Yocto’s `INSANE.bbclass` analog): file ownership, ELF stripping, RPATH leaks, missing SONAMEs, host-path contamination. Always-on; failures fail the build.
- `yoe test <unit>` — drive per-unit, image, and HIL tests behind one command.
- Per-unit functional tests (destdir assertions in the build sandbox).
- On-device upstream tests (`make check`/`cargo test` shipped as a test subpackage; Yocto’s `ptest` analog).
- Image-level smoke tests that boot in QEMU (or attach over SSH to a real device) and check network, services, basic flows.
- Build-history / regression tracking (Yocto’s `buildhistory` analog) for size, RDEPENDS, and file-list diffs per PR.
- CI workflows: `go test`, dry-run image build per PR; full build + smoke tests on a schedule.
- Kernel QA: run upstream `check-config.sh` against the kernel `.config` for container-host images.
A/B Updates
Read-only rootfs with A/B partitions and signed update bundles. Reference architecture (Home Assistant OS) in containers.md. The Software update item under Developer Experience evolves toward this once a runtime ships.
CLI Surface
- `yoe serve` / `yoe deploy <unit> <host>` / `yoe device repo {add,remove,list}` — shipped. See feed-server.md.
- `yoe svc start|stop|restart|status <unit> <host>`.
- `yoe logs <unit> -f`.
- `yoe dev <unit>` — watch the source tree and rebuild (optionally redeploy) on save.
- `yoe test <unit>` — run tests in QEMU or against a real device. See testing.md.
- `yoe tunnel` — reverse tunnel for remote dev (or rely on a `tailscale` unit).
- `yoe new app --lang go` — application project scaffolding.
- `yoe cache` — query and prune the build cache (local + future remote/S3).
- `yoe shell` — drop into the build container interactively.
- `yoe bundle` — package modules into a single distributable.
- `yoe module list|info|check-updates` — inspect and update external modules.
- `yoe repo push|pull` — sync the local apk repo to a remote (S3 / HTTP).
- `yoe build` query flags: `--class <type>`, `--with-deps`, `--list-targets`, `--no-remote-cache`.
- Config propagation across modules.
See yoe-tool.md for design notes on the existing and planned commands.
Format / Modules
- Sub-packages — one unit producing multiple `.apk`s.
- `MODULE.star` manifests for module versioning and inter-module deps.
- Per-task container overrides.
See metadata-format.md.
Distribution Variants
- glibc target. Currently musl-only. glibc support would enable workloads whose binaries require it (some cgo, prebuilt vendor SDKs, the upstream Helix release, etc.).
Self-Hosting
The ultimate dogfood test: develop yoe on a yoe-built device. Forces the distro to be capable enough for real engineering work, not just demo targets, and surfaces gaps in container hosting, editor experience, and the build cache all at once.
Compilers stay in the build containers (gcc, binutils, headers, language
toolchains live in toolchain-musl and friends, not the rootfs). What the
device itself needs:
- Container host on the device so it can run the build containers. See Container Host on Devices.
- `yoe` binary in the project’s apk repo so a yoe-built device can `apk add yoe` like any other unit.
- Go on-device for editing yoe source comfortably (`gopls`, `delve`), not for the build itself.
- `git` unit.
- An editor that runs on musl. Fix the helix glibc issue (cargo-from-source build) or commit to neovim as the default.
- CI gate that builds yoe from source on a yoe-built image and runs the test suite, so toolchain or libc-compatibility regressions break the build instead of being discovered later.
Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[Unreleased]
[0.9.1] - 2026-05-01
- `yoe deploy <unit>` now installs the package’s runtime deps too. Previously it only built and published the named unit, so deploying a package with `runtime_deps` outside what the device already had on disk failed with a cryptic `apk add` error like `sqlite (no such package)`. Deploy now walks the full runtime closure (the same expansion `image()` does at image-build time), so every transitive dep ends up in the feed before `apk add` runs.
- Deploy refreshes the device’s apk index every time. The on-device `apk update` step now uses `apk --no-cache update`, forcing a refetch of every repo’s `APKINDEX` instead of trusting whatever is in `/var/cache/apk/`. apk-tools 2.x can otherwise hold onto a stale index across a yoe-dev rebuild and silently miss packages you just published.
- Added sqlite unit.
[0.9.0] - 2026-05-01
- New design doc on libc and init choice. `docs/libc-and-init.md` lays out why yoe is musl + OpenRC + Alpine today, where that stack works (gateways, IoT, networking gear), where it doesn’t (Jetson, vendor BSPs, Adaptive AUTOSAR), and the planned rootfs-base abstraction that would let a single yoe codebase serve both Alpine and Ubuntu/L4T projects. Establishes the invariant that yoe stays apk-native on every target — Debian-derived bases get a `deb_pkg` conversion class, not dpkg/apt on the device.
- Pull packages straight from Alpine. A new `units-alpine` module wraps prebuilt Alpine `.apk` files as yoe units via the `alpine_pkg()` class — no source build, no patches, just fetch + verify + repack. `musl` and `sqlite-libs` ship today; add more by pinning a version and sha256.
- `musl` now comes from Alpine. The hand-rolled musl unit that copied the dynamic linker out of the build container is gone; `musl` is now an Alpine apk wrapped by `alpine_pkg()`. Output is byte-identical to the Alpine package other projects already ship.
- `.apk` URLs work as a source type. Yoe’s source workspace now recognises `.apk` extensions and bare-copies them so the unit’s install task can extract the multi-stream gzip with GNU tar. Bare-copied sources also keep their URL filename, so install steps can reference the file by name instead of by cache hash.
- Override an upstream unit by name. Define a unit with the same name in a higher-priority module (or in the project itself) and it shadows the upstream one — no `provides` boilerplate needed. The project root beats every module, and later modules beat earlier ones. A notice on stderr tells you which one won.
- Deploy from the TUI. Press `D` on a non-image unit to deploy it to a running yoe device — the host prompt is pre-filled from the last-used target, build + ssh + apk add output streams into the view, and the host is saved back to `local.star` on success.
- Deploy actually updates the device’s apk index. `yoe deploy` and `yoe device repo add` previously wrote to `/etc/apk/repositories.d/yoe-dev.list`, which apk-tools 2.x ignores. They now append a marker block to `/etc/apk/repositories` so the next `apk update` actually fetches the dev feed and `apk add <unit>` finds the freshly built package.
- TUI starts a feed automatically. When you launch `yoe`, it brings up the project’s apk feed (or reuses one already running on the LAN), so devices configured with `yoe device repo add` can pull packages without any extra setup. Status is shown in the header.
- SSH target shorthand. `yoe deploy` and `yoe device repo {add,remove,list}` accept `[user@]host[:port]` — e.g. `yoe device repo add localhost:2222` for a QEMU VM or `yoe deploy myapp pi@dev-pi.local:2200`. The `--ssh-port` flag is gone.
- APK live deployment tooling. `yoe deploy <unit> <host>` builds and installs a unit on a running yoe device with full apk dependency resolution. Pair with `yoe serve` and `yoe device repo add` to keep a device pointed at your dev feed for ad-hoc `apk add` from the device. See docs/feed-server.md.
[0.8.6] - 2026-04-30
- Container runtime build path documented. docs/containers.md now walks through what it takes to ship Docker, containerd, and runc on a musl yoe rootfs — why prebuilt “static” binaries don’t work, the per-component build breakdown, and how cgo units like runc plug into yoe’s existing Go toolchain and `toolchain-musl` container via `deps` instead of needing a new Go+GCC container image.
- Rename `debug` units to `dev`.
- Expand roadmap. Reorganized as a pointer index into the design docs, with new sections for the app-developer build/deploy loop, hardware access, testing, self-hosting, and distribution variants.
- New testing design doc at docs/testing.md covers the planned `yoe test` driver, build-time package QA, on-device upstream tests (Yocto `ptest` analog), image smoke tests, and CI integration.
- Kernel modules now ship in images — the `linux`, `linux-rpi4`, and `linux-rpi5` units previously built only the in-tree kernel image, so drivers compiled as loadable modules (Wi-Fi, USB, sound, many filesystems) were silently dropped. Modules are now built and installed to `/lib/modules/<kver>/` in the rootfs, so `modprobe` finds them at runtime.
- Fix rPI4 builds: package arch did not match what apk was expecting.
[0.8.5] - 2026-04-30
- Yazi, Zellij, and Go units added.
- Clear error when an image’s rootfs won’t fit the partition. Yoe points at the partition size to bump instead of failing mid-`mkfs.ext4` with a cryptic ext2 error.
- SSH works out of the box on `dev-image`. `sshd` starts on boot with per-device host keys; `ssh -p 2222 user@localhost` (password `password`) just works, and passwordless root SSH matches the serial console.
- Image rebuilds recover from prior failed builds. A previous failure no longer wedges the next run on “Permission denied” — yoe reports the real error and cleans up automatically.
- New `binary` class for prebuilt binaries. Units can ship upstream release binaries with SHA256 verification, no rebuild from source. Used by `go`, `helix`, and `yazi`.
- `apk add` works against the signed repo. Image-time and on-target `apk` commands no longer fail with “BAD signature” or need `--allow-untrusted`/`--keys-dir`.
- `apk add` and `apk upgrade` work on yoe-built devices. `dev-image` ships `apk-tools` and the project’s signing key, so OTA-style updates use stock `apk` commands. See `docs/on-device-apk.md`.
- Signed apks and APKINDEX. Every artifact is RSA-signed at build time and verified by stock `apk` on the target. `yoe key generate`/`yoe key info` manage the project key; see `docs/signing.md`.
- Rootfs builds with APK. Much faster.
- `provides` is now a list. Use `provides = ["a", "b"]`; the string form `provides = "x"` no longer parses.
- `replaces` is documented. New “Shadow files” section in `docs/naming-and-resolution.md` covers when to use it and how to read apk’s “trying to overwrite” errors.
- “One .apk per unit” principle, documented. Image-to-image variation belongs at runtime, not in build-flag forks. See docs/naming-and-resolution.md.
- SSH configured to autostart and work with blank passwords for dev builds.
[0.8.4] - 2026-04-29
- Networking picks the better DHCP client when available. The default `S10network` runs `dhcpcd` if it’s on `PATH` (IPv6 SLAAC, DHCPv6, IPv4LL fallback) and falls back to busybox `udhcpc` otherwise — so an image that ships `dhcpcd` gets the modern client without changing the init script.
- File conflicts in image builds now fail loudly. Units can declare `replaces = ["pkg", ...]` to opt into shadowing another package’s files (e.g. `util-linux` over busybox’s `/bin/dmesg`); apk honors that at install time and rejects any conflict that wasn’t declared. Image assembly no longer passes `--force-overwrite`, so a new shadow becomes a real error instead of a buried warning.
- Unit edits no longer get masked by stale cache hits. Editing a unit’s description, license, runtime deps, replaces, conffiles, build environment, scope, image partitions, image excludes, or install-step files now invalidates the cache as it should — previously these silently kept the old apk. A new test in `internal/resolve` fails if a future Unit field is added without being incorporated into the cache key.
- `ip` works again on `dev-image`. iproute2 no longer pulls in libelf at link time, so `/sbin/ip` runs without “Error relocating /sbin/ip: elf_getdata: symbol not found” on images that don’t ship elfutils.
- Boot no longer hangs when DHCP fails. The default network init script waits briefly for the link to come up before starting udhcpc, runs udhcpc in the background, and limits its retries — so `dev-image` reaches a login shell even when no DHCP server is reachable, instead of looping on “Network is down”.
- Image rootfs is assembled by upstream `apk add`. yoe no longer loops `tar xzf` over each apk; image builds run `apk add` against the project’s local repo, getting real dependency resolution, file-conflict detection, and an installed-package database in `/lib/apk/db` for free. On-target you can now `apk info`, `apk verify`, and (once apk-tools ships as a unit) `apk add` and `apk upgrade` against the same repo.
- Service symlinks ship inside the apk. A unit’s `services = [...]` declaration is materialized as real `/etc/init.d/SXX<name>` symlinks inside the package’s data tar at build time. On-target `apk add <pkg>` produces the same rootfs as image-time assembly — yoe never patches the rootfs after install.
- Repo layout switched to Alpine-native — `repo/<project>/<arch>/<pkg>-<ver>-r<N>.apk` plus a per-arch `APKINDEX.tar.gz`. `.apk` filenames no longer carry a scope suffix. Existing `repo/` directories are obsolete; the next build repopulates the new layout.
- Yoe-built apks install with upstream Alpine apk-tools. `.apk` files and `APKINDEX` produced by yoe now round-trip through stock `apk add --allow-untrusted`: no checksum errors, no format warnings, and package metadata (name, version, arch, deps, origin, commit, install size) matches what `apk index` itself would emit.
- Nine new units in `dev-image` — `e2fsprogs` (mkfs.ext4 / fsck.ext4 / tune2fs on the target), `eudev` (full udev for dynamic /dev), `iproute2` (full `ip`/`tc`), `dhcpcd` (a DHCP client beyond busybox udhcpc), `bash`, `less`, `file`, `procps-ng` (real `ps`/`top`/`free`/`vmstat`), and `htop` are now built and included in `dev-image` so they’re available out of the box on a booted dev system. `gperf` is also added as a build-time dependency for eudev.
- Updated units roadmap — `util-linux`, `kmod`, and `ca-certificates` are marked done; `dropbear` is dropped (the project standardizes on `openssh`); remaining work is now `nftables` (blocked on libmnl/libnftnl/gmp deps) and `dbus`.
- Documented when NOT to use `provides` — `docs/naming-and-resolution.md` now spells out that `provides` is for leaf artifacts only (kernel, base-files, init, bootloader). Using it for build-time libraries or runtime alternatives forks every transitive consumer into a per-machine apk. Runtime alternatives like `mdev` vs `eudev` should ship side-by-side and be selected at boot from init scripts.
- Image rootfs assembly now warns on path collisions — when two packages install to the same path (e.g., busybox’s `/sbin/ip` symlink vs iproute2’s full binary), the later package silently overwrote the earlier one with no trace. Image assembly now emits a `warning:` line per collision naming the surviving package and the shadowed ones, plus a total count. The warnings appear in the image’s `build.log` (and on the terminal when `yoe build -v` is used). Existing dev-image builds surface 27 expected shadows of busybox applets by full alternatives — no behavior change, just visibility.
[0.8.3] - 2026-04-28
- mDNS via new `mdnsd` unit — the dev-image now answers `<hostname>.local` on the LAN, so `ssh user@yoe-dev.local` works without knowing the device’s IP. Uses troglobit/mdnsd (a small dbus-free mDNS responder) and ships a default `_ssh._tcp` service record so the host A record is advertised and SSH discovery works for Bonjour-aware tools.
- NTP at boot via new `ntp-client` unit — boards without a battery-backed RTC (e.g., Raspberry Pi) booted at 1970, which broke TLS with “certificate is not yet valid”. `ntp-client` does a blocking initial sync at S20 (retried a few times to cover DNS settling right after udhcpc) so subsequent services start with real time, then leaves a busybox `ntpd` daemon running to discipline drift over uptime. Added to `dev-image` by default. `base-files` also gets `/var/run` so daemons that write a pidfile have a place to put it.
- Fix `simpleiot` failing to start at boot — the unit installed the binary as `/usr/bin/simpleiot` but its init script invoked `/usr/bin/siot`, so booting the dev image showed `siot: not found` and the service never ran. The binary now installs as `siot` to match upstream. `go_binary` gains a `binary` kwarg for cases where the installed command name should differ from the apk package name.
- Per-developer machine override via `local.star` — when you switch machines from the TUI’s setup view, yoe now writes `local.star` at the project root with your selection. Subsequent `yoe` commands use that machine without you re-passing `--machine` every time. The file is gitignored so each developer can pin their own target. `--machine` on the command line still wins.
- `yoe flash list` and TUI device picker — `yoe flash list` enumerates removable USB sticks and SD cards (filtered against the disk hosting the running system). In the TUI, pressing `f` on an image unit opens a device picker with a live progress bar during the write. `yoe` never invokes `sudo` itself; if the device isn’t writable, it prompts once for consent and runs `sudo chown <you> /dev/...`.
- Honest flash progress — `yoe flash` now opens the target device with `O_DIRECT` so writes bypass the kernel page cache and the progress bar tracks actual device throughput. Previously the bar could hit 100% with hundreds of MB still buffered in RAM, freezing the UI for tens of seconds during the final flush. With `O_DIRECT` the wait is paid out across the write itself, and “Flash complete” appears when the data is really on the card.
- Fix `yoe flash` rejecting non-system disks — `flash` previously refused to write to `/dev/sda`, `/dev/nvme0n1`, and `/dev/vda` regardless of the actual layout. It now detects which disk hosts the running system (`/`, `/boot`, `/boot/efi`, `/usr`) and refuses only that disk, so flashing to a USB or external SATA drive named `/dev/sda` works on machines whose root is on NVMe.
- Fix images silently shipping without packages — if an artifact’s apk was missing from the local repo (e.g., its build was cancelled), the image used to build anyway with a `warning: package X not found, skipping` and produce a kernel-panicking rootfs. Image assembly now hard-fails with a clear message naming the missing package. The build cache now also treats a unit as out-of-date when its apk has gone missing, and rebuilding any unit invalidates its dependents — so reruns auto-recover instead of reusing stale outputs.
[0.8.2] - 2026-04-24
- Fix extlinux install under Docker 29 — `--privileged` containers no longer auto-populate `/dev/loop*`, so `losetup --find` failed during image assembly. Pre-create `/dev/loop0..31` with `mknod` before calling `losetup`.
[0.8.1] - 2026-04-24
- Fix rootfs ownership on booted systems — files under `/`, `/bin`, `/etc`, `/usr`, etc. are now owned by `root:root` on the booted system instead of showing up as whatever user built the project.
- Compare rootfs ownership handling across projects — `docs/comparisons.md` now has a section explaining how Alpine, Debian, Buildroot, Yocto, and NixOS handle root ownership during image builds, and where [yoe] fits.
[0.8.0] - 2026-04-24
- Class task merge semantics — units passing
tasks=[...]to a class (autotools,cmake,go_binary) no longer fully replace the class’s default task list. Instead, overrides are merged by name: a same-named task replaces in place (preserving position and using the override’sstepsfully), a new-named task is appended, andtask("name", remove=True)drops a base task. This lets units add a new task (e.g.,init-script) without restating the class-generatedbuildtask. The merge is implemented in a newclasses/tasks.starhelper (merge_tasks(base, overrides)) shared by the three classes. Thesimpleiotunit dropped its duplicatedbuildtask as a result; existing units that overridebuildare unaffected (replace-in-place yields the same result as the previous full-replacement semantics). - Fix install_template/install_file path resolution for helper functions —
template paths now resolve relative to the
.starfile containing theinstall_template()/install_file()call, not to the file that ultimately callsunit(). Previously, a helper likebase_files(name = "base-files-dev")inunits/base/base-files.starinvoked fromimages/dev-image.starlooked for templates underimages/base-files-dev/instead ofunits/base/base-files/, breaking thedev-imagebuild. The base directory is now captured at install-step construction time from the Starlark caller frame; existing units that define and use install steps in the same.starfile are unaffected. - File templates — units can declare external template files (
.tmpl) and static files in a directory alongside the.starfile and install them via newinstall_template()andinstall_file()step-value constructors placed directly intask(..., steps=[...])alongside shell strings. Templates render through Gotext/templatewith a unifiedmap[string]anycontext auto-populated withname/version/release/arch/machine/console/projectand any extra kwargs passed tounit(). The context map and the contents of the unit’s files directory are hashed so template edits and extra-kwarg changes invalidate the cache. Install steps run on the host (not inside the sandbox), so$DESTDIR/$SRCDIR/$SYSROOTin install paths expand to host paths rather than the container bind-mount paths.base-files,network-config, andsimpleiotmigrated off inline heredocs. Seedocs/file-templates.md. - CLI flag parsing with flag.NewFlagSet — refactored all subcommands
(
build,run,flash,init,clean,log,refs,graph) from manual switch-based parsing to Go’sflag.NewFlagSet. Adds free--helpfor every subcommand, consistent-flag/--flagsupport, and repeatable flags (e.g.,--port). Net reduction of ~70 lines. - Go module cache — Go units now persist module and build caches across
builds via
cache_dirs = {"/go/cache": "go"}. The executor mountscache/go/from the project directory into the container, andGOMODCACHEandGOCACHEpoint to it. Subsequent builds skip module downloads. - Fix service enablement for S-prefixed init scripts — services declared
with an
S<NN>prefix (likeS10network) no longer get a symlink created on top of the actual script, which was causing a symlink loop and breaking networking at boot. - Unit environment field — units can declare
environment = {"KEY": "VAL"}which the executor merges into the build environment for all tasks. The Go class uses this forGOMODCACHE/GOCACHEso custom tasks (like simpleiot) get the cache env vars automatically. - QEMU port forwarding in machine config —
qemu_config()now accepts aportsfield (e.g.,ports = ["2222:22", "8118:8118"]) for default port forwarding. CLI--portflags extend these. Fixed a bug where multiple ports created duplicate QEMU netdevs. Fixed hostfwd syntax to use QEMU’shost-:guestformat. QEMU machines default to SSH (2222:22), HTTP (8080:80), and SimpleIoT (8118:8118). - Service enablement moved to units — units now declare
services = ["sshd"]to indicate which init scripts they provide. The image assembly auto-enables services by readingservicemetadata from installed APKs and creatingS50<name>symlinks (or custom priority likeS10network). Theservicesparameter onimage()is removed. - Design specs — added
docs/starlark-packaging-images.md(move packaging and image assembly to composable Starlark tasks) anddocs/file-templates.md(external template files using Gotext/template, replacing inline heredocs in units). - Go class uses golang container —
go_binary()now defaults to thegolang:1.24external container image instead oftoolchain-musl. Cross-compilation is handled viaGOARCH/GOOSenvironment variables withCGO_ENABLED=0for static binaries, so the container always runs at host architecture (no QEMU overhead). - Per-unit sandbox and shell selection — units now have
sandbox(bool, default false) andshell(string, default “sh”) fields. The autotools, cmake, and image classes setsandbox=True, shell="bash"for bwrap isolation. External containers (likegolang:1.24) use the defaults — no bwrap, POSIX sh — since they don’t ship bwrap or bash. - simpleiot unit — new
go_binaryunit for SimpleIoT v0.18.5, an IoT application for sensor data, telemetry, and device management. - ca-certificates unit — Mozilla CA bundle for TLS verification. Added to dev-image alongside simpleiot.
- Per-task container resolution — tasks can override the unit-level
container via
task(container = "..."). The executor resolves the container per-task, falling back to the unit default. - TUI: amber
[yoe]title — the top-left title in the TUI now renders[yoe]in amber on black, matching the project logo. - Fix module URLs in
initgenerated project file.
[0.7.1] - 2026-04-06
- Unit
releasefield — units can now specifyrelease = Nfor packaging revisions (apk-rNsuffix). Defaults to 0. Bump when the unit definition changes but the upstream version doesn’t. - Build metadata — each unit’s build directory now contains a
build.jsonwith status, start/finish times, duration, build disk usage, installed size (destdir/apk), and input hash. The TUI detail view shows build time and sizes alongside the unit name. - Persistent build output — executor output (
executor.log) is now written for both CLI and TUI builds, so the TUI detail view shows build output regardless of how the build was triggered.
[0.7.0] - 2026-04-06
- Container units — build containers are now Starlark units
(
toolchain-musl) instead of an embedded Dockerfile. Containers participate in the DAG, caching, and versioning. Classes setcontainerandcontainer_archexplicitly.run(host = True)enables host-side execution for container builds. The embedded Dockerfile andEnsureImage()are removed. Container images are tagged with arch for explicitness (yoe-ng/toolchain-musl:15-x86_64). Cross-arch containers usedocker buildxautomatically. - Container image prefix renamed — Docker image prefix changed from
yoe-ng/toyoe/(e.g.,yoe/toolchain-musl:15-x86_64). Arch is always included in the tag for explicitness. Cross-arch containers usedocker buildxautomatically. - TUI: detail view log search — press
/in the unit detail view to search build output and logs. Matching lines are highlighted in yellow;n/Njump to next/previous match. Firstescclears the search, second returns to the unit list. - TUI: color-coded unit types — unselected units are now subtly colored by
class: blue for regular units, magenta for images, cyan for containers.
Selected unit uses a brighter green for visibility. Search (
/) also matches unit class, so typing “image” or “container” filters to units of that type. - E2E build test scripts — added
yoe_e2e,yoe_e2e_x86_64, andyoe_e2e_arm64shell functions inenvsetup.shthat buildbase-imagefrom the e2e test project for x86_64 and arm64 (cross-build via QEMU user-mode).
[0.6.0] - 2026-04-03
- TUI: ctrl+f/ctrl+b page scrolling — added vim-style page-forward and page-back keybindings in both the unit list and detail views, alongside the existing PgUp/PgDn keys.
- Heavy development notice — GitHub releases and
yoe updatenow remind users to clean their build directory and re-create projects with each new release. - Updated plan/spec indexes — all specs and plans marked with current implementation status; added plans INDEX.
- Remove
repository()builtin — therepository(path = "...")config inPROJECT.staris removed. APK repos are now always atrepo/<project-name>/, derived from the project name. This eliminates a confusing override that defeated per-project repo scoping. - TUI: show all units — removed the filter that only showed units reachable from image definitions. The TUI now lists all units in the project.
- README: “Is Yoe-NG Right for You?” — new section clarifying when to use Yocto vs Yoe-NG. Added container workloads on the target device to the roadmap in Design Priorities.
- Fix
yoe updatedownload URL — binary name now matches goreleaser’s naming convention (yoe-Linux-x86_64) instead of incorrectly including the version (yoe-v0.1.0-Linux-x86_64), which caused 404 errors. - Unit name collision detection — duplicate unit names now error at evaluation time with a clear message showing which module first defined the unit.
- PROVIDES collision detection — two units providing the same virtual name in the same module now error. Units from higher-priority modules (later in the module list) override lower-priority ones with a notice.
--projectflag —yoe --project projects/customer-a.star buildselects an alternate project file. Available on all subcommands.- Per-project APK repo — package repositories are now scoped per project
name (
repo/<project>/) to prevent stale packages across project switches. - README: Principles section — added six core design principles covering leveraging existing infrastructure, aggressive caching, custom containers per unit, no intermediate formats, one tool for all levels, and tracking upstream closely.
- README: Build dependencies and caching — new section explaining the three kinds of build dependencies (host tools via containers, library deps via sysroot/apk, language-native deps via their own package managers), symmetric caching at the unit level, and how native builds unlock existing package ecosystems (e.g., PyPI wheels on ARM).
- README: Cross-compilation is optional — updated from “no cross compilation” to “cross compilation is optional,” acknowledging that Go and some C/C++ packages cross-compile easily while fussy packages can avoid it.
- Raspberry Pi in yoe init — rpi machine added to the project initialization template.
- Fix false “old build layout” warning —
warnOldLayoutwas written for the oldbuild/<arch>/<unit>/directory structure but the current layout isbuild/<unit>.<scope>/, causing every build directory to trigger a spurious warning.
[0.5.1] - 2026-04-02
- Remove version from release binary name to fix stable download URL.
[0.5.0] - 2026-04-02
BASE-IMAGE boots on RPI4
- Tasks replace build steps —
build = [...]replaced bytasks = [...]with named build phases. Each task hasrun(shell string),fn(Starlark function), orsteps(mixed list). Classes (autotools, cmake, go) are now pure Starlark. run()builtin — Starlark functions can execute shell commands directly during builds. Errors show.starfile and line number, not generated shell.run(cmd, check=False)returns exit code/stdout/stderr for conditional logic.run(cmd, privileged=True)runs directly in the container as root for operations like losetup/mount that bwrap can’t do.- Unit scope — units declare
scope = "machine","noarch", or"arch"(default). Machine-scoped units (kernels, images) build per-machine. Build directories are flat:build/<name>.<scope>/. Repo is flat with scope in filenames:repo/<name>-<ver>-r0.<scope>.apk. - Machine-portable images — images no longer hard-code machine-specific
packages or partitions.
MACHINE_CONFIGandPROVIDESinject machine hardware specifics automatically.base-imageworks across QEMU x86, QEMU arm64, and Raspberry Pi without changes. PROVIDESvirtual packages — units and kernels declareprovidesto fulfill virtual names.provides = "linux"onlinux-rpi4means images that list"linux"get the RPi kernel when building forraspberrypi4.- Image assembly in Starlark — disk image creation moved from Go to
classes/image.starusingrun(). Fully readable, customizable, forkable. - Raspberry Pi BSP module (
units-rpi) — machine definitions, kernel fork units, GPU firmware, and boot config for Raspberry Pi 4 and 5. - Runtime dependency resolution — image assembly now resolves transitive
runtime dependencies automatically.
RUNTIME_DEPSpredeclared variable available after unit evaluation. Three-phase loader: machines → units → images. - Layers renamed to modules —
layer()→module(),LAYER.star→MODULE.star,yoe layer→yoe module,layers/→modules/. Aligns terminology with Go modules model used for dependency resolution.
[0.4.0] - 2026-03-31
ARM BUILDS ON X86 NOW WORK
- TUI global notifications — the TUI now shows a yellow banner for background operations like container image rebuilds. Previously these events were only visible in build log files.
- cmake added to build container — cmake is now available as a bootstrap tool in the container (version bump to 14), enabling units that use the cmake build system.
- xz switched to cmake — the xz unit now uses the cmake class instead of autotools with gettext workarounds, simplifying the build definition.
- TUI reloads .star files before each build — editing unit definitions or classes no longer requires restarting the TUI. The project is re-evaluated from Starlark on each build, picking up any changes to build steps, deps, or configuration.
- Fix xz autoreconf failure — xz’s
configure.acusesAM_GNU_GETTEXTmacros which require gettext’s m4 files. The xz unit now provides stub m4 macros and skipsautopoint, allowingautoreconfto succeed without gettext installed in the container. - Cross-architecture builds — build arm64 and riscv64 images on x86_64 hosts
using QEMU user-mode emulation. Target arch is resolved from the machine
definition. Run
yoe container binfmtfor one-time setup, thenyoe build base-image --machine qemu-arm64works transparently. - Arch-aware build directories — build output is now stored under
build/<arch>/<unit>/and APK repos underbuild/repo/<arch>/, supporting multi-arch builds in the same project. Note: existing build caches underbuild/<unit>/will need to be rebuilt (yoe clean --all). yoe container binfmt— new command to register QEMU user-mode emulation for cross-architecture container builds. Shows what it will do and prompts for confirmation.- Multi-arch QEMU —
yoe runnow auto-detects cross-architecture execution and uses software emulation (-cpu max) instead of KVM. Container includesqemu-system-aarch64andqemu-system-riscv64. - TUI setup menu — press
sto open a setup view for selecting the target machine. Shows available machines with their architecture and highlights the current selection. Designed to accommodate future setup options.
[0.3.4] - 2026-03-30
- Build lock files — a PID-based
.lockfile is written during builds so otheryoeinstances can detect in-progress work instead of marking active builds as failed. Builds are skipped if another process is already building the same unit. yoe clean --locks— removes stale lock files left behind by crashed or killed builds.- TUI edit for cached layers — pressing
eon a unit now also searches the layer cache, so editing works for units from layers cloned viayoe layer sync.
[0.3.3] - 2026-03-30
- HTTPS layer URLs —
yoe initnow uses HTTPS URLs for the units-core layer instead of SSH, removing the need for SSH key setup to get started.
[0.3.2] - 2026-03-30
- TUI scrolling — both the unit list and detail log views are now
scrollable. The unit list shows
↑/↓overflow indicators when there are more units than fit on screen. The detail view supportsj/k,PgUp/PgDn,g/Gnavigation through the full build output and log, with auto-follow during active builds. - Auto-sync layers —
yoe buildand other commands that load the project now automatically clone missing layers on first use, matching the lazy container-build pattern. Existing cached layers are not fetched/updated, so there is no added latency on subsequent runs. Explicityoe layer syncis still available to update layers. - TUI confirmation prompts — quitting (
q/ctrl+c) and cancelling a build (x) now prompt for confirmation when builds are active, preventing accidental loss of in-progress builds. Declining a prompt clears the message cleanly. - Fix build cancellation not stopping containers — cancelling a build (via
TUI quit or
ctrl+con the CLI) now explicitly stops the Docker container (docker stop) instead of only killing the CLI client, which left containers running in the background. - Fix stale cache after cancelled builds — the cache marker is now removed before building so a cancelled or failed rebuild no longer appears cached from a previous successful build.
[0.3.1] - 2026-03-30
ALL UNITS ARE NOW BUILDING
- Per-unit sysroots — each unit’s build sysroot is assembled from only its
transitive
deps, not every previously built unit. Fixes busybox symlinks shadowing container tools (e.g., musl-linkedexprbreaking autoconf). - Run from TUI — press
ron an image unit to launch it in QEMU. - Log writer plumbing — container stdout/stderr in image assembly and source fetch/prepare output now route through the build log writer instead of os.Stdout. Fixes TUI alt-screen corruption during background builds.
- Autotools maintainer-mode override —
makeinvocations passACLOCAL=true AUTOCONF=true AUTOMAKE=true AUTOHEADER=true MAKEINFO=trueto prevent re-running versioned autotools (e.g.,aclocal-1.16) that aren’t in the container. Fixes gawk and similar packages. - rcS init script —
base-filesnow includes/etc/init.d/rcSwhich runs all/etc/init.d/S*scripts at boot. - network-config unit — new unit that configures a network interface via an init script.
- Build failure context — when a unit fails, the output now lists all downstream units blocked by the failure. The TUI shows cached units in blue and displays the full build queue (waiting/cached) before work begins.
- dev-image — added
kmodandutil-linuxto the development image. - Image rootfs dep fix — image assembly now follows only
runtime_depswhen resolving packages, not build-timedeps. Fixes build-only packages (e.g., gettext via xz) being installed into the rootfs and overflowing the partition.
[0.3.0] - 2026-03-30
THIS RELEASE DOES NOT WORK - this release is only to capture rename and TUI updates. Wait for a future one to do any work.
BREAKING CHANGE - due to rename, recommend deleting any external projects and starting over.
- Terminology rename — “recipe” is now “unit” and “package” is now
“artifact” throughout the codebase. The Starlark
package()function is nowunit(), the image fieldpackagesis nowartifacts, and therecipes/directory in layers is nowunits/. Therecipes-corelayer is nowunits-core. The Gointernal/packagingpackage is nowinternal/artifact. yoe log— view build logs from the command line. Shows the most recent build log by default, or a specific unit’s log withyoe log <unit>. Use-eto open the log in$EDITOR.yoe diagnose— launch Claude Code with the/diagnoseskill to analyze a build failure. Uses the most recent build log by default, or a specific unit’s log withyoe diagnose <unit>.- TUI rewrite —
yoewith no args launches an interactive unit list with inline build status (cached/waiting/building/failed). Builds run in-process viabuild.BuildUnits()with real-time status events — dependencies show as yellow “waiting”, then flash green as they build. Features: background builds (b/B), edit unit in$EDITOR(e), view build log (l), diagnose with Claude (d), add unit with Claude (a), clean with confirmation (c/C), search/filter (/), and a split detail view showing executor output and build log tail. Theyoe tuisubcommand has been removed. - Build events —
build.Options.OnEventcallback notifies callers (e.g., the TUI) as each unit transitions through cached/building/done/failed states.
[0.2.10] - 2026-03-30
yoe container shell— interactive bash shell inside the build container with bwrap sandbox, sysroot mounts, and the same environment variables recipes see during builds. Useful for debugging build failures and sandbox issues.
[0.2.9] - 2026-03-30
- Bash for build commands — switched build shell from busybox sh to bash.
Avoids autoconf compatibility issues (e.g.,
AS_LINENO_PREPAREinfinite loop) and matches what upstream build scripts expect. Removed per-recipe bash workaround from util-linux. - User account API — new
classes/users.starprovidesuser()andusers_commands()functions for defining user accounts in Starlark.base-filesis now a callablebase_files()function that accepts ausersparameter — image recipes can override it to add users (e.g., dev-image adds auseraccount with passwordpassword).
[0.2.8] - 2026-03-30
- meson build system support — added samurai (ninja-compatible build tool),
meson, and kmod recipes. Container updated to v11 with python3 and
py3-setuptools for meson. Build environment now sets
PYTHONPATHto the sysroot so Python packages installed by recipes are discoverable. - Container versioning note — CLAUDE.md now documents that both
Dockerfile.buildandinternal/container.gomust be bumped together. - gettext recipe — builds GNU gettext from source as a recipe instead of
relying on the container. Provides
autopointneeded by packages like xz that use gettext macros in their autotools build. - Sysroot binaries on PATH —
/build/sysroot/usr/binis now prepended toPATHduring builds, so executables from dependency recipes are discoverable. - Autotools class respects explicit
buildsteps — no longer prepends default autoreconf/configure when a recipe provides its own build commands. - Claude Code plugin — added
.claude/plugin with AI skills for recipe development:diagnose(iterative build failure analysis),new-recipe(generate recipes from URLs/descriptions),update-recipe(version bumps),audit-recipe(review against best practices and other distros). --cleanbuild flag — deletes source and destdir before rebuilding.--forcenow only skips the cache check without cleaning.--force/--cleanscoped to requested recipes — dependency recipes still use the cache, only explicitly named recipes are force-rebuilt.- Fixed
YOE_CACHEhelp text — was~/.cache/yoe-ng, actually defaults tocache/in the project directory.
[0.2.7] - 2026-03-27
- Per-recipe build logs — build output written to
build/<recipe>/build.log. Console is quiet by default; on error the log path is printed. Use--verbose/-vto stream build output to the console. - Fixed QEMU machine templates — removed UEFI firmware (
ovmf/aavmf/opensbi) incompatible with MBR+syslinux boot, fixed root devicevda2→vda1.
[0.2.6] - 2026-03-27
- base-files recipe — provides filesystem skeleton:
/etc/passwd(root with blank password),/etc/inittab(busybox init + getty),/boot/extlinux/(boot config), and essential mount point dirs (/proc,/sys,/dev, etc.). Moved from hardcoded Go to a recipe so users can customize via overlays. - Serial console uses
gettyfor proper login prompt.
[0.2.5] - 2026-03-27
Added
- musl libc recipe — copies the musl dynamic linker from the build container into the image so dynamically linked packages work at runtime.
- Automatic package dep resolution — image assembly now resolves transitive build and runtime deps from recipe metadata. e.g., openssh automatically pulls in openssl and zlib without listing them in the image recipe.
- Recipes without source — recipes with no
sourcefield (e.g., musl) skip source preparation instead of erroring.
Fixed
- Disable ext4 features (
64bit,metadata_csum,extent) incompatible with syslinux 6.03 so bootloader can load kernel from any partition size. - Image package dep resolution walks both
depsandruntime_depsso shared libraries are included. - OpenSSL recipe uses
--libdir=libso libraries install to/usr/libinstead of/usr/lib64— fixes “Error loading shared library libcrypto.so.3”. - Inittab no longer tries to mount
/dev(already mounted by kernel viadevtmpfs.mount=1). - Skip
TestBuildRecipes_WithDepsin CI — GitHub Actions runners don’t support user namespaces inside Docker. - Most stuff in
dev-imagenow works.
[0.2.4] - 2026-03-27
- update BL config
[0.2.3] - 2026-03-27
Changed
- Container as build worker —
yoeCLI always runs on the host. The container is now a stateless build worker invoked only for commands that need container tools (gcc, bwrap, mkfs, etc.). Eliminates container startup overhead for read-only commands (config,desc,refs,graph,clean). - File ownership — build output uses
--user uid:gidso files created by the container are owned by the host user, not root. - QEMU host-first —
yoe runtries hostqemu-system-*first, falls back to the container if not found. --forcescoped to requested recipes —--forceand--cleanonly force-rebuild the explicitly requested recipes; dependencies still use the cache for incremental builds.- Busybox init — images use busybox
/sbin/initwith a minimal/etc/inittabinstead ofinit=/bin/sh. Shell respawns on exit, clean shutdown viapoweroff.
Fixed
- Shell quoting in bwrap sandbox commands — semicolons in env exports no longer split the command at the outer shell level.
- Package installation in image assembly — always extracts
.apkfiles viatarinstead of gating onapkbinary availability. - Rootfs mount points (
/proc,/sys,/dev,/tmp,/run) now included in disk images via.keepplaceholder files. devtmpfs.mount=1added to kernel cmdline so/devis populated before init.
Removed
YOE_IN_CONTAINERenvironment variable — no longer needed.ExecInContainer/InContainer/HasBwrapAPIs — replaced byRunInContainer.- Container re-exec pattern — the yoe binary is no longer bind-mounted into the container.
[0.2.2] - 2026-03-27
Added
- Layer
pathfield — layers can live in a subdirectory of a repo viapath = "layers/recipes-core". Layer name derived from path’s last component. - Project-local cache — source and layer caches default to
cache/in the project directory instead of~/.cache/yoe-ng/ .gitignoreinyoe init— new projects get a.gitignorewith/buildand/cache- Autotools
autoreconf— autotools class auto-runsautoreconf -fiwhen./configureis missing (common with git sources) - SSH URL support for source fetching (
git@host:user/repo.git) - Design: per-recipe tasks and containers — planned support for named
task()build steps with optional per-task Docker container images. Container resolves: task → package → bwrap. Seedocs/superpowers/plans/per-recipe-containers.md.
Changed
- Default layer in
yoe inituses SSH URL (git@github.com:YoeDistro/yoe-ng.git) withpath = "layers/recipes-core" - Container no longer mounts a separate cache volume — cache/ is accessible through the project mount
- Container runs with
--privileged(needed for losetup/mount during disk image creation and /dev/kvm for QEMU)
[0.2.1] - 2026-03-27
Added
- Dev-image with 10+ packages — new
dev-imagebuilds end-to-end with sysroot, including essential libraries (openssl, ncurses, readline, libffi, expat, xz), networking (curl, openssh), and debug tools (strace, vim) - Remote layer fetching —
yoe layer syncclones/fetches layers from Git - Sysroot + image deps in DAG — build sysroot and image dependencies resolved as part of the dependency graph
yoe_sloc— source lines of code counter usingscc
Fixed
- Correct partition size for
losetup, ensure sysroot dir exists - Recipe fixes for end-to-end dev-image builds
Changed
- Moved design docs into
docs/directory - Expanded build-environment and comparisons documentation
[0.2.0] - 2026-03-26
Added
- Bootable QEMU x86_64 image — end-to-end flow from recipes to a partitioned disk image that boots to a Linux kernel with busybox
- Starlark
load()support — class imports and@layer//pathlabel-based references across layers,//resolves to layer root when inside a layer - Recursive recipe discovery —
recipes/**/*.stardirectory traversal recipes-corelayer — autotools/cmake/go/image classes, busybox/zlib/ syslinux/linux recipes, base-image, qemu-x86_64 machine- APKINDEX generation —
APKINDEX.tar.gzfor apk dependency resolution - Bootstrap framework —
yoe bootstrap stage0/stage1/status - Container auto-enter — host
yoebinary bind-mounted into container, Dockerfile embedded in binary, versioned image tags
Fixed
- Build busybox as static binary (no shared lib dependency on rootfs)
- APKINDEX uses SHA1 base64 as required by apk
- Handle git sources in workspace (tag upstream without re-init)
- bwrap sandbox inside Docker with
--security-opt seccomp=unconfined - Mount git root for layer resolution
Changed
- Prefer git sources with shallow clone over tarballs
- Move build commands to
envsetup.sh(yoe_build,yoe_test)
[0.1.0] - 2026-03-26
Initial release of yoe-ng — a next-generation embedded Linux distribution builder.
Added
- CLI foundation —
yoe init,yoe config show,yoe clean,yoe layercommands with stdlib switch/case dispatch (no framework) - Starlark evaluation engine — recipe and configuration evaluation using
go.starlark.net with built-in functions (
project(),machine(),package(),image(),layer_info(), etc.) - Dependency resolution — DAG construction, Kahn’s algorithm topological
sort with cycle detection,
yoe desc,yoe refs,yoe graph - Content-addressed hashing — SHA256 cache keys from recipe + source + patches + dep hashes + architecture
- Source management —
yoe source fetch/list/verify/cleanwith content-addressed cache and patch application - Build execution —
yoe buildwith bubblewrap per-recipe sandboxing, automatic container isolation via Docker/Podman - Package creation — APK package creation,
yoe repocommands, local repository management - Image assembly — rootfs construction, overlay application, disk image generation with syslinux MBR + extlinux
- Device interaction —
yoe flashwith safety checks,yoe runfor QEMU with KVM - Interactive TUI — Bubble Tea interface for browsing recipes and machines
- Developer workflow —
yoe dev extract/diff/statusfor source modification - Custom commands — extensible CLI via
commands/*.star - Patch support — per-recipe patch files applied as git commits