Apertis platform guide
Apertis is a versatile open source infrastructure tailored to automotive needs and fit for a wide variety of electronic devices. Security and modularity are two of its primary strengths. Apertis provides a feature-rich framework for add-on software and resilient upgrade capabilities. Beyond an operating system, it offers new APIs, tools and cloud services.
Apertis is not just an Ubuntu/Debian-derived Linux distribution: it comprises code hosting, code review tools, package build and image generation services, and an automated testing infrastructure. The aim is a clean, reliable environment that lets developers go from sources to deployable system images in the most dependable way, ready to be hosted on the cloud and made available for OTA updates. Apertis aims to be the integration point for different product lines with different goals and different schedules, all sharing the same common core.
The set of packages shipped in Apertis provides a baseline that can be picked and chosen to quickly create deployable images for different products.
The goal of Apertis is to maximize sharing across products, to improve time to market and reduce long-term maintenance effort, in particular to enable quick and consistent response times for security issues in internet-enabled products.
While the typical embedded workflow supports only one product line at a time, Apertis focuses on collaborative development where multiple independent teams participate with different goals and schedules. It provides the tools to maximize the commonalities that can be shared across the lines of development, while preserving the ability to differentiate and experiment without impacting the shared core. This contrasts with approaches where all participants work towards a single joint goal on a single path: Apertis' strength is its ability to support many independent teams concurrently, reducing costs and increasing development speed for each of them.
The development infrastructure of Apertis runs as a service to provide shared access to all those independent product teams:
- the software packages and the infrastructure are connected through a well-defined interface that guarantees their mutual independence: including an updated package never requires changes to the infrastructure and, vice versa, infrastructural changes do not require existing packages to be updated, reducing the maintenance burden over the long term
- common infrastructure is provided such that developers do not need to go through the long and subtly error-prone process of setting up the build environment
- tests get run on every supported hardware device and variant automatically, saving teams the burden of setting up their own test lab
- what is tested is what ends up deployed on production devices
- security patches in the core can be picked up by all products immediately with no effort.
On top of the shared core, each user has access to private areas for components that are not meant to be shared with other teams, both for experimentation and for product development.
The Apertis approach is driven by the need to increase safety and security by deploying updated software in the hands of users in a timely and efficient way.
Urgent patches, like those fixing exploitable CVEs, are merged into the shared core in a timely manner and are immediately available for downstream products. A quarterly release cycle provides a way for product teams to get access to a stable stream of less urgent updates.
The optional OSTree-based update mechanism provides an efficient and safe update facility for the base platform, such that updates can be deployed often at minimal cost.
Updates to application bundles can be deployed without re-deploying the whole platform, decoupling the release and update cycle of the base software from the one of each application.
The package-centric solution and shared infrastructure offered by Apertis define clear boundaries between modules. Those boundaries enable all the involved teams to maximize commonalities across products and limit branching to the areas where it is required: this reduces rebasing/resync efforts and makes the frequent updates needed to ensure the safety of deployed products more economically sustainable in the long term. The distinction between hardware-independent ospacks and hardware-specific recipes, and the separation between platform applications and application bundles, define additional modularity boundaries that allow fixes to be deployed quickly without impacting product stability.
Delivering recurrent updates efficiently requires a close relationship with all the upstream projects, from the Linux kernel to Debian and Ubuntu. Aligning with Ubuntu LTS enables Apertis to directly benefit from its long-term quality management and steady flow of fixes. Importing more up-to-date packages from Debian where relevant builds on the push for quality, compatibility and maturity shared by Debian and its many derived distributions. By closely tracking its upstreams, Apertis benefits from their well-defined CVE processes to identify urgent issues that affect packages hosted in the repositories and quickly act on those.
With Apertis developer teams can focus on their differentiating components and rely on the shared core and shared operations for everything else. The key enabler for that is the package-centric approach, which is at the center of all activities, tools and processes. Development, customization and variant handling rely on packages, and deployable images are the result of combining binary packages belonging to a specific set in a post-process step.
This makes it possible to share infrastructure resources such as compilation across all users: changes get processed once and the resulting binaries are shared with everyone. These resources are immediately available to every team because they are provided as a service rather than residing on a dedicated developer machine. This ensures reproducibility, traceability and consistency during the whole product life cycle.
From sources to deployment
Apertis is built on top of Ubuntu/Debian deb packages, chosen for their high quality and modularity. All the source packages are stored in the Open Build Service (OBS) instance provided by Collabora where the Apertis OBS projects are hosted; each release is currently composed of a set of projects:
- target: packages intended for use in product images
- development: additional packages needed to build target, and development tools
- hmi: packages for the current reference HMI on top of target
- sdk: packages to build the SDK virtual machine recommended for development
In case of common interests, new packages can easily be introduced in the relevant OBS projects. It is also possible to create additional projects in OBS, e.g. for product-specific software packages or product-specific modifications that build on top of the common baselines but are not suitable for more general inclusion.
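As an illustrative sketch, a device or build environment consuming these projects would point APT at the published repositories with a source entry along these lines (the release name `17.06` is taken from the ospack recipe later in this document; treat the exact URL layout as an assumption):

```
deb https://repositories.apertis.org/apertis/ 17.06 target development sdk hmi
```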
OBS takes care of automatically building the package sources for all the configured architectures (such as aarch64) and publishes the results as signed APT repositories. Each package is built in a closed, well-defined environment where OBS automatically installs all the build tools and resolves dependencies from a clean state using the packages it has built already: if any dependency is missing, the build fails and the developer is notified.
Most of the package sources get automatically updated from the latest Ubuntu LTS release, or have been manually picked from later Ubuntu/Debian releases when more up-to-date versions have been deemed beneficial (one example is the systemd package). This allows Apertis to share bugfixes and security fixes with the efforts done by the wider Ubuntu and Debian communities.
Packages specific to Apertis have their sources stored in git repositories hosted on the Apertis infrastructure. The git infrastructure to manage the repositories is very basic at the moment, but deploying an Apertis GitLab instance is planned to let Apertis users and contributors share and host code more easily.
For packages hosted on the Apertis git infrastructure, the Apertis Jenkins instance is configured to automatically fetch tagged commits and push them to OBS after initial smoke testing, providing a continuous, hands-off update flow.
After OBS has finished building a package, the results get published in a Debian package repository. The open-source packages from Apertis can be found in the public Apertis Debian repositories.
The packages in these repositories can then be used to build images suitable for deployment onto a variety of targets, e.g. hardware boards (reference or product-specific), virtual machines (e.g. for development), container images, and so on.
The overall strategy for building these deployments is to separate the process into stages, starting with common early stages (e.g. a common rootfs) and then specializing with hardware- or deployment-specific additions (e.g. the kernel and bootloader for a specific board):
- ospack: prepares the set of user space binary packages that are not specific to a particular SoC/platform or deployment method, producing tarballs
- deployment method: applies the transformations needed to make updates available, for example through OSTree
- platform: contains the hardware-specific packages for a particular SoC/platform, like bootloader, kernel, codecs, GL stack, etc., producing the images meant to be booted on devices
The reason for this split is that it allows the creation of just one SoC-, platform- or even board-specific recipe which can be combined with a selection of ospacks. Typically Apertis has a target ospack with only the software meant to go into the final product and a development ospack which in addition contains extra developer tooling. For instance, the target and development ospacks for armhf could be combined with the i.MX6 Sabrelite and Raspberry Pi 2 recipes to generate four possible combinations of flashable images.
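The resulting matrix can be pictured with a trivial shell sketch (the ospack and board names here are illustrative labels, not actual recipe file names):

```shell
# Enumerate the image matrix: 2 ospacks x 2 board recipes = 4 flashable images
for ospack in target development; do
  for board in imx6-sabrelite rpi2; do
    echo "apertis-${ospack}-armhf-${board}.img"
  done
done
```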
Generating images does not involve rebuilding all the packages from source and can thus be done quite quickly and flexibly.
The whole pipeline is controlled through YAML files, configuring partitions, bootloaders, which packages get installed, which overlays are to be applied, and arbitrary customization shell scripts to be run over the rootfs in a QEMU-based virtualized environment.
This process is usually run automatically by Jenkins jobs, but during development it can be run on developers' machines as well, fetching packages from the same OBS binary repositories.
Once images are generated, other Jenkins jobs schedule a batch of tests on the LAVA instance hosted by Collabora. LAVA takes care of deploying the freshly generated images to actual target devices running in the Collabora device farm, controlling them over serial connections to run the defined test cases and gather the results.
The key points in the workflow for Apertis components are thus:
- the VirtualBox-based Apertis SDK is used for development
- sources are stored on the git code hosting service with Debian-compatible packaging instructions
- Phabricator is used for project management and code review
- every patch submitted on Phabricator is automatically built and the unit tests shipped by the package are executed to provide quick feedback to the developer
- Jenkins pushes tagged commits to OBS
- OBS builds source packages and generates binary packages in controlled environments
- every night Jenkins jobs generate ospacks from the repositories built by OBS
- the generated ospacks are combined with other recipes by Jenkins jobs to produce deployable images
- on success, Jenkins jobs trigger on-device tests on LAVA to check the produced images
- other Jenkins jobs check whether packages included in the images are tagged with task identifiers and close those tasks in Phabricator
Software components packaging (deb)
For all the software components meant to be included in the images, Apertis uses the deb packaging format used by Debian.
To package a component from scratch, Debian provides a short guide to get started.
The VirtualBox-based Apertis SDK virtual machine images ship with all the needed tools installed, providing a reliable, self-contained environment ready to be used.
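For orientation, the core of a packaged component is its debian/ directory; a minimal debian/control for a hypothetical component could look like this (all names, versions and dependencies below are placeholders, not an actual Apertis package):

```
Source: hello-apertis
Section: misc
Priority: optional
Maintainer: Jane Developer <jane@example.com>
Build-Depends: debhelper (>= 10)
Standards-Version: 4.0.0

Package: hello-apertis
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example component packaged for Apertis
 Placeholder package used here only to illustrate the deb metadata layout.
```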
Once the component has been packaged, its sources can be uploaded to OBS in a user-specific project, such that the developer is free to experiment and iterate until the component is ready to be submitted to the appropriate OBS project.
Open Build Service (OBS)
Open Build Service is the backbone of the package building infrastructure in Apertis. It stores source packages and builds them in controlled environments for all the configured CPU architectures.
The source packages are grouped in projects that can be stacked, so that the packages in the underlying projects are automatically shared with all the projects stacked on top of them.
This provides a lot of flexibility to handle different groups working on different products with different schedules while still sharing a common core.
OBS also enables developers to easily branch and customize existing packages and build them in isolated, user-specific sandboxes. Developers are free to experiment and when the updates are ready they can be merged back into the original project or moved to a new stacked project.
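In OBS terms, stacking is expressed through repository paths in a project's metadata; a hypothetical user branch building against a release project could be declared roughly like this (the project and repository names are made up for illustration):

```xml
<project name="home:jane:experiments">
  <title>Experimental branch</title>
  <description>User sandbox stacked on the shared core</description>
  <repository name="default">
    <!-- resolve build dependencies from the shared release project -->
    <path project="apertis:17.06:target" repository="default"/>
    <arch>armhf</arch>
    <arch>aarch64</arch>
  </repository>
</project>
```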
Code hosting and review process
Packages specific to Apertis have their source code hosted on Apertis servers; it is automatically submitted to OBS when release tags are pushed.
Each patch is supposed to be reviewed on Phabricator before it can land in the git repository and thus on OBS.
In the near future Apertis plans to use a dedicated GitLab instance to let developers self-manage their git repositories.
All the automated tasks in Apertis are orchestrated by the Apertis Jenkins instance:
- build-testing patches attached to Phabricator
- submitting tagged commits to OBS
- running the pipeline to generate ospacks and deployable images
- triggering tests on the devices attached to LAVA
- autoclosing Phabricator tasks when marked packages are included in a successful image build
Ospacks and how they should be processed to generate images are defined through YAML files.
This is an example configuration for an ARMv7 image:

```yaml
architecture: armhf

actions:
  - action: unpack
    file: ospack-armhf.tar.gz
    compression: gz

  - action: apt
    description: Install hardware support packages
    recommends: false
    packages:
      - linux-image-4.9.0-0.bpo.2-armmp-unsigned
      - u-boot-common

  - action: image-partition
    imagename: "apertis-armhf.img"
    imagesize: 4G
    partitiontype: gpt
    mountpoints:
      - mountpoint: /
        partition: root
        flags: [ boot ]
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%

  - action: filesystem-deploy
    description: Deploy the filesystem onto the image

  - action: run
    chroot: true
    command: update-u-boot

  - action: run
    description: Create bmap file
    postprocess: true
    command: bmaptool create apertis-armhf.img > apertis-armhf.img.bmap

  - action: run
    description: Compress image file
    postprocess: true
    command: gzip -f apertis-armhf.img
```
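The recipe above emits both the compressed image and a bmap file; the bmap lets a flashing tool skip unallocated blocks. A typical (illustrative) way to write the result to an SD card would be the following, where /dev/sdX stands for the actual card device:

```shell
# bmaptool picks up apertis-armhf.img.bmap automatically when it sits
# next to the image, and can read the gzip-compressed image directly
sudo bmaptool copy apertis-armhf.img.gz /dev/sdX
```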
And this is the ospack-armhf.yaml configuration for the ARMv7 ospack:

```yaml
architecture: armhf

actions:
  - action: debootstrap
    suite: "17.06"
    keyring-package: apertis-archive-keyring
    components:
      - target
    mirror: https://repositories.apertis.org/apertis
    variant: minbase

  - action: apt
    description: Install basic packages
    packages: [ procps, sudo, openssh-server, adduser ]

  - action: run
    description: Setup user account
    chroot: true
    script: setup-user.sh

  - action: run
    description: Configure the hostname
    chroot: true
    command: echo apertis > /etc/hostname

  - action: overlay
    description: Overlay systemd-networkd configuration
    source: networkd

  - action: run
    description: Configure network services
    chroot: true
    script: setup-networking.sh

  - action: pack
    compression: gz
    file: ospack-armhf.tar.gz
```
Collections of images are built every night so that developers can always download the latest image, deploy it to a target device and start using it immediately.
Images are then published on the deployable image hosting website.
Automated testing with LAVA
To ensure the continued quality of the generated images, a set of automated on-device tests is run for every image so issues can be found early if they arise and handled in a timely fashion by developers.
Apertis makes heavy use of several key technologies:
- Ubuntu/Debian packages
- systemd for application lifecycle tracking
- AppArmor for policy enforcement
- OSTree/Flatpak for safe and efficient deployments
- D-Bus for privilege separation
- Wayland for graphics
- GStreamer for multimedia playback
OTA update strategies
Apertis currently uses Btrfs snapshots for OTA updates but is now moving towards a full-system update strategy based on OSTree, which has several benefits over the Btrfs-based solution:
- works in containers
- works on flash-specific filesystems like UBIfs
- smaller downloads
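As a sketch of what this looks like on a device, the standard ostree command line can pull and deploy a new tree roughly as follows (the remote URL and branch name here are hypothetical, not actual Apertis endpoints):

```shell
# Register the update server once, then pull and deploy a release branch
ostree remote add --if-not-exists apertis https://updates.example.org/repo
ostree pull apertis apertis/17.06/armhf
ostree admin deploy apertis/17.06/armhf
```

Because OSTree deployments are atomic and the previous tree stays bootable, a failed update can simply be rolled back, which is what makes frequent low-cost updates practical.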
Apertis ships an application framework that provides flexibility and modularity post-deployment:
- deploying applications can be done independently from full system updates
- network access, inter-process communications and file access policies are enforced through AppArmor
- failed application deployments are safely rolled back
- works on targets, virtual machines and containers
- Flatpak-based application bundle file format
- the ade command-line tool simplifies the generation of application bundles
Apertis is designed to work in setups where tasks are split over different domains, which can be connected SoCs, virtual machines or containers, for instance on setups where a privileged domain has no direct Internet access but relies on a separated, more constrained domain to access network services and validate any communication.
Comparison with Yocto
Yocto is a project that provides templates, tools and methods to create custom Linux-based systems for embedded products.
In itself, Yocto is not a distribution: it is a tool to generate custom distributions. This means that there is little sharing across products using Yocto: for instance, any testing done on the official Poky reference distribution will produce only very limited benefits (if any) for other distributions generated using Yocto, like those provided by commercial suppliers. In general, Yocto provides customizability at the expense of very high costs, especially in the medium/long term.
To generate a distribution, Yocto uses a tree of recipes that get executed on each developer machine and build every component from scratch. To minimize the impact of differences between developer machines, Yocto builds all the development tools from scratch. Doing so requires a delicate bootstrap step where developers must take care not to introduce unwanted dependencies on their host setup, which may subtly affect the result of the build. By relying on a centralized infrastructure offered as a service to developers, Apertis provides an isolated, reproducible environment that does not require any bootstrap step: developers can start building their packages immediately without waiting for a full toolchain to be compiled from scratch.
In Yocto, the infrastructure to build software components and to assemble the results into a deployable image is usually quite tightly tied together: porting recipes from one tree to another is usually non-trivial. Apertis avoids this issue with a workflow that relies on Debian-compliant source packages and on the best practices enforced by Debian and its derivatives. As an example, a great deal of effort is put into enforcing ABI compatibility to avoid breaks, ensuring that each module is interchangeable and flexible over the whole life cycle of the product. In Yocto those issues aren't managed, and a full rebuild from scratch is usually required to overcome them, trading a small saving up front for a much higher effort in the long term.
While Yocto uses a non-deterministic target sysroot as its build environment, the Apertis infrastructure ensures reproducibility by using ephemeral environments that only have the minimal set of dependencies installed: once their packages build, developers can be sure to have explicitly captured all the required dependencies, whereas with Yocto subtle differences in the sysroot may cause failures or behavioural differences from one developer machine to another.