There is a reason why containers have entered the industrial world: they solve many long-standing challenges of using open-source software such as Linux. They allow an identical software version to run on different hardware and with different OS versions. They decouple the operating system and the application in terms of updates and version maintenance. And their impact on critical parameters such as real-time behavior and memory consumption is negligible.

Containers allow third-party software to run on critical hardware while ensuring that critical components can perform their tasks without interference. They also confine this software in a sandbox, minimizing the risk of malware infecting the system.

Decoupling containers from each other and from the operating system, i.e. sandboxing, is not trivial and requires careful execution and maintenance. It is feasible, though of course not to the complete degree of isolation that a virtual machine (hypervisor) provides. For most applications, however, it should suffice.

This isolation makes it possible to run third-party software safely: provided the containers are set up properly, run with restricted rights, and the underlying Linux kernel is hardened, the software can hardly do any damage. Resource-hungry applications can easily be limited in their impact on the overall system, e.g. in memory consumption or CPU usage, thanks to the container approach. The tools and libraries an application requires are shipped inside the container. The application thus becomes independent of the underlying operating system (distribution), and at the same time containers using different versions of the same library can run side by side on the same target system.

For us, container technology does not simply mean Docker. Of course we support Docker, but as a runtime we prefer a fully compatible approach that is much better suited to embedded systems: a smaller footprint and higher security, with full compatibility with the OCI runtime specification.