UFRJ Nautilus

Docker

Updated: May 12, 2022

What is Docker?


Docker is an open platform that lets engineers develop, ship and run their applications in a controlled and configurable environment. In simpler terms, it's a tool that allows developers to package their applications, together with all the dependencies and configuration they require, into an isolated environment called a container.


Here at Nautilus, we use Docker as our development environment. Since ROS, the core of our application, requires Ubuntu 20.04, we built a Docker image with that operating system, our ROS distribution and other tools that make our lives easier while coding, such as tmux. This allows us to work from any desktop operating system without having to dual-boot Ubuntu as we did before (dual-booting is the practice of installing two operating systems on a single computer).
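
To give an idea of what such an image might look like, here is a minimal Dockerfile sketch along those lines. It is illustrative only: the exact packages and setup steps in our real image differ, and the package name ros-noetic-desktop assumes the ROS Noetic distribution that targets Ubuntu 20.04.

```
# Illustrative sketch: Ubuntu 20.04 plus ROS and tmux (not our actual Dockerfile)
FROM ubuntu:20.04

# Avoid interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive

# Add the ROS package repository, then install ROS Noetic and helper tools
RUN apt-get update && apt-get install -y curl gnupg lsb-release tmux \
    && echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" \
       > /etc/apt/sources.list.d/ros-latest.list \
    && curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.key | apt-key add - \
    && apt-get update && apt-get install -y ros-noetic-desktop

CMD ["bash"]
```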


Why use Docker?


Well, if you're a little bit into tech, you might be thinking: why not just use a virtual machine instead? That actually makes sense; virtual machines could provide the required level of isolation. But VMs and containers have some architectural differences.


A VM is essentially an emulation of a computer. VMs run on top of a physical machine using a hypervisor, a piece of software that provides the emulated hardware to the VM. On top of the hypervisor runs a full operating system, with all of its hardware, such as the network stack, CPU, RAM and storage, emulated. This adds a lot of performance overhead, making our applications run much slower. Docker, on the other hand, uses operating-system-level virtualization, which makes the difference between running our applications natively and inside Docker barely noticeable. Also, with Docker you don't need an entire OS on your hard drive, just the parts that are necessary.


How Docker works


The core of Docker is container technology, and containers are actually a set of other technologies working together in a very specific way. Here, we're going to talk about the three main ones.


Namespaces


The first one we're going to talk about is namespaces. Namespaces are a Linux kernel feature that creates a hierarchy to isolate a global system resource, such as process IDs, network devices or mount points. In the container context, this feature isolates all the resources a container is going to use so that, from the container's point of view, it looks like it has its own private instance of each resource; the container cannot see anything outside its namespaces.
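
A quick way to see this in action is the unshare command, which wraps the system call of the same name. A minimal sketch (requires root):

```
# Start a shell in new PID and mount namespaces; --mount-proc remounts
# /proc so process listings reflect the new namespace.
sudo unshare --pid --fork --mount-proc bash

# Inside the new namespace, the shell sees itself as PID 1 and cannot
# see the host's other processes:
ps aux
```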


Cgroups


The second one is cgroups (control groups), another Linux kernel feature, this time related to the measurement and isolation of hardware resources. With cgroups, we can determine how much CPU, memory and network bandwidth an application can use.


Suppose we have a container running an application on our computer. With cgroups, we can limit that container's memory usage to any value we want, say 3 GB, and the application will be held to that predefined limit.
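
As a rough sketch of what happens under the hood, here is how such a memory limit can be set by hand through the cgroup filesystem (the paths assume a modern distribution with cgroup v2 mounted at /sys/fs/cgroup; requires root):

```
# Create a new cgroup and cap its memory at 3 GB
sudo mkdir /sys/fs/cgroup/demo
echo 3G | sudo tee /sys/fs/cgroup/demo/memory.max

# Move the current shell (and its future children) into the cgroup
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
```

Docker exposes the same mechanism through flags such as docker run --memory=3g, so in practice you rarely touch the cgroup filesystem directly.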


Chroot


Last but not least, chroot is a system call that changes the apparent root directory of a given process. By default, in a Unix-like OS, the root directory (/) is the top of the directory tree. The root file system sits on the disk partition where the root directory is located, and it is on top of this root file system that all other file systems are mounted. All file system entries branch out of this root; it is the system's actual root.


But each process can have its own idea of what the root directory is, and Docker takes advantage of this with the chroot system call: a process running inside a Docker container sees an apparently isolated directory structure, separate from the host operating system's.
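
You can reproduce the basic idea by hand. A minimal sketch (requires root, and assumes a statically linked busybox binary is available at /bin/busybox):

```
# Build a tiny directory tree to act as the new root
mkdir -p /tmp/newroot/bin
cp /bin/busybox /tmp/newroot/bin/sh   # static binary, so no shared libraries needed

# Start a shell whose apparent root is /tmp/newroot
sudo chroot /tmp/newroot /bin/sh

# Inside, "ls /" shows only /bin; the real filesystem is out of reach
```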


Summing up


Summing up, the core of how a container works can be described by these three technologies: namespaces isolate the container's resources and limit what it can access, cgroups allocate hardware resources to the running container, and chroot maps the container's directory structure.


The Dockerfile


Now, you might be wondering how to start dockerizing your applications. The first step is to create what is called a Dockerfile. This is a simple text file containing the instructions Docker uses to build the image that will serve as the template for your application's container. As defined by the Dockerfile, the image you build contains everything your application needs to run: from the most basic layers, typically a cut-down OS and a runtime environment, to more specific assets such as application files, third-party libraries, environment variables and other dependencies. Once the image is built, it is saved to your local storage and can be used whenever you wish: to run your application in a container, to build new images using it as a template, and more.
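
As a sketch of that workflow (the image tag nautilus/dev-env is illustrative):

```
# Build an image from the Dockerfile in the current directory and tag it
docker build -t nautilus/dev-env .

# Start an interactive container from that image
docker run -it nautilus/dev-env
```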


A Docker library: Docker Hub


The simplest way to explain Docker Hub is this: Docker Hub is to Docker what GitHub is to Git. It is a registry of Docker images that anyone can use. After an image is built, you can push it to Docker Hub, and conversely you, or anyone else, can pull that image onto any other machine that has Docker installed. Remember that this image contains a specific version of your application along with everything it needs to run. This means that with Docker you no longer need to maintain long, complex release documents that must be followed precisely before anyone can use the application; you can simply package your application into an image, push it to Docker Hub and run it virtually anywhere.
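
A typical round trip looks like this (the repository name yourname/myapp is hypothetical, and pushing requires a Docker Hub account and docker login):

```
# Tag a local image with your Docker Hub repository name and push it
docker tag myapp yourname/myapp:1.0
docker push yourname/myapp:1.0

# On any other machine with Docker installed, pull and run it
docker pull yourname/myapp:1.0
docker run yourname/myapp:1.0
```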


Written by Matheus Rodrigues and João Cavalcanti

