
Docker Best Practices: Using ARG and ENV in Your Dockerfiles

16 October 2024 at 21:35

If you’ve worked with Docker for any length of time, you’re likely accustomed to writing or at least modifying a Dockerfile. This file can be thought of as a recipe for a Docker image; it contains both the ingredients (base images, packages, files) and the instructions (various RUN, COPY, and other commands that help build the image).

In most cases, Dockerfiles are written once, modified seldom, and used as-is unless something about the project changes. Because these files are created or modified on such an infrequent basis, developers tend to rely on only a handful of frequently used instructions — RUN, COPY, and EXPOSE being the most common. Other instructions can enhance your image, making it more configurable, manageable, and easier to maintain. 

In this post, we will discuss the ARG and ENV instructions and explore why, how, and when to use them.

ARG: Defining build-time variables

The ARG instruction allows you to define variables that will be accessible during the build stage but not available after the image is built. For example, we will use this Dockerfile to build an image where we make the variable specified by the ARG instruction available during the build process.

FROM ubuntu:latest
ARG THEARG="foo"
RUN echo $THEARG
CMD ["env"]

If we run the build, we will see the echo foo line in the output:

$ docker build --no-cache -t argtest .
[+] Building 0.4s (6/6) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

However, if we run the image and inspect the output of the env command, we do not see THEARG:

$ docker run --rm argtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d19f59677dcd
HOME=/root

ENV: Defining build and runtime variables

Unlike ARG, the ENV instruction allows you to define a variable that can be accessed both at build time and run time:

FROM ubuntu:latest
ENV THEENV="bar"
RUN echo $THEENV
CMD ["env"]

If we run the build, we will see the echo bar line in the output:

$ docker build -t envtest .
[+] Building 0.8s (7/7) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo bar                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

If we run the image and inspect the output of the env command, we do see THEENV set, as expected:

$ docker run --rm envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=f53f1d9712a9
THEENV=bar
HOME=/root

Overriding ARG

A more advanced use of the ARG instruction is to serve as a placeholder that is then updated at build time:

FROM ubuntu:latest
ARG THEARG
RUN echo $THEARG
CMD ["env"]

If we build the image, we see that we are missing a value for $THEARG:

$ docker build -t argtest .
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo $THEARG                                                                                           0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

However, we can pass a value for THEARG on the build command line using the --build-arg flag. Notice that THEARG has now been replaced with foo in the output:

$ docker build --build-arg THEARG=foo -t argtest .
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

The same can be done in a Docker Compose file by using the args key under the build key. Note that these can be set as a mapping (THEARG: foo) or a list (- THEARG=foo):

services:
  argtest:
    build:
      context: .
      args:
        THEARG: foo
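
The same value expressed in the list form looks like this:

services:
  argtest:
    build:
      context: .
      args:
        - THEARG=foo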

If we run docker compose up --build, we can see that THEARG has been replaced with foo in the output:

$ docker compose up --build
<-- SNIP -->
 => [argtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [argtest 2/2] RUN echo foo                                                                                0.0s
 => [argtest] exporting to image                                                                                     0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->
Attaching to argtest-1
argtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
argtest-1  | HOSTNAME=d9a3789ac47a
argtest-1  | HOME=/root
argtest-1 exited with code 0

Overriding ENV

You can also override ENV variables, though this works slightly differently than overriding ARG. For example, you cannot declare an ENV key without a value, as shown in the following example Dockerfile:

FROM ubuntu:latest
ENV THEENV
RUN echo $THEENV
CMD ["env"]

When we try to build the image, we receive an error:

$ docker build -t envtest .
[+] Building 0.0s (1/1) FINISHED                                                                     docker:desktop-linux
 => [internal] load build definition from Dockerfile                                                                 0.0s
 => => transferring dockerfile: 98B                                                                                  0.0s
Dockerfile:3
--------------------
   1 |     FROM ubuntu:latest
   2 |
   3 | >>> ENV THEENV
   4 |     RUN echo $THEENV
   5 |
--------------------
ERROR: failed to solve: ENV must have two arguments

However, we can remove the ENV instruction from the Dockerfile:

FROM ubuntu:latest
RUN echo $THEENV
CMD ["env"]

This allows us to build the image:

$ docker build -t envtest .
<-- SNIP -->
 => [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [2/2] RUN echo $THEENV                                                                                    0.0s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

Then we can pass an environment variable via the docker run command using the -e flag:

$ docker run --rm -e THEENV=bar envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=638cf682d61f
THEENV=bar
HOME=/root

Although the .env file is usually associated with Docker Compose, it can also be used with docker run.

$ cat .env
THEENV=bar

$ docker run --rm --env-file ./.env envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=59efe1003811
THEENV=bar
HOME=/root

This can also be done in Docker Compose by using the environment key. Note that we use variable interpolation (${THEENV}) for the value:

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV}

If we do not supply a value for THEENV, a warning is thrown:

$ docker compose up --build
WARN[0000] The "THEENV" variable is not set. Defaulting to a blank string.
<-- SNIP -->
 => [envtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [envtest 2/2] RUN echo ${THEENV}                                                                          0.0s
 => [envtest] exporting to image                                                                                     0.0s
<-- SNIP -->
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=816d164dc067
envtest-1  | THEENV=
envtest-1  | HOME=/root
envtest-1 exited with code 0

The value for our variable can be supplied in several different ways, as follows:

  • On the compose command line:
$ THEENV=bar docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the shell environment on the host system:
$ export THEENV=bar
$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the special .env file:
$ cat .env
THEENV=bar

$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
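
Compose also supports supplying a default through standard variable interpolation, which avoids the blank-value warning shown earlier. A minimal sketch, assuming the same envtest service (the fallback value bar is illustrative):

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV:-bar} # falls back to "bar" when THEENV is unset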

Finally, when running services directly using docker compose run, you can use the -e flag to override the .env file.

$ docker compose run -e THEENV=bar envtest

[+] Creating 1/0
 ✔ Synchronized File Shares                                                                                          0.0s
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=219e96494ddd
TERM=xterm
THEENV=bar
HOME=/root

The tl;dr

If you need to access a variable during the build process but not at runtime, use ARG. If you need to access the variable both during the build and at runtime, or only at runtime, use ENV.

To decide between them, consider the following flow (Figure 1):

[Figure 1: Decision flow for choosing between ARG and ENV in the build process]

Both ARG and ENV can be overridden from the command line in docker run and docker compose, giving you a powerful way to dynamically update variables and build flexible workflows.
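
As a minimal sketch that combines both instructions (the variable names, values, and the flextest image name are illustrative, not from the examples above):

FROM ubuntu:latest
ARG BUILDNUM="0"
ENV APPMODE="production"
RUN echo "build number: $BUILDNUM"
CMD ["env"]

$ docker build --build-arg BUILDNUM=42 -t flextest .
$ docker run --rm -e APPMODE=debug flextest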

Docker Best Practices: Using Tags and Labels to Manage Docker Image Sprawl

1 October 2024 at 19:51

With many organizations moving to container-based workflows, keeping track of the different versions of your images can become a problem. Even smaller organizations can have hundreds of container images spanning from one-off development tests, through emergency variants to fix problems, all the way to core production images. This leads us to the question: How can we tame our image sprawl while still rapidly iterating our images?

A common misconception is that by using the “latest” tag, you are guaranteeing that you are pulling the latest version of the image. Unfortunately, this assumption is wrong — all latest means is “the last image pushed to this registry.”

Read on to learn more about how to avoid this pitfall when using Docker and how to get a handle on your Docker images.

Using tags

One way to address this issue is to use tags when creating an image. Adding one or more tags to an image helps you remember what it is intended for and helps others as well. One common approach is to always tag images with their semantic version (semver), which tells you exactly what version you are deploying. This sounds like a great approach, and, to some extent, it is, but there is a wrinkle.

Unless you’ve configured your registry for immutable tags, tags can be changed. For example, you could tag my-great-app as v1.0.0 and push the image to the registry. However, nothing stops your colleague from pushing their updated version of the app with tag v1.0.0 as well. Now that tag points to their image, not yours. If you add in the convenience tag latest, things get a bit more murky.

Let’s look at an example:

FROM busybox:stable-glibc

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.0\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

We build the above with docker build -t tagexample:1.0.0 . and run it.

$ docker run --rm tagexample:1.0.0
This is version 1.0.0

What if we run it without a tag specified?

$ docker run --rm tagexample
Unable to find image 'tagexample:latest' locally
docker: Error response from daemon: pull access denied for tagexample, repository does not exist or may require 'docker login'.
See 'docker run --help'.

Now we build with docker build -t tagexample ., this time without specifying a version, and run it.

$ docker run --rm tagexample
This is version 1.0.0

The latest tag is automatically applied to the most recent image that was built or pushed without an explicit version tag. So, in our first test, we had one image in the repository tagged 1.0.0, but because we had not yet created an image without a version tag, the latest tag did not point to anything. However, once we build or push an image without specifying a version, the latest tag is automatically applied to it.

Although it is tempting to always pull the latest tag, it’s rarely a good idea. The logical assumption — that this points to the most recent version of the image — is flawed. For example, another developer can update the application to version 1.0.1, build it with the tag 1.0.1, and push it. This results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.1

$ docker run --rm tagexample:latest
This is version 1.0.0

If you made the assumption that latest pointed to the highest version, you’d now be running an out-of-date version of the image.

The other issue is that there is no mechanism in place to prevent someone from inadvertently pushing with the wrong tag. For example, we could create another update to our code bringing it up to 1.0.2. We update the code, build the image, and push it — but we forget to change the tag to reflect the new version. Although it’s a small oversight, this action results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.2

Unfortunately, this happens all too frequently.

Using labels

Because we can’t trust tags, how should we ensure that we are able to identify our images? This is where the concept of adding metadata to our images becomes important.

The first attempt at using metadata to help manage images was the MAINTAINER instruction. This instruction sets the “Author” field (org.opencontainers.image.authors) in the generated image. However, this instruction has been deprecated in favor of the more powerful LABEL instruction. Unlike MAINTAINER, the LABEL instruction allows you to set arbitrary key/value pairs that can then be read with docker inspect as well as other tooling.
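
For example, the deprecated instruction and its LABEL replacement look like this (the email address is a placeholder):

# Deprecated:
MAINTAINER moby@example.com

# Preferred:
LABEL org.opencontainers.image.authors="moby@example.com"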

Unlike with tags, labels become part of the image, and when implemented properly, can provide a much better way to determine the version of an image. To return to our example above, let’s see how the use of a label would have made a difference.

To do this, we add the LABEL instruction to the Dockerfile, along with the key version and value 1.0.2.

FROM busybox:stable-glibc

LABEL version="1.0.2"

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.2\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

Now, even if we make the same mistake as above and mistakenly tag the image as version 1.0.1, we have a way to check which version we are using that does not involve running the container.

$ docker inspect --format='{{json .Config.Labels}}' tagexample:1.0.1
{"version":"1.0.2"}

Best practices

Although you can use any key/value as a LABEL, there are recommendations. The OCI provides a set of suggested labels within the org.opencontainers.image namespace, as shown in the following list:

  • org.opencontainers.image.created: The date and time on which the image was built (string, RFC 3339 date-time).
  • org.opencontainers.image.authors: Contact details of the people or organization responsible for the image (freeform string).
  • org.opencontainers.image.url: URL to find more information on the image (string).
  • org.opencontainers.image.documentation: URL to get documentation on the image (string).
  • org.opencontainers.image.source: URL to the source code for building the image (string).
  • org.opencontainers.image.version: Version of the packaged software (string).
  • org.opencontainers.image.revision: Source control revision identifier for the image (string).
  • org.opencontainers.image.vendor: Name of the distributing entity, organization, or individual (string).
  • org.opencontainers.image.licenses: License(s) under which contained software is distributed (string, SPDX License List).
  • org.opencontainers.image.ref.name: Name of the reference for a target (string).
  • org.opencontainers.image.title: Human-readable title of the image (string).
  • org.opencontainers.image.description: Human-readable description of the software packaged in the image (string).

Because LABEL takes any key/value, it is also possible to create custom labels. For example, labels specific to a team within a company could use the com.myorg.myteam namespace. Isolating these to a specific namespace ensures that they can easily be related back to the team that created the label.
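
A minimal sketch of this approach (the namespace, keys, values, and image name are all hypothetical):

LABEL com.myorg.myteam.owner="platform-team" \
      com.myorg.myteam.contact="platform-team@myorg.example"

A single label can then be read back with a Go template:

$ docker inspect --format='{{index .Config.Labels "com.myorg.myteam.owner"}}' myimage:1.0.0
platform-team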

Final thoughts

Image sprawl is a real problem for organizations, and, if not addressed, it can lead to confusion, rework, and potential production problems. By using tags and labels in a consistent manner, it is possible to eliminate these issues and provide a well-documented set of images that make work easier and not harder.

10 Docker Myths Debunked

19 September 2024 at 20:59

Containers might seem like a relatively recent technological breakthrough, but their origins trace back to the 1970s when Unix systems first used container-like concepts to isolate applications. Fast-forward to 2013, and Docker revolutionized this idea by introducing a portable, user-friendly container platform, sparking widespread adoption. In 2015, Docker was instrumental in creating the Open Container Initiative (OCI) to promote open standards within the container ecosystem. With the stability provided by the OCI, container technology spread throughout the tech world.

Although Docker Desktop is the leading tool for creating containerized applications, Docker remains surrounded by numerous misconceptions. In this article, we’ll debunk the top Docker myths and explain the capabilities and benefits of this transformative technology.

Myth #1: Docker is no longer open source

Docker consists of multiple components, most of which are open source. The core Docker Engine is open source and licensed under the Apache 2.0 license, so developers can continue to use and contribute to it freely. Other vital parts of the Docker ecosystem, like the Docker CLI and Docker Compose, also remain open source. This allows the community to maintain transparency, contribute improvements, and customize their container solutions.

Docker’s commitment to open source is best illustrated by the Moby Project. In 2017, Moby was spun out of the then-monolithic Docker codebase to provide a set of “building blocks” to create containerized solutions and platforms. Docker uses the Moby project for the free Docker Engine project and our commercial Docker Desktop.

Users can also find Trusted Open Source Content on Docker Hub. These Docker-Sponsored Open Source and Docker Official Images offer trusted versions of open source projects and reliable building blocks for better development.

Docker is a founder and remains a crucial contributor to the OCI, which defines container standards. This initiative ensures that Docker and other container technologies remain interoperable and maintain a commitment to open source principles.

Myth #2: Docker containers are virtual machines 

Docker containers are often mistaken for virtual machines (VMs), but the technologies operate quite differently. Unlike VMs, Docker containers don’t include an entire operating system (OS). Instead, they share the host operating system kernel, making them more lightweight and efficient. VMs require a hypervisor to create virtual hardware for the guest OS, which introduces significant overhead. Docker only packages the application and its dependencies, allowing for faster startup times and minimal performance overhead.

By utilizing the host operating system’s resources efficiently, Docker containers use fewer resources overall than VMs, which need substantial resources to run multiple operating systems concurrently. Docker’s architecture efficiently runs numerous isolated applications on a single host, optimizing infrastructure and development workflows. Understanding this distinction is crucial for maximizing Docker’s lightweight and scalable potential.

However, when running on non-Linux systems, Docker needs to emulate a Linux environment. For example, Docker Desktop uses a fully managed VM to provide a consistent experience across Windows, Mac, and Linux by running its Linux components inside this VM.

Myth #3: Docker Engine vs. Docker Desktop vs. Docker Enterprise Edition — They’re all the same

Considerable confusion surrounds the different Docker options that are available, which include:

  • Mirantis Container Runtime: Docker Enterprise Edition (Docker EE) was sold to Mirantis in 2019 and rebranded as Mirantis Container Runtime. This software, which is managed and sold by Mirantis, is designed for production container deployments and offers a lightweight alternative to existing orchestration tools.
  • Docker Engine: Docker Engine is the fully open source version built from the Moby Project, providing the Docker Engine and CLI.
  • Docker Desktop: Docker Desktop is a commercial offering sold by Docker that combines Docker Engine with additional features to enhance developer productivity. The Docker Business subscription includes advanced security and governance features for enterprises.

All of these variants are OCI-compliant, differing mainly in features and experiences. Docker Engine caters to the open source community, Docker Desktop elevates developer workflows with a comprehensive suite of tools for building and scaling applications, and Mirantis Container Runtime provides a specialized solution for enterprise production environments with advanced management and support. Understanding these distinctions is crucial for selecting the appropriate Docker variant to meet specific project requirements and organizational goals.

Myth #4: Docker is the same thing as Kubernetes

This myth arises from the fact that both Docker and Kubernetes are associated with containerized environments. Although they are both key players in the container ecosystem, they serve different roles.

Kubernetes (K8s) is an orchestration system for managing container instances at scale. This container orchestration tool automates the deployment, scaling, and operations of multiple containers across clusters of hosts. Other orchestration technologies include Nomad, serverless frameworks, Docker’s Swarm mode, and Apache Mesos. Each offers different features for managing containerized workloads.

Docker is primarily a platform for developing, shipping, and running containerized applications. It focuses on packaging applications and their dependencies in a portable container and is often used for local development where scaling is not required. Docker Desktop includes Docker Compose, which is designed to orchestrate multi-container deployments locally.

In many organizations, Docker is used to develop applications, and the resulting Docker images are then deployed to Kubernetes for production. To support this workflow, Docker Desktop includes an embedded Kubernetes installation and the Compose Bridge tool for translating Compose format into Kubernetes-friendly code.

Myth #5: Docker is not secure

The belief that Docker is not secure is often a result of misunderstandings around how security is implemented within Docker. To help reduce security vulnerabilities and minimize the attack surface, Docker offers the following measures:

Opt-in security configuration 

Except for a few components, Docker operates on an opt-in basis for security. This approach removes friction for new users, while still allowing Docker to be configured more strictly for enterprise deployments and for security-conscious users working with sensitive data.

“Rootless” mode capabilities 

Docker Engine can run in rootless mode, where the Docker daemon runs without root permissions. This capability reduces the potential blast radius of malicious code escaping a container and gaining root permissions on the host. Docker Desktop takes security further by offering Enhanced Container Isolation (ECI), which provides advanced isolation features beyond what rootless mode can offer.
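
A rough sketch of the documented setup flow, assuming the docker-ce-rootless-extras package is installed and your user ID is 1000:

# Run as a non-root user to set up the rootless daemon
$ dockerd-rootless-setuptool.sh install

# Point the Docker CLI at the rootless daemon's socket
$ export DOCKER_HOST=unix:///run/user/1000/docker.sock
$ docker run --rm hello-world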

Built-in security features

Additionally, Docker security includes built-in features such as namespaces, control groups (cgroups), and seccomp profiles that provide isolation and limit the capabilities of containers.
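
These protections can also be tightened per container at run time. A minimal sketch (myimage and the seccomp profile path are placeholders):

# Drop all Linux capabilities, apply a custom seccomp profile,
# and mount the container's root filesystem read-only
$ docker run --rm \
    --cap-drop ALL \
    --security-opt seccomp=/path/to/profile.json \
    --read-only \
    myimage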

SOC 2 Type 2 Attestation and ISO 27001 Certification

It’s important to note that, as an open source tool, Docker Engine is not in scope for SOC 2 Type 2 Attestation or ISO 27001 Certification. These certifications pertain to Docker, Inc.’s paid products, which offer additional enterprise-grade security and compliance features. These paid features, outlined in a Docker security blog post, focus on enhancing security and simplifying compliance for SOC 2, ISO 27001, FedRAMP, and other standards.  

Along with these security measures, Docker also provides best practices in the Docker documentation and training materials to help users learn how to secure their containers effectively. Recognizing and implementing these features reduces security risks and ensures that Docker can be a secure platform for containerized applications.

Myth #6: Docker is dead

This myth stems from the rapid growth and changes within the container ecosystem over the past decade. To keep pace with these changes, Docker is actively developed and is also widely adopted. In fact, the Stack Overflow community chose Docker as the most-used and most-desired developer tool in the 2024 Developer Survey for the second year in a row and recognized it as the most-admired developer tool. 

Docker Hub is one of the world’s largest repositories of container images. According to the 2024 Docker State of Application Development Report, tools like Docker Desktop, Docker Scout, Docker Build Cloud, and Docker Debug are integral to more than two-thirds of container development workflows. And, as a founding member of the OCI and steward of the Moby project, Docker continues to play a guiding role in containerization.

In the automation space, Docker is crucial for building OCI images and creating lightweight runners for build queues. With the rise of data science and AI/ML, Docker images facilitate the exchange of models, notebooks, and applications, supported by GPU workload capabilities in Docker Desktop. Additionally, Docker is widely used for quickly and cost-effectively mocking up test scenarios as an alternative to deploying actual hardware or VMs.

Myth #7: Docker is hard to learn

The belief that Docker is difficult to learn often comes from the perceived complexity of container concepts and Docker’s many features. However, Docker is a foundational technology used by more than 20 million developers worldwide, and countless resources are available to make learning Docker accessible.

Docker, Inc. is committed to the developer experience, creating intuitive and user-friendly product design for Docker Desktop and supporting products. Documentation, workshops, training, and examples are accessible through Docker Desktop, the Docker website and blog, and the Docker Navigator newsletter. Additionally, the Docker documentation site offers comprehensive guides and learning paths, and Udemy courses co-produced with Docker help new users understand containerization and Docker usage.

The thriving Docker community also contributes a wealth of content and resources, including video tutorials, how-tos, and in-person talks.

Myth #8: Docker and container technology are only for developers

The idea that Docker is only for developers is a common misconception. Docker and containers are used across various fields beyond development. Docker Desktop’s ability to run containerized workloads on Windows, macOS, or Linux requires minimal technical knowledge from users. Its integration features — synchronized host filesystems, network proxy support, air-gapped containers, and resource controls — ensure administrators can enforce governance and security.

  • Data science: Docker provides consistent environments, enabling data scientists to share models, datasets, and development setups seamlessly.
  • Healthcare: Docker deploys scalable applications for managing patient data and running simulations, such as medical imaging software across different hospital systems.
  • Education: Educators and students use Docker to create reproducible research environments, which facilitate collaboration and simplify coding project setups.

Docker’s versatility extends beyond development, providing consistent, scalable, and secure environments for various applications.

Myth #9: Docker Desktop is just a GUI

The myth that Docker Desktop is merely a graphical user interface (GUI) overlooks its extensive features designed to enhance developer experience, streamline container management, and accelerate productivity, such as:

Cross-platform support

Docker is Linux-based, but most developer workstations run Windows or macOS. Docker Desktop enables these platforms to run Docker tooling inside a fully managed VM integrated with the host system’s networking, filesystem, and resources.

Developer tools

Docker Desktop includes built-in Kubernetes, Docker Scout for supply chain management, Docker Build Cloud for faster builds, and Docker Debug for container debugging.

Security and governance

For administrators, Docker Desktop offers Registry Access Management and Image Access Management, Enhanced Container Isolation, single sign-on (SSO) for authorization, and Settings Management, making it an essential tool for enterprise deployment and management.

Myth #10: Docker containers are for microservices only

Although Docker containers are popular for microservices architectures, they can be used for any type of application. For example, monolithic applications can be containerized, allowing them and their dependencies to be isolated into a versioned image that can run across different environments. This approach enables gradual refactoring into microservices if desired.

Additionally, Docker is excellent for rapid prototyping, allowing quick deployment of minimum viable products (MVPs). Containerized prototypes are easier to manage and refactor compared to those deployed on VMs or bare metal.

Now you know

Now that you have the facts, it’s clear that adopting Docker can significantly enhance productivity, scalability, and security for a variety of use cases. Docker’s versatility, combined with extensive learning resources and robust security features, makes it an indispensable tool in modern software development and deployment.

For more detailed insights, refer to the 2024 Docker State of Application Development Report, or dive into Docker Desktop now to start your Docker journey today.
