
Enhancing Container Security with Docker Scout and Secure Repositories

25 November 2024 at 21:43

Docker Scout simplifies the integration with container image repositories, improving the efficiency of container image approval workflows without disrupting or replacing current processes. Positioned outside the repository’s stringent validation framework, Docker Scout serves as a proactive measure to significantly reduce the time needed for an image to gain approval. 

By shifting security checks left and integrating Docker Scout into the early stages of the development cycle, issues are identified and addressed directly on the developer’s machine.


Minimizing vulnerabilities 

This leftward shift in security accelerates the development process by keeping developers in flow, providing immediate feedback on policy violations at the point of development. As a result, images are secured and reviewed for compliance before being pushed into the continuous integration/continuous deployment (CI/CD) pipeline, reducing reliance on resource-heavy, consumption-based scans (Figure 1). By resolving issues earlier, Docker Scout minimizes the number of vulnerabilities detected during the CI/CD process, freeing up the security team to focus on higher-priority tasks.

Figure 1: Sample secure repository pipeline.
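
As a rough illustration of that developer-machine workflow, the same kind of check can be run locally with the Docker Scout CLI before an image ever enters the pipeline. The image name below is a placeholder; verify flag behavior against your installed CLI version.

# Build the image locally, then ask Docker Scout for a quick summary and a
# detailed CVE listing before pushing anything into the pipeline.
docker build -t registry.example.com/team/app:dev .

# High-level overview: vulnerability counts and base image status
docker scout quickview registry.example.com/team/app:dev

# Full list of CVEs found in the image's packages
docker scout cves registry.example.com/team/app:dev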

Additionally, the Docker Scout console allows the security team to define custom security policies and manage VEX (Vulnerability Exploitability eXchange) statements. VEX is a standard that lets vendors and other parties communicate the exploitability status of vulnerabilities, so justifications can be recorded for including software tied to known Common Vulnerabilities and Exposures (CVEs).

This feature enables seamless collaboration between development and security teams, ensuring that developers are working with up-to-date compliance guidelines. The Docker Scout console can also feed critical data into existing security tooling, enriching the organization’s security posture with more comprehensive insights and enhancing overall protection (Figure 2).

Figure 2: Sample secure repository pipeline with Docker Scout.

How to secure image repositories

A secure container image repository provides digitally signed, OCI-compliant images that are rebuilt and rescanned nightly. These repositories are typically used in highly regulated or security-conscious environments, offering a wide range of container images, from open source software to commercial off-the-shelf (COTS) products. Each image in the repository undergoes rigorous security assessments to ensure compliance with strict security standards before being deployed in restricted or sensitive environments.

Key components of the repository include a hardened source code repository and an OCI-compliant registry (Figure 3). All images are continuously scanned for vulnerabilities, stored secrets, problematic code, and compliance with various standards. Each image is assigned a score upon rebuild, determining its compliance and suitability for use. Scanning reports and justifications for any potential issues are typically handled using the VEX format.

Figure 3: Key components of the repository include a hardened source code repository and an OCI-compliant registry.
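
For reference, a VEX justification of this kind is typically a small JSON document in the OpenVEX format. The sketch below is illustrative only: the CVE ID, product identifier, and file name are placeholders, and the @context version may differ from what your tooling expects.

# Write an example OpenVEX statement recording that a flagged CVE does not
# affect this image (placeholder identifiers throughout).
cat <<'EOF' > example.vex.json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2024-0001",
  "author": "Example Security Team",
  "timestamp": "2024-11-25T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-0000" },
      "products": [ { "@id": "pkg:docker/example/app@1.0.0" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
EOF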

Why use a hardened image repository?

A hardened image repository mitigates the security risks associated with deploying containers in sensitive or mission-critical environments. Traditional software deployment can expose organizations to vulnerabilities and misconfigurations that attackers can exploit. By enforcing a strict set of requirements for container images, the hardened image repository ensures that images meet the necessary security standards before deployment. Rebuilding and rescanning each image daily allows for continuous monitoring of new vulnerabilities and emerging attack vectors.

Using pre-vetted images from a hardened repository also streamlines the development process, reducing the load on development teams and enabling faster, safer deployment.

In addition to addressing security risks, the repository also ensures software supply chain security by incorporating software bills of materials (SBOMs) with each image. The SBOM of a container image provides an inventory of all the components used to build it, including operating system packages, application-specific dependencies and their versions, and license information. By maintaining a robust vetting process, the repository guarantees that all software components are traceable, verifiable, and tamper-free — essential for ensuring the integrity and reliability of deployed software.
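
As one way to attach an SBOM at build time, BuildKit can generate SBOM and provenance attestations alongside the image. This is a minimal sketch with a placeholder image name, not a description of any specific repository’s required process.

# Build and push an image with SBOM and provenance attestations attached.
docker buildx build \
  --sbom=true \
  --provenance=mode=max \
  -t registry.example.com/team/app:1.0.0 \
  --push .

# Inspect the SBOM attestation that was pushed with the image.
docker buildx imagetools inspect registry.example.com/team/app:1.0.0 \
  --format '{{ json .SBOM }}'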

Who uses a hardened image repository?

The main users of a hardened container image repository include internal developers responsible for creating applications, developers working on utility images, and those responsible for building base images for other containerized applications. Note that the titles for these roles can vary by organization.

  • Application developers use the repository to ensure that the images their applications are built upon meet the required security and compliance standards.
  • DevOps engineers are responsible for building and maintaining the utility images that support various internal operations within the organization.
  • Platform developers create and maintain secure base images that other teams can use as a foundation for their containerized applications.

Daily builds

One challenge with using a hardened image repository is the time needed to approve images. Daily rebuilds are conducted to assess each image for vulnerabilities and policy violations, but issues can emerge, requiring developers to make repeated passes through the pipeline. Because rebuilds are typically done at night, this process can result in delays for development teams, as they must wait for the next rebuild cycle to resolve issues.

Enter Docker Scout

Integrating Docker Scout into the pre-submission phase can reduce the number of issues that enter the pipeline. This proactive approach helps speed up the submission and acceptance process, allowing development teams to catch issues before the nightly scans. 

Vulnerability detection and management

  • Requirement: Images must be free of known vulnerabilities at the time of submission to avoid delays in acceptance.
  • Docker Scout contribution:
    • Early detection: Docker Scout can scan Docker images during development to detect vulnerabilities early, allowing developers to resolve issues before submission.
    • Continuous analysis: Docker Scout continually reviews uploaded SBOMs, providing early warnings for new critical CVEs and ensuring issues are addressed outside of the nightly rebuild process.
    • Justification handling: Docker Scout supports VEX for handling exceptions. This can streamline the justification process, enabling developers to submit justifications for potential vulnerabilities more easily.

Security best practices and configuration management

  • Requirement: Images must follow security best practices and configuration guidelines, such as using secure base images and minimizing the attack surface.
  • Docker Scout contribution:
    • Security posture enhancement: Docker Scout allows teams to set policies that align with repository guidelines, checking for policy violations such as disallowed software or unapproved base images.

Compliance with dependency management

  • Requirement: All dependencies must be declared, and internet access during the build process is usually prohibited.
  • Docker Scout contribution:
    • Dependency scanning: Docker Scout identifies outdated or vulnerable libraries included in the image.
    • Automated reports: Docker Scout generates security reports for each dependency, which can be used to cross-check the repository’s own scanning results.

Documentation and provenance

  • Requirement: Images must include detailed documentation on their build process, dependencies, and configurations for auditing purposes.
  • Docker Scout contribution:
    • Documentation support: Docker Scout contributes to security documentation by providing data on the scanned image, which can be used as part of the official documentation submitted with the image.

Continuous compliance

  • Requirement: Even after an image is accepted into the repository, it must remain compliant with new security standards and vulnerability disclosures.
  • Docker Scout contribution:
    • Ongoing monitoring: Docker Scout continuously monitors images, identifying new vulnerabilities as they emerge, ensuring that images in the repository remain compliant with security policies.

By utilizing Docker Scout in these areas, developers can ensure their images meet the repository’s rigorous standards, thereby reducing the time and effort required for submission and review. This approach helps align development practices with organizational security objectives, enabling faster deployment of secure, compliant containers.
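
For example, the secure base image requirement can be sanity-checked locally: the Docker Scout CLI can suggest updated or alternative base images that remove known CVEs. The image name below is a placeholder.

# Ask Docker Scout for base image update and alternative recommendations
# before submitting the image for approval.
docker scout recommendations registry.example.com/team/app:dev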

Integrating Docker Scout into the CI/CD pipeline

Integrating Docker Scout into an organization’s CI/CD pipeline can enhance image security from the development phase through to deployment. By incorporating Docker Scout into the CI/CD process, the organization can automate vulnerability scanning and policy checks before images are pushed into production, significantly reducing the risk of deploying insecure or non-compliant images.

  • Integration with build pipelines: During the build stage of the CI/CD pipeline, Docker Scout can be configured to automatically scan Docker images for vulnerabilities and adherence to security policies. The integration typically involves adding a Docker Scout scan as a step in the build job, for example through a GitHub Action. If Docker Scout detects issues such as outdated dependencies, vulnerabilities, or policy violations, the build can be halted and immediate feedback provided to developers. This early detection helps resolve issues long before images are pushed to the hardened image repository.
  • Validation in the deployment pipeline: As images move from development to production, Docker Scout can be used to perform final validation checks. This step ensures that any security issues that might have arisen since the initial build have been addressed and that the image is compliant with the latest security policies. The deployment process can be gated based on Docker Scout’s reports, preventing insecure images from being deployed. Additionally, Docker Scout’s continuous analysis of SBOMs means that even after deployment, images can be monitored for new vulnerabilities or compliance issues, providing ongoing protection throughout the image lifecycle.

By embedding Docker Scout directly into the CI/CD pipeline (Figure 1), the organization can maintain a proactive approach to security, shifting left in the development process while ensuring that each image deployed is safe, compliant, and up-to-date.
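
A minimal sketch of a build-stage gate like the one described above, usable as a shell step in most CI systems, follows. The image name and environment variable are placeholders, and the --only-severity and --exit-code flags should be checked against your Docker Scout CLI version.

# Fail the CI job when critical or high severity CVEs are present.
IMAGE="registry.example.com/team/app:${CI_COMMIT_SHA:-dev}"
docker build -t "$IMAGE" .
docker scout cves "$IMAGE" --only-severity critical,high --exit-code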

Defense in depth and Docker Scout’s role

In any organization that values security, adopting a defense-in-depth strategy is essential. Defense in depth is a multi-layered approach to security, ensuring that if one layer of defense is compromised, additional safeguards are in place to prevent or mitigate the impact. This strategy is especially important in environments that handle sensitive data or mission-critical operations, where even a single vulnerability can have significant consequences.

Docker Scout plays a vital role in this defense-in-depth strategy by providing a proactive layer of security during the development process. Rather than relying solely on post-submission scans or production monitoring, Docker Scout integrates directly into the development and CI/CD workflows (Figure 2), allowing teams to catch and resolve security issues early. This early detection prevents issues from escalating into more significant risks later in the pipeline, reducing the burden on the SecOps team and speeding up the deployment process.

Furthermore, Docker Scout’s continuous monitoring capabilities mean that images are not only secure at the time of deployment but remain compliant with evolving security standards and new vulnerabilities that may arise after deployment. This ongoing vigilance forms a crucial layer in a defense-in-depth approach, ensuring that security is maintained throughout the entire lifecycle of the container image.

By integrating Docker Scout into the organization’s security processes, teams can build a more resilient, secure, and compliant software environment, ensuring that security is deeply embedded at every stage from development to deployment and beyond.

Learn more

Docker Desktop 4.36: New Enterprise Administration Features, WSL 2, and ECI Enhancements

22 November 2024 at 23:38


Docker Desktop 4.36 introduces powerful updates to simplify enterprise administration and enhance security. This release features streamlined macOS sign-in enforcement via configuration profiles, enabling IT administrators to deploy tamper-proof policies at scale, alongside a new PKG installer for efficient, consistent deployments. Enhancements like the unified WSL 2 mono distribution improve startup speeds and workflows, while updates to Enhanced Container Isolation (ECI) and Desktop Settings Management allow for greater flexibility and centralized policy enforcement. These innovations empower organizations to maintain compliance, boost productivity, and streamline Docker Desktop management across diverse enterprise environments.


Sign-in enforcement: A streamlined alternative for organizations on macOS

Recognizing the need for streamlined and secure ways to enforce sign-in protocols, Docker is introducing a new sign-in enforcement mechanism for macOS that uses configuration profiles. This Early Access update delivers significant business benefits by enabling IT administrators to enforce sign-in policies quickly, ensuring compliance and maximizing the value of Docker subscriptions.

Key benefits

  • Fast deployment and rollout: Configuration profiles can be rapidly deployed across a fleet of devices using Mobile Device Management (MDM) solutions, making it easy for IT admins to enforce sign-in requirements and other policies without manual intervention.
  • Tamper-proof enforcement: Configuration profiles ensure that enforced policies, such as sign-in requirements, cannot be bypassed or disabled by users, providing a secure and reliable way to manage access to Docker Desktop (Figure 1).
  • Support for multiple organizations: More than one organization can now be defined in the allowedOrgs field, offering flexibility for users who need access to Docker Desktop under multiple organizational accounts (Figure 2).

How it works

macOS configuration profiles are XML files that contain specific settings to control and manage macOS device behavior. These profiles allow IT administrators to:

  • Restrict access to Docker Desktop unless the user is authenticated.
  • Prevent users from disabling or bypassing sign-in enforcement.

By distributing these profiles through MDM solutions, IT admins can manage large device fleets efficiently and consistently enforce organizational policies.

Screenshot of Enforced Sign-in Configuration Profile showing Description, Signed, Installed, Settings, Details, and Custom Settings.
Figure 1: macOS configuration profile in use.
Screenshot of macOS configuration profile showing "allowedOrgs"
Figure 2: macOS configuration profile in use with multiple allowedOrgs visible.

Configuration profiles, along with the Windows Registry key, are the latest examples of how Docker helps streamline administration and management. 

Enforce sign-in for multiple organizations

Docker now supports enforcing sign-in for more than one organization at a time, providing greater flexibility for users working across multiple teams or enterprises. The allowedOrgs field now accepts multiple strings, enabling IT admins to define more than one organization via any supported configuration method, including:

  • registry.json
  • Windows Registry key
  • macOS plist
  • macOS configuration profile

This enhancement makes it easier to enforce login policies across diverse organizational setups, streamlining access management while maintaining security (Figure 3).

Learn more about the various sign-in enforcement methods.

Screenshot of Sign-in required box, saying "Sign-in to continue using Docker Desktop. You must be a member of one of the following organizations" with Docker-internal and Docker listed.
Figure 3: Docker Desktop when sign-in is enforced across multiple organizations. The blue highlights indicate the allowed company domains.
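
For comparison with the configuration profile approach, here is a minimal sketch of the registry.json method with two allowed organizations. The organization names mirror the example in Figure 3, and the file path is the documented macOS location; confirm the path for your platform and Docker Desktop version.

# Create registry.json so Docker Desktop requires sign-in to one of the
# listed organizations (macOS path shown; Windows and Linux use other paths).
sudo mkdir -p "/Library/Application Support/com.docker.docker"
cat <<'EOF' | sudo tee "/Library/Application Support/com.docker.docker/registry.json"
{
  "allowedOrgs": ["docker-internal", "docker"]
}
EOF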

Deploy Docker Desktop for macOS in bulk with the PKG installer

Managing large-scale Docker Desktop deployments on macOS just got easier with the new PKG installer. Designed for enterprises and IT admins, the PKG installer offers significant advantages over the traditional DMG installer, streamlining the deployment process and enhancing security.

  • Ease of use: Automate installations and reduce manual steps, minimizing user error and IT support requests.
  • Consistency: Deliver a professional and predictable installation experience that meets enterprise standards.
  • Streamlined deployment: Simplify software rollouts for macOS devices, saving time and resources during bulk installations.
  • Enhanced security: Benefit from improved security measures that reduce the risk of tampering and ensure compliance with enterprise policies.

You can download the PKG installer via Admin Console > Security and Access > Deploy Docker Desktop > macOS. Options for both Intel and Arm architectures are also available for macOS and Windows, ensuring compatibility across devices.

Start deploying Docker Desktop more efficiently and securely today via the Admin Console (Figure 4). 

Screenshot of Admin console showing option to download PKG installer.
Figure 4: Admin Console with PKG installer download options.

Desktop Settings Management (Early Access) 

Managing Docker Desktop settings at scale is now easier than ever with the new Desktop Settings Management, available in Early Access for Docker Business customers. Admins can centrally deploy and enforce settings policies for Docker Desktop directly from the cloud via the Admin Console, ensuring consistency and efficiency across their organization.

Here’s what’s available now:

  • Admin Console policies: Configure and enforce default Docker Desktop settings from the Admin Console.
  • Quick import: Import existing configurations from an admin-settings.json file for seamless migration.
  • Export and share: Export policies as JSON files to easily share with security and compliance teams.
  • Targeted testing: Roll out policies to a smaller group of users for testing before deploying globally.

What’s next?

Although the Desktop Settings Management feature is in Early Access, we’re actively building additional functionality to enhance it, such as compliance reporting and automated policy enforcement capabilities. Stay tuned for more!

This is just the beginning of a powerful new way to simplify Docker Desktop management and ensure organizational compliance. Try it out now and help shape the future of settings management: Admin Console > Security and Access > Desktop Settings Management (Figure 5).

Screenshot of Admin console showing Desktop Setting Management page, which includes Global policy, Settings policy, User policies, and more.
Figure 5: Admin console with Desktop Settings Management.

Streamlining data workflow with WSL 2 mono distribution 

Docker Desktop 4.36 simplifies the Windows Subsystem for Linux (WSL 2) setup by consolidating the previously required dual Docker Desktop WSL distributions into a single distribution, eliminating the need to maintain two separate distributions.

The simplification of Docker Desktop’s WSL 2 setup is designed to make the codebase easier to understand and maintain. This enhances the ability to handle failures more effectively and increases the startup speed of Docker Desktop on WSL 2, allowing users to begin their work more quickly.

Streamlining data workflows and relocating data to a different drive on macOS and Windows with Docker Desktop’s WSL 2 backend delivers value in these key areas:

  • Improved performance: By separating data and system files, I/O contention between system operations and data operations is reduced, leading to faster access and processing.
  • Enhanced storage management: Separating data from the main system drives allows for more efficient use of space.
  • Increased flexibility with cross-platform compatibility: Ensuring consistent data workflows across different operating systems (macOS and Windows), especially when using Docker Desktop with WSL 2.
  • Enhanced Docker performance: Docker performs better when processing data on a drive optimized for such tasks, reducing latency and improving container performance.

By implementing these practices, organizations can achieve more efficient, flexible, and high-performing data workflows, leveraging Docker Desktop’s capabilities on both macOS and Windows platforms.

Enhanced Container Isolation (ECI) improvements 

  • Allow any container to mount the Docker socket: Admins can now configure permissions to allow all containers to mount the Docker socket by adding * or *:* to the ECI Docker socket mount permission image list. This simplifies scenarios where broad access is required while maintaining security configuration through centralized control. Learn more in the advanced configuration documentation.
  • Improved support for derived image permissions: The Docker socket mount permissions for derived images feature now supports wildcard tags (e.g., alpine:*), enabling admins to grant permissions for all versions of an image. Previously, specific tags like alpine:latest had to be listed, which was restrictive and required ongoing maintenance. Learn more about managing derived image permissions.

These enhancements reduce administrative overhead while maintaining a high level of security and control, making it easier to manage complex environments.
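
As a rough sketch of what the wildcard socket-mount permission might look like in an admin-settings.json file, the snippet below is an assumption based on the ECI advanced configuration documentation rather than a verified schema; check the linked docs for the exact key names your Docker Desktop version expects.

# Hypothetical admin-settings.json fragment: allow any container image to
# mount the Docker socket under Enhanced Container Isolation.
cat <<'EOF' > admin-settings.json
{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true,
    "dockerSocketMount": {
      "imageList": {
        "images": ["*:*"]
      }
    }
  }
}
EOF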

Upgrade now

The Docker Desktop 4.36 release introduces a suite of features designed to simplify enterprise administration, improve security, and enhance operational efficiency. From enabling centralized policy enforcement with Desktop Settings Management to streamlining deployments with the macOS PKG installer, Docker continues to empower IT administrators with the tools they need to manage Docker Desktop at scale.

The improvements in Enhanced Container Isolation (ECI) and WSL 2 workflows further demonstrate Docker’s commitment to innovation, providing solutions that optimize performance, reduce complexity, and ensure compliance across diverse enterprise environments.  

As businesses adopt increasingly complex development ecosystems, these updates highlight Docker’s focus on meeting the unique needs of enterprise teams, helping them stay agile, secure, and productive. Whether you’re managing access for multiple organizations, deploying tools across platforms, or leveraging enhanced image permissions, Docker Desktop 4.36 sets a new standard for enterprise administration.  

Start exploring these powerful new features today and unlock the full potential of Docker Desktop for your organization.

Learn more

What Are the Latest Docker Desktop Enterprise-Grade Performance Optimizations?

21 November 2024 at 21:34


At Docker, we’re continuously enhancing Docker Desktop to meet the evolving needs of enterprise users. Since Docker Desktop 4.23, where we reduced startup time by 75%, we’ve made significant investments in both performance and stability. These improvements are designed to deliver a faster, more reliable experience for developers across industries. (Read more about our previous performance milestones.)

In this post, we walk through the latest performance enhancements.


Latest performance enhancements

Boost performance with Docker VMM on Apple Silicon Mac

Apple Silicon Mac users, we’re excited to introduce Docker Virtual Machine Manager (Docker VMM) — a powerful new virtualization option designed to enhance performance for Docker Desktop on M1 and M2 Macs. Currently in beta, Docker VMM gives developers a faster, more efficient alternative to the existing Apple Virtualization Framework for many workflows (Figure 1). Docker VMM is available starting in the Docker Desktop 4.35 release.

Screenshot of Docker Desktop showing Virtual Machine Options including Docker VMM (beta), Apple Virtualization Framework, and QEMU (legacy).
Figure 1: Docker virtual machine options.

Why try Docker VMM?

If you’re running native ARM-based images on Docker Desktop, Docker VMM offers a performance boost that could make your development experience smoother and more efficient. With Docker VMM, you can:

  • Experience faster operations: Docker VMM shows improved speeds on essential commands like git status and others, especially when caches are built up. In our benchmarks, Docker VMM eliminates certain slowdowns that can occur with the Apple Virtualization framework.
  • Enjoy flexibility: Not sure if Docker VMM is the right fit? No problem! Docker VMM is still in beta, so you can switch back to the Apple Virtualization framework at any time and try Docker VMM again in future releases as we continue optimizing it.

What about emulated Intel images?

If you’re using Rosetta to emulate Intel images, Docker VMM may not be the ideal choice for now, as it currently doesn’t support Rosetta. For workflows requiring Intel emulation, the Apple Virtualization framework remains the best option, as Docker VMM is optimized for native Arm binaries.

Key benchmarks: Real-world speed gains

Our testing reveals significant improvements when using Docker VMM for common commands, including git status:

  • Initial git status: Docker VMM outperforms, with the first run significantly faster compared to the Apple Virtualization framework (Figure 2).
  • Subsequent git status: With Docker VMM, subsequent runs are also speedier due to more efficient caching (Figure 3).

With Docker VMM, you can say goodbye to frustrating delays and get a faster, more responsive experience right out of the gate.

Graph comparison of git status times for cold caches between the Apple Virtualization Framework (~27 seconds) and Docker VMM (slightly under 10 seconds).
Figure 2: Initial git status times.
Graph comparison of git status times for warm caches between the Apple Virtualization Framework (~3 seconds) and Docker VMM (less than 1 second).
Figure 3: Subsequent git status times.

Say goodbye to QEMU

For users who may have relied on QEMU, note that we’re transitioning it to legacy support. Docker VMM and Apple Virtualization Framework now provide superior performance options, optimized for the latest Apple hardware.

Docker Desktop for Windows on Arm

For specific workloads, particularly those involving parallel computing or Arm-optimized tasks, Arm64 devices can offer significant performance benefits. With Docker Desktop now supporting Windows on Arm, developers can take advantage of these performance boosts while maintaining the familiar Docker Desktop experience, ensuring smooth, efficient operations on this architecture.

Synchronized file shares

Unlike traditional file-sharing mechanisms that can suffer from performance degradation with large projects or frequent file changes, the synchronized file shares feature offers a more stable and performant alternative. It uses efficient synchronization processes to ensure that changes made to files on the host are rapidly reflected in the container, and vice versa, without the bottlenecks or slowdowns experienced with older methods.

This feature is a major performance upgrade for developers who work with shared files between the host and container. It reduces the performance issues related to intensive file system operations and enables smoother, more responsive development workflows. Whether you’re dealing with frequent file changes or working on large, complex projects, synchronized file sharing improves efficiency and ensures that your containers and host remain in sync without delays or excessive resource usage.

Key highlights of synchronized file sharing include:

  • Selective syncing: Developers can choose specific directories to sync, avoiding unnecessary overhead from syncing unneeded files or directories.
  • Faster file changes: It significantly reduces the time it takes for changes made in the host environment to be recognized and applied within containers.
  • Improved performance with large projects: This feature is especially beneficial for large projects with many files, as it minimizes the file-sharing latency that often accompanies such setups.
  • Cross-platform support: Synchronized file sharing is supported on both macOS and Windows, making it versatile across platforms and providing consistent performance.

The synchronized file shares feature is available in Docker Desktop 4.27 and newer releases.

GA for Docker Desktop on Red Hat Enterprise Linux (RHEL)

Red Hat Enterprise Linux (RHEL) is known for its high-performance capabilities and efficient resource utilization, which is essential for developers working with resource-intensive applications. Docker Desktop on RHEL enables enterprises to fully leverage these optimizations, providing a smoother, faster experience from development through to production. Moreover, RHEL’s robust security framework ensures that Docker containers run within a highly secure, certified operating system, maintaining strict security policies, patch management, and compliance standards — vital for industries like finance, healthcare, and government.

Continuous performance improvements in every Docker Desktop release

At Docker, we are committed to delivering continuous performance improvements with every release. Recent updates to Docker Desktop have introduced the following optimizations across file sharing and network performance:

  • Advanced VirtioFS optimizations: The performance journey continued in Docker Desktop 4.33 with further fine-tuning of VirtioFS. We increased the directory cache timeout, optimized host change notifications, and removed extra FUSE operations related to security.capability attributes. Additionally, we introduced an API to clear caches after container termination, enhancing overall file-sharing efficiency and container lifecycle management.
  • Faster read and write operations on bind mounts: In Docker Desktop 4.32, we further enhanced VirtioFS performance by optimizing read and write operations on bind mounts. These changes improved I/O throughput, especially when dealing with large files or high-frequency file operations, making Docker Desktop more responsive and efficient for developers.
  • Enhanced caching for faster performance: Continuing with performance gains, Docker Desktop 4.31 brought significant improvements to VirtioFS file sharing by extending attribute caching timeouts and improving invalidation processes. This reduced the overhead of constant file revalidation, speeding up containerized applications that rely on shared files.

Why these updates matter for you

Each update to Docker Desktop is focused on improving speed and reliability, ensuring it scales effortlessly with your infrastructure. Whether you’re using RHEL, Apple Silicon, or Windows Arm, these performance optimizations help you work faster, reduce downtime, and boost productivity. Stay current with the latest updates to keep your development environment running at peak efficiency.

Share your feedback and help us improve

We’re always looking for ways to enhance Docker Desktop and make it the best tool for your development needs. If you have feedback on performance, ideas for improvement, or issues you’d like to discuss, we’d love to hear from you. Feel free to reach out and schedule time to chat directly with a Docker Desktop Product Manager via Calendly.

Learn more

Extending the Interaction Between AI Agents and Editors

19 November 2024 at 01:28

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

We recently met up with developers at GitHub Universe at the Docker booth. In addition to demonstrating the upcoming Docker agent extension for GitHub Copilot, we also hosted a “Hack with Docker Labs” session. 


To facilitate these sessions, we created a VSCode extension to explore the relationship between agents and tools. We encouraged attendees to think about how agents can change how we interact with tools by mixing tool definitions (anything you can package in a Docker container) with prompts using a simple Markdown-based canvas.

Many of these sessions followed a simple pattern.

  • Choose a tool and describe what you want it to do.
  • Let the agent interact with that tool.
  • Ask the agent to explain what it did or adjust the strategy and try again.

It was great to facilitate these discussions and learn more about how agents are challenging us to interact with tools in new ways.  

Figure 1 shows a short example of a session where we generated a QR code (qrencode). We start by defining both a tool and a prompt in the Markdown file. Then, we pass control over to the agent and let it interact with that tool (the output from the agent pops up on the right-hand side).

Animated gif showing generation of QR code using tool and prompt in Markdown file.
Figure 1: Generating a QR code.

Feel free to create an issue in our repo if you want to learn more.

Editors

This year’s trip to GitHub Universe also felt like an opportunity to reflect on how developer workflows are changing with the introduction of coding assistants. Developers have had language services in the editor for a long time now, but coding assistants that can predict the next most likely tokens have taught us something new: we were all writing more or less the same programs (Figure 2).

Diagram showing process flow to and from Editor, Language Service, and LLM.
Figure 2: Language service interaction.

Other agents

Tools like Cursor, GitHub Copilot Chat, and others are also teaching us new ways in which coding assistants are expanding beyond simple predictions. In this series, we’ve been highlighting tools that typically work in the background. Agents armed with these tools will track other kinds of issues, such as build problems, outdated dependencies, fixable linting violations, and security remediations.

Extending the previous diagram, we can imagine an updated picture where agents send diagnostics and propose code actions, while still offering chat interfaces for other kinds of user input (Figure 3). If the ecosystem has felt closed, get ready for it to open up to new kinds of custom agents.

Diagram showing language service interaction on the left, with extension to Agent and Chat on the right.
Figure 3: Agent extension.

More to come

In the next few posts, we’ll take this series in a new direction and look at how agents can use LSPs to interact with developers in new ways. An agent that handles background tasks, such as updating dependencies or fixing linting violations, can now start to use language services and editors as tools! We think this will be a great way for agents to help developers better understand the changes they’re making and to open up these platforms to input from new kinds of tools.

GitHub Universe was a great opportunity to check in with developers, and we were excited to learn how many more tools developers wanted to bring to their workflows. As always, to follow along with this effort, check out the GitHub repository for this project.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Why Testcontainers Cloud Is a Game-Changer Compared to Docker-in-Docker for Testing Scenarios

14 November 2024 at 22:39

Navigating the complex world of containerized testing environments can be challenging, especially when dealing with Docker-in-Docker (DinD). As a senior DevOps engineer and Docker Captain, I’ve seen firsthand the hurdles that teams face with DinD, and here I’ll share why Testcontainers Cloud is a transformative alternative that’s reshaping the way we handle container-based testing.


Understanding Docker-in-Docker

Docker-in-Docker allows you to run Docker within a Docker container. It’s like Inception for containers — a Docker daemon running inside a Docker container, capable of building and running other containers.

How Docker-in-Docker works

  • Nested Docker daemons: In a typical Docker setup, the Docker daemon runs on the host machine, managing containers directly on the host’s operating system. With DinD, you start a Docker daemon inside a container. This inner Docker daemon operates independently, enabling the container to build and manage its own set of containers.
  • Privileged mode and access to host resources: To run Docker inside a Docker container, the container needs elevated privileges. This is achieved by running the container in privileged mode using the --privileged flag:
docker run --privileged -d docker:dind
    The --privileged flag grants the container almost all the capabilities of the host machine, including access to device files and the ability to perform system administration tasks. Although this setup enables the inner Docker daemon to function, it poses significant security risks, as it can potentially allow the container to affect the host system adversely.
  • Filesystem considerations: The inner Docker daemon stores images and containers within the file system of the DinD container, typically under /var/lib/docker. Because Docker uses advanced file system features like copy-on-write layers, running an inner Docker daemon within a containerized file system (which may itself use such features) can lead to complex interactions and potential conflicts.
  • Cgroups and namespace isolation: Docker relies on Linux kernel features like cgroups and namespaces for resource isolation and management. When running Docker inside a container, these features must be correctly configured to allow nesting. This process can introduce additional complexity in ensuring that resource limits and isolation behave as expected.

Why teams use Docker-in-Docker

  • Isolated build environments: DinD allows each continuous integration (CI) job to run in a clean, isolated Docker environment, ensuring that builds and tests are not affected by residual state from previous jobs or other jobs running concurrently.
  • Consistency across environments: By encapsulating the Docker daemon within a container, teams can replicate the same Docker environment across different stages of the development pipeline, from local development to CI/CD systems.

Challenges with DinD

Although DinD provides certain benefits, it also introduces significant challenges, such as:

  • Security risks: Running containers in privileged mode can expose the host system to security vulnerabilities, as the container gains extensive access to host resources.
  • Stability issues: Nested containers can lead to storage driver conflicts and other instability issues, causing unpredictable build failures.
  • Complex debugging: Troubleshooting issues in a nested Docker environment can be complicated, as it involves multiple layers of abstraction and isolation.

Real-world challenges

Although Docker-in-Docker might sound appealing, it often introduces more problems than it solves. Before diving into those challenges, let’s briefly discuss Testcontainers and its role in modern testing practices.

What is Testcontainers?

Testcontainers is a popular open source library designed to support integration testing by providing lightweight, disposable instances of common databases, web browsers, or any service that can run in a Docker container. It allows developers to write tests that interact with real instances of external resources, rather than relying on mocks or stubs.

Key features of Testcontainers

  • Realistic testing environment: By using actual services in containers, tests are more reliable and closer to real-world scenarios.
  • Isolation: Each test session, or even each individual test, can run in a clean environment, reducing flakiness due to shared state.
  • Easy cleanup: Containers are ephemeral and are automatically cleaned up after tests, preventing resource leaks.

Dependency on the Docker daemon

A core component of Testcontainers’ functionality lies in its interaction with the Docker daemon. Testcontainers orchestrates Docker resources by starting and stopping containers as needed for tests. This tight integration means that access to a Docker environment is essential wherever the tests are run.

The DinD challenge with Testcontainers in CI

When teams try to include Testcontainers-based integration testing in their CI/CD pipelines, they often face the challenge of providing Docker access within the CI environment. Because Testcontainers requires communication with the Docker daemon, many teams resort to using Docker-in-Docker to emulate a Docker environment inside the CI job.

However, this approach introduces significant challenges, especially when trying to scale Testcontainers usage across the organization.

Case study: The CI pipeline nightmare

We had a Jenkins CI pipeline that utilized Testcontainers for integration tests. To provide the necessary Docker environment, we implemented DinD. Initially, it seemed to work fine, but soon we encountered:

  • Unstable builds: Random failures due to storage driver conflicts and issues with nested container layers. The nested Docker environment sometimes clashed with the host, causing unpredictable behavior.
  • Security concerns: Running containers in privileged mode raised red flags during security audits. Because DinD requires privileged mode to function correctly, it posed significant security risks, potentially allowing containers to access the host system.
  • Performance bottlenecks: Builds were slow, and resource consumption was high. The overhead of running Docker within Docker led to longer feedback loops, hindering developer productivity.
  • Complex debugging: Troubleshooting nested containers became time-consuming. Logs and errors were difficult to trace through the multiple layers of containers, making issue resolution challenging.

We spent countless hours trying to patch these issues, but it felt like playing a game of whack-a-mole.

Why Testcontainers Cloud is a better choice

Testcontainers Cloud is a cloud-based service designed to simplify and enhance your container-based testing. By offloading container execution to the cloud, it provides a secure, scalable, and efficient environment for your integration tests.

How Testcontainers Cloud addresses DinD’s shortcomings

Enhanced security

  • No more privileged mode: Eliminates the need for running containers in privileged mode, reducing the attack surface.
  • Isolation: Tests run in isolated cloud environments, minimizing risks to the host system.
  • Compliance-friendly: Easier to pass security audits without exposing the Docker socket or granting elevated permissions.

Improved performance

  • Scalability: Leverage cloud resources to run tests faster and handle higher loads.
  • Resource efficiency: Offloading execution frees up local and CI/CD resources.

Simplified configuration

  • Plug-and-play integration: Minimal changes are required to switch from local Docker to Testcontainers Cloud.
  • No nested complexity: Avoid the intricacies and pitfalls of nested Docker daemons.

Better observability and debugging

  • Detailed logs: Access comprehensive logs through the Testcontainers Cloud dashboard.
  • Real-time monitoring: Monitor containers and resources in real time with enhanced visibility.

Getting started with Testcontainers Cloud

Let’s dive into how you can get the most out of Testcontainers Cloud.

Switching to Testcontainers Cloud allows you to run tests without needing a local Docker daemon:

  • No local Docker required: Testcontainers Cloud handles container execution in the cloud.
  • Consistent environment: Ensures that your tests run in the same environment across different machines.

Additionally, you can easily integrate Testcontainers Cloud into your CI pipeline to run the same tests without scaling your CI infrastructure.

Using Testcontainers Cloud with GitHub Actions

Here’s how you can set up Testcontainers Cloud in your GitHub Actions workflow.

1. Create a new service account

  • Log in to the Testcontainers Cloud dashboard.
  • Navigate to Service Accounts:
    • Create a new service account dedicated to your CI environment.
  • Generate an access token:
    • Copy the access token. Remember, you can only view it once, so store it securely.

2. Set the TC_CLOUD_TOKEN environment variable

  • In GitHub Actions:
    • Go to your repository’s Settings > Secrets and variables > Actions.
    • Add a new Repository Secret named TC_CLOUD_TOKEN and paste the access token.

3. Add Testcontainers Cloud to your workflow

Update your GitHub Actions workflow (.github/workflows/ci.yml) to include the Testcontainers Cloud setup.

Example workflow:

name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # ... other preparation steps (dependencies, compilation, etc.) ...

      - name: Set up Java
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Testcontainers Cloud Client
        uses: atomicjar/testcontainers-cloud-setup-action@v1
        with:
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      # ... steps to execute your tests ...
      - name: Run Tests
        run: ./mvnw test

Notes:

  • The atomicjar/testcontainers-cloud-setup-action GitHub Action automates the installation and authentication of the Testcontainers Cloud Agent in your CI environment.
  • Ensure that your TC_CLOUD_TOKEN is kept secure using GitHub’s encrypted secrets.

Clarifying the components: Testcontainers Cloud Agent and Testcontainers Cloud

To make everything clear:

  • Testcontainers Cloud Agent (CLI in CI environments): In CI environments like GitHub Actions, you use the Testcontainers Cloud Agent (installed via the GitHub Action or command line) to connect your CI jobs to Testcontainers Cloud.
  • Testcontainers Cloud: The cloud service that runs your containers, offloading execution from your CI environment.

In CI environments:

  • Use the Testcontainers Cloud Agent (CLI) within your CI jobs.
  • Authenticate using the TC_CLOUD_TOKEN.
  • Tests executed in the CI environment will use Testcontainers Cloud.

Monitoring and debugging

Take advantage of the Testcontainers Cloud dashboard:

  • Session logs: View logs for individual test sessions.
  • Container details: Inspect container statuses and resource usage.
  • Debugging: Access container logs and output for troubleshooting.

Why developers prefer Testcontainers Cloud over DinD

Real-world impact

After integrating Testcontainers Cloud, our team observed the following:

  • Faster build times: Tests ran significantly faster due to optimized resource utilization.
  • Reduced maintenance: Less time spent on debugging and fixing CI pipeline issues.
  • Enhanced security: Eliminated the need for privileged mode, satisfying security audits.
  • Better observability: Improved logging and monitoring capabilities.

Addressing common concerns

Security and compliance

  • Data isolation: Each test runs in an isolated environment.
  • Encrypted communication: Secure data transmission.
  • Compliance: Meets industry-standard security practices.

Cost considerations

  • Efficiency gains: Time saved on maintenance offsets the cost.
  • Resource optimization: Reduces the need for expensive CI infrastructure.

Compatibility

  • Multi-language support: Works with Java, Node.js, Python, Go, .NET, and more.
  • Seamless integration: Minimal changes required to existing test code.

Conclusion

Switching to Testcontainers Cloud, with the help of the Testcontainers Cloud Agent, has been a game-changer for our team and many others in the industry. It addresses the key pain points associated with Docker-in-Docker and offers a secure, efficient, and developer-friendly alternative.

Key takeaways

  • Security: Eliminates the need for privileged containers and Docker socket exposure.
  • Performance: Accelerates test execution with scalable cloud resources.
  • Simplicity: Simplifies configuration and reduces maintenance overhead.
  • Observability: Enhances debugging with detailed logs and monitoring tools.

As someone who has navigated these challenges, I recommend trying Testcontainers Cloud. It’s time to move beyond the complexities of DinD and adopt a solution designed for modern development workflows.

Additional resources

Learn How to Optimize Docker Hub Costs With Our Usage Dashboards

13 November 2024 at 21:46

Effective infrastructure management is crucial for organizations using Docker Hub. Without a clear understanding of resource consumption, unexpected usage can emerge and skyrocket. This is particularly true if pulls and storage needs are not budgeted and forecasted correctly. By implementing proactive cost controls and monitoring usage patterns, development teams can sustain their Docker Hub usage while keeping expenses under control.

To support these goals, we’ve introduced new Docker Hub Usage dashboards, offering organizations the ability to access and analyze their usage patterns for storage and pulls. 

Docker Hub’s Usage dashboards put you in control, giving visibility into every pull and image your Docker systems request. Each pull and cache becomes a deliberate choice — not a random event — so you can make every byte count. With clear insights into what’s happening and why, you can design more efficient, optimized systems.


Reclaim control and manage technical resources by kicking bad habits

Figure 1: Docker Hub Usage dashboards.

The Docker Hub Usage dashboards (Figure 1) provide valuable insights, allowing teams to track peaks and valleys, detect high usage periods, and identify the images and repositories driving the most consumption. This visibility not only aids in managing usage but also strengthens continuous improvement efforts across your software supply chain, helping teams build applications more efficiently and sustainably. 

This information helps development teams to stay on top of challenges, such as: 

  • Redundant pulls and misconfigured repositories: These can quickly and quietly drive up technical expenses while falling outside the most relevant or critical use cases. Docker Hub’s Usage dashboards can help development teams identify these patterns and optimize accordingly. They also let you view usage trends across IPs and users, which helps pinpoint high-consumption areas and ensure accountability for resource management across the organization.
  • Poor caching management: Repository insights and image tagging help customers assess internal usage patterns, such as frequently accessed images, where there might be an opportunity to improve caching. With proper governance models, organizations can also establish policies and processes that reduce the variability of resource usage as a whole. This goes beyond tracking seasonal usage patterns: the goal is to design more predictable usage so you can budget accordingly.
  • Accidental automation: Automated system activities that run more often than intended can quietly inflate your usage. For example, a CI/CD pipeline or automated script may be configured to pull images on every build instead of only when the version actually changes.

Usage dashboards can help you identify these inefficiencies by showing detailed pull data associated with automated tooling. This information can help your teams quickly identify and adjust misconfigured systems, fine-tune automations to only pull when needed, and ultimately focus on the most relevant use cases for your organization, avoiding accidental overuse of resources:

Details of Docker Hub Usage Dashboard with columns for Date/hour, Username, Repository library, IPs, Version checks, pulls, and more.
Figure 2: Details from the Usage dashboards.
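
Where the dashboards surface redundant pulls from CI, one common mitigation is to serve repeated Docker Hub pulls from a local pull-through cache. The sketch below uses the open source registry image with placeholder hostnames; adapt it to your environment.

# Run a local pull-through cache for Docker Hub.
docker run -d --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Point the Docker daemon on CI hosts at the mirror (restart the daemon after).
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.internal:5000"]
}
EOF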

Docker Hub’s Usage dashboards offer a comprehensive view of your usage data, including downloadable CSV reports that include metrics such as pull counts, repository names, IP addresses, and version checks (Figure 2). This granular approach allows your organization to gain valuable insights and trend data to help optimize your team’s workflows and inform policies. 
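
As a rough example of working with an exported report, standard command-line tools are enough to surface the heaviest repositories. The column positions below are assumptions based on the fields shown in Figure 2, so adjust them to match the actual CSV header.

# Top 10 repositories by total pulls in the exported usage CSV
# (assumes column 3 = repository and column 6 = pulls; check the header first).
awk -F',' 'NR > 1 { pulls[$3] += $6 } END { for (r in pulls) print pulls[r], r }' \
  docker-hub-usage.csv | sort -rn | head -10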

Integrate robust operational principles into your development pipeline by leveraging these data-driven reports and maintain control over resource consumption and operational efficiency with Docker Hub. 

Learn more

Better Together: Understanding the Difference Between Sign-In Enforcement and SSO

12 November 2024 at 21:57

Docker Desktop’s single sign-on (SSO) and sign-in enforcement (also called login enforcement) features work together to enhance security and ease of use. SSO allows users to log in with corporate credentials, whereas login enforcement ensures every user is authenticated, giving IT tighter control over compliance. In this post, we’ll define each of these features, explain their unique benefits, and show how using them together streamlines management and improves your Docker Desktop experience.


Before diving into the benefits of login alongside SSO, let’s clarify three related terms: login, single sign-on (SSO), and enforced login.

  • Login: Logging in connects users to Docker’s suite of tools, enabling access to personalized settings, team resources, and features like Docker Scout and Docker Build Cloud. By default, members of an organization can use Docker Desktop without signing in. Logging in can be done through SSO or by using Docker-specific credentials.
  • Single sign-on (SSO): SSO allows users to access Docker using their organization’s central authentication system, letting teams streamline access across multiple platforms with one set of credentials. SSO standardizes and secures login and supports automation around provisioning but does not automatically log in users unless enforced.
  • Enforced login: This policy, configured by administrators, ensures users are logged in by requiring login credentials before accessing Docker Desktop and associated tools. With enforced login, teams gain consistent access to Docker’s productivity and security features, minimizing gaps in visibility and control.

With these definitions in mind, here’s why being logged in matters, how SSO simplifies login, and how login enforcement ensures your team gets the full benefit of Docker’s powerful development tools.

Why logging in matters for admins and compliance teams

Enforcing sign-in with corporate credentials ensures that all users accessing Docker Desktop are verified and utilizing the benefits of your Docker Business subscription while adding a layer of security to safeguard your software supply chain. This policy strengthens your organization’s security posture and enables Docker to provide detailed usage insights, helping compliance teams track engagement and adoption. 

Enforced login will support cloud-based control over settings, allowing admins to manage application configurations across the organization more effectively. By requiring login, your organization benefits from greater transparency, control, and alignment with compliance standards. 

When everyone in your organization signs in with proper credentials:

  • Access controls for shared resources become more reliable, allowing administrators to enforce policies and permissions consistently.
  • Developers stay connected to their workspaces and resources, minimizing disruptions.
  • Desktop Insights Dashboard provides admins actionable insights into usage, from feature adoption to image usage trends and login activity, helping administrators optimize team performance and security.
  • Teams gain full visibility and access to Docker Scout’s security insights, which only function with logged-in accounts.

Read more about the benefits of login on our blog post, Maximizing Docker Desktop: How Signing In Unlocks Advanced Features.

Options for enforcing sign-in

Docker provides three options to help administrators enforce sign-in:

  • Registry key method (Windows Only): Integrates seamlessly with Windows, letting IT enforce login policies within familiar registry settings, saving time on configuration. 
  • Plist or config profiles method (Mac): Provides an easy way for IT to manage access on macOS, ensuring policy consistency across Apple devices without additional tools. 
  • Registry.json method (all platforms): Works across Windows, macOS, and Linux, allowing IT to enforce login on all platforms with a single, flexible configuration file, streamlining policy management for diverse environments.

Each method helps IT secure access, restrict Docker Desktop to authorized users, and maintain compliance across all systems. Note that you can enforce sign-in without setting up SSO. Read the documentation to learn more about Docker’s sign-in enforcement methods.
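
For example, the registry.json method comes down to placing a small JSON file on each machine before Docker Desktop starts. Here is a minimal sketch for macOS; replace myorg with your Docker organization name and see the documentation for the Windows and Linux file locations:

# Require sign-in to the "myorg" Docker organization on this machine.
sudo mkdir -p "/Library/Application Support/com.docker"
echo '{"allowedOrgs": ["myorg"]}' | sudo tee "/Library/Application Support/com.docker/registry.json"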

Single sign-on (SSO) 

Docker Desktop’s SSO capabilities allow organizations to streamline access by integrating with corporate identity providers, ensuring that only authorized team members can access Docker resources using their work credentials. This integration enhances security by eliminating the need for separate Docker-specific passwords, reducing the risk of unauthorized access to critical development tools. With SSO, admins can enforce consistent login policies across teams, simplify user management, and gain greater control over who accesses Docker Desktop. Additionally, SSO enables compliance teams to track access and usage better, aligning with organizational security standards and improving overall security posture.

Docker Desktop supports SSO integrations with a variety of identity providers (IdPs), including Okta, OneLogin, Auth0, and Microsoft Entra ID. By integrating with these IdPs, organizations can streamline user authentication, enhance security, and maintain centralized access control across their Docker environments.

Differences between SSO enforcement and SSO enablement

SSO and SCIM give your company more control over how users log in and how they attach to your organization and Docker subscription, but they do not require users to sign in to your organization when using Docker Desktop. Without sign-in enforcement, users can continue to use Docker Desktop without logging in, or with their personal Docker IDs or subscriptions, which prevents Docker from providing you with insights into their usage and control over the application.

SSO enforcement usually applies to identity management across multiple applications, enforcing a single, centralized login for a suite of apps or services. However, a registry key or other local login enforcement mechanism typically applies only to that specific application (e.g., Docker Desktop) and doesn’t control access across different services.

Better together: Sign-in enforcement and SSO 

While SSO enables seamless access to Docker for those who choose to log in, enforcing login ensures that users fully benefit from Docker’s productivity and security features.

Docker’s SSO integration is designed to simplify enterprise user management, allowing teams to access Docker with their organization’s centralized credentials. This streamlines onboarding and minimizes password management overhead, enhancing security across the board. However, SSO alone doesn’t require users to log in — it simply makes it more convenient and secure. Without enforced login, users might bypass the sign-in process, missing out on Docker’s full benefits, particularly in areas of security and control.

By coupling SSO with login enforcement, organizations strengthen their Registry Access Management (RAM), ensuring access is restricted to approved registries, boosting image compliance, and centralizing control. Encouraging login alongside SSO ensures teams enjoy a seamless experience while unlocking Docker’s complete suite of features.

Learn more

Accelerating AI Development with the Docker AI Catalog

12 November 2024 at 21:38

Developers are increasingly expected to integrate AI capabilities into their applications, but they also face many challenges. The steep learning curve, coupled with an overwhelming array of tools and frameworks, makes the process daunting. Docker aims to bridge this gap with the Docker AI Catalog, a curated experience designed to simplify AI development and empower both developers and publishers.


Why Docker for AI?

Docker and container technology have been central to developers at the forefront of AI applications for the past few years. Now, Docker is doubling down on that effort with our AI Catalog. Developers using Docker’s suite of products are often responsible for building, deploying, and managing complex applications, and now they must also navigate generative AI (GenAI) technologies, such as large language models (LLMs), vector databases, and GPU support.

For developers, the AI Catalog simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. This approach removes the hassle of evaluating numerous tools and configurations, allowing developers to focus on building innovative AI applications.

Key benefits for development teams

The Docker AI Catalog is tailored to help users overcome common hurdles in the evolving AI application development landscape, such as:

  • Decision overload: The GenAI ecosystem is crowded with new tools and frameworks. The Docker AI Catalog simplifies the decision-making process by offering a curated list of trusted content and container images, so developers don’t have to wade through endless options.
  • Steep learning curve: With the rise of new technologies like LLMs and retrieval-augmented generation (RAG), the learning curve can be overwhelming. Docker provides an all-in-one resource to help developers quickly get up to speed.
  • Complex configurations preventing production readiness: Running AI applications often requires specialized hardware configurations, especially with GPUs. Docker’s AI stacks make this process more accessible, ensuring that developers can harness the full power of these resources without extensive setup.

The result? Shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Empowering publishers

For Docker verified publishers, the AI Catalog provides a platform to differentiate themselves in a crowded market. Independent software vendors (ISVs) and open source contributors can promote their content, gain insights into adoption, and improve visibility to a growing community of AI developers.

Key features for publishers include:

  • Increased discoverability: Publishers can highlight their AI content within a trusted ecosystem used by millions of developers worldwide.
  • Metrics and insights: Verified publishers gain valuable insights into the performance of their content, helping them optimize strategies and drive engagement.

Unified experience for AI application development

The AI Catalog is more than just a repository of AI tools. It’s a unified ecosystem designed to foster collaboration between developers and publishers, creating a path forward for more innovative approaches to building applications supported by AI capabilities. Developers get easy access to essential AI tools and content, while publishers gain the visibility and feedback they need to thrive in a competitive marketplace.

With Docker’s trusted platform, development teams can build AI applications confidently, knowing they have access to the most relevant and reliable tools available.

The road ahead: What’s next?

Docker will launch the AI Catalog in preview on November 12, 2024, alongside a joint webinar with MongoDB. This initiative will further Docker’s role as a leader in AI application development, ensuring that developers and publishers alike can take full advantage of the opportunities presented by AI tools.

Stay tuned for more updates and prepare to dive into a world of possibilities with the Docker AI Catalog. Whether you’re an AI developer seeking to streamline your workflows or a publisher looking to grow your audience, Docker has the tools and support you need to succeed.

Ready to simplify your AI development process? Explore the AI Catalog and get access to trusted content that will accelerate your development journey. Start building smarter, faster, and more efficiently.

For publishers, now is the perfect time to join the AI Catalog and gain visibility for your content. Become a trusted source in the AI development space and connect with millions of developers looking for the right tools to power their next breakthrough.

Learn more

Dockerize WordPress: Simplify Your Site’s Setup and Deployment

5 November 2024 at 22:15

If you’ve ever been tangled in the complexities of setting up a WordPress environment, you’re not alone. WordPress powers more than 40% of all websites, making it the world’s most popular content management system (CMS). Its versatility is unmatched, but traditional local development setups like MAMP, WAMP, or XAMPP can lead to inconsistencies and the infamous “it works on my machine” problem.

As projects scale and teams grow, the need for a consistent, scalable, and efficient development environment becomes critical. That’s where Docker comes into play, revolutionizing how we develop and deploy WordPress sites. To make things even smoother, we’ll integrate Traefik, a modern reverse proxy that automatically obtains TLS certificates, ensuring that your site runs securely over HTTPS. Traefik is available as a Docker Official Image from Docker Hub.

In this comprehensive guide, I’ll show how to Dockerize your WordPress site using real-world examples. We’ll dive into creating Dockerfiles, containerizing existing WordPress instances — including migrating your data — and setting up Traefik for automatic TLS certificates. Whether you’re starting fresh or migrating an existing site, this tutorial has you covered.

Let’s dive in!

Dockerize WordPress App

Why should you containerize your WordPress site?

Containerizing your WordPress site offers a multitude of benefits that can significantly enhance your development workflow and overall site performance.

Increased page load speed

Docker containers are lightweight and efficient. By packaging your application and its dependencies into containers, you reduce overhead and optimize resource usage. This can lead to faster page load times, improving user experience and SEO rankings.

Efficient collaboration and version control

With Docker, your entire environment is defined as code. This ensures that every team member works with the same setup, eliminating environment-related discrepancies. Version control systems like Git can track changes to your Dockerfiles and to wordpress-traefik-letsencrypt-compose.yml, making collaboration seamless.

Easy scalability

Scaling your WordPress site to handle increased traffic becomes straightforward with Docker and Traefik. You can spin up multiple Docker containers of your application, and Traefik will manage load balancing and routing, all while automatically handling TLS certificates.

Simplified environment setup

Setting up your development environment becomes as simple as running a few Docker commands. No more manual installations or configurations — everything your application needs is defined in your Docker configuration files.

Simplified updates and maintenance

Updating WordPress or its dependencies is a breeze. Update your Docker images, rebuild your containers, and you’re good to go. Traefik ensures that your routes and certificates are managed dynamically, reducing maintenance overhead.

Getting started with WordPress, Docker, and Traefik

Before we begin, let’s briefly discuss what Docker and Traefik are and how they’ll revolutionize your WordPress development workflow.

  • Docker is a cloud-native development platform that simplifies the entire software development lifecycle by enabling developers to build, share, test, and run applications in containers. It streamlines the developer experience while providing built-in security, collaboration tools, and scalable solutions to improve productivity across teams.
  • Traefik is a modern reverse proxy and load balancer designed for microservices. It integrates seamlessly with Docker and can automatically obtain and renew TLS certificates from Let’s Encrypt.

How long will this take?

Setting up this environment might take around 45-60 minutes, especially if you’re integrating Traefik for automatic TLS certificates and migrating an existing WordPress site.

Tools you’ll need

  • Docker Desktop: If you don’t already have the latest version installed, download and install Docker Desktop.
  • A domain name: Required for Traefik to obtain TLS certificates from Let’s Encrypt.
  • Access to DNS settings: To point your domain to your server’s IP address.
  • Code editor: Your preferred code editor for editing configuration files.
  • Command-line interface (CLI): Access to a terminal or command prompt.
  • Existing WordPress data: If you’re containerizing an existing site, ensure you have backups of your WordPress files and MySQL database.

What’s the WordPress Docker Bitnami image?

To simplify the process, we’ll use the Bitnami WordPress image from Docker Hub, which comes pre-packaged with a secure, optimized environment for WordPress. This reduces configuration time and ensures your setup is up to date with the latest security patches.

Using the Bitnami WordPress image streamlines your setup process by:

  • Simplifying configuration: Bitnami images come with sensible defaults and configurations that work out of the box, reducing the time spent on setup.
  • Enhancing security: The images are regularly updated to include the latest security patches, minimizing vulnerabilities.
  • Ensuring consistency: With a standardized environment, you avoid the “it works on my machine” problem and ensure consistency across development, staging, and production.
  • Including additional tools: Bitnami often includes helpful tools and scripts for backups, restores, and other maintenance tasks.

By choosing the Bitnami WordPress image, you can leverage a tested and optimized environment, reducing the risk of configuration errors and allowing you to focus more on developing your website.

Key features of Bitnami WordPress Docker image:

  • Optimized for production: Configured with performance and security in mind.
  • Regular updates: Maintained to include the latest WordPress version and dependencies.
  • Ease of use: Designed to be easy to deploy and integrate with other services, such as databases and reverse proxies.
  • Comprehensive documentation: Offers guides and support to help you get started quickly.

Why we use Bitnami in the examples:

In our Docker Compose configurations, we specified:

WORDPRESS_IMAGE_TAG=bitnami/wordpress:6.6.2

This indicates that we’re using the Bitnami WordPress image, version 6.6.2, matching the .env example below. The Bitnami image aligns well with our goals for a secure, efficient, and easy-to-manage WordPress environment, especially when integrating with Traefik for automatic TLS certificates.

By leveraging the Bitnami WordPress Docker image, you’re choosing a robust and reliable foundation for your WordPress projects. This approach allows you to focus on building great websites without worrying about the underlying infrastructure.

How to Dockerize an existing WordPress site with Traefik

Let’s walk through dockerizing your WordPress site using practical examples, including your .env and wordpress-traefik-letsencrypt-compose.yml configurations. We’ll also cover how to incorporate your existing data into the Docker containers.

Step 1: Preparing your environment variables

First, create a .env file in the same directory as your wordpress-traefik-letsencrypt-compose.yml file. This file will store all your environment variables.

Example .env file:

# Traefik Variables
TRAEFIK_IMAGE_TAG=traefik:2.9
TRAEFIK_LOG_LEVEL=WARN
TRAEFIK_ACME_EMAIL=your-email@example.com
TRAEFIK_HOSTNAME=traefik.yourdomain.com
# Basic Authentication for Traefik Dashboard
# Username: traefikadmin
# Passwords must be encoded using BCrypt https://hostingcanada.org/htpasswd-generator/
TRAEFIK_BASIC_AUTH=traefikadmin:$$2y$$10$$EXAMPLEENCRYPTEDPASSWORD

# WordPress Variables
WORDPRESS_MARIADB_IMAGE_TAG=mariadb:11.4
WORDPRESS_IMAGE_TAG=bitnami/wordpress:6.6.2
WORDPRESS_DB_NAME=wordpressdb
WORDPRESS_DB_USER=wordpressdbuser
WORDPRESS_DB_PASSWORD=your-db-password
WORDPRESS_DB_ADMIN_PASSWORD=your-db-admin-password
WORDPRESS_TABLE_PREFIX=wpapp_
WORDPRESS_BLOG_NAME=Your Blog Name
WORDPRESS_ADMIN_NAME=AdminFirstName
WORDPRESS_ADMIN_LASTNAME=AdminLastName
WORDPRESS_ADMIN_USERNAME=admin
WORDPRESS_ADMIN_PASSWORD=your-admin-password
WORDPRESS_ADMIN_EMAIL=admin@yourdomain.com
WORDPRESS_HOSTNAME=wordpress.yourdomain.com
WORDPRESS_SMTP_ADDRESS=smtp.your-email-provider.com
WORDPRESS_SMTP_PORT=587
WORDPRESS_SMTP_USER_NAME=your-smtp-username
WORDPRESS_SMTP_PASSWORD=your-smtp-password

Notes:

  • Replace placeholder values (e.g., your-email@example.com, your-db-password) with your actual credentials.
  • Do not commit this file to version control if it contains sensitive information.
  • Use a password hashing tool to generate the bcrypt value for TRAEFIK_BASIC_AUTH. For example, you can use the htpasswd generator linked above, or generate it locally as shown below.
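
Here is a minimal sketch using the htpasswd tool (from the apache2-utils or httpd-tools package); the doubled dollar signs keep Docker Compose from interpolating the hash:

# Generate a bcrypt hash for TRAEFIK_BASIC_AUTH and escape "$" as "$$" for the .env file.
htpasswd -nbB traefikadmin 'your-strong-password' | sed -e 's/\$/\$\$/g'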

Step 2: Creating the Docker Compose file

Create a wordpress-traefik-letsencrypt-compose.yml file that defines your services, networks, and volumes. This YAML file is crucial for configuring your WordPress installation through Docker.

Example wordpress-traefik-letsencrypt-compose.yml.

networks:
  wordpress-network:
    external: true
  traefik-network:
    external: true

volumes:
  mariadb-data:
  wordpress-data:
  traefik-certificates:

services:
  mariadb:
    image: ${WORDPRESS_MARIADB_IMAGE_TAG}
    volumes:
      - mariadb-data:/var/lib/mysql
    environment:
      MARIADB_DATABASE: ${WORDPRESS_DB_NAME}
      MARIADB_USER: ${WORDPRESS_DB_USER}
      MARIADB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      MARIADB_ROOT_PASSWORD: ${WORDPRESS_DB_ADMIN_PASSWORD}
    networks:
      - wordpress-network
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 60s
    restart: unless-stopped

  wordpress:
    image: ${WORDPRESS_IMAGE_TAG}
    volumes:
      - wordpress-data:/bitnami/wordpress
    environment:
      WORDPRESS_DATABASE_HOST: mariadb
      WORDPRESS_DATABASE_PORT_NUMBER: 3306
      WORDPRESS_DATABASE_NAME: ${WORDPRESS_DB_NAME}
      WORDPRESS_DATABASE_USER: ${WORDPRESS_DB_USER}
      WORDPRESS_DATABASE_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX}
      WORDPRESS_BLOG_NAME: ${WORDPRESS_BLOG_NAME}
      WORDPRESS_FIRST_NAME: ${WORDPRESS_ADMIN_NAME}
      WORDPRESS_LAST_NAME: ${WORDPRESS_ADMIN_LASTNAME}
      WORDPRESS_USERNAME: ${WORDPRESS_ADMIN_USERNAME}
      WORDPRESS_PASSWORD: ${WORDPRESS_ADMIN_PASSWORD}
      WORDPRESS_EMAIL: ${WORDPRESS_ADMIN_EMAIL}
      WORDPRESS_SMTP_HOST: ${WORDPRESS_SMTP_ADDRESS}
      WORDPRESS_SMTP_PORT: ${WORDPRESS_SMTP_PORT}
      WORDPRESS_SMTP_USER: ${WORDPRESS_SMTP_USER_NAME}
      WORDPRESS_SMTP_PASSWORD: ${WORDPRESS_SMTP_PASSWORD}
    networks:
      - wordpress-network
      - traefik-network
    healthcheck:
      test: timeout 10s bash -c ':> /dev/tcp/127.0.0.1/8080' || exit 1
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 90s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wordpress.rule=Host(`${WORDPRESS_HOSTNAME}`)"
      - "traefik.http.routers.wordpress.service=wordpress"
      - "traefik.http.routers.wordpress.entrypoints=websecure"
      - "traefik.http.services.wordpress.loadbalancer.server.port=8080"
      - "traefik.http.routers.wordpress.tls=true"
      - "traefik.http.routers.wordpress.tls.certresolver=letsencrypt"
      - "traefik.http.services.wordpress.loadbalancer.passhostheader=true"
      - "traefik.http.routers.wordpress.middlewares=compresstraefik"
      - "traefik.http.middlewares.compresstraefik.compress=true"
      - "traefik.docker.network=traefik-network"
    restart: unless-stopped
    depends_on:
      mariadb:
        condition: service_healthy
      traefik:
        condition: service_healthy

  traefik:
    image: ${TRAEFIK_IMAGE_TAG}
    command:
      - "--log.level=${TRAEFIK_LOG_LEVEL}"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--ping=true"
      - "--ping.entrypoint=ping"
      - "--entryPoints.ping.address=:8082"
      - "--entryPoints.web.address=:80"
      - "--entryPoints.websecure.address=:443"
      - "--providers.docker=true"
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.exposedByDefault=false"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.email=${TRAEFIK_ACME_EMAIL}"
      - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
      - "--metrics.prometheus=true"
      - "--metrics.prometheus.buckets=0.1,0.3,1.2,5.0"
      - "--global.checkNewVersion=true"
      - "--global.sendAnonymousUsage=false"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik-certificates:/etc/traefik/acme
    networks:
      - traefik-network
    ports:
      - "80:80"
      - "443:443"
    healthcheck:
      test: ["CMD", "wget", "http://localhost:8082/ping","--spider"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`${TRAEFIK_HOSTNAME}`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.entrypoints=websecure"
      - "traefik.http.services.dashboard.loadbalancer.server.port=8080"
      - "traefik.http.routers.dashboard.tls=true"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.services.dashboard.loadbalancer.passhostheader=true"
      - "traefik.http.routers.dashboard.middlewares=authtraefik"
      - "traefik.http.middlewares.authtraefik.basicauth.users=${TRAEFIK_BASIC_AUTH}"
      - "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=web"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    restart: unless-stopped

Notes:

  • Networks: We’re using external networks (wordpress-network and traefik-network). We’ll create these networks before deploying.
  • Volumes: Volumes are defined for data persistence.
  • Services: We’ve defined mariadb, wordpress, and traefik services with the necessary configurations.
  • Health checks: Ensure that services are healthy before dependent services start.
  • Labels: Configure Traefik routing, HTTPS settings, and enable the dashboard with basic authentication.

Step 3: Creating external networks

Before deploying your Docker Compose configuration, you need to create the external networks specified in your wordpress-traefik-letsencrypt-compose.yml.

Run the following commands to create the networks:

docker network create traefik-network
docker network create wordpress-network

Step 4: Deploying your WordPress site

Deploy your WordPress site using Docker Compose with the following command (Figure 1):

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d
Screenshot of running "docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d" command.
Figure 1: Using Docker Compose to deploy your WordPress site.

Explanation:

  • -f wordpress-traefik-letsencrypt-compose.yml: Specifies the Docker Compose file to use.
  • -p website: Sets the project name to website.
  • up -d: Builds, (re)creates, and starts containers in detached mode.

Step 5: Verifying the deployment

Check that all services are running (Figure 2):

docker ps
Screenshot of services running, showing columns for Container ID, Image, Command, Created, Status, Ports, and Names.
Figure 2: Services running.

You should see the mariadb, wordpress, and traefik services up and running.
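
If other containers are running on the host, you can also scope the check to this Compose project:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website ps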

Step 6: Accessing your WordPress site and Traefik dashboard

WordPress site: Navigate to https://wordpress.yourdomain.com in your browser. Type in the username and password you set earlier in the .env file and click the Log In button. You should see your WordPress site running over HTTPS, with a valid TLS certificate automatically obtained by Traefik (Figure 3).

Screenshot of WordPress dashboard showing Site Health Status, At A Glance, Quick Draft, and other informational sections.
Figure 3: WordPress dashboard.

Important: To obtain TLS certificates from Let’s Encrypt, you need to create A records in your external DNS zone that point to the public IP address of the server where Traefik runs. If you’ve just created these records, wait before starting the stack, because DNS propagation can take anywhere from a few minutes to 48 hours, and sometimes longer.
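
A quick way to check propagation from your terminal before starting the stack (replace the hostnames with your own):

dig +short A wordpress.yourdomain.com
dig +short A traefik.yourdomain.com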

  • Traefik dashboard: Access the Traefik dashboard at https://traefik.yourdomain.com. You’ll be prompted for authentication. Use the username and password specified in your .env file (Figure 4).
Screenshot of Traefik dashboard showing information on Entrypoints, Routers, Services, and Middleware.
Figure 4: Traefik dashboard.

Step 7: Incorporating your existing WordPress data

If you’re migrating an existing WordPress site, you’ll need to incorporate your existing files and database into the Docker containers.

Step 7.1: Restoring WordPress files

Copy your existing WordPress files into the wordpress-data volume.

Option 1: Using Docker volume mapping

Modify your wordpress-traefik-letsencrypt-compose.yml to map your local WordPress files directly:

volumes:
  - ./your-wordpress-files:/bitnami/wordpress

Option 2: Copying files into the running container

Assuming your WordPress backup is in ./wordpress-backup and your WordPress container is named website-wordpress-1 (Compose names containers as <project>-<service>-<index>; confirm the exact name with docker ps), run:

docker cp ./wordpress-backup/. website-wordpress-1:/bitnami/wordpress/

Step 7.2: Importing your database

Export your existing WordPress database using mysqldump or phpMyAdmin.

Example:

mysqldump -u your_db_user -p your_db_name > wordpress_db_backup.sql

Copy the database backup into the MariaDB container:

docker cp wordpress_db_backup.sql website-mariadb-1:/wordpress_db_backup.sql

Access the MariaDB container:

docker exec -it website-mariadb-1 bash

Import the database (the MariaDB environment variables defined in the Compose file are available inside the container):

mariadb -u root -p"${MARIADB_ROOT_PASSWORD}" "${MARIADB_DATABASE}" < wordpress_db_backup.sql

Step 7.3: Update wp-config.php (if necessary)

Because we’re using environment variables, WordPress should automatically connect to the database. However, if you have custom configurations, ensure they match the settings in your .env file.

Note: The Bitnami WordPress image manages wp-config.php automatically based on environment variables. If you need to customize it further, you can create a custom Dockerfile.

Step 8: Creating a custom Dockerfile (optional)

If you need to customize the WordPress image further, such as installing additional PHP extensions or modifying configuration files, create a Dockerfile in your project directory.

Example Dockerfile:

# Use the Bitnami WordPress image as the base
FROM bitnami/wordpress:6.6.2

# Install additional PHP extensions if needed
# RUN install_packages php7.4-zip php7.4-mbstring

# Copy custom wp-content (if not using volume mapping)
# COPY ./wp-content /bitnami/wordpress/wp-content

# Set working directory
WORKDIR /bitnami/wordpress

# Expose port 8080
EXPOSE 8080

Build the custom image:

Modify your wordpress-traefik-letsencrypt-compose.yml to build from the Dockerfile:

wordpress:
  build: .
  # Rest of the configuration

Then, rebuild your containers:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d --build

Step 9: Customizing WordPress within Docker

Adding themes and plugins

Because we’ve mapped the wordpress-data volume, any changes you make within the WordPress container (like installing plugins or themes) will persist across container restarts.

  • Via WordPress admin dashboard: Install themes and plugins as you normally would through the WordPress admin interface (Figure 5).
Screenshot of WordPress admin dashboard showing plugin choices such as Classic Editor, Akismet Anti-spam, and Jetpack.
Figure 5: Adding plugins.
  • Manually: Access the container and place your themes or plugins directly.

Example:

docker exec -it website-wordpress-1 bash
cd /bitnami/wordpress/wp-content/themes
# Add your theme files here

Managing and scaling WordPress with Docker and Traefik

Scaling your WordPress service

To handle increased traffic, you might want to scale your WordPress instances.

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d --scale wordpress=3

Traefik will automatically detect the new instances and load balance traffic between them.

Note: Ensure that your WordPress setup supports scaling. You might need to externalize session storage or use a shared filesystem for media uploads.

Updating services

To update your services to the latest images:

Pull the latest images:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website pull

Recreate containers:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d

Monitoring and logs

Docker logs:
View logs for a specific service:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website logs -f wordpress

Traefik dashboard:
Use the Traefik dashboard to monitor routing, services, and health checks.

Optimizing your WordPress Docker setup

Implementing caching with Redis

To improve performance, you can add Redis for object caching.

Update wordpress-traefik-letsencrypt-compose.yml:

services:
  redis:
    image: redis:alpine
    networks:
      - wordpress-network
    restart: unless-stopped

Configure WordPress to use Redis:

  • Install a Redis caching plugin like Redis Object Cache.
  • Configure it to connect to the redis service (a WP-CLI sketch follows this list).
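
As a rough sketch, you can set the Redis connection constants with WP-CLI, which the Bitnami image bundles (the container name may differ; confirm it with docker ps):

# Point the Redis Object Cache plugin at the "redis" service defined in Compose.
docker exec -it website-wordpress-1 wp config set WP_REDIS_HOST redis --type=constant
docker exec -it website-wordpress-1 wp config set WP_REDIS_PORT 6379 --raw --type=constant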

Security best practices

  • Secure environment variables:
    • Use Docker secrets or environment variables to manage sensitive information securely.
    • Avoid committing sensitive data to version control.
  • Restrict access to Docker socket:
    • The Docker socket is mounted read-only (:ro) to minimize security risks.
  • Keep images updated:
    • Regularly update your Docker images to include security patches and improvements.

Advanced Traefik configurations

  • Middleware: Implement middleware for rate limiting, IP whitelisting, and other request transformations.
  • Monitoring: Integrate with monitoring tools like Prometheus and Grafana for advanced insights.
  • Wildcard certificates: Configure Traefik to use wildcard certificates if you have multiple subdomains.

Wrapping up

Dockerizing your WordPress site with Traefik simplifies your development and deployment processes, offering consistency, scalability, and efficiency. By leveraging practical examples and incorporating your existing data, we’ve created a tailored guide to help you set up a robust WordPress environment.

Whether you’re managing an existing site or starting a new project, this setup empowers you to focus on what you do best — developing great websites — while Docker and Traefik handle the heavy lifting.

So go ahead, give it a shot! Embracing these tools is a step toward modernizing your workflow and staying ahead in the ever-evolving tech landscape.

Learn more

To further enhance your skills and optimize your setup, check out these resources:

Docker Desktop 4.35: Organization Access Tokens, Docker Home, Volumes Export, and Terminal in Docker Desktop

4 November 2024 at 23:51

Key features of the Docker Desktop 4.35 release include: 


Organization access tokens (Beta) 

Before the beta release of organization access tokens, managing developer access to Docker resources was challenging, as it relied heavily on individual user accounts, leading to security risks and administrative inefficiencies. 

Organization access tokens let you manage access at the organizational level, providing enhanced security. This feature allows teams to operate more securely and efficiently with centralized user management, reduced administrative overhead, and the flexibility to scale access as the organization grows. For businesses, this feature offers significant value by improving governance, enhancing security, and supporting scalable infrastructure from an administrative perspective. 

Organizational access tokens empower organizations to maintain tighter control over their resources and security, making Docker Desktop even more valuable for enterprise users. This is one piece of the continuous updates we’re releasing to support administrators across large enterprise companies, ensuring they have the tools needed to manage complex environments with efficiency and confidence.
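
As a hedged sketch of what this can look like in CI: the example below assumes the organization name is used as the username and the token as the password, so confirm the exact flow in the organization access token documentation before adopting it.

# Log in with an organization access token stored in an environment variable.
echo "$DOCKER_ORG_ACCESS_TOKEN" | docker login -u myorg --password-stdin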

Docker Home (Beta) 

Sign in to your Docker account to see the release of the new Docker Home page (Figure 1). The new Docker Home marks a milestone in Docker’s journey as a multi-product company, reinforcing Docker’s commitment to providing an expanding suite of solutions that help developers and businesses containerize applications with ease.

  • Unified experience: The home page provides a central hub for users to access Docker products, manage subscriptions, adjust settings, and find resources — all in one place. This approach simplifies navigation for developers and admins.
  • Admin access: Administrators can manage organizations, users, and onboarding processes through the new portal, with access to dashboards for monitoring Docker usage.
  • Future enhancements: Future updates will add personalized features for different roles, and business subscribers will gain access to tools like the Docker Support portal and organization-wide notifications.
Docker Product home page showing sections for Docker Desktop, Docker Build Cloud, Docker Scout, Docker Hub, and more.
Figure 1: New Docker home page.

Terminal experience in Docker Desktop

Our terminal feature in Docker Desktop is now generally available. While managing containerized applications, developers have often faced friction and inefficiencies when switching between the Docker Desktop CLI and GUI. This constant context switching disrupted workflows and reduced productivity. 

The terminal enhancement integrates a terminal directly within the Docker Desktop GUI, enabling seamless transitions between CLI and GUI interactions within a single window. By incorporating a terminal shell into the Docker Desktop interface (Figure 2), we significantly reduce the friction associated with context switching for developers.

Screenshot of Docker Desktop showing terminal window in lower half of screen.
Figure 2: Terminal shell in Docker Desktop.

This functionality is designed to streamline workflows, accelerate delivery times, and enhance overall developer productivity.

Volumes Export is GA 

With the 4.35 release, we’ve elevated volume backup capabilities in Docker Desktop, introducing an upgraded feature set (Figure 3). This enhancement integrates the previous Volumes Backup & Share extension directly into Docker Desktop, streamlining your backup processes.

Screenshot of Docker Desktop Volumes showing option to "Quick export data backup to a specified location"
Figure 3: Docker Desktop Volumes view showcasing new backup functionality.

Although this release marks a significant step forward, it’s just the beginning. We’re committed to expanding these capabilities, adding even more value in future updates. Check out the beta of Scheduled Backups as well as External Cloud Storage backups, which are also available. 

Significantly improved performance experience on macOS (Beta)

Docker Desktop 4.35 also includes a beta release of Docker VMM, a container-optimized hypervisor for Apple Silicon Macs. Local developer workflows rely heavily on the performance of the hypervisor layer for everything from handling individual timer interrupts to accessing files and downloading images from the network. 

Docker VMM allows us to optimize the Linux kernel and hypervisor layer together, massively improving the speed of many common developer tasks. For example, iterating over a large shared file system with find is now 2x faster than on Docker Desktop 4.34 with a cold cache and up to 25x faster — faster than running natively on the Mac — when the cache is warm. This is only the beginning. Thanks to Docker VMM, we have many exciting new performance improvements in the pipeline.

Enable Docker VMM via Settings > General > Virtual Machine options and try it for your developer workflows today (Figure 4).

Figure 4: Docker VMM.

Docker Desktop for Red Hat Enterprise Linux 

Today we are excited to announce the general availability of Docker Desktop for Red Hat Enterprise Linux (RHEL). This feature marks a great milestone for both Docker and our growing community of developers.

By making Docker Desktop available on RHEL, we’re not only extending our reach — we’re meeting developers where they are. RHEL users can now access a seamless containerized development experience directly on the same OS that might power their production environments.

Docker Desktop for RHEL (Figure 5) offers the same intuitive interface, integrated tooling, and performance optimizations that you’ve come to expect on the other supported Linux distributions.

Screenshot of Docker Desktop for Red Hat Enterprise Linux with terminal window, Docker Desktop window, and RHEL logo in lower left.
Figure 5: Docker Desktop for RHEL.

How to install Docker Desktop on Red Hat Enterprise Linux

Download links and information can be found in our release notes.
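
As a rough sketch of the install flow once you have downloaded the RPM (the package filename below is illustrative, not the official one):

# Install the downloaded package and start Docker Desktop for the current user.
sudo dnf install ./docker-desktop-x86_64-rhel.rpm
systemctl --user start docker-desktop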

Looking for support?

Did you know that you can get Premium Customer Support for Docker Desktop with a Pro or Team subscription? With this GA release, we’re now ready to officially support you if you’re thinking about using Docker Desktop on RHEL. Check out our pricing page to learn more about what’s included in a Pro or Team subscription, and whether it’s right for you.

Explore the latest updates

With this latest wave of updates, from the security enhancements of organization access tokens to the performance boost of Docker VMM for Apple Silicon Macs, we’re pushing Docker Desktop forward to meet the evolving needs of developers and organizations alike. Each new feature is designed to make development smoother, faster, and more secure — whether you’re managing large teams or optimizing your individual workflow. 

We’re continuing to make improvements, with more tools and features on the way to help you build, manage, and scale your projects efficiently. Explore the latest updates and see how they can enhance your development experience.

Learn more

Maximizing Docker Desktop: How Signing In Unlocks Advanced Features

4 November 2024 at 21:25

Docker Desktop is more than just a local application for containerized development — it’s your gateway to an integrated suite of cloud-native tools that streamline the entire development workflow. While Docker Desktop can be used without signing in, doing so unlocks the full potential of Docker’s powerful, interconnected ecosystem. By signing in, you gain access to advanced features and services across Docker Hub, Build Cloud, Scout, and Testcontainers Cloud, enabling deeper collaboration, enhanced security insights, and scalable cloud resources. 

This blog post explores the full range of capabilities unlocked by signing in to Docker Desktop, connecting you to Docker’s integrated suite of cloud-native development tools. From enhanced security insights with Docker Scout to scalable build and testing resources through Docker Build Cloud and Testcontainers Cloud, signing in allows developers and administrators to fully leverage Docker’s unified platform.

Note that the following sections refer to specific Docker subscription plans. With Docker’s newly streamlined subscription plans — Docker Personal, Docker Pro, Docker Team, and Docker Business — developers and organizations can access a scalable suite of tools, from individual productivity boosters to enterprise-grade governance and security. Visit the Docker pricing page to learn more about how these plans support different team sizes and workflows. 


Benefits for developers when logged in

Docker Personal

  • Access to private repositories: Unlock secure collaboration through private repositories on Docker Hub, ensuring that your sensitive code and dependencies are managed securely across teams and projects.
  • Increased pull rate: Boost your productivity with an increased pull rate from Docker Hub (40 pulls/hour per user), ensuring smoother, uninterrupted development workflows without waiting on rate limits. The rate limit without authentication is 10 pulls/hour per IP.
  • Docker Scout CLI: Leverage Docker Scout to proactively secure your software supply chain with continuous security insights from code to production. By signing in, you gain access to powerful CLI commands that help prevent vulnerabilities before they reach production (see the example after this list).
  • Build Cloud and Testcontainers Cloud: Experience the full power of Docker Build Cloud and Testcontainers Cloud with free trials (7-day for Build Cloud, 30-day for Testcontainers Cloud). These trials give you access to scalable cloud infrastructure that speeds up image builds and enables more reliable integration testing.
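
For instance, once signed in you can get a quick vulnerability summary for any image straight from the CLI. A minimal example; the image name is a placeholder:

# Summarize vulnerabilities and list CVEs for an image (requires being signed in).
docker scout quickview myorg/app:latest
docker scout cves myorg/app:latest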

Docker Pro/Team/Business 

For users with a paid Docker subscription, additional features are unlocked.

  • Unlimited pull rate: No Hub rate limit will be enforced for users with a paid subscription plan. 
  • Docker Scout base image recommendations: Docker Scout offers continuous recommendations for base image updates, empowering developers to secure their applications at the foundational level and fix vulnerabilities early in the development lifecycle.
Figure 1: Docker Scout showing recommendations.
  • Docker Debug: The docker debug CLI command lets you debug containers even when their images contain only the minimum needed to run your application, with no shell or debugging tools baked in.
Figure 2: Docker debug CLI.

Docker Debug functionalities have also been integrated into the container view of the Docker Desktop UI.

Figure 3: Debug functionalities integrated into the container view of Docker Desktop.
  • Synchronized file shares: Host-to-VM file sharing via bind mounts can be quite slow for large codebases. Speed up your development cycle with synchronized file shares, which sync large codebases into containers quickly and efficiently without performance bottlenecks, helping developers iterate faster on critical projects.
Figure 4: Synchronized file shares.
  • Additional free minutes for Docker Build Cloud: Docker Build Cloud helps developer teams speed up image builds by offloading the build process to the cloud. The following build minutes are included, depending on the subscription plan:
    • Docker Pro: 200 mins/month per org
    • Docker Team: 500 mins/month per org
    • Docker Business: 1,500 mins/month per org
  • Additional free minutes for Testcontainers Cloud: Testcontainers Cloud simplifies the process for developers to run reliable integration tests using real dependencies defined in code, whether on their laptops or within their team’s CI pipeline. Depending on the subscription plan, the following benefits are available for users:
    • Docker Pro: 100 mins/month per org
    • Docker Team: 500 mins/month per org
    • Docker Business: 1,500 mins/month per org

Benefits for administrators when your users are logged in

Docker Business

Security and governance

The Docker Business plan offers enterprise-grade security and governance controls, which are only applicable if users are signed in. As of Docker Desktop 4.35.0, these features include:

License management

Tracking usage for licensing purposes can be challenging for administrators because Docker Desktop does not require authentication by default. By ensuring all users are signed in, administrators can use Docker Hub’s organization members list to manage licenses effectively.

This can be coupled with Docker Business’s Single Sign-On and SCIM capabilities to ease this process further. 

Insights

Administrators and other stakeholders (such as engineering managers) need a comprehensive understanding of Docker Desktop usage within their organization. With developers signed into Docker Desktop, admins gain actionable insights into usage, from feature adoption to image usage trends and login activity, helping them optimize team performance and security. A dashboard offering these insights is now available to simplify monitoring. Contact your account rep to enable the dashboard.

Desktop Insights available when your users log in to your organization
Figure 5: Desktop Insights view when users log in to your organization.

Enforce sign-in for Docker Desktop

Docker Desktop includes a feature that allows administrators to require authentication at start-up. Admins can ensure that all developers sign in to access Docker Desktop, enabling full integration with Docker’s security and productivity features. Sign-in enforcement helps maintain continuous compliance with governance policies across the organization.

Figure 6: Prompting sign in.

Developers can then click on the sign-in button, which takes them through the authentication flow. 

More information on how to enforce sign-in can be found in the documentation.

Unlock the full potential of Docker’s integrated suite

Signing into Docker Desktop unlocks significant benefits for both developers and administrators, enabling teams to fully leverage Docker’s integrated, cloud-native suite. Whether improving productivity, securing the software supply chain, or enforcing governance policies, signing in maximizes the value of Docker’s unified platform — especially for organizations using Docker’s paid subscription plans.

Note that new features are introduced with each new release, so keep an eye on our blog and subscribe to the Docker Newsletter for the latest product and feature updates.

Up next

Using AI Tools to Convert a PDF into Images

4 November 2024 at 21:01

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

How do you turn a PDF into a set of images? A modern suggestion could be to get Claude or GPT to write you a CLI command for something like ImageMagick. Let’s follow that train of thought.


If we enter a prompt like “How do I turn a PDF into a bunch of images using ImageMagick,” we will indeed get something that looks like it might be a solution.

Imagemagick includes a command-line tool called convert which you can use to convert PDF files into images. Here's how you can do it:

1. Open your terminal

2. Change directory to where your PDF is located:

```
cd /path/to/your/pdf

```
3. Use the following command to convert the PDF into images (in this case JPEG):

```
convert -density 300 -quality 100 your_file.pdf output_name.jpg

``` 

The agent will then explain how convert works and finish with a suggestion to “please make sure that you have installed ImageMagick before running these commands.”

Instead of worrying about whether ImageMagick is installed, however, let’s use Docker as our tool provider and prompt it to go ahead and run the command. If you’re following along in this series, you’ll have seen that we are using Markdown files to mix together tools and prompts.  Here’s the first prompt we tried:

---
tools:
  - name: imagemagick
---
# prompt user

Use Imagemagick to convert the family.pdf file into a bunch of jpg images.

After executing this prompt, the LLM generated a tool call, which we executed in the Docker runtime, and it successfully converted family.pdf into nine .jpg files (my family.pdf file had nine pages). 

Figure 1 shows the flow from our VSCode Extension.

Animated VSCode workflow showing the process of converting PDFs to images.
Figure 1: Workflow from VSCode Extension.

We have given enough context to the LLM that it is able to plan a call to this ImageMagick binary. And, because this tool is available on Docker Hub, we don’t have to “make sure that ImageMagick is installed.” This would be the equivalent command if you were to use docker run directly:

# family.pdf must be located in your $PWD

docker run --rm -v $PWD:/project --workdir /project vonwig/imagemagick:latest convert -density 300 -quality 300 family.pdf family.jpg

The tool ecosystem

How did this work? The process relied on two things:

  • Tool distribution and discovery (pulling tools into Docker Hub for distribution to our Docker Desktop runtime).
  • Automatic generation of Agent Tool interfaces.

When we first started this project, we expected that we’d begin with a small set of tools because the interface for each tool would take time to design. We thought we were going to need to bootstrap an ecosystem of tools that had been prepared to be used in these agent workflows. 

However, we learned that we can use a much more generic approach. Most tools already come with documentation, such as command-line help, examples, and man pages. Instead of treating each tool as something special, we are using an architecture where an agent responds to failures by reading documentation and trying again (Figure 2).

Illustration of circular process showing "Run tool" leading to "Capture errors" leading to "Read docs" in a continuous loop.
Figure 2: Agent process.

We see a process of experimenting with tools that is not unlike what we, as developers, do on the command line. Try a command line, read a doc, adjust the command line, and try again.

The value of this kind of looping has changed our expectations. Step one is simply pulling the tool into Docker Hub and seeing whether the agent can use it with nothing more than its out-of-the-box documentation. We are also pulling open source software (OSS)  tools directly from nixpkgs, which gives us access to tens of thousands of different tools to experiment with. 

Docker keeps our runtimes isolated from the host operating system, while the nixpkgs ecosystem and maintainers provide a rich source of OSS tools.

As expected, agents still run into packaging issues that force us to re-plan how tools are packaged. For example, the prompt we showed above might have generated the correct tool call on the first try, but the ImageMagick container failed on the first run with this terrible-looking error message:

function call failed call exited with non-zero code (1): Error: sh: 1: gs: not found  

Fortunately, feeding that error back into the LLM resulted in the suggestion that convert needs another tool, called Ghostscript, to run successfully. Our agent was not able to fix this automatically today. However, we adjusted the image build slightly, and the “latest” tag of the vonwig/imagemagick image no longer has this issue. This is an example of something we only need to learn once.

The LLM figured out convert on its own. But its agency came from the addition of a tool.

Read the Docker Labs GenAI series to see more of what we’ve been working on.

Learn more

Model-Based Testing with Testcontainers and Jqwik

23 October 2024 at 20:31

When testing complex systems, the more edge cases you can identify, the better your software performs in the real world. But how do you efficiently generate hundreds or thousands of meaningful tests that reveal hidden bugs? Enter model-based testing (MBT), a technique that automates test case generation by modeling your software’s expected behavior.

In this demo, we’ll explore the model-based testing technique to perform regression testing on a simple REST API.

We’ll use the jqwik test engine on JUnit 5 to run property and model-based tests. Additionally, we’ll use Testcontainers to spin up Docker containers with different versions of our application.


Model-based testing

Model-based testing is a method for testing stateful software by comparing the tested component with a model that represents the expected behavior of the system. Instead of manually writing test cases, we’ll use a testing tool that:

  • Takes a list of possible actions supported by the application
  • Automatically generates test sequences from these actions, targeting potential edge cases
  • Executes these tests on the software and the model, comparing the results

In our case, the actions are simply the endpoints exposed by the application’s API. For the demo’s code examples, we’ll use a basic service with a CRUD REST API that allows us to:

  • Find an employee by their unique employee number
  • Update an employee’s name
  • Get a list of all the employees from a department
  • Register a new employee
Figure 1: Finding an employee, updating their name, finding their department, and registering a new employee.

Once everything is configured and we finally run the test, we can expect to see a rapid sequence of hundreds of requests being sent to the two stateful services:

Figure 2: New requests being sent to the two stateful services.

Docker Compose

Let’s assume we need to switch the database from Postgres to MySQL and want to ensure the service’s behavior remains consistent. To test this, we can run both versions of the application, send identical requests to each, and compare the responses.

We can set up the environment using a Docker Compose that will run two versions of the app:

  • Model (mbt-demo:postgres): The current live version and our source of truth.
  • Tested version (mbt-demo:mysql): The new feature branch under test.

services:
  ## MODEL
  app-model:
      image: mbt-demo:postgres
      # ...
      depends_on:
          - postgres
  postgres:
      image: postgres:16-alpine
      # ...
      
  ## TESTED
  app-tested:
    image: mbt-demo:mysql
    # ...
    depends_on:
      - mysql
  mysql:
    image: mysql:8.0
    # ...

Testcontainers

At this point, we could start the application and databases manually for testing, but this would be tedious. Instead, let’s use Testcontainers’ ComposeContainer to automate this with our Docker Compose file during the testing phase.

In this example, we’ll use jqwik as our JUnit 5 test runner. First, let’s add the jqwik, testcontainers, and jqwik-testcontainers dependencies to our pom.xml:

<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik</artifactId>
    <version>1.9.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik-testcontainers</artifactId>
    <version>0.5.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.20.1</version>
    <scope>test</scope>
</dependency>

As a result, we can now instantiate a ComposeContainer and pass our test docker-compose file as an argument:

@Testcontainers
class ModelBasedTest {

    @Container
    static ComposeContainer ENV = new ComposeContainer(new File("src/test/resources/docker-compose-test.yml"))
       .withExposedService("app-tested", 8080, Wait.forHttp("/api/employees").forStatusCode(200))
       .withExposedService("app-model", 8080, Wait.forHttp("/api/employees").forStatusCode(200));

    // tests
}

Test HTTP client

Now, let’s create a small test utility that will help us execute the HTTP requests against our services:

class TestHttpClient {
  ApiResponse<EmployeeDto> get(String employeeNo) { /* ... */ }

  ApiResponse<Void> put(String employeeNo, String newName) { /* ... */ }

  ApiResponse<List<EmployeeDto>> getByDepartment(String department) { /* ... */ }

  ApiResponse<EmployeeDto> post(String employeeNo, String name) { /* ... */ }

  record ApiResponse<T>(int statusCode, @Nullable T body) { }

  record EmployeeDto(String employeeNo, String name) { }
}

Additionally, in the test class, we can declare another method that helps us create TestHttpClients for the two services started by the ComposeContainer:

static TestHttpClient testClient(String service) {
  int port = ENV.getServicePort(service, 8080);
  String url = "http://localhost:%s/api/employees".formatted(port);
  return new TestHttpClient(service, url);
}

jqwik

Jqwik is a property-based testing framework for Java that integrates with JUnit 5, automatically generating test cases to validate properties of code across diverse inputs. By using generators to create varied and random test inputs, jqwik enhances test coverage and uncovers edge cases.

If you’re new to jqwik, you can explore their API in detail by reviewing the official user guide. While this tutorial won’t cover all the specifics of the API, it’s essential to know that jqwik allows us to define a set of actions we want to test.

To begin with, we’ll use jqwik’s @Property annotation — instead of the traditional @Test — to define a test:

@Property
void regressionTest() {
  TestHttpClient model = testClient("app-model");
  TestHttpClient tested = testClient("app-tested");
  // ...
}

Next, we’ll define the actions, which are the HTTP calls to our APIs and can also include assertions.

For instance, the GetOneEmployeeAction will try to fetch a specific employee from both services and compare the responses:

record ModelVsTested(TestHttpClient model, TestHttpClient tested) {}

record GetOneEmployeeAction(String empNo) implements Action<ModelVsTested> {
  @Override
  public ModelVsTested run(ModelVsTested apps) {
    ApiResponse<EmployeeDto> actual = apps.tested.get(empNo);
    ApiResponse<EmployeeDto> expected = apps.model.get(empNo);

    assertThat(actual)
      .satisfies(hasStatusCode(expected.statusCode()))
      .satisfies(hasBody(expected.body()));
    return apps;
  }
}

Additionally, we’ll need to wrap these actions within Arbitrary objects. We can think of Arbitraries as objects implementing the factory design pattern that can generate a wide variety of instances of a type, based on a set of configured rules.

For instance, the Arbitrary returned by employeeNos() can generate employee numbers by choosing a random department from the configured list and concatenating a number between 1 and 200:

static Arbitrary<String> employeeNos() {
  Arbitrary<String> departments = Arbitraries.of("Frontend", "Backend", "HR", "Creative", "DevOps");
  Arbitrary<Long> ids = Arbitraries.longs().between(1, 200);
  return Combinators.combine(departments, ids).as("%s-%s"::formatted);
}

Similarly, getOneEmployeeAction() returns an Arbitrary action based on a given Arbitrary employee number:

static Arbitrary<GetOneEmployeeAction> getOneEmployeeAction() {
  return employeeNos().map(GetOneEmployeeAction::new);
}

After declaring all the other Actions and Arbitraries, we’ll create an ActionSequence:

@Provide
Arbitrary<ActionSequence<ModelVsTested>> mbtJqwikActions() {
  return Arbitraries.sequences(
    Arbitraries.oneOf(
      MbtJqwikActions.getOneEmployeeAction(),
      MbtJqwikActions.getEmployeesByDepartmentAction(),
      MbtJqwikActions.createEmployeeAction(),
      MbtJqwikActions.updateEmployeeNameAction()
  ));
}


static Arbitrary<Action<ModelVsTested>> getOneEmployeeAction() { /* ... */ }
static Arbitrary<Action<ModelVsTested>> getEmployeesByDepartmentAction() { /* ... */ }
// same for the other actions

Now, we can write our test and leverage jqwik to use the provided actions to test various sequences. Let’s create the ModelVsTested tuple and use it to execute the sequence of actions against it:

@Property
void regressionTest(@ForAll("mbtJqwikActions") ActionSequence<ModelVsTested> actions) {
  ModelVsTested testVsModel = new ModelVsTested(
    testClient("app-model"),
    testClient("app-tested")
  );
  actions.run(testVsModel);
}

That’s it — we can finally run the test! The test will generate a sequence of thousands of requests trying to find inconsistencies between the model and the tested service:

INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employeesFrontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employeesFrontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-model] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employeesFrontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employeesFrontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees?department=ٺ⯟桸
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees?department=ٺ⯟桸
        ...

Catching errors

If we run the test and check the logs, we’ll quickly spot a failure. It appears that when searching for employees by department with the argument ٺ⯟桸, the tested version produces an internal server error, while the model returns 200 OK:

Original Sample
---------------
actions:
ActionSequence[FAILED]: 8 actions run [
    UpdateEmployeeAction[empNo=Creative-13, newName=uRhplM],
    CreateEmployeeAction[empNo=Backend-184, name=aGAYQ],
    UpdateEmployeeAction[empNo=Backend-3, newName=aWCxzg],
    UpdateEmployeeAction[empNo=Frontend-93, newName=SrJTVwMvpy],
    UpdateEmployeeAction[empNo=Frontend-129, newName=v],
    CreateEmployeeAction[empNo=Frontend-91, name=sdxToS],
    UpdateEmployeeAction[empNo=Frontend-4, newName=PZbmodNLNwX],
    GetEmployeesByDepartmentAction[department=ٺ⯟桸]
]
    final currentModel: ModelVsTested[model=com.etr.demo.utils.TestHttpClient@5dc0ff7d, tested=com.etr.demo.utils.TestHttpClient@64920dc2]
Multiple Failures (1 failure)
    -- failure 1 --
    expected: 200
    but was: 500

Upon investigation, we find that the issue arises from a native SQL query that uses Postgres-specific syntax to retrieve data, which breaks once the service runs against MySQL. While this was a simple issue in our small application, model-based testing can help uncover unexpected behavior that may only surface after a specific sequence of repetitive steps pushes the system into a particular state.

Wrap up

In this post, we provided hands-on examples of how model-based testing works in practice. From defining models to generating test cases, we’ve seen a powerful approach to improving test coverage and reducing manual effort. Now that you’ve seen the potential of model-based testing to enhance software quality, it’s time to dive deeper and tailor it to your own projects.

Clone the repository to experiment further, customize the models, and integrate this methodology into your testing strategy. Start building more resilient software today!

Thank you to Emanuel Trandafir for contributing this post.

Learn more

Docker at Cloud Expo Asia: GenAI, Security, and New Innovations

By: Yiwen Xu
22 October 2024 at 22:23

Cloud Expo Asia 2024 in Singapore drew thousands of cloud professionals and tech business leaders to explore and exchange the latest in cloud computing, security, GenAI, sustainability, DevOps, and more. At our Cloud Expo Asia booth, Docker showcased our latest innovations in AI integration, containerization, security best practices, and updated product offerings. Here are a few highlights from our experience at the event.


AI/ML and GenAI everywhere

AI/ML and GenAI were hot topics at Cloud Expo Asia. Docker CPO Giri Sreenivas’s talk on Transforming App Development: Docker’s Advanced Containerization and AI Integration highlighted that GenAI impacts software in two big ways — it accelerates product development and creates new types of products and experiences. He discussed how containers are an ideal tool for packaging GenAI workflows in development, ensuring consistency across CI/CD pipelines and reproducibility across diverse platforms in production.

Docker Chief Product Officer Giri Sreenivas’s talk drew an overflow crowd.

Sreenivas highlighted the Docker extension for GitHub Copilot as an example of how Docker helps empower development teams to focus on innovation — closing the gap from the first line of code to production. Sreenivas also gave a sneak peek into upcoming products designed to streamline GenAI development to illustrate Docker’s commitment to evolving solutions to meet emerging needs. 

Adopting security best practices and shifting left

Developer efficiency and security were also popular themes at the event. When Sreenivas mentioned in his talk that security vulnerabilities that cost dollars to fix early in development would cost hundreds of dollars later in production, members of the audience nodded in agreement.

Docker CTO Justin Cormack gave a keynote address titled “The Docker Effect: Driving Developer Efficiency and Innovation in a Hybrid World.” He discussed how implementing best practices and investing in the inner loop are crucial for today’s development teams. 

One best practice, for example, is shifting left and identifying problems as quickly as possible in the software development lifecycle. This approach improves efficiency and reduces costs by detecting and addressing software issues earlier before they become expensive problems.

At Docker CTO Justin Cormack’s talk, attendees were eager to snap pictures of every slide.

Cormack also provided a few tips for meeting the security and control needs of modern enterprises with a layered approach. Start with key building blocks, he explained, such as trusted content, which provides dev teams with a good foundation to build securely from the start. 

A pyramid titled “Modern Enterprises Need a Layered Approach to Security and Control.” Reading from the bottom up, its layers are: Start with a secure foundation, Build on a secure platform, and Deliver a secure end product.
Docker CTO Justin Cormack’s recommendations on meeting the security and control needs of modern enterprises.

At the Docker event booth, we demonstrated Docker Scout, which helps development teams identify, analyze, and remediate security vulnerabilities early in the dev process. Docker Business customers can take advantage of enterprise controls, letting admins, IT teams, and security teams continuously monitor and manage risk and compliance with confidence. 

After four hours of demos at the Docker booth, senior software engineer Chase Frankenfeld was still enthusiastically discussing Docker products, while our CEO Scott Johnston listened attentively to an attendee’s questions.

New Docker innovations and updated plan

From students to C-level executives who visited our booth, everyone was eager to learn more about containers and Docker. People lined up to see an end-to-end demo of how the suite of Docker products, such as Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout, work together seamlessly to enable development teams to work more efficiently. 

Attendees also had the opportunity to learn more about Docker’s updated plans, which make accessing the full suite of Docker products and solutions easy, with options for individual developers, small teams, and large enterprises.

Senior software engineer Maxime Clement explains Docker’s updated plans and demos Docker products to booth visitors.

Thanks, Cloud Expo Asia!

We enjoyed our conversations with event attendees and appreciate everyone who helped make this such a successful event. Thank you to the organizers, speakers, sponsors, and the community for a productive, information-packed experience.

What’s better than Docker swag? Docker swag in a claw machine.

From accelerating app development and supporting shift-left best practices to meeting the security and control needs of modern enterprises and innovating with GenAI, Docker wants to be your trusted partner in navigating the challenges of modern app development. 

Explore our Docker updated plans to learn how Docker can empower your teams, or contact our sales team to discover how we can help you innovate with confidence.

Learn more

Using Docker AI Tools for Devs to Provide Context for Better Code Fixes

21 October 2024 at 20:00

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

At Docker Labs, we’ve been exploring how LLMs can connect different parts of the developer workflow, bridging gaps between tools and processes. A key insight is that LLMs excel at fixing code issues when they have the right context. To provide this context, we’ve developed a process that maps out the codebase using linting violations and the structure of top-level code blocks. 

By combining these elements, we teach the LLM to construct a comprehensive view of the code, enabling it to fix issues more effectively. By leveraging containerization, integrating these tools becomes much simpler.


Previously, my linting process felt a bit disjointed. I’d introduce an error, run Pylint, and receive a message that was sometimes cryptic, forcing me to consult Pylint’s manual to understand the issue. When OpenAI released ChatGPT, the process improved slightly. I could run Pylint, and if I didn’t grasp an error message, I’d copy the code and the violation into GPT to get a better explanation. Sometimes, I’d ask it to fix the code and then manually paste the solution back into my editor.

However, this approach still required several manual steps: copying code, switching between applications, and integrating fixes. How might we improve this process?

Docker’s AI Tools for Devs prompt runner is an architecture that allows us to integrate tools like Pylint directly into the LLM’s workflow through containerization. By containerizing Pylint and creating prompts that the LLM can use to interact with it, we’ve developed a system where the LLM can access the necessary tools and context to help fix code issues effectively.

Understanding the cognitive architecture

For the LLM to assist effectively, it needs a structured way of accessing and processing information. In our setup, the LLM uses the Docker prompt runner to interact with containerized tools and the codebase. The project context is extracted using tools such as Pylint and Tree-sitter that run against the project. This context is then stored and managed, allowing the LLM to access it when needed.

By having access to the codebase, linting tools, and the context of previous prompts, the LLM can understand where problems are, what they are, and have the right code fragments to fix them. This setup replaces the manual process of finding issues and feeding them to the LLM with something automatic and more engaging.

Streamlining the workflow

Now, within my workflow, I can ask the assistant about code quality and violations directly. The assistant, powered by an LLM, has immediate access to a containerized Pylint tool and a database of my code through the Docker prompt runner. This integration allows the LLM to use tools to assist me directly during development, making the programming experience more efficient.

This approach helps us rethink how we interact with our tools. By enabling a conversational interface with tools that map code to issues, we’re exploring possibilities for a more intuitive development experience. Instead of manually finding problems and feeding them to an AI, we can convert our relationship with tools themselves to be conversational partners that can automatically detect issues, understand the context, and provide solutions.

Walking through the prompts

Our project is structured around a series of prompts that guide the LLM through the tasks it needs to perform. These prompts are stored in a Git repository, where they can be versioned, tracked, and shared, and they form the backbone of the project, allowing the LLM to interact with tools and the codebase effectively. Each prompt corresponds to a specific task in the workflow, and Docker containers ensure a consistent environment for running the associated tools and scripts.

Workflow steps

An immediate and existential challenge we encountered was that this class of problem has a lot of opportunities to overwhelm the context of the LLM. Want to read a source code file? It has to be small enough to read. Need to work on more than one file? Your realistic limit is three to four files at once. To solve this, we can instruct the LLM to automate its own workflow with tools, where each step runs in a Docker container.

Again, each step in this workflow runs in a Docker container, which ensures a consistent and isolated environment for running tools and scripts. The first four steps prepare the agent to be able to extract the right context for fixing violations. Once the agent has the necessary context, the LLM can effectively fix the code issues in step 5.

1. Generate violations report using Pylint:

Run Pylint to produce a violation report.
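For example, a report in the JSON shape shown later in this post can be produced with Pylint’s JSON output format; the project path and report filename here are illustrative:

pylint --output-format=json --recursive=y cloned_repo/ > pylint_report.json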

2. Create a SQLite database:

Set up the database schema to store violation data and code snippets.
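As a rough sketch of what that schema might look like (the table and column names are assumptions for illustration, not the project’s actual schema):

sqlite3 violations.db <<'SQL'
CREATE TABLE IF NOT EXISTS violations (
  id         INTEGER PRIMARY KEY,
  path       TEXT,
  start_line INTEGER,
  end_line   INTEGER,
  symbol     TEXT,
  message    TEXT
);
CREATE TABLE IF NOT EXISTS code_ranges (
  id         INTEGER PRIMARY KEY,
  path       TEXT,
  start_line INTEGER,
  end_line   INTEGER,
  source     TEXT
);
SQL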

3. Generate and run INSERT statements:

  • Decouple violations from the range they represent.
  • Use a script to convert every violation and range from the report into SQL insert statements.
  • Run the statements against the database to populate it with the necessary data.

4. Index code in the database:

  • Generate an abstract syntax tree (AST) of the project with Tree-sitter (Figure 1).
Figure 1: Generating an abstract syntax tree.
  • Find all second-level nodes (Figure 2). In Python’s grammar, second-level nodes are statements inside of a module.
Figure 2: Extracting content for the database.
  • Index these top-level ranges into the database.
  • Populate a new table to store the source code at these top-level ranges.

5. Fix violations based on context:

Once the agent has gathered and indexed the necessary context, use prompts to instruct the LLM to query the database and fix the code issues (Figure 3).

Figure 3: Instructions for fixing violations. The prompt asks, for example, to “fix the violation ‘some violation’ which occurs in file.py on line 1,” along with information about the function it occurs in.

Each step from 1 to 4 builds the foundation for step 5, where the LLM, with the proper context, can effectively fix violations. The structured preparation ensures that the LLM has all the information it needs to address code issues with precision.

Refining the context for LLM fixes

To understand how our system improves code fixes, let’s consider a specific violation flagged by Pylint. Say we receive a message that there’s a violation on line 60 of our code file block_listed_name.py:

{
  "type": "convention",
  "module": "block_listed_name",
  "obj": "do_front",
  "line": 60,
  "column": 4,
  "endLine": 60,
  "endColumn": 7,
  "path": "cloned_repo/naming_conventions/block_listed_name.py",
  "symbol": "disallowed-name",
  "message": "Disallowed name \"foo\"",
  "message-id": "C0104"
}

From this Pylint violation, we know that the variable foo is a disallowed name. However, if we tried to ask the LLM to fix this issue based solely on this snippet of information, the response wouldn’t be as effective. Why? The LLM lacks the surrounding context — the full picture of the function in which this violation occurs.

This is where indexing the codebase becomes essential

Because we’ve mapped out the codebase, we can now ask the LLM to query the index and retrieve the surrounding code that includes the do_front function. The LLM can even generate the SQL query for us, thanks to its knowledge of the database schema. Once we’ve retrieved the full function definition, the LLM can work with a more complete view of the problem:

def do_front(front_filename, back_filename):
   """
   Front strategy: loop over front image,
   detect blue pixels there,
   substitute in pixels from back.
   Return changed front image.
   """
   foo = SimpleImage(front_filename)
   back = SimpleImage(back_filename)
   for y in range(foo.height):
       for x in range(foo.width):
           pixel = foo.get_pixel(x, y)
           # Detect blue pixels in front and replace with back pixels
           if pixel[2] > 2 * max(pixel[0], pixel[1]):
               back_pixel = back.get_pixel(x, y)
               foo.set_pixel(x, y, back_pixel)
   return foo
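The lookup behind that retrieval might resemble the following query, reusing the hypothetical code_ranges table sketched earlier (the schema the prompts actually generate may differ):

sqlite3 violations.db \
  "SELECT source FROM code_ranges
   WHERE path = 'cloned_repo/naming_conventions/block_listed_name.py'
     AND start_line <= 60 AND end_line >= 60;"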

Now that the LLM can see the whole function, it’s able to propose a more meaningful fix. Here’s what it suggests after querying the indexed codebase and running the fix:

def do_front(front_filename, back_filename):
    """
    Front strategy: loop over front image,
    detect blue pixels there,
    substitute in pixels from back.
    Return changed front image.
    """
    front_image = SimpleImage(front_filename)
    back_image = SimpleImage(back_filename)
    for y in range(front_image.height):
        for x in range(front_image.width):
            pixel = front_image.get_pixel(x, y)
            # Detect blue pixels in front and replace with back pixels
            if pixel[2] > 2 * max(pixel[0], pixel[1]):
                back_pixel = back_image.get_pixel(x, y)
                front_image.set_pixel(x, y, back_pixel)
    return front_image

Here, the variable foo has been replaced with the more descriptive front_image, making the code more readable and understandable. The key step was providing the LLM with the correct level of detail — the top-level range — instead of just a single line or violation message. With the right context, the LLM’s ability to fix code becomes much more effective, which ultimately streamlines the development process.

Remember, all of this information is retrieved and indexed by the LLM itself through the prompts we’ve set up. Through this series of prompts, we’ve reached a point where the assistant has a comprehensive understanding of the codebase. 

At this stage, not only can I ask for a fix, but I can even ask questions like “what’s the violation at line 60 in naming_conventions/block_listed_name.py?” and the assistant responds with:

On line 60 of naming_conventions/block_listed_name.py, there's a violation: Disallowed name 'foo'. The variable name 'foo' is discouraged because it doesn't convey meaningful information about its purpose.

Although Pylint has been our focus here, this approach points to a new conversational way to interact with many tools that map code to issues. By integrating LLMs with containerized tools through architectures like the Docker prompt runner, we can enhance various aspects of the development workflow.

We’ve learned that combining tool integration, cognitive preparation of the LLM, and a seamless workflow can significantly improve the development experience, allowing an LLM to use tools to help directly while we develop.

To follow along with this effort, check out the GitHub repository for this project.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Announcing IBM Granite AI Models Now Available on Docker Hub

21 October 2024 at 11:01

We are thrilled to announce that Granite models, IBM’s family of open source and proprietary models built for business, as well as Red Hat InstructLab model alignment tools, are now available on Docker Hub.

Now, developer teams can easily access, deploy, and scale applications using IBM’s AI models specifically designed for developers.

This news will be officially announced during the AI track of the keynote at IBM TechXchange on October 22. Attendees will get an exclusive look at how IBM’s Granite models on Docker Hub accelerate AI-driven application development across multiple programming languages.


Why Granite on Docker Hub?

With a principled approach to data transparency, model alignment, and security, IBM’s open source Granite models represent a significant leap forward in natural language processing. The models are available under an Apache 2.0 license, empowering developer teams to bring generative AI into mission-critical applications and workflows. 

Granite models deliver superior performance in coding and targeted language tasks at lower latencies, all while requiring a fraction of the compute resources and reducing the cost of inference. This efficiency allows developers to experiment, build, and scale generative AI applications both on-premises and in the cloud, all within departmental budgetary limits.

Here’s what this means for you:

  • Simplified deployment: Pull the Granite image from Docker Hub and get up and running in minutes.
  • Scalability: Docker offers a lightweight and efficient method for scaling artificial intelligence and machine learning (AI/ML) applications. It allows you to run multiple containers on a single machine or distribute them across different machines in a cluster, enabling horizontal scalability.
  • Flexibility: Customize and extend the model to suit your specific needs without worrying about underlying infrastructure.
  • Portability: By creating Docker images once and deploying them anywhere, you eliminate compatibility problems and reduce the need for configurations. 
  • Community support: Leverage the vast Docker and IBM communities for support, extensions, and collaborations.

In addition to the IBM Granite models, Red Hat also made the InstructLab model alignment tools available on Docker Hub. Developers using InstructLab can adapt pre-trained LLMs using far less real-world data and computing resources than alternative methodologies. InstructLab is model-agnostic and can be used to fine-tune any LLM of your choice by providing additional skills and knowledge.

With IBM Granite AI models and InstructLab available on Docker Hub, Docker and IBM enable easy integration into existing environments and workflows.

Getting started with Granite

You can find the following images available on Docker Hub:

  • InstructLab: Ideal for desktop or Mac users looking to explore InstructLab, this image provides a simple introduction to the platform without requiring specialized hardware. It’s perfect for prototyping and testing before scaling up.
  • Granite-7b-lab: This image is optimized for model serving and inference on desktop or Mac environments, using the Granite-7B model. It allows for efficient and scalable inference tasks without needing a GPU, perfect for smaller-scale deployments or local testing.

How to pull and run IBM Granite images from Docker Hub 

IBM Granite provides a toolset for building and managing cloud-native applications. Follow these steps to pull and run an IBM Granite image using Docker and the CLI. You can follow similar steps for the Red Hat InstructLab images.

Authenticate to Docker Hub

Enter your Docker username and password when prompted.
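Running docker login from the CLI triggers that prompt; a non-interactive form is also available for automation (the token variable name below is just an example):

docker login

# Or non-interactively, for example in CI:
echo "$DOCKER_HUB_TOKEN" | docker login -u <your-username> --password-stdin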

Pull the IBM Granite Image

Pull the IBM Granite image from Docker Hub.  

  • redhat/granite-7b-lab-gguf: For Mac/desktop users with no GPU support
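For example, to pull that image:

docker pull redhat/granite-7b-lab-gguf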

Run the Image in a Container

Start a container with the IBM Granite image. The container can be started in two modes: CLI (default) and server.

To start the container in CLI mode, run the following:

docker run --ipc=host -it redhat/granite-7b-lab-gguf

This command opens an interactive bash session within the container, allowing you to use the tools.


To run the container in server mode, run the following command:

docker run --ipc=host -it redhat/granite-7b-lab-gguf -s

You can check IBM Granite’s documentation for details on using IBM Granite Models.

Join us at IBM TechXchange

Granite on Docker Hub will be officially announced at the IBM TechXchange Conference, which will be held October 21-24 in Las Vegas. Our head of technical alliances, Eli Aleyner, will show a live demonstration at the AI track of the keynote during IBM TechXchange. Oleg Šelajev, Docker’s staff developer evangelist, will show how app developers can test their GenAI apps with local models. Additionally, you’ll learn how Docker’s collaboration with Red Hat is improving developer productivity.

The availability of Granite on Docker Hub marks a significant milestone in making advanced AI models accessible to all. We’re excited to see how developer teams will harness the power of Granite to innovate and solve complex challenges.

Stay anchored for more updates, and as always, happy coding!

Learn more

New Docker Terraform Provider: Automate, Secure, and Scale with Ease

17 October 2024 at 20:30

We’re excited to announce the launch of the Docker Terraform Provider, designed to help users and organizations automate and securely manage their Docker-hosted resources. This includes repositories, teams, organization settings, and more, all using Terraform’s infrastructure-as-code approach. This provider brings a unified, scalable, and secure solution for managing Docker resources in an automated fashion — whether you’re managing a single repository or a large-scale organization.


A new way of working with Docker Hub

The Docker Terraform Provider introduces a new way of working with Docker Hub, enabling infrastructure-as-code best practices that are already widely adopted across cloud-native environments. By integrating Docker Hub with Terraform, organizations can streamline resource management, improve security, and collaborate more effectively, all while ensuring Docker resources remain in sync with other infrastructure components.

The problem

Managing Docker Hub resources manually can become cumbersome and prone to errors, especially as teams grow and projects scale. Maintaining configurations can lead to inconsistencies, reduced security, and a lack of collaboration between teams without a streamlined, version-controlled system. The Docker Terraform Provider solves this by allowing you to manage Docker Hub resources in the same way you manage your other cloud resources, ensuring consistency, auditability, and automation across the board.

The solution

The Docker Terraform Provider offers:

  • Unified management: With this provider, you can manage Docker repositories, teams, users, and organizations in a consistent workflow, using the same code and structure across environments.
  • Version control: Changes to Docker Hub resources are captured in your Terraform configuration, providing a version-controlled, auditable way to manage your Docker infrastructure.
  • Collaboration and automation: Teams can now collaborate seamlessly, automating the provisioning and management of Docker Hub resources with Terraform, enhancing productivity and ensuring best practices are followed.
  • Scalability: Whether you’re managing a few repositories or an entire organization, this provider scales effortlessly to meet your needs.

Example

At Docker, even we faced challenges managing our Docker Hub resources, especially when adding repositories without owner permissions — it was a frustrating, manual process. With the Terraform provider, anyone in the company can create a new repository without having elevated Docker Hub permissions. All levels of employees are now empowered to write code rather than track down coworkers. This streamlines developer workflows with familiar tooling and reduces employee permissions. Security and developers are happy!

Here’s an example where we manage a repository, an org team, the permissions for the created repo, and an access token:

terraform {
  required_providers {
    docker = {
      source  = "docker/docker"
      version = "~> 0.2"
    }
  }
}

# Initialize provider
provider "docker" {}

# Define local variables for customization
locals {
  namespace        = "my-docker-namespace"
  repo_name        = "my-docker-repo"
  org_name         = "my-docker-org"
  team_name        = "my-team"
  my_team_users    = ["user1", "user2"]
  token_label      = "my-pat-token"
  token_scopes     = ["repo:read", "repo:write"]
  permission       = "admin"
}

# Create repository
resource "docker_hub_repository" "org_hub_repo" {
  namespace        = local.namespace
  name             = local.repo_name
  description      = "This is a generic Docker repository."
  full_description = "Full description for the repository."
}

# Create team
resource "docker_org_team" "team" {
  org_name         = local.org_name
  team_name        = local.team_name
  team_description = "Team description goes here."
}

# Team association
resource "docker_org_team_member" "team_membership" {
  for_each = toset(local.my_team_users)

  org_name  = local.org_name
  team_name = docker_org_team.team.team_name
  user_name = each.value
}

# Create repository team permission
resource "docker_hub_repository_team_permission" "repo_permission" {
  repo_id    = docker_hub_repository.org_hub_repo.id
  team_id    = docker_org_team.team.id
  permission = local.permission
}

# Create access token
resource "docker_access_token" "access_token" {
  token_label = local.token_label
  scopes      = local.token_scopes
}
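
With the configuration above saved (for example, as main.tf) and credentials for the provider configured as described in its documentation, the standard Terraform workflow applies:

terraform init    # downloads the docker/docker provider
terraform plan    # previews the repository, team, and token changes
terraform apply   # creates the resources in Docker Hub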

Future work

We’re just getting started with the Docker Terraform Provider, and there’s much more to come. Future work will expand support to other products in Docker’s suite, including Docker Scout, Docker Build Cloud, and Testcontainers Cloud. Stay tuned as we continue to evolve and enhance the provider with new features and integrations.

For feedback and issue tracking, visit the official Docker Terraform Provider repository or submit feedback via our issue tracker.

We’re confident this new provider will enhance how teams work with Docker Hub, making it easier to manage, secure, and scale their infrastructure while focusing on what matters most — building great software.

Learn more

Docker Best Practices: Using ARG and ENV in Your Dockerfiles

16 October 2024 at 21:35

If you’ve worked with Docker for any length of time, you’re likely accustomed to writing or at least modifying a Dockerfile. This file can be thought of as a recipe for a Docker image; it contains both the ingredients (base images, packages, files) and the instructions (various RUN, COPY, and other commands that help build the image).

In most cases, Dockerfiles are written once, modified seldom, and used as-is unless something about the project changes. Because these files are created or modified on such an infrequent basis, developers tend to rely on only a handful of frequently used instructions — RUN, COPY, and EXPOSE being the most common. Other instructions can enhance your image, making it more configurable, manageable, and easier to maintain. 

In this post, we will discuss the ARG and ENV instructions and explore why, how, and when to use them.


ARG: Defining build-time variables

The ARG instruction allows you to define variables that will be accessible during the build stage but not available after the image is built. For example, we will use this Dockerfile to build an image where we make the variable specified by the ARG instruction available during the build process.

FROM ubuntu:latest
ARG THEARG="foo"
RUN echo $THEARG
CMD ["env"]

If we run the build, we will see the echo foo line in the output:

$ docker build --no-cache -t argtest .
[+] Building 0.4s (6/6) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

However, if we run the image and inspect the output of the env command, we do not see THEARG:

$ docker run --rm argtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d19f59677dcd
HOME=/root

ENV: Defining build and runtime variables

Unlike ARG, the ENV command allows you to define a variable that can be accessed both at build time and run time:

FROM ubuntu:latest
ENV THEENV="bar"
RUN echo $THEENV
CMD ["env"]

If we run the build, we will see the echo bar line in the output:

$ docker build -t envtest .
[+] Building 0.8s (7/7) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo bar                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

If we run the image and inspect the output of the env command, we do see THEENV set, as expected:

$ docker run --rm envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=f53f1d9712a9
THEENV=bar
HOME=/root

Overriding ARG

A more advanced use of the ARG instruction is to serve as a placeholder that is then updated at build time:

FROM ubuntu:latest
ARG THEARG
RUN echo $THEARG
CMD ["env"]

If we build the image, we see that we are missing a value for $THEARG:

$ docker build -t argtest .
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo $THEARG                                                                                           0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

However, we can pass a value for THEARG on the build command line using the --build-arg argument. Notice that we now see THEARG has been replaced with foo in the output:

 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

The same can be done in a Docker Compose file by using the args key under the build key. Note that these can be set as a mapping (THEARG: foo) or a list (- THEARG=foo):

services:
  argtest:
    build:
      context: .
      args:
        THEARG: foo

If we run docker compose up --build, we can see the THEARG has been replaced with foo in the output:

$ docker compose up --build
<-- SNIP -->
 => [argtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [argtest 2/2] RUN echo foo                                                                                0.0s
 => [argtest] exporting to image                                                                                     0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->
Attaching to argtest-1
argtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
argtest-1  | HOSTNAME=d9a3789ac47a
argtest-1  | HOME=/root
argtest-1 exited with code 0

Overriding ENV

You can also override ENV values, although this works slightly differently than it does for ARG. For example, you cannot supply a key without a value in an ENV instruction, as shown in the following example Dockerfile:

FROM ubuntu:latest
ENV THEENV
RUN echo $THEENV
CMD ["env"]

When we try to build the image, we receive an error:

$ docker build -t envtest .
[+] Building 0.0s (1/1) FINISHED                                                                     docker:desktop-linux
 => [internal] load build definition from Dockerfile                                                                 0.0s
 => => transferring dockerfile: 98B                                                                                  0.0s
Dockerfile:3
--------------------
   1 |     FROM ubuntu:latest
   2 |
   3 | >>> ENV THEENV
   4 |     RUN echo $THEENV
   5 |
--------------------
ERROR: failed to solve: ENV must have two arguments

However, we can remove the ENV instruction from the Dockerfile:

FROM ubuntu:latest
RUN echo $THEENV
CMD ["env"]

This allows us to build the image:

$ docker build -t envtest .
<-- SNIP -->
 => [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [2/2] RUN echo $THEENV                                                                                    0.0s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

Then we can pass an environment variable via the docker run command using the -e flag:

$ docker run --rm -e THEENV=bar envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=638cf682d61f
THEENV=bar
HOME=/root

Although the .env file is usually associated with Docker Compose, it can also be used with docker run.

$ cat .env
THEENV=bar

$ docker run --rm --env-file ./.env envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=59efe1003811
THEENV=bar
HOME=/root

This can also be done in Docker Compose by using the environment key. Note that we use variable interpolation (${THEENV}) for the value:

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV}

If we do not supply a value for THEENV, a warning is thrown:

$ docker compose up --build
WARN[0000] The "THEENV" variable is not set. Defaulting to a blank string.
<-- SNIP -->
 => [envtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [envtest 2/2] RUN echo ${THEENV}                                                                          0.0s
 => [envtest] exporting to image                                                                                     0.0s
<-- SNIP -->
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=816d164dc067
envtest-1  | THEENV=
envtest-1  | HOME=/root
envtest-1 exited with code 0

The value for our variable can be supplied in several different ways, as follows:

  • On the compose command line:
$ THEENV=bar docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the shell environment on the host system:
$ export THEENV=bar
$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the special .env file:
$ cat .env
THEENV=bar

$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0

Finally, when running services directly using docker compose run, you can use the -e flag to override the .env file.

$ docker compose run -e THEENV=bar envtest

[+] Creating 1/0
 ✔ Synchronized File Shares                                                                                          0.0s
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=219e96494ddd
TERM=xterm
THEENV=bar
HOME=/root

The tl;dr

If you need to access a variable during the build process but not at runtime, use ARG. If you need to access the variable both during the build and at runtime, or only at runtime, use ENV.

To decide between them, consider the following flow (Figure 1):

Figure 1: Decision flow for choosing between ARG and ENV during the build process.

Both ARG and ENV can be overridden from the command line in docker run and docker compose, giving you a powerful way to dynamically update variables and build flexible workflows.

Learn more

How Docker IT Streamlined Docker Desktop Deployment Across the Global Team

16 October 2024 at 20:30

At Docker, innovation and efficiency are integral to how we operate. When our own IT team needed to deploy Docker Desktop to various teams, including non-engineering roles like customer support and technical sales, the existing process was functional but manual and time-consuming. Recognizing the need for a more streamlined and secure approach, we leveraged new Early Access (EA) Docker Business features to refine our deployment strategy.


A seamless deployment process

Faced with the challenge of managing diverse requirements across the organization, we knew it was time to enhance our deployment methods.

The Docker IT team transitioned from using registry.json files to a more efficient method involving registry keys and new MSI installers for Windows, along with configuration profiles and PKG installers for macOS. This transition simplified deployment, provided better control for admins, and allowed for faster rollouts across the organization.

“From setup to deployment, it took 24 hours. We started on a Monday morning, and by the next day, it was done,” explains Jeffrey Strauss, Head of Docker IT. 

Enhancing security and visibility

Security is always a priority. By integrating login enforcement with single sign-on (SSO) and System for Cross-domain Identity Management (SCIM), Docker IT ensured centralized control and compliance with security policies. The Docker Desktop Insights Dashboard (EA) offered crucial visibility into how Docker Desktop was being used across the organization. Admins could now see which versions were installed and monitor container usage, enabling informed decisions about updates, resource allocation, and compliance. (Docker Business customers can learn more about access and timelines by contacting their account reps. The Insights Dashboard is only available to Docker Business customers with enforced authentication for organization users.)

Steven Novick, Docker’s Principal Product Manager, emphasized, “With the new solution, deployment was simpler and tamper-proof, giving a clear picture of Docker usage within the organization.”

Benefits beyond deployment

The improvements made by Docker IT extended beyond just deployment efficiency:

  • Improved visibility: The Insights Dashboard provided detailed data on Docker usage, helping ensure all users are connected to the organization.
  • Efficient deployment: Docker Desktop was deployed to hundreds of computers within 24 hours, significantly reducing administrative overhead.
  • Enhanced security: Centralized control to enforce authentication via MDM tools like Intune for Windows and Jamf for macOS strengthened security and compliance.
  • Seamless user experience: Early and transparent communication ensured a smooth transition, minimizing disruptions.

Looking ahead

The successful deployment of Docker Desktop within 24 hours demonstrates Docker’s commitment to continuous improvement and innovation. We are excited about the future developments in Docker Desktop management and look forward to supporting our customers as they achieve their goals with Docker. 

Existing Docker Business customers can learn more about access and timelines by contacting their account reps. The Insights Dashboard is only available in Early Access to select Docker Business customers with enforced authentication for organization users.

Curious about how Docker’s new features can benefit your team? Get in touch to discover more or explore our customer stories to see how others are succeeding with Docker.

Learn more

Introducing Organization Access Tokens

16 October 2024 at 00:33

In the past, securely managing access to organization resources has been difficult. The only way to gain access has been through an assigned user’s personal access tokens. Whether these users are your engineers’ accounts, bot accounts, or service accounts, they often become points of risk for your organization.

Now, we’re pleased to introduce a long-awaited feature: organization access tokens.

Organization access tokens are like personal access tokens, but at an organizational level with many improvements and features. In this post, we walk through a few reasons why this feature release is so exciting.


Frictionless management

Every day, we are reducing the friction for organizations and engineers using our products. We want you working on your projects, not managing your development tools. 

Organization access tokens do not require you to manage groups and repository assignments the way user accounts do. This means you get a straightforward way to manage the access each token has, instead of managing users and their placement within the organization.

If your organization has SSO enabled and enforced, you have likely run into the issue where machine or service accounts cannot log in easily because they don’t have the ability to log into your identity provider. With organization access tokens, this is no longer a problem.

Did someone leave your organization? No problem! With organization access tokens, you are still in control of the token instead of having to track down which tokens were on that user’s account and deal with the resulting challenges.

Fine-grained access

Organization access tokens introduce a new way to allow for tokens to access resources within your organization. These tokens can be assigned to specific repositories with specific actions for full access management with “least privilege” applied. Of course, you can also allow access to all resources in your organization.

Expirations

Another critical feature is the ability to set expirations for your organization access tokens. This is great for customers who have compliance requirements for token rotation or for those who just like the extra security.

Visibility

Management and registry actions all show up in your organization’s activity logs for each access token. Each token’s usage also shows up on your organization’s usage reports.

Business use cases and fair use

We believe that organization access tokens are most useful in the context of teams and companies, which is why we are making them available to Docker Team and Docker Business subscribers. With our usual attention to security, and to avoid misuse through an uncontrolled proliferation of access tokens, we are also capping the number of organization access tokens per subscription: 10 for Team plans and 100 for Business plans.

Try organization access tokens

If you are on a Docker Team or Docker Business subscription, check out our documentation to learn more about using organization access tokens.

Learn more
