
How Docker Streamlines the Onboarding Process and Sets Up Developers for Success

By: Yiwen Xu
22 January 2025 at 21:00

Nearly half (45%) of developers say they don’t have enough time for learning and development, according to a developer experience research study by Harness and Wakefield Research. Additionally, developer onboarding is a slow and painful process, with 71% of executive buyers saying that onboarding new developers takes at least two months. 

To accelerate innovation and bring products to market faster, organizations must empower developers with robust support and intuitive guardrails, enabling them to succeed within a structured yet flexible environment. That’s where Docker fits in: We help developers onboard quickly and help organizations set up the right guardrails to give developers the flexibility to innovate within the boundaries of company policies. 

Setting up developer teams for success 

Docker is recognized as one of the most used, desired, and admired developer tools, making it an essential component of any development team’s toolkit. Teams can quickly get developers who are new to Docker up and running with its integrated development workflows, verified secure content, and accessible learning resources and community support.

Streamlined developer onboarding

When new developers join a team, Docker Desktop can significantly reduce the time and effort required to set up their development environments. Docker Desktop integrates seamlessly with popular IDEs, such as Visual Studio Code, allowing developers to containerize directly within familiar tools, accelerating learning within their usual workflows. Docker Extensions expand Docker Desktop’s capabilities and establish new functionalities, integrating developers’ favorite development tools into their application development and deployment workflows. 

Developers can also use the Docker extension for GitHub Copilot for seamless onboarding, with assistance for containerizing applications, generating Docker assets, and analyzing project vulnerabilities. In fact, the Docker extension is a top choice among developers on GitHub Copilot’s extension leaderboard, as highlighted by Visual Studio Magazine.

Docker Build Cloud integrates with Docker Compose and CI workflows, making it a seamless transition for dev teams. Verified content on Docker Hub gives developers preconfigured, trusted images, reducing setup time and ensuring a secure foundation as they onboard onto projects. 

Docker Scout provides actionable insights and recommendations, allowing developers to enhance their container security awareness, scan for vulnerabilities, and improve security posture with real-time feedback. And, Testcontainers Cloud lets developers run reliable integration tests, with real dependencies defined in code. With these tools, developers can be confident about delivering high-quality and reliable apps and experiences in production.  

Continuous learning with accessible knowledge resources

Continuous learning is a priority for Docker, with a wide range of accessible resources and tools designed to help developers deepen their knowledge and stay current in their containerization journey.

Docker Docs offers beginner-friendly guides, tutorials, and AI tools to guide developers through foundational concepts, empowering them to quickly build their container skills. Our collection of guides takes developers step by step to learn how Docker can optimize development workflows and how to use it with specific languages, frameworks, or technologies.

Docker Hub’s AI Catalog empowers developers to discover, pull, and integrate AI models into their workflows, bridging the gap between innovation and implementation. 

Docker also offers regular webinars and tech talks that help developers stay updated on new features and best practices and provide a platform to discuss real-world challenges. If you’re a Docker Business customer, you can even request additional, customized training from our Docker experts. 

Docker’s partnerships with educational platforms and organizations, such as Udemy Training and LinkedIn Learning, ensure developers have access to comprehensive training — from beginner tutorials to advanced containerization topics.

Docker’s global developer community

One of Docker’s greatest strengths is its thriving global developer community, offering organizations a unique advantage by connecting them with a wealth of shared expertise, resources, and real-world solutions.

With more than 20 million monthly active users, Docker’s community forums and events foster vibrant collaboration, giving developers access to a collective knowledge base that spans industries and expertise levels. Developers can ask questions, solve challenges, and gain insights from a diverse range of peers — from beginners to seasoned experts. Whether you’re troubleshooting an issue or exploring best practices, the Docker community ensures you’re never working in isolation.

A key pillar of this ecosystem is the Docker Captains program — a network of experienced and passionate Docker advocates who are leaders in their fields. Captains share technical knowledge through blog posts, videos, webinars, and workshops, giving businesses and teams access to curated expertise that accelerates onboarding and productivity.

Beyond forums and the Docker Captains program, Docker’s community-driven events, such as meetups and virtual workshops (Figure 1), provide developers with direct access to real-world use cases, innovative workflows, and emerging trends. These interactions foster continuous learning and help developers and their organizations keep pace with the ever-evolving software development landscape.

Figure 1: Docker DevTools Day 1.0 Meetup in Singapore.

For businesses, tapping into Docker’s extensive community means access to a vast pool of knowledge, support, and inspiration, which is a critical asset in driving developer productivity and innovation.

Empowering developers with enhanced user management and security

In previous articles, we looked at how Docker simplifies complexity and boosts developer productivity (the right tool) and how to unlock efficiency with Docker for AI and cloud-native development (the right process).

To scale and standardize app development processes across the entire company, you also need to have the right guardrails in place for governance, compliance, and security, which is often handled through enterprise control and admin management tools. Ideally, organizations provide guardrails without being overly prescriptive or slowing developer productivity and innovation. 

Modern enterprises require a layered security approach, beginning with trusted content as the foundation for building robust and compliant applications. This gives your dev teams a solid base for building securely from the start. 

Throughout the software development process, you need a secure platform. For regulated industries like finance and the public sector, this means fortified dev environments. Security vulnerability analysis and policy evaluation tools also help inform improvements and remediation. 

Additionally, you need enterprise controls and dashboards that ensure enterprise IT and security teams can confidently monitor and manage risk. 

Setting up the right guardrails 

Docker provides a number of admin tools to safeguard your software with integrated container security in the Docker Business plan. Our goal is to improve security and compliance of developer environments with minimal impact on developer experience or productivity. 

Centralized settings for improved dev environment security 

Docker provides developer teams with access to a vast library of trusted and certified application content, including Docker Official Images, Docker Verified Publisher, and Docker Trusted Open Source content. When you couple this trusted content with advanced image and registry management rules — enforced through tools like Image Access Management and Registry Access Management — you can ensure that your developers only use software that satisfies your company’s security policies. 

With a solid foundation to build securely from the start, your organization can further enhance security throughout the software development process. Docker ensures software supply chain integrity through vulnerability scanning and image analysis with Docker Scout. Rapid remediation capabilities paired with detailed CVE reporting help developers quickly find and fix vulnerabilities, resulting in speedy time to resolution.
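If your developers use the Docker Scout CLI, a few commands give a quick feel for this workflow (the image name here is a placeholder):

# High-level security snapshot of an image
docker scout quickview your-org/your-app:latest

# Detailed list of CVEs found in the image
docker scout cves your-org/your-app:latest

# Suggested base-image updates and other remediation options
docker scout recommendations your-org/your-app:latest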

Although containers are generally secure, container development tools still must be properly secured to reduce the risk of security breaches in the developer’s environment. Hardened Docker Desktop is an example of Docker’s fortified development environments with enhanced container isolation. It lets you enforce strict security settings and prevent developers and their containers from bypassing these controls. With air-gapped containers, you can further restrict containers from accessing network resources, limiting where data can be uploaded to or downloaded from.

Continuously monitoring and managing risks

With the Admin Console and Docker Desktop Insights, IT administrators and security teams can visualize and understand how Docker is used within their organizations and manage the implementation of organizational configurations and policies (Figure 2). 

These insights help teams streamline processes and improve efficiency. For example, you can enforce sign-in so that developers use an account associated with your organization. This step ensures that developers receive the benefits of your Docker subscription and work within the boundaries of company policies. 
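One documented way to enforce sign-in is a registry.json file that lists your allowed organizations; Docker Desktop then requires the developer to sign in to one of them. The file location is platform-specific (for example, /Library/Application Support/com.docker/registry.json on macOS — check the Docker docs for your platform), and its contents are minimal:

{
  "allowedOrgs": ["your-organization-name"]
}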

Figure 2: Docker Desktop Insights Dashboard provides information on product usage.

For business and engineering leaders, full visibility and governance over the development process help ensure compliance and mitigate risk while driving developer productivity. 

Unlock innovation with Docker’s development suite

Docker is the leading suite of tools purpose-built for cloud-native development, combining a best-in-class developer experience with enterprise-grade security and governance. With Docker, your organization can streamline onboarding, foster innovation, and maintain robust compliance — all while empowering your teams to deliver impactful solutions to market faster and more securely. 

Explore the Docker Business plan today and unlock the full potential of your development processes.

Mastering Docker and Jenkins: Build Robust CI/CD Pipelines Efficiently

16 January 2025 at 20:17

Hey there, fellow engineers and tech enthusiasts! I’m excited to share one of my favorite strategies for modern software delivery: combining Docker and Jenkins to power up your CI/CD pipelines. 

Throughout my career as a Senior DevOps Engineer and Docker Captain, I’ve found that these two tools can drastically streamline releases, reduce environment-related headaches, and give teams the confidence they need to ship faster.

In this post, I’ll walk you through what Docker and Jenkins are, why they pair perfectly, and how you can build and maintain efficient pipelines. My goal is to help you feel right at home when automating your workflows. Let’s dive in.

Brief overview of continuous integration and continuous delivery

Continuous integration (CI) and continuous delivery (CD) are key pillars of modern development. If you’re new to these concepts, here’s a quick rundown:

  • Continuous integration (CI): Developers frequently commit their code to a shared repository, triggering automated builds and tests. This practice prevents conflicts and ensures defects are caught early.
  • Continuous delivery (CD): With CI in place, organizations can then confidently automate releases. That means shorter release cycles, fewer surprises, and the ability to roll back changes quickly if needed.

Leveraging CI/CD can dramatically improve your team’s velocity and quality. Once you experience the benefits of dependable, streamlined pipelines, there’s no going back.

Why combine Docker and Jenkins for CI/CD?

Docker allows you to containerize your applications, creating consistent environments across development, testing, and production. Jenkins, on the other hand, helps you automate tasks such as building, testing, and deploying your code. I like to think of Jenkins as the tireless “assembly line worker,” while Docker provides identical “containers” to ensure consistency throughout your project’s life cycle.

Here’s why blending these tools is so powerful:

  • Consistent environments: Docker containers guarantee uniformity from a developer’s laptop all the way to production. This consistency reduces errors and eliminates the dreaded “works on my machine” excuse.
  • Speedy deployments and rollbacks: Docker images are lightweight. You can ship or revert changes at the drop of a hat — perfect for short release cycles where minimal downtime is crucial.
  • Scalability: Need to run 1,000 tests in parallel or support multiple teams working on microservices? No problem. Spin up multiple Docker containers whenever you need more build agents, and let Jenkins orchestrate everything with Jenkins pipelines.

For a DevOps junkie like me, this synergy between Jenkins and Docker is a dream come true.

Setting up your CI/CD pipeline with Docker and Jenkins

Before you roll up your sleeves, let’s cover the essentials you’ll need:

  • Docker Desktop (or a Docker server environment) installed and running. You can get Docker for various operating systems.
  • Jenkins downloaded from Docker Hub or installed on your machine. These days, you’ll want jenkins/jenkins:lts (the long-term support image) rather than the deprecated library/jenkins image — see the example run command after this list.
  • Proper permissions for Docker commands and the ability to manage Docker images on your system.
  • A GitHub or similar code repository where you can store your Jenkins pipeline configuration (optional, but recommended).
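If you take the Docker Hub route, a typical way to run Jenkins in a container looks like this (the named volume persists Jenkins data across restarts):

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts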

Pro tip: If you’re planning a production setup, consider a container orchestration platform like Kubernetes. This approach simplifies scaling Jenkins, updating Jenkins, and managing additional Docker servers for heavier workloads.

Building a robust CI/CD pipeline with Docker and Jenkins

After prepping your environment, it’s time to create your first Jenkins-Docker pipeline. Below, I’ll walk you through common steps for a typical pipeline — feel free to modify them to fit your stack.

1. Install necessary Jenkins plugins

Jenkins offers countless plugins, so let’s start with a few that make configuring Jenkins with Docker easier:

  • Docker Pipeline Plugin
  • Docker
  • CloudBees Docker Build and Publish

How to install plugins:

  1. Open Manage Jenkins > Manage Plugins in Jenkins.
  2. Click the Available tab and search for the plugins listed above.
  3. Install them (and restart Jenkins if needed).

Code example (plugin installation via CLI):

# Install plugins using the Jenkins CLI
# Note: the CLI takes plugin IDs, which can differ from display names —
# for example, the Docker Pipeline plugin's ID is docker-workflow
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-workflow
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-plugin
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-build-publish

Pro tip (advanced approach): If you’re aiming for a fully infrastructure-as-code setup, consider using Jenkins configuration as code (JCasC). With JCasC, you can declare all your Jenkins settings — including plugins, credentials, and pipeline definitions — in a YAML file. This means your entire Jenkins configuration is version-controlled and reproducible, making it effortless to spin up fresh Jenkins instances or apply consistent settings across multiple environments. It’s especially handy for large teams looking to manage Jenkins at scale.
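As a rough sketch of the shape a JCasC file takes, here’s a minimal jenkins.yaml (the names and values are illustrative — consult the JCasC plugin documentation for the full schema):

jenkins:
  systemMessage: "Configured as code — do not edit via the UI"
  numExecutors: 2

credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: dockerhub-credentials
              username: your-dockerhub-user
              password: ${DOCKERHUB_TOKEN}  # injected from an environment variable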

2. Set up your Jenkins pipeline

In this step, you’ll define your pipeline. A Jenkins “pipeline” job uses a Jenkinsfile (stored in your code repository) to specify the steps, stages, and environment requirements.

Example Jenkinsfile:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/your-org/your-repo.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    dockerImage = docker.build("your-org/your-app:${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Test') {
            steps {
                sh "docker run --rm your-org/your-app:${env.BUILD_NUMBER} ./run-tests.sh"
            }
        }
        stage('Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-credentials') {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}

Let’s look at what’s happening here:

  1. Checkout: Pulls your repository.
  2. Build: Builds a Docker image (your-org/your-app) tagged with the build number.
  3. Test: Runs your test suite inside a fresh container, ensuring Docker containers create consistent environments for every test run.
  4. Push: Pushes the image to your Docker registry (e.g., Docker Hub) if the tests pass.

Reference: Jenkins Pipeline Documentation.

3. Configure Jenkins for automated builds

Now that your pipeline is set up, you’ll want Jenkins to run it automatically:

  • Webhook triggers: Configure your source control (e.g., GitHub) to send a webhook whenever code is pushed. Jenkins will kick off a build immediately.
  • Poll SCM: Jenkins periodically checks your repo for new commits and starts a build if it detects changes.

Which trigger method should you choose?

  • Webhook triggers are ideal if you want near real-time builds. As soon as you push to your repo, Jenkins is notified, and a new build starts almost instantly. This approach is typically more efficient, as Jenkins doesn’t have to continuously check your repository for updates. However, it requires that your source control system and network environment support webhooks.
  • Poll SCM is useful if your environment can’t support incoming webhooks — for example, if you’re behind a corporate firewall or your repository isn’t configured for outbound hooks. In that case, Jenkins routinely checks for new commits on a schedule you define (e.g., every five minutes), which can add a small delay and extra overhead but may simplify setup in locked-down environments (see the triggers snippet after this list).
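In a Declarative Pipeline, polling is declared with a triggers block. A minimal sketch:

pipeline {
    agent any
    triggers {
        // Check for new commits roughly every five minutes ("H" spreads polling load)
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Triggered by a new commit'
            }
        }
    }
}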

Personal experience: I love webhook triggers because they keep everything as close to real-time as possible. Polling works fine if webhooks aren’t feasible, but you’ll see a slight delay between code pushes and build starts. It can also generate extra network traffic if your polling interval is too frequent.

4. Build, test, and deploy with Docker containers

Here comes the fun part — automating the entire cycle from build to deploy:

  1. Build Docker image: After pulling the code, Jenkins calls docker.build to create a new image.
  2. Run tests: Automated unit or acceptance tests run inside a container spun up from that image, ensuring consistency.
  3. Push to registry: Assuming tests pass, Jenkins pushes the tagged image to your Docker registry — this could be Docker Hub or a private registry.
  4. Deploy: Optionally, Jenkins can then deploy the image to a remote server or a container orchestrator (Kubernetes, etc.).

This streamlined approach ensures every step — build, test, deploy — lives in one cohesive pipeline, preventing those “where’d that step go?” mysteries.
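The example Jenkinsfile above stops at the push stage. As a sketch, a deploy stage targeting a single Docker host might look like the following (the container name and ports are illustrative — adapt it to your environment or orchestrator):

stage('Deploy') {
    steps {
        sh "docker pull your-org/your-app:${env.BUILD_NUMBER}"
        // Replace the running container with the freshly tested image
        sh "docker rm -f my-app || true"
        sh "docker run -d --name my-app -p 80:8080 your-org/your-app:${env.BUILD_NUMBER}"
    }
}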

5. Optimize and maintain your pipeline

Once your pipeline is up and running, here are a few maintenance tips and enhancements to keep everything running smoothly:

  • Clean up images: Routine cleanup of Docker images can reclaim disk space and reduce clutter (see the prune commands after this list).
  • Security updates: Stay on top of updates for Docker, Jenkins, and any plugins. Applying patches promptly helps protect your CI/CD environment from vulnerabilities.
  • Resource monitoring: Ensure Jenkins nodes have enough memory, CPU, and disk space for builds. Overworked nodes can slow down your pipeline and cause intermittent failures.
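For routine cleanup, a couple of prune commands cover most cases — be careful with -a on shared build machines, since it removes every image not referenced by a container:

# Remove unused images older than 7 days (168 hours)
docker image prune -af --filter "until=168h"

# Also reclaim stopped containers, unused networks, and dangling build cache
docker system prune -f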

Pro tip: In large projects, consider separating your build agents from your Jenkins controller by running them in ephemeral Docker containers (also known as Jenkins agents). If an agent goes down or becomes stale, you can quickly spin up a fresh one — ensuring a clean, consistent environment for every build and reducing the load on your main Jenkins server.

Why use Declarative Pipelines for CI/CD?

Although Jenkins supports multiple pipeline syntaxes, Declarative Pipelines stand out for their clarity and resource-friendly design. Here’s why:

  • Simplified, opinionated syntax: Everything is wrapped in a single pipeline { ... } block, which minimizes “scripting sprawl.” It’s perfect for teams who want a quick path to best practices without diving deeply into Groovy specifics.
  • Easier resource allocation: By specifying an agent at either the pipeline level or within each stage, you can offload heavyweight tasks (builds, tests) onto separate worker nodes or Docker containers. This approach helps prevent your main Jenkins controller from becoming overloaded.
  • Parallelization and matrix builds: If you need to run multiple test suites or support various OS/browser combinations, Declarative Pipelines make it straightforward to define parallel stages or set up a matrix build. This tactic is incredibly handy for microservices or large test suites requiring different environments in parallel.
  • Built-in “escape hatch”: Need advanced Groovy features? Just drop into a script block. This lets you access Scripted Pipeline capabilities for niche cases, while still enjoying Declarative’s streamlined structure most of the time.
  • Cleaner parameterization: Want to let users pick which tests to run or which Docker image to use? The parameters directive makes your pipeline more flexible. A single Jenkinsfile can handle multiple scenarios — like unit vs. integration testing — without duplicating stages.

Declarative Pipeline examples

Below are sample pipelines to illustrate how declarative syntax can simplify resource allocation and keep your Jenkins controller healthy.

Example 1: Basic Declarative Pipeline

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
}
  • Runs on any available Jenkins agent (worker).
  • Uses two stages in a simple sequence.

Example 2: Stage-level agents for resource isolation

pipeline {
    agent none  // Avoid using a global agent at the pipeline level
    stages {
        stage('Build') {
            agent { docker 'maven:3.9.3-eclipse-temurin-17' }
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            agent { docker 'openjdk:17-jdk' }
            steps {
                sh 'java -jar target/my-app-tests.jar'
            }
        }
    }
}
  • Each stage runs in its own container, preventing any single node from being overwhelmed.
  • agent none at the top ensures no global agent is allocated unnecessarily.

Example 3: Parallelizing test stages

pipeline {
    agent none
    stages {
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-unit-tests.sh'
                    }
                }
                stage('Integration Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-integration-tests.sh'
                    }
                }
            }
        }
    }
}
  • Splits tests into two parallel stages.
  • Each stage can run on a different node or container, speeding up feedback loops.

Example 4: Parameterized pipeline

pipeline {
    agent any

    parameters {
        choice(name: 'TEST_TYPE', choices: ['unit', 'integration', 'all'], description: 'Which test suite to run?')
    }

    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            when {
                expression { return params.TEST_TYPE == 'unit' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running unit tests...'
            }
        }
        stage('Integration') {
            when {
                expression { return params.TEST_TYPE == 'integration' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running integration tests...'
            }
        }
    }
}
  • Lets you choose which tests to run (unit, integration, or both).
  • Only executes relevant stages based on the chosen parameter, saving resources.

Example 5: Matrix builds

pipeline {
    agent none

    stages {
        stage('Build and Test Matrix') {
            matrix {
                agent {
                    label "${PLATFORM}-docker"
                }
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                    axis {
                        name 'BROWSER'
                        values 'chrome', 'firefox'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Build on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                    stage('Test') {
                        steps {
                            echo "Test on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                }
            }
        }
    }
}
  • Defines a matrix of PLATFORM x BROWSER, running each combination in parallel.
  • Perfect for testing multiple OS/browser combinations without duplicating pipeline logic.

Using Declarative Pipelines helps ensure your CI/CD setup is easier to maintain, scalable, and secure. By properly configuring agents — whether Docker-based or label-based — you can spread workloads across multiple worker nodes, minimize resource contention, and keep your Jenkins controller humming along happily.

Best practices for CI/CD with Docker and Jenkins

Ready to supercharge your setup? Here are a few tried-and-true habits I’ve cultivated:

  • Leverage Docker’s layer caching: Optimize your Dockerfiles so stable (less frequently changing) layers appear early. This drastically reduces build times (see the Dockerfile snippet after this list).
  • Run tests in parallel: Jenkins can run multiple containers for different services or microservices, letting you test them side by side. Declarative Pipelines make it easy to define parallel stages, each on its own agent.
  • Shift left on security: Integrate security checks early in the pipeline. Tools like Docker Scout let you scan images for vulnerabilities, while Jenkins plugins can enforce compliance policies. Don’t wait until production to discover issues.
  • Optimize resource allocation: Properly configure CPU and memory limits for Jenkins and Docker containers to avoid resource hogging. If you’re scaling Jenkins, distribute builds across multiple worker nodes or ephemeral agents for maximum efficiency.
  • Configuration management: Store Jenkins jobs, pipeline definitions, and plugin configurations in source control. Tools like Jenkins Configuration as Code simplify versioning and replicating your setup across multiple Docker servers.
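For example, a Node.js Dockerfile ordered for cache efficiency might look like this sketch (adjust the base image and entry point to your stack):

FROM node:20-slim
WORKDIR /app

# Dependency manifests change rarely: copy them first so this layer stays cached
COPY package.json package-lock.json ./
RUN npm ci

# Application source changes often: copy it last so edits don't bust the dependency cache
COPY . .
CMD ["node", "server.js"]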

With these strategies — plus a healthy dose of Declarative Pipelines — you’ll have a lean, high-octane CI/CD pipeline that’s easier to maintain and evolve.

Troubleshooting Docker and Jenkins pipelines

Even the best systems hit a snag now and then. Here are a few hurdles I’ve seen (and conquered):

  • Handling environment variability: Keep Docker and Jenkins versions synced across different nodes. If multiple Jenkins nodes are in play, standardize Docker versions to avoid random build failures.
  • Troubleshooting build failures: Use docker logs -f <container-id> to see exactly what happened inside a container. Often, the logs reveal missing dependencies or misconfigured environment variables.
  • Networking challenges: If your containers need to talk to each other — especially across multiple hosts — make sure you configure Docker networks or an orchestration platform properly. Read Docker’s networking documentation for details, and check out the Jenkins diagnosing issues guide for more troubleshooting tips.

Conclusion

Pairing Docker and Jenkins offers a nimble, robust approach to CI/CD. Docker locks down consistent environments and lightning-fast rollouts, while Jenkins automates key tasks like building, testing, and pushing your changes to production. When these two are in harmony, you can expect shorter release cycles, fewer integration headaches, and more time to focus on developing awesome features.

A healthy pipeline also means your team can respond quickly to user feedback and confidently roll out updates — two crucial ingredients for any successful software project. And if you’re concerned about security, there are plenty of tools and best practices to keep your applications safe.

I hope this guide helps you build (and maintain) a high-octane CI/CD pipeline that your team will love. If you have questions or need a hand, feel free to reach out on the community forums, join the conversation on Slack, or open a ticket on GitHub issues. You’ll find plenty of fellow Docker and Jenkins enthusiasts who are happy to help.

Thanks for reading, and happy building!

Protecting the Software Supply Chain: The Art of Continuous Improvement

16 January 2025 at 20:04

Without continuous improvement in software security, you’re not standing still — you’re walking backward into oncoming traffic. Attack vectors multiply, evolve, and look for the weakest link in your software supply chain daily. 

Cybersecurity Ventures forecasts that the global cost of software supply chain attacks will reach nearly $138 billion by 2031, up from $60 billion in 2025 and $46 billion in 2023. A single overlooked vulnerability isn’t just a flaw; it’s an open invitation for compromise, potentially threatening your entire system. The cost of a breach doesn’t stop with your software — it extends to your reputation and customer trust, which are far harder to rebuild. 

Docker’s suite of products offers your team peace of mind. With tools like Docker Scout, you can expose vulnerabilities before they expose you. Continuous image analysis doesn’t just find the cracks; it empowers your teams to seal them from code to production. But Docker Scout is just the beginning. Tools like Docker Hub’s trusted content, Docker Official Images (DOI), Image Access Management (IAM), and Hardened Docker Desktop work together to secure every stage of your software supply chain. 

In this post, we’ll explore how these tools provide built-in security, governance, and visibility, helping your team innovate faster while staying protected. 

Securing the supply chain

Your software supply chain isn’t just an automated sequence of tools and processes. It’s a promise — to your customers, team, and future. Promises are fragile. The cracks can start to show with every dependency, third-party integration, and production push. Tools like Image Access Management help protect your supply chain by providing granular control over who can pull, share, or modify images, ensuring only trusted team members access sensitive assets. Meanwhile, Hardened Docker Desktop ensures developers work in a secure, tamper-proof environment, giving your team confidence that development is aligned with enterprise security standards. The solution isn’t to slow down or second-guess; it’s to continuously improve how you secure your software supply chain — for example, with automated vulnerability scans and trusted content from Docker Hub.

A breach is more than a line item in the budget. Customers ask themselves, “If they couldn’t protect this, what else can’t they protect?” Downtime halts innovation, fines accrue for compliance failures, and engineering effort is rerouted to forensic security analysis. The brand you spent years perfecting could be reduced to a cautionary tale. Regardless of how innovative your product is, it’s not trusted if it’s not secure. 

Organizations must stay prepared by regularly updating their security measures and embracing new technologies to outpace evolving threats. As highlighted in the article Rising Tide of Software Supply Chain Attacks: An Urgent Problem, software supply chain attacks are increasingly targeting critical points in development workflows, such as third-party dependencies and build environments. High-profile incidents like the SolarWinds attack have demonstrated how adversaries exploit trust relationships and weaknesses in widely used components to cause widespread damage. 

Preventing security problems from the start

Preventing attacks like the SolarWinds breach requires prioritizing code integrity and adopting secure software development practices. Tools like Docker Scout seamlessly integrate security into developers’ workflows, enabling proactive identification of vulnerabilities in dependencies and ensuring that trusted components form the backbone of your applications.

Docker Hub’s trusted content and Docker Scout’s policy evaluation features help ensure that your organization uses compliant and secure images. Docker Official Images (DOI) provide a robust foundation for deployments, mitigating risks from untrusted components. To extend this security foundation, Image Access Management allows teams to enforce image-sharing policies and restrict access to sensitive components, preventing accidental exposure or misuse. For local development, Hardened Docker Desktop ensures that developers operate in a secure, enterprise-grade environment, minimizing risks from the outset. This combination of tools enables your engineering team to put out fires and, more importantly, prevent them from starting in the first place.

Building guardrails

Governance isn’t a roadblock; it’s the blueprint for progress. The problem is that some companies treat security like a fire extinguisher — something you grab when things go wrong. That is not a viable strategy in the long run. Real innovation happens when security guardrails are so well-designed that they feel like open highways, empowering teams to move fast without compromising safety. 

A structured policy lifecycle loop — mapping connections, planning changes, deploying cleanly, and retiring the dead weight — turns governance into your competitive edge. Automate it, and you’re not just checking boxes; you’re giving your teams the freedom to move fast and trust the road ahead. 

Continuous improvement on security policy management doesn’t have to feel like a bureaucratic chokehold. Docker provides a streamlined workflow to secure your software supply chain effectively. Docker Scout integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and detailed reports and recommendations to help teams address issues before code reaches production. 

With the introduction of Docker Health Scores — a security grading system for container images — teams gain a clear and actionable snapshot of their image security posture. These scores empower developers to prioritize remediation efforts and continuously improve their software’s security from code to production.

Keeping up with continuous improvement

Security threats aren’t slowing down. New attack vectors and vulnerabilities emerge every day. With cybercrime costs expected to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028, organizations face a critical choice: adapt to this evolving threat landscape or risk falling behind, exposing themselves to escalating costs and reputational damage. Continuous improvement in software security isn’t a luxury; it’s how you build and maintain customer trust, so customers know that every fresh deployment is better than the one that came before. Otherwise, expect the high costs of the software supply chain attacks that will inevitably follow. 

Best practices for securing the software supply chain involve integrating vulnerability scans early in the development lifecycle, leveraging verified content from trusted sources, and implementing governance policies to ensure consistent compliance standards without manual intervention. Continuous monitoring of vulnerabilities and enforcing runtime policies help maintain security at scale, adapting to the dynamic nature of modern software ecosystems.

Start today

Securing your software supply chain is a journey of continuous improvement. With Docker’s tools, you can empower your teams to build and deploy software securely, ensuring vulnerabilities are addressed before they become liabilities.

Don’t wait until vulnerabilities turn into liabilities. Explore Docker Hub, Docker Scout, Hardened Docker Desktop, and Image Access Management to embed security into every stage of development. From granular control over image access to tamper-proof local environments, Docker’s suite of tools helps safeguard your innovation, protect your reputation, and empower your organization to thrive in a dynamic ecosystem.

Learn more

  • Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.
  • Docker Health Scores: A security grading system for container images, offering teams clear insights into their image security posture.
  • Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.
  • Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for your containerized applications.
  • Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.
  • Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.

Simplify AI Development with the Model Context Protocol and Docker

15 January 2025 at 20:07

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

In December, we published The Model Context Protocol: Simplifying Building AI apps with Anthropic Claude Desktop and Docker. Along with the blog post, we also created Docker versions for each of the reference servers from Anthropic and published them to a new Docker Hub mcp namespace.

This provides lots of ways for you to experiment with new AI capabilities using nothing but Docker Desktop.

For example, to extend Claude Desktop to use Puppeteer, update your claude_desktop_config.json file with the following snippet:

"puppeteer": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true",          "mcp/puppeteer"]
  }

After restarting Claude Desktop, you can ask Claude to take a screenshot of any URL using a Headless Chromium browser running in Docker.

You can do the same thing for a Model Context Protocol (MCP) server that you’ve written. You will then be able to distribute this server to your users without requiring them to have anything besides Docker Desktop.

How to create an MCP server Docker image

An MCP server can be written in any language. However, most of the examples, including the set of reference servers from Anthropic, are written in either Python or TypeScript and use one of the official SDKs documented on the MCP site.

For typical uv-based Python projects (projects with a pyproject.toml and uv.lock in the root), or npm TypeScript projects, it’s simple to distribute your server as a Docker image.

  1. If you don’t already have Docker Desktop, sign up for a free Docker Personal subscription so that you can push your images to Docker Hub.
  2. Run docker login from your terminal.
  3. Copy either this npm Dockerfile or this Python Dockerfile template into the root of your project. The Python Dockerfile will need at least one update to the last line.
  4. Run the build with the Docker CLI (instructions below).

The two Dockerfiles shown above are just templates. If your MCP server includes other runtime dependencies, you can update the Dockerfiles to include these additions. The runtime of your MCP server should be self-contained for easy distribution.
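To give a feel for the shape of such a template, here’s a minimal sketch for a uv-based Python project (the my-mcp-server entry point is hypothetical — match it to the script defined in your pyproject.toml):

FROM python:3.12-slim
WORKDIR /app

# Install uv, then the project's locked dependencies
RUN pip install --no-cache-dir uv
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev

COPY . .

# Local MCP clients communicate with the server over stdio
CMD ["uv", "run", "my-mcp-server"]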

If you don’t have an MCP server ready to distribute, you can use a simple mcp-hello-world project to practice. It’s a simple Python codebase containing a server with one tool call. Get started by forking the repo, cloning it to your machine, and then following the instructions below to build the MCP server image.

Building the image

Most sample MCP servers are still designed to run locally (on the same machine as the MCP client, communicating over stdio). Over the next few months, you’ll begin to see more clients supporting remote MCP servers, but for now, you need to plan for your server running on at least two different architectures (amd64 and arm64). This means that you should always distribute what we call multi-platform images when your target is local MCP servers. Fortunately, this is easy to do.

Create a multi-platform builder

The first step is to create a local builder that will be able to build both platforms. Don’t worry; this builder will use emulation to build the platforms that you don’t have. See the multi-platform documentation for more details.

docker buildx create \
  --name mcp-builder \
  --driver docker-container \
  --bootstrap

Build and push the image

In the command line below, substitute <your-docker-account> with your Docker account name and mcp-server-name with the name of your server, then run the build and push the image to your account.

docker buildx build \
  --builder=mcp-builder \
  --platform linux/amd64,linux/arm64 \
  -t <your-docker-account>/mcp-server-name \
  --push .

Extending Claude Desktop

Once the image is pushed, your users will be able to attach your MCP server to Claude Desktop by adding an entry to claude_desktop_config.json that looks something like:

"your-server-name": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "--pull=always",
             "your-account/your-server-name"]
  }

This is a minimal set of arguments. You may want to pass in additional command-line arguments, environment variables, or volume mounts.

Next steps

The MCP protocol gives us a standard way to extend AI applications. Make sure your extension is easy to distribute by packaging it as a Docker image. Check out the Docker Hub mcp namespace for examples that you can try out in Claude Desktop today.

As always, feel free to follow along in our public repo.

For more on what we’re doing at Docker, subscribe to our newsletter.


Meet Gordon: An AI Agent for Docker

13 January 2025 at 21:20

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

In previous articles, we focused on how AI-based tools can help developers streamline tasks and offered ideas for enabling agentic workflows, like reviewing branches and understanding code changes.

In this article, we’ll explore our experiments around the idea of creating a Docker AI Agent — something that could both help new users learn about our tools and products and help power users get things done faster.

During our explorations around this Docker Agent and AI-based tools, we noticed that the main pain points we encountered were often the same:

  • LLMs need good context to provide good answers (garbage in -> garbage out).
  • Using AI tools often requires context switching (moving to another app, to a different website, etc.).
  • We’d like agents to be able to suggest and perform actions on behalf of the users.
  • Direct product integrations with AI are often more satisfying to use than chat interfaces.

At first, we tried to see what’s possible using off-the-shelf services like ChatGPT or Claude. 

By using testing prompts such as “optimize the following Dockerfile, following all best practices” and providing the model with a sub-par but common Dockerfile, we could sometimes get decent answers. Often, though, the resulting Dockerfile had subtle bugs, hallucinations, or simply wasn’t optimized or didn’t use many of the best practices we would’ve hoped for. Thus, this approach was not reliable enough.

Data ended up being the main issue. Training data for LLMs is always outdated by some amount of time, and the bad Dockerfiles you can find online vastly outnumber the up-to-date ones that follow current best practices.

After doing proof-of-concept tests using a RAG approach, including some documents with lots of useful advice for creating good Dockerfiles, we realized that the AI Agent idea was definitely possible. However, setting up all the things required for a good RAG would’ve taken too much bandwidth from our small team.

Because of this, we opted to use kapa.ai for that specific part of our agent. Docker already uses them to provide the AI docs assistant on Docker docs, so most of our high-quality documentation is already available for us to reference as part of our LLM usage through their service. Using kapa.ai let us experiment more, get high-quality results faster, and try different ideas around the AI agent concept.

Enter Gordon

Out of this experimentation came a new product that you can try: Gordon. With Gordon, we’d like to tackle these pain points. By integrating Gordon into Docker Desktop and the Docker CLI (Figure 1), we can:

  • Access much more context that can be used by the LLMs to best understand the user’s questions and provide better answers or even perform actions on the user’s behalf.
  • Be where the users are. If you launch a container via Docker Desktop and it fails, you can quickly debug with Gordon. If you’re in the terminal hacking away, Docker AI will be there, too.
  • Avoid being a purely chat-based agent by providing Gordon-based features directly as part of Docker Desktop UI elements. If Gordon detects certain scenarios, like a container that failed to start, a button will appear in the UI to directly get suggestions, or run actions, etc. (Figure 2).
Figure 1: Gordon icon on Docker Desktop.
Figure 2: Ask Gordon (beta).

What Gordon can do

We want to start with Gordon by optimizing for Docker-related tasks — not general-purpose questions — but we are not excluding expanding the scope to more development-related tasks as work on the agent continues.

Work on Gordon is at an early stage and its capabilities are constantly evolving, but it’s already really good at some things (Figure 3). Here are things to definitely try out:

  • Ask general Docker-related questions. Gordon knows Docker well and has access to all of our documentation.
  • Get help debugging container build or runtime errors.
  • Remediate policy deviations from Docker Scout.
  • Get help optimizing Docker-related files and configurations.
  • Ask it how to run specific containers (e.g., “How can I run MongoDB?”).
Figure 3: Using Gordon to understand a Dockerfile.

How Gordon works

The Gordon backend lives on Docker servers, while the client is a CLI that lives on the user’s machine and is bundled with Docker Desktop. Docker Desktop uses the CLI to access the local machine’s files, asking the user for the directory each time it needs that context to answer a question. When using the CLI directly, it has access to the working directory it’s executed in. For example, if you are in a directory with a Dockerfile and you run “Docker AI, rate my Dockerfile”, it will find the one present in that directory.

Currently, Gordon does not have write access to any files, so it will not edit any of your files. We’re hard at work on future features that will allow the agent to do the work for you, instead of only suggesting solutions. 

Figure 4 shows a rough overview of how we are thinking about things behind the scenes.

Figure 4 outlines the flow: understand the user’s input, gather context, prepare final prompts, check results, and reply to the user.
Figure 4: Overview of Gordon.

The first step of this pipeline, “Understand the user’s input and figure out which action to perform”, is done using “tool calling” (also known as “function calling”) with the OpenAI API.

Although this is a popular approach, we noticed that the documentation online isn’t very good, and general best practices aren’t well defined yet. This led us to experiment a lot with the feature and try to figure out what works for us and what doesn’t.
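For context, a tool definition in the OpenAI API is a JSON schema describing a function the model may ask the client to call. A hypothetical example in the spirit of our experiments (this is not Gordon’s actual schema):

{
  "type": "function",
  "function": {
    "name": "rate_dockerfile",
    "description": "Analyze a Dockerfile against best practices and rate it. Example: 'rate the Dockerfile in the current directory'.",
    "parameters": {
      "type": "object",
      "properties": {
        "path": {
          "type": "string",
          "description": "Path to the Dockerfile to analyze"
        }
      },
      "required": ["path"]
    }
  }
}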

Things we noticed:

  • Tool descriptions are important, and we should prefer more in-depth descriptions with examples.
  • Testing around tool-detection code is also important. Adding new tools to a request could confuse the LLM and cause it to no longer trigger the expected tool.
  • The LLM model used influences how the whole tool-calling functionality should be implemented, as different models might prefer descriptions written in a certain way, behave better or worse in certain scenarios (e.g., when using lots of tools), and so on.

Try Gordon for yourself

Gordon is available as an opt-in Beta feature starting with Docker Desktop version 4.37. To participate in the closed beta, all you need to do is fill out the form on the site.

Initially, Gordon will be available for use both in Docker Desktop and the Docker CLI, but our idea is to surface parts of this tech in various other parts of our products as well.

For more on what we’re doing at Docker, subscribe to our newsletter.

Incident Update: Docker Desktop for Mac 

9 January 2025 at 07:01

Update: January 22, 2025

Apple will soon release an update to macOS Sequoia. Upgrading macOS before updating Docker Desktop may trigger a known issue. To avoid disruptions, update Docker Desktop first. Refer to the next steps outlined in our January 9, 2025 update.

Update: January 9, 2025

We’ve identified the issue affecting Docker Desktop for some macOS users, which caused disruptions for existing Docker Desktop installations. 

Next steps

To minimize the impact on your Docker experience, we recommend one of the following actions:

1. Upgrade to the latest version

We recommend upgrading to Docker Desktop version 4.37.2, which includes a permanent fix. Download the update here or via the Docker Desktop in-app update.

2. Use a patch update (for older versions)

For users unable to upgrade to the latest version, we’ve released patches for versions 4.32 through 4.36. Access the relevant patch updates here.

3. Apply a one-time corrective action

If you are still seeing the malware pop-up or encountering the issue, please also follow the additional steps detailed here.

4. IT administrators 

As an IT administrator, you can use this script to take corrective actions on behalf of your users, provided your developers have either upgraded to the latest version or applied a patch.

We know how critical Docker Desktop is to your workflow and are committed to ensuring a smooth resolution. All remediation steps are documented here. Note that Docker Desktop versions 4.28 and earlier are not impacted by this issue. If you encounter any issues or need assistance, please reach out to our Support Team.


Update: January 8, 2025

We want to inform you about a new issue affecting Docker Desktop for some macOS users: it prevents Docker Desktop from starting. Some users may also have received malware warnings. Those warnings are inaccurate. 

A temporary workaround that will restore functionality is available for any affected users. Detailed instructions for the workaround are available on GitHub.

Our team is prioritizing this issue and working diligently on a permanent fix. If you prefer to wait for the longer-term patch update, please refrain from (re)-starting Docker Desktop.

We know how important Docker Desktop is to your work, and we’re committed to resolving this issue quickly and effectively. For assistance or additional information, please reach out to our Support team or check the Docker Status page for the latest updates.

Unlocking Efficiency with Docker for AI and Cloud-Native Development

By: Yiwen Xu
8 January 2025 at 21:22

The need for secure, high-quality software becomes more critical every day as the impact of vulnerabilities increases and related costs continue to rise. For example, flawed software cost the U.S. economy $2.08 trillion in 2020 alone, according to the Consortium for Information and Software Quality (CISQ). And, a software defect that might cost $100 to fix if found early in the development process can grow exponentially to $10,000 if discovered later in production. 

Docker helps you deliver secure, efficient applications by providing consistent environments and fast, reliable container management, building on best practices that let you discover and resolve issues earlier in the software development life cycle (SDLC).

Shifting left to ensure fewer defects

In a previous blog post, we talked about using the right tools, including Docker’s suite of products to boost developer productivity. Besides having the right tools, you also need to implement the right processes to optimize your software development and improve team productivity. 

The software development process is typically broken into two distinct loops: the inner and outer loops. At Docker, we believe that investing in the inner loop is crucial. This means shifting security left and identifying problems as soon as you can. This approach improves efficiency and reduces costs by helping teams find and fix software issues earlier.

Using Docker tools to adopt best practices

Docker’s products help you adopt these best practices — we are focused on enhancing the software development lifecycle, especially around refining the inner loop. Products like Docker Desktop allow your dev team in the inner loop to run, test, code, and build everything fast and consistently. This consistency eliminates the “it works on my machine” issue, meaning applications behave the same in both development and production.  

Shifting left lets your dev team identify problems earlier in the software project lifecycle. Detecting issues sooner increases efficiency and helps ensure secure, compliant builds. By shifting security left with Docker Scout, your dev teams can spot vulnerabilities before they become costly problems down the road.
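
For instance, developers can check an image before it ever ships. Here is a quick sketch using the Docker Scout CLI, where myorg/myapp:latest is a hypothetical image name:

$ docker scout quickview myorg/myapp:latest        # summary of known CVEs by severity
$ docker scout cves myorg/myapp:latest             # detailed vulnerability listing
$ docker scout recommendations myorg/myapp:latest  # suggested base image updates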

Another example of shifting left involves testing — doing testing earlier in the process leads to more robust software and faster release cycles. This is where Testcontainers Cloud comes in handy: it enables developers to run reliable integration tests, with real dependencies defined in code.

Accelerate development within the hybrid inner loop

We see more and more companies adopting the so-called hybrid inner loop, which combines the best of two worlds — local and cloud. The result is greater flexibility for your dev teams and better collaboration. For example, Docker Build Cloud uses the power of the cloud to speed up build times without sacrificing the local development experience that developers love.
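
In practice, pointing an existing build at a cloud builder takes only a couple of commands, mirroring the CI setup shown later in this feed. This is a minimal sketch; <ORG> and <BUILDER_NAME> are placeholders for your Docker organization and builder name, and myorg/myapp:latest is a hypothetical image:

$ docker buildx create --use --driver cloud <ORG>/<BUILDER_NAME>   # register the cloud builder and make it the default
$ docker buildx build --tag myorg/myapp:latest --push .            # build runs remotely; --push sends the result to your registry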

By using these Docker products across the software development life cycle, teams get quick feedback loops and faster issue resolution, ensuring a smooth development flow from inception to deployment. 

Simplifying AI application development

When you’re using the right tools and processes to accelerate your application delivery and maximize efficiency throughout your SDLC, processes that were once cumbersome become your new baseline, freeing up time for true innovation. 

Docker also helps accelerate innovation by simplifying AI/ML development. We are continually investing in AI to help your developers deliver AI-backed applications that differentiate your business and enhance competitiveness.

Docker AI tools

Docker’s GenAI Stack accelerates the incorporation of large language models (LLMs) and AI/ML into your code, enabling the delivery of AI-backed applications. All containers work harmoniously and are managed directly from Docker Desktop, allowing your team to monitor and adjust components without leaving their development environment. Deploying the GenAI Stack is quick and easy, and leveraging Docker’s containerization technology helps speed setup and simplify scaling as applications grow.
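
As a rough sketch, getting the stack running locally typically amounts to cloning the docker/genai-stack repository from GitHub and starting it with Compose; your environment may also need API keys or model settings configured (for example, in a .env file) before all services start cleanly:

$ git clone https://github.com/docker/genai-stack
$ cd genai-stack
$ docker compose up -d    # start the LLM, database, and application containers
$ docker compose ps       # verify that all services are running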

Earlier this year, we announced the preview of Docker Extension for GitHub Copilot. By standardizing best practices and enabling integrations with tools like GitHub Copilot, Docker empowers developers to focus on innovation, closing the gap from the first line of code to production.

And, more recently, we launched the Docker AI Catalog in Docker Hub. This new feature simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. Your dev team will benefit from shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Wrapping up

Docker products help you establish sound processes and practices related to shifting left and discovering issues earlier to avoid headaches down the road. This approach ultimately unlocks developer productivity, giving your dev team more time to code and innovate. Docker also allows you to quickly use AI to close knowledge gaps and offers trusted tools to build AI/ML applications and accelerate time to market. 

To see how Docker continues to empower developers with the latest innovations and tools, check out our Docker 2024 Highlights.

Learn about Docker’s updated subscriptions and find the ideal plan for your team’s needs.

Learn more

How to Set Up a Kubernetes Cluster on Docker Desktop

By: Voon Yee
7 January 2025 at 20:49

Kubernetes is an open source platform for automating the deployment, scaling, and management of containerized applications across clusters of machines. It’s become the go-to solution for orchestrating containers in production environments. But if you’re developing or testing locally, setting up a full Kubernetes cluster can be complex. That’s where Docker Desktop comes in — it lets you run Kubernetes directly on your local machine, making it easy to test microservices, CI/CD pipelines, and containerized apps without needing a remote cluster.

Getting Kubernetes up and running can feel like a daunting task, especially for developers working in local environments. But with Docker Desktop, spinning up a fully functional Kubernetes cluster is simpler than ever. Whether you’re new to Kubernetes or just want an easy way to test containerized applications locally, Docker Desktop provides a streamlined solution. In this guide, we’ll walk through the steps to start a Kubernetes cluster on Docker Desktop and offer troubleshooting tips to ensure a smooth experience. 

Note: Docker Desktop’s Kubernetes cluster is designed specifically for local development and testing; it is not for production use. 


Benefits of running Kubernetes in Docker Desktop 

The benefits of this setup include: 

  • Easy local Kubernetes cluster: A fully functional Kubernetes cluster runs on your local machine with minimal setup, handling network access between the host and Kubernetes as well as storage management. 
  • Easier learning path and developer convenience: For developers familiar with Docker but new to Kubernetes, having Kubernetes built into Docker Desktop offers a low-friction learning path. 
  • Testing Kubernetes-based applications locally: Docker Desktop gives developers a local environment to test Kubernetes-based microservices applications that require Kubernetes features like services, pods, ConfigMaps, and secrets without needing access to a remote cluster. It also helps developers to test CI/CD pipelines locally. 

How to start a Kubernetes cluster on Docker Desktop in three steps

  1. Download the latest Docker Desktop release.
  2. Install Docker Desktop on the operating system of your choice. Currently, the supported operating systems are macOS, Linux, and Windows.
  3. In the Settings menu, select Kubernetes > Enable Kubernetes and then Apply & restart to start a one-node Kubernetes cluster (Figure 1). The time it takes to set up the cluster typically depends on how quickly your internet connection can pull the needed images.
Screenshot of Settings menu with Kubernetes chosen on the left and the Enable Kubernetes option selected.
Figure 1: Starting Kubernetes.

Once the Kubernetes cluster is started successfully, you can see the status from the Docker Desktop dashboard or the command line.

From the dashboard (Figure 2):

Screenshot of Docker Desktop dashboard showing green dot next to Kubernetes is running.
Figure 2: Status from the dashboard.

The command-line status:

$ kubectl get node
NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   5d    v1.30.2
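
To confirm that the cluster can actually schedule workloads, you can run a quick smoke test. This is a sketch; the deployment name hello and the nginx image are arbitrary choices:

$ kubectl --context docker-desktop create deployment hello --image=nginx
$ kubectl --context docker-desktop get pods                  # the hello pod should reach Running
$ kubectl --context docker-desktop delete deployment hello   # clean up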

Getting Kubernetes support

Docker bundles Kubernetes but does not provide official Kubernetes support. If you are experiencing issues with Kubernetes, however, you can get support in several ways, including from the Docker community, Docker guides, and GitHub documentation. 

What to do if you experience an issue 

Generate a diagnostics file

Before troubleshooting, generate a diagnostics file using your terminal.

Refer to the documentation for diagnosing from the terminal. For example, if you are using a Mac, run the following command:

/Applications/Docker.app/Contents/MacOS/com.docker.diagnose gather -upload

The command will show you where the diagnostics file is saved:

Gathering diagnostics for ID  into /var/folders/50/<Random Character>/<Random Character>/<Machine unique ID>/<YYYYMMDDTTTT>.zip.

In this case, the file is saved at /var/folders/50/<Random Characters>/<Random Characters>/<Machine unique ID>/<YYYYMMDDTTTT>.zip. Unzip the file (<YYYYMMDDTTTT>.zip) to find the log files for Docker Desktop.

Check for logs

Checking the logs instead of guessing at the issue is good practice. Before you start troubleshooting, it is essential to understand which Kubernetes components are available and what their functions are. You can narrow down the process by looking at the logs for a specific component. Look for the keywords error or fatal in the logs. 

Depending on your platform, one method is to use the grep command (from the macOS terminal, a Linux distro under WSL2, or the Linux terminal) to search the unzipped files for the keyword:

$ grep -Hrni "<keyword>" <The path of the unzipped file>

## For example, one of the errors found related to Kubernetes in the "com.docker.backend.exe" logs:

$ grep -Hrni "error" *
[com.docker.backend.exe.log:[2022-12-05T05:24:39.377530700Z][com.docker.backend.exe][W] starting kubernetes: 1 error occurred: 
com.docker.backend.exe.log:	* starting kubernetes: pulling kubernetes images: pulling registry.k8s.io/coredns:v1.9.3: Error response from daemon: received unexpected HTTP status: 500 Internal Server Error

Troubleshooting example

Let’s say you notice there is an issue starting up the cluster. This issue could be related to the Kubelet process, which works as a node-level agent to help with container management and orchestration within a Kubernetes cluster. So, you should check the Kubelet logs. 

But, where is the Kubelet log located? It’s at log/vm/kubelet.log in the diagnostics file.

An example of a possible related issue can be found in kubelet.log: the images needed to set up Kubernetes cannot be pulled because of network or internet restrictions. In that case, you might find errors about failing to pull the necessary Kubernetes images to set up the Kubernetes cluster.

For example:

starting kubernetes: pulling kubernetes images: pulling registry.k8s.io/coredns:v1.9.3: Error response from daemon: received unexpected HTTP status: 500 Internal Server Error

Normally, 10 images are needed to set up the cluster. The following output is from a Mac running Docker Desktop version 4.33:

$ docker image ls
REPOSITORY                                TAG                                                                           IMAGE ID       CREATED         SIZE
docker/desktop-kubernetes                 kubernetes-v1.30.2-cni-v1.4.0-critools-v1.29.0-cri-dockerd-v0.3.11-1-debian   5ef3082e902d   4 weeks ago     419MB
registry.k8s.io/kube-apiserver            v1.30.2                                                                       84c601f3f72c   7 weeks ago     112MB
registry.k8s.io/kube-scheduler            v1.30.2                                                                       c7dd04b1bafe   7 weeks ago     60.5MB
registry.k8s.io/kube-controller-manager   v1.30.2                                                                       e1dcc3400d3e   7 weeks ago     107MB
registry.k8s.io/kube-proxy                v1.30.2                                                                       66dbb96a9149   7 weeks ago     87.9MB
registry.k8s.io/etcd                      3.5.12-0                                                                      014faa467e29   6 months ago    139MB
registry.k8s.io/coredns/coredns           v1.11.1                                                                       2437cf762177   11 months ago   57.4MB
docker/desktop-vpnkit-controller          dc331cb22850be0cdd97c84a9cfecaf44a1afb6e                                      3750dfec169f   14 months ago   35MB
registry.k8s.io/pause                     3.9                                                                           829e9de338bd   22 months ago   514kB
docker/desktop-storage-provisioner        v2.0                                                                          c027a58fa0bb   3 years ago     39.8MB

You can check whether you successfully pulled the 10 images by running docker image ls. If images are missing, a workaround is to save the missing image with docker image save on a machine that successfully starts the Kubernetes cluster (provided both machines run the same Docker Desktop version). Then, transfer the image to your machine, load it with docker image load, and retag it if necessary. 

For example, if the registry.k8s.io/coredns:v<VERSION> image is not available, you can follow these steps (a consolidated sketch follows the list):

  1. Use docker image save on a machine that successfully starts the Kubernetes cluster to save the image as a tar file: docker save registry.k8s.io/coredns:v<VERSION> > <Name of the file>.tar.
  2. Manually transfer the <Name of the file>.tar to your machine.
  3. Use docker image load to load the image on your machine: docker image load < <Name of the file>.tar.
  4. Verify the image and its tag with docker image ls; because docker save preserves the repository and tag, a separate retag is usually unnecessary. If the tag is missing, retag it with docker image tag <IMAGE ID> registry.k8s.io/coredns:v<VERSION>.
  5. Re-enable Kubernetes from Docker Desktop’s settings.
  6. If the cluster still fails to start, check the other logs in the diagnostics file.
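
Putting those steps together, a minimal sketch using the coredns image from the listing above looks like this (run the save on the working machine and the rest on the affected one):

# On a machine where the cluster starts successfully (same Docker Desktop version):
$ docker save registry.k8s.io/coredns/coredns:v1.11.1 > coredns.tar

# On the affected machine, after transferring coredns.tar:
$ docker image load < coredns.tar
$ docker image ls | grep coredns    # verify the image and its tag were restored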

What to look for in the diagnostics log

In the diagnostics file, look for the folder named kube/. (Note: in the entries below, <kube> stands for kubectl on macOS and Linux and kubectl.exe on Windows.)

  • kube/get-namespaces.txt: Lists all the namespaces; equivalent to <kube> --context docker-desktop get namespaces.
  • kube/describe-nodes.txt: Describes the docker-desktop node; equivalent to <kube> --context docker-desktop describe nodes.
  • kube/describe-pods.txt: Description of all pods running in the Kubernetes cluster.
  • kube/describe-services.txt: Description of the running services; equivalent to <kube> --context docker-desktop describe services --all-namespaces.
  • You can also find other useful Kubernetes logs in this folder.

Search for known issues

For any error message found in the steps above, you can search for known Kubernetes issues on GitHub to see if a workaround or any future permanent fix is planned.

Reset or reboot 

If the previous steps weren’t helpful, try a reboot. And, if a reboot is not helpful, the last alternative is to reset your Kubernetes cluster, which often helps resolve issues: 

  • Reboot: To reboot, restart your machine. Rebooting a machine in a Kubernetes cluster can help resolve issues by clearing transient states and restoring the system to a clean state.
  • Reset: For a reset, navigate to Settings > Kubernetes > Reset the Kubernetes Cluster. Resetting a Kubernetes cluster can help resolve issues by essentially reverting the cluster to a clean state, and clearing out misconfigurations, corrupted data, or stuck resources that may be causing problems.

Bringing Kubernetes to your local development environment

This guide offers a straightforward way to start a Kubernetes cluster on Docker Desktop, making it easier for developers to test Kubernetes-based applications locally. It covers key benefits like simple setup, a more accessible learning path for beginners, and the ability to run tests without relying on a remote cluster. We also provide some troubleshooting tips and resources for resolving common issues. 

Whether you’re just getting started or looking to improve your local Kubernetes workflow, give it a try and see what you can achieve with Docker Desktop’s Kubernetes integration.

Learn more

Mastering Peak Software Development Efficiency with Docker

3 January 2025 at 21:00

In modern software development, businesses are searching for smarter ways to streamline workflows and deliver value faster. For developers, this means tackling challenges like collaboration and security head-on, while driving efficiency that contributes directly to business performance. But how do you address potential roadblocks before they become costly issues in production? The answer lies in optimizing the development inner loop — a core focus for the future of app development.

By identifying and resolving inefficiencies early in the development lifecycle, software development teams can overcome common engineering challenges such as slow dev cycles, spiraling infrastructure costs, and scaling challenges. With Docker’s integrated suite of development tools, developers can achieve new levels of engineering efficiency, creating high-quality software while delivering real business impact.

Let’s explore how Docker is transforming the development process, reducing operational overhead, and empowering teams to innovate faster.


Speed up software development lifecycles: Faster gains with less effort

A fast software development lifecycle is a crucial aspect for delivering value to users, maintaining a competitive edge, and staying ahead of industry trends. To enable this, software developers need workflows that minimize friction and allow them to iterate quickly without sacrificing quality. That’s where Docker makes a difference. By streamlining workflows, eliminating bottlenecks, and automating repetitive tasks, Docker empowers developers to focus on high-impact work that drives results.

Consistency across development environments is critical for improving speed. That’s why Docker helps developers create consistent environments across local, test, and production systems. In fact, a recent study reported developers experiencing a 6% increase in productivity when leveraging Docker Business. This consistency eliminates guesswork, ensuring developers can concentrate on writing code and improving features rather than troubleshooting issues. With Docker, applications behave predictably across every stage of the development lifecycle.

Docker also accelerates development by significantly reducing time spent on iteration and setup. More specifically, organizations leveraging Docker Business achieved a three-month faster time-to-market for revenue-generating applications. Engineering teams can move swiftly through development stages, delivering new features and bug fixes faster. By improving efficiency and adapting to evolving needs, Docker enables development teams to stay agile and respond effectively to business priorities.

Improve scaling agility: Flexibility for every scenario

Scalability is another essential for businesses to meet fluctuating demands and seize opportunities. Whether handling a surge in user traffic or optimizing resources during quieter periods, the ability to scale applications and infrastructure efficiently is a critical advantage. Docker makes this possible by enabling teams to adapt with speed and flexibility.

Docker’s cloud-native approach allows software engineering teams to scale up or down with ease to meet changing requirements. This flexibility supports experimentation with cutting-edge technologies like AI, machine learning, and microservices without disrupting existing workflows. With this added agility, developers can explore new possibilities while maintaining focus on delivering value.

Whether responding to market changes or exploring the potential of emerging tools, Docker equips companies to stay agile and keep evolving, ensuring their development processes are always ready to meet the moment.

Optimize resource efficiency: Get the most out of what you’ve got

Maximizing resource efficiency is crucial for reducing costs and maintaining agility. By making the most of existing infrastructure, businesses can avoid unnecessary expenses and minimize cloud scaling costs, meaning more resources for innovation and growth. Docker empowers teams to achieve this level of efficiency through its lightweight, containerized approach.

Docker containers are designed to be resource-efficient, enabling multiple applications to run in isolated environments on the same system. Unlike traditional virtual machines, containers minimize overhead while maintaining performance, consolidating workloads, and lowering the operational costs of maintaining separate environments. For example, a leading beauty company reduced infrastructure costs by 25% using Docker’s enhanced CPU and memory efficiency. This streamlined approach ensures businesses can scale intelligently while keeping infrastructure lean and effective.

By containerizing applications, businesses can optimize their infrastructure, avoiding costly upgrades while getting more value from their current systems. It’s a smarter, more efficient way to ensure your resources are working at their peak, leaving no capacity underutilized.

Establish cost-effective scaling: Growth without growing pains

Similarly, scaling efficiently is essential for businesses to keep up with growing demands, introduce new features, or adopt emerging technologies. However, traditional scaling methods often come with high upfront costs and complex infrastructure changes. Docker offers a smarter alternative, enabling development teams to scale environments quickly and cost-effectively.

With a containerized model, infrastructure can be dynamically adjusted to match changing needs. Containers are lightweight and portable, making it easy to scale up for spikes in demand or add new capabilities without overhauling existing systems. This flexibility reduces financial strain, allowing businesses to grow sustainably while maximizing the use of cloud resources.

Docker ensures that scaling is responsive and budget-friendly, empowering teams to focus on innovation and delivery rather than infrastructure costs. It’s a practical solution to achieve growth without unnecessary complexity or expense.

Software engineering efficiency at your fingertips

The developer community consistently ranks Docker highly, including choosing it as the most-used and most-admired developer tool in Stack Overflow’s Developer Survey. With Docker’s suite of products, teams can reach a new level of efficient software development by streamlining the dev lifecycle, optimizing resources, and providing agile, cost-effective scaling solutions. By simplifying complex processes in the development inner loop, Docker enables businesses to deliver high-quality software faster while keeping operational costs in check. This allows developers to focus on what they do best: building innovative, impactful applications.

By removing complexity, accelerating development cycles, and maximizing resource usage, Docker helps businesses stay competitive and efficient. And ultimately, their teams can achieve more in less time — meeting market demands with efficiency and quality.

Ready to supercharge your development team’s performance? Download our white paper to see how Docker can help streamline your workflow, improve productivity, and deliver software that stands out in the market.

Learn more

Why Secure Development Environments Are Essential for Modern Software Teams

2 January 2025 at 21:00

“You don’t want to think about security — until you have to.”

That’s what I’d tell you if I were being honest about the state of development at most organizations I have spoken to. Every business out there is chasing one thing: speed. Move faster. Innovate faster. Ship faster. To them, speed is survival. There’s something these companies are not seeing — a shadow. An unseen risk hiding behind every shortcut, every unchecked tool, and every corner cut in the name of “progress.”

Businesses are caught in a relentless sprint, chasing speed and progress at all costs. However, as Cal Newport reminds us in Slow Productivity, the race to do more — faster — often leads to chaos, inefficiency, and burnout. Newport’s philosophy calls for deliberate, focused work on fewer tasks with greater impact. This philosophy isn’t just about how individuals work — it’s about how businesses innovate. Development teams rushing to ship software often cut corners, creating vulnerabilities that ripple through the entire supply chain. 


The strategic risk: An unsecured development pipeline

Development environments are the foundation of your business. You may think they’re inherently secure because they’re internal. Foundations crumble when you don’t take care of them, and that crack doesn’t just swallow your software — it swallows established customer trust and reputation. That’s how it starts: a rogue tool here, an unpatched dependency there, a developer bypassing IT to do things “their way.” They’re not trying to ruin your business. They’re trying to get their jobs done. But sometimes you can’t stop a fire after it’s started. Shadow IT isn’t just inconvenient — it’s dangerous. It’s invisible, unmonitored, and unregulated. It’s the guy leaving the back door open in a neighborhood full of burglars.

You need control, isolation, and automation — not because they’re nice to have, but because you’re standing on a fault line without them. Docker gives you that control. Fine-grained, role-based access ensures that the only people touching your most critical resources are the ones you trust. Isolation through containerization keeps every piece of your pipeline sealed tight so vulnerabilities don’t spread. Automation takes care of the updates, the patch management, and the vulnerabilities before they become a problem. In other words, you don’t have to hope your foundation is solid — you’ll know it is.

Shadow IT: A growing concern

While securing official development environments is critical, shadow IT remains an insidious and hidden threat. Shadow IT refers to tools, systems, or environments implemented without explicit IT approval or oversight. In the pursuit of speed, developers may bypass formal processes to adopt tools they find convenient. However, this creates unseen vulnerabilities with far-reaching consequences.

Under pressure to move fast, developers often take shortcuts, grabbing tools and spinning up environments outside the watchful eyes of IT. The intent may not be malicious; it’s just human nature. Here’s the catch: What you don’t see, you can’t protect. Shadow IT is like a crack in the dam: silent, invisible, and spreading. It lets unvetted tools and insecure code slip into your supply chain, infecting everything from development to production. Before you know it, that “quick fix” has turned into a legal nightmare, a compliance disaster, and a stain on your reputation. In industries like finance or healthcare, that stain doesn’t wash out quickly. 

A solution rooted in integration

The solution lies in a unified, secure approach to development environments that removes the need for shadow IT while fortifying the software supply chain. Docker addresses these vulnerabilities by embedding security directly into the development lifecycle. Our solution is built on three foundational principles: control, isolation, and automation.

  1. Control through role-based access management: Docker Hub establishes clear boundaries within development environments by enabling fine-grained, role-based access, ensuring that only authorized personnel can interact with sensitive resources and minimizing the risk of unintended or malicious actions. Docker also streamlines patch management through verified, up-to-date images: Docker Official Images and Docker Verified Publisher content are scanned with our in-house image analysis tool, Docker Scout, which helps find vulnerabilities before they can be exploited.
  2. Isolation through containerization: Docker’s value proposition centers on its containerization technology. By creating isolated development spaces, Docker prevents cross-environment contamination and ensures that applications and their dependencies remain secure throughout the development lifecycle.
  3. Automation for seamless security: Recognizing the need for speed in modern development cycles, Docker integrates automation through Scout, which provides recommendations for software updates and patch management for CVEs. This ensures that environments remain secure against emerging threats without interrupting the flow of innovation.

Delivering tangible business outcomes

Businesses are always going to face this tension between speed and security, but the truth is you don’t have to choose. Docker gives you both. It’s not just a platform; it’s peace of mind. Because when your foundation is solid, you stop worrying about what could go wrong. You focus on what comes next.

Consider the example of a development team working on a high-stakes application feature. Without secure environments, a single oversight — such as an unregulated access point — can result in vulnerabilities that disrupt production and erode customer trust. By leveraging Docker’s integrated security solutions, the team mitigates these risks, enabling them to focus on value creation rather than crisis management.

Aligning innovation with security

As a previous post covers, securing the development pipeline is not simply a matter of deploying technical solutions but of establishing trust across the entire software supply chain. With Docker Content Trust and image signing, organizations can ensure the integrity of software components at every stage, reducing the risk of third-party code introducing unseen vulnerabilities. By eliminating the chaos of shadow IT and creating a transparent, secure development process, businesses can mitigate risk without slowing the pace of innovation.

The tension between speed and security has long been a barrier to progress, but businesses can confidently pursue both with Docker. A secure development environment doesn’t just protect against breaches — it strengthens operational resilience, ensures regulatory compliance, and safeguards brand reputation. Unseen risks lurk within an organization’s fragmented tools and processes; Docker empowers organizations to innovate on a solid foundation instead. 

Security isn’t a luxury. It’s the cost of doing business. If you care about growth, if you care about trust, if you care about what your brand stands for, then securing your development environments isn’t optional — it’s survival. Docker Business doesn’t just protect your pipeline; it turns it into a strategic advantage that lets you innovate boldly while keeping your foundation unshakable. Integrity isn’t something you hope for — it’s something you build.

Start today

Securing your software supply chain is a critical step in building resilience and driving sustained innovation. Docker offers the tools to create fortified development environments where your teams can operate at their best.

The question is not whether to secure your development pipeline — it’s how soon you can start. Explore Docker Hub and Scout today to transform your approach to innovation and security. In doing so, you position your organization to navigate the complexities of the modern development landscape with confidence and agility.

Learn more

Recipe for Efficient Development: Simplify Collaboration and Security with Docker

20 December 2024 at 20:23

Collaboration and security are essential for delivering high-quality applications in modern software development, especially in cloud-native environments. Developers navigate intricate workflows, connect diverse systems, and safeguard applications against emerging threats — all while maintaining velocity and efficiency.

Think of development as preparing a multi-course meal in a high-pressure, professional kitchen, where precision, timing, and communication are critical. Each developer is a chef working on different parts of the dish, passing ingredients (code) along the way. When one part of the system encounters delays, it can ripple across the process, impacting the final result. Similarly, poor collaboration or security gaps can derail a project, causing delays and inefficiencies. 

Docker serves as the kitchen manager, ensuring everything flows smoothly, ingredients are passed securely, and security is integrated from start to finish.


Seamless collaboration with Docker Hub and Testcontainers Cloud

Success in a professional kitchen depends on clear communication and coordination. In development, it’s no different. Docker’s collaboration tools, like Docker Hub and Testcontainers Cloud, simplify how teams work together, share resources, and test efficiently.

  • Docker Hub can be thought of as a kitchen’s “prepped ingredients station.” It’s where some of the most essential ingredients are always ready to go. With a vast selection of curated, trusted images, developers can quickly access high-quality, pre-configured containers, ensuring consistency and reducing the chance for mistakes.
  • Testcontainers Cloud is like the kitchen’s test station, providing on-demand, production-like environments for testing. Developers can spin up these environments quickly, reducing setup time and ensuring code performs in a real-world setting. 

Effective coordination is critical whether you’re in a kitchen or on a development team, especially when projects involve distributed or hybrid teams. Clear communication ensures everyone is aligned and productive. The Docker suite of products provides the tools that make it possible for companies to more easily break down silos, share resources seamlessly, and ensure alignment — no matter how large your team is or where they work!

By streamlining collaboration, Docker reduces complexity and allows teams to move forward with confidence. With Docker Hub, Testcontainers Cloud, and integrated security features, teams can share resources, track progress, and catch issues early, enabling them to deliver high-quality results on time.

These tools improve efficiency, reduce errors, and help teams move faster through the development inner loop by making collaboration seamless and resource sharing simple.

Integrated security from code to production

Embedding security into every development step is essential to maintaining speed and delivering high-quality software. Docker builds security into the development process itself, so teams can identify and fix issues earlier than ever.

  • Docker Scout monitors container images in real time, identifying vulnerabilities early to ensure your software is production-ready. By resolving risks early, developers can maintain high-quality standards and accelerate time to market.

Docker also integrates additional security features that work behind the scenes.

By building security into the workflow, Docker helps teams identify risks earlier, improve code quality, and maintain momentum without compromising safety.

Efficiency in action with Docker

Speed, collaboration, and security are paramount in today’s development landscape. Docker simplifies and secures the development process, helping teams collaborate efficiently and deliver secure, high-quality software faster.

Just as a well-managed kitchen runs smoothly, Docker helps development teams stay coordinated, ensuring security and productivity work together in perfect harmony. Docker removes complexity, accelerates delivery, and embeds security, enabling teams to create efficient, secure applications on time.

Ready to boost efficiency and collaboration in your development process? Explore the Docker suite of products to see how they can streamline your workflow and improve your team’s productivity today. 

To learn more about fueling development efficiency, download our white paper, Reducing Every-Day Complexities for More Efficient Software Development with Docker.

Accelerate Your Docker Builds Using AWS CodeBuild and Docker Build Cloud

18 December 2024 at 20:10

Containerized application development has revolutionized modern software delivery, but slow image builds in CI/CD pipelines can bring developer productivity to a halt. Even with AWS CodeBuild automating application testing and building, teams face challenges like resource constraints, inefficient caching, and complex multi-architecture builds that lead to delays, lower release frequency, and prolonged recovery times.

Enter Docker Build Cloud, a high-performance cloud service designed to streamline image builds, integrate seamlessly with AWS CodeBuild, and reduce build times dramatically. With Docker Build Cloud, you gain powerful cloud-based builders, shared caching, and native multi-architecture support — all while keeping your CI/CD pipelines efficient and your developers focused on delivering value faster.

In this post, we’ll explore how AWS CodeBuild combined with Docker Build Cloud tackles common bottlenecks, boosts build performance, and simplifies workflows, enabling teams to ship more quickly and reliably.


By using AWS CodeBuild, you can automate the build and testing of container applications, enabling the construction of efficient CI/CD workflows. AWS CodeBuild is also integrated with AWS Identity and Access Management (IAM), allowing detailed configuration of access permissions for build processes and control over AWS resources.

Container images built with AWS CodeBuild can be stored in Amazon Elastic Container Registry (Amazon ECR) and deployed to various AWS services, such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, or AWS Lambda (Figure 1). Additionally, these services can leverage AWS Graviton, which adopts Arm-based architectures, to improve price performance for compute workloads.

Illustration of CI/CD pipeline outlining steps for check in code, source code commit, build code, and deploy code.
Figure 1: CI/CD pipeline for AWS ECS using AWS CodeBuild (ECS Workshop).

Challenges of container image builds with AWS CodeBuild

Regardless of the tool used, building container images in a CI pipeline often takes a significant amount of time. This can lead to the following issues:

  • Reduced development productivity
  • Lower release frequency
  • Longer recovery time in case of failures

The main reasons why build times can be extended include:

1. Machines for building

Building container images requires substantial resources (CPU, RAM). If the machine specifications used in the CI pipeline are inadequate, build times can increase.

For simple container image builds, the impact may be minimal, but in cases of multi-stage builds or builds with many dependencies, the effect can be significant.

AWS CodeBuild allows changing instance types to improve these situations. However, such changes can apply to parts of the pipeline beyond container image builds, and they also increase costs.

Developers need to balance cost and build speed to optimize the pipeline.

2. Container image cache

In local development environments, Docker’s build cache can shorten rebuild times significantly by reusing previously built layers, avoiding redundant processing for unchanged parts of the Dockerfile. However, in cloud-based CI services, clean environments are used by default, so cache cannot be utilized, resulting in longer build times.

Although there are ways to use storage or container registries to leverage caching, these often are not employed because they introduce complexity in configuration and overhead from uploading and downloading cache data.

3. Multi-architecture builds (AMD64, Arm64)

To use Arm-based architectures like AWS Graviton in Amazon EKS or Amazon ECS, Arm64-compatible container image builds are required.

As local environments change (for example, with Apple Silicon), cases requiring multi-architecture support for AMD64 and Arm64 have increased. However, building images for a different architecture (for example, building x86 images on Arm, or vice versa) often requires emulation, which can further increase build times (Figure 2).

Although AWS CodeBuild provides both AMD64 and Arm64 instances, running them as separate pipelines is necessary, leading to more complex configurations and operations.

Illustration of steps for creating multi-architecture Docker images including Build and push, Test, Build/push multi-arch manifest, Deploy.
Figure 2: Creating multi-architecture Docker images using AWS CodeBuild.

Accelerating container image builds with Docker Build Cloud

The Docker Build Cloud service executes the Docker image build process in the cloud, significantly reducing build time and improving developer productivity (Figure 3).

Illustration of how Docker Build Cloud works, showing CI Runner/CI job, Local Machine, and Cloud Builder elements.
Figure 3: How Docker Build Cloud works.

Particularly in CI pipelines, Docker Build Cloud enables faster container image builds without the need for significant changes or migrations to existing pipelines.

Docker Build Cloud includes the following features:

  • High-performance cloud builders: Cloud builders equipped with 16 vCPUs and 32GB RAM are available. This allows for faster builds compared to local environments or resource-constrained CI services.
  • Shared cache utilization: Cloud builders come with 200 GiB of shared cache, significantly reducing build times for subsequent builds. This cache is available without additional configuration, and Docker Build Cloud handles the cache maintenance for you.
  • Multi-architecture support (AMD64, Arm64): Docker Build Cloud supports native builds for multi-architecture with a single command. By specifying --platform linux/amd64,linux/arm64 in the docker buildx build command or using Bake, images for both Arm64 and AMD64 can be built simultaneously. This approach eliminates the need to split the pipeline for different architectures.

Architecture of AWS CodeBuild + Docker Build Cloud

Figure 4 shows an example of how to use Docker Build Cloud to accelerate container image builds in AWS CodeBuild:

Illustration of of AWS CodeBuild pipeline showing flow from Source Code to AWS CodeBuild, to Docker Build Cloud to Amazon ECR.
Figure 4: AWS CodeBuild + Docker Build Cloud architecture.
  1. The AWS CodeBuild pipeline is triggered from a commit to the source code repository (AWS CodeCommit, GitHub, GitLab).
  2. Preparations for running Docker Build Cloud are made in AWS CodeBuild (Buildx installation, specifying Docker Build Cloud builders).
  3. Container images are built on Docker Build Cloud’s AMD64 and Arm64 cloud builders.
  4. The built AMD64 and Arm64 container images are pushed to Amazon ECR.

Setting up Docker Build Cloud

First, set up Docker Build Cloud. (Note that new Docker subscriptions already include a free tier for Docker Build Cloud.)

Then, log in with your Docker account and visit the Docker Build Cloud Dashboard to create new cloud builders.

Once the builder is successfully created, a guide is displayed for using it in local environments (Docker Desktop, CLI) or CI/CD environments (Figure 5).

Screenshot from Docker Build Cloud showing setup instructions with local installation selected.
Figure 5: Setup instructions of Docker Build Cloud.

Additionally, to use Docker Build Cloud from AWS CodeBuild, a Docker personal access token (PAT) is required. Store this token in AWS Secrets Manager for secure access.
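
For example, the Docker username and token can be stored together as a JSON secret whose keys match the buildspec shown below. This is a sketch; the secret name docker-build-cloud and the placeholder values are illustrative:

$ aws secretsmanager create-secret \
    --name docker-build-cloud \
    --secret-string '{"DOCKER_USER":"<your Docker ID>","DOCKER_PAT":"<your access token>"}'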

Setting up the AWS CodeBuild pipeline

Next, set up the AWS CodeBuild pipeline. You should prepare an Amazon ECR repository to store the container images beforehand.

The following settings are used to create the AWS CodeBuild pipeline:

  • AMD64 instance with 3GB memory and 2 vCPUs.
  • Service role with permissions to push to Amazon ECR and access the Docker personal access token from AWS Secrets Manager.

The buildspec.yml file is configured as follows:

version: 0.2

env:
  variables:
    ARCH: amd64
    ECR_REGISTRY: [ECR Registry]
    ECR_REPOSITORY: [ECR Repository]
    DOCKER_ORG: [Docker Organization]
  secrets-manager:
    DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
    DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT

phases:
  install:
    commands:
      # Installing Buildx
      - BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
      - mkdir -vp ~/.docker/cli-plugins/
      - curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
      - chmod a+x ~/.docker/cli-plugins/docker-buildx

  pre_build:
    commands:
      # Logging in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Logging in to Docker (Build Cloud)
      - echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
      # Specifying the cloud builder
      - docker buildx create --use --driver cloud $DOCKER_ORG/demo

  build:
    commands:
      # Image tag
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
      # Build container image & push to Amazon ECR
      - docker buildx build --platform linux/amd64,linux/arm64 --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .

In the install phase, Buildx, which is necessary for using Docker Build Cloud, is installed.

Although Buildx may already be installed in AWS CodeBuild, it might be an unsupported version for Docker Build Cloud. Therefore, it is recommended to install the latest version.

In the pre_build phase, the following steps are performed:

  • Log in to Amazon ECR.
  • Log in to Docker (Build Cloud).
  • Specify the cloud builder.

In the build phase, the image tag is specified, and the container image is built and pushed to Amazon ECR.

Instead of separating the build and push commands, using --push to directly push the image to Amazon ECR helps avoid unnecessary file transfers, contributing to faster builds.

Results comparison

To make a comparison, an AWS CodeBuild pipeline without Docker Build Cloud is created. The same instance type (AMD64, 3GB memory, 2vCPU) is used, and the build is limited to AMD64 container images.

Additionally, Docker login is used to avoid the pull rate limit imposed by Docker Hub.

version: 0.2

env:
  variables:
    ECR_REGISTRY: [ECR Registry]
    ECR_REPOSITORY: [ECR Repository]
  secrets-manager:
    DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
    DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT

phases:
  pre_build:
    commands:
      # Logging in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Logging in to Docker
      - echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin

  build:
    commands:
      # Image tag
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
      # Build container image & push to Amazon ECR
      - docker build --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .

Figure 6 shows the result of the execution:

Screenshot of results using AWS CodeBuild pipeline without Docker Build Cloud, showing execution time of 5 minutes and 59 seconds.
Figure 6: The result of the execution without Docker Build Cloud.

Figure 7 shows the execution result of the AWS CodeBuild pipeline using Docker Build Cloud:

Screenshot of results using AWS CodeBuild pipeline with Docker Build Cloud, showing execution time of 1 minute and 4 seconds.
Figure 7: The result of the execution with Docker Build Cloud.

The results may vary depending on the container images being built and the state of the cache, but it was possible to build container images much faster and achieve multi-architecture builds (AMD64 and Arm64) within a single pipeline.

Conclusion

Integrating Docker Build Cloud into a CI/CD pipeline using AWS CodeBuild can dramatically reduce build times and improve release frequency. This allows developers to maximize productivity while delivering value to users more quickly.

As mentioned previously, the new Docker subscription already includes a free tier for Docker Build Cloud. Take advantage of this opportunity to test how much faster you can build container images for your current projects.

Learn more

Docker 2024 Highlights: Innovations in AI, Security, and Empowering Development Teams

17 December 2024 at 20:45

In 2024, as developers and engineering teams focused on delivering high-quality, secure software faster, Docker continued to evolve with impactful updates and a streamlined user experience. This commitment to empowering developers was recognized in the annual Stack Overflow Developer Survey, where Docker ranked as one of the most loved and widely used tools for yet another year. Here’s a look back at Docker’s 2024 milestones and how we helped teams build, test, and deploy with greater ease, security, and control than ever.


Streamlining the developer experience

Docker focused heavily on streamlining workflows, creating efficiencies, and reducing the complexities often associated with managing multiple tools. One big announcement in 2024 was our upgraded Docker plans. With the launch of updated Docker subscriptions, developers now have access to the entire suite of Docker products under their existing subscription. 

The all-in-one subscription model enables seamless integration of Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud, giving developers everything they need to build efficiently. By providing easy access to the suite of products and flexibility to scale, Docker allows developers to focus on what matters most — building and innovating without unnecessary distractions.

For more details on Docker’s all-in-one subscription approach, check out our Docker plans announcement.

Build up to 39x faster with Docker Build Cloud

Docker Build Cloud, introduced in 2024, brings the best of two worlds, local development and the cloud, to developers and engineering teams worldwide. It offloads resource-intensive build processes to the cloud, ensuring faster, more consistent builds while freeing up local machines for other tasks.

A standout feature is shared build caches, which dramatically improve efficiency for engineering teams working on large-scale projects. Shared caches allow teams to avoid redundant rebuilds by reusing intermediate layers of images across builds, accelerating iteration cycles and reducing resource consumption. This approach is especially valuable for collaborative teams working on shared codebases, as it minimizes duplicated effort and enhances productivity.

Docker Build Cloud also offers native support for multi-architecture builds, eliminating the need for setting up and maintaining multiple native builders. This support removes the challenges associated with emulation, further improving build efficiency.

We’ve designed Docker Build Cloud to be easy to set up wherever you run your builds, without requiring a massive lift-and-shift effort. Docker Build Cloud also works well with Docker Compose, GitHub Actions, and other CI solutions. This means you can seamlessly incorporate Docker Build Cloud into your existing development tools and services and immediately start reaping the benefits of enhanced speed and efficiency.

Check out our build time savings calculator to estimate your potential savings in hours and dollars. 

Optimizing development workflows with performance enhancements

In 2024, Docker Desktop introduced a series of enterprise-grade performance enhancements designed to streamline development workflows at scale. These updates cater to the unique needs of development teams operating in diverse, high-performance environments.

One notable feature is the Virtual Machine Manager (VMM) in Docker Desktop for Mac, which provides a robust alternative to the Apple Virtualization Framework. Available since Docker Desktop 4.35, VMM significantly boosts performance for native Arm-based images, delivering faster and more efficient workflows for M1 and M2 Mac users. For development teams relying on Apple’s latest hardware, this enhancement translates into reduced build times and a smoother experience when working with containerized applications.

Additionally, Docker Desktop expanded its platform support to include Red Hat Enterprise Linux (RHEL) and Windows on Arm architectures, enabling organizations to maintain a consistent Docker Desktop experience across a wide array of operating systems. This flexibility ensures that development teams can optimize their workflows regardless of the underlying platform, leveraging platform-specific optimizations while maintaining uniformity in their tooling.

These advancements reflect Docker’s unwavering commitment to speed, reliability, and cross-platform support, ensuring that development teams can scale their operations without bottlenecks. By minimizing downtime and enhancing performance, Docker Desktop empowers developers to focus on innovation, improving productivity across even the most demanding enterprise environments.

More options to improve file operations for large projects

We enhanced Docker Desktop with synchronized file shares (Figure 1), a feature that can improve file operation speeds by 2-10x. This enhancement brings fast and flexible host-to-VM file sharing, offering a performance boost for developers dealing with extensive codebases.

Synchronized file sharing is ideal for developers who:

  • Develop on projects that consist of a significant number of files (such as PHP or Node projects).
  • Develop using large repositories or monorepos with more than 100,000 files, totaling significant storage.
  • Utilize virtual file systems (such as VirtioFS, gRPC FUSE, or osxfs) and face scalability issues with their workflows.
  • Encounter performance limitations and want a seamless file-sharing solution without worrying about ownership conflicts.

This integration streamlines workflows, allowing developers to focus more on coding and less on managing file synchronization issues and slow file read times. 

Screenshot of Docker Desktop showing Synchronized file shares within Resources.
Figure 1: Synchronized file shares.

Enhancing developer productivity with Docker Debug 

Docker Debug enhances the ability of developer teams to debug any container, especially those without a shell (that is, distroless or scratch images). The ability to peek into “secure” images significantly improves the debugging experience for both local and remote containerized applications. 

Docker Debug does this by attaching a dedicated debugging toolkit to any image and allows developers to easily install additional tools for quick issue identification and resolution. Docker Debug not only streamlines debugging for both running and stopped containers but also is accessible directly from both the Docker Desktop CLI and GUI (Figure 2). 

Screenshot of Docker Desktop showing Docker Debug.
Figure 2: Docker Debug.

Being able to troubleshoot images without modifying them is crucial for maintaining the security and performance of containerized applications, especially those images that traditionally have been hard to debug. Docker Debug offers:

  • Streamlined debugging process: Easily debug local and remote containerized applications, even those not running, directly from Docker Desktop.
  • Cross-device and cloud compatibility: Initiate debugging effortlessly from any device, whether local or in the cloud, enhancing flexibility and productivity.

Docker Debug improves productivity and integrates seamlessly into existing workflows. The docker debug command simplifies attaching a shell to any container or image, reducing the cognitive load on developers and allowing them to focus on solving problems rather than configuring their environment. 
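
For example, attaching a debug shell is a single command. This is a sketch; my-app is a hypothetical container name:

$ docker debug my-app          # attach a debug shell to a running or stopped container
$ docker debug nginx:latest    # or start a debug session against an image directly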

Ensuring reliable image builds with Docker Build checks

Docker Desktop 4.33 was a big release: in addition to the GA release of Docker Debug, it included the GA release of Docker Build checks, a new feature that ensures smoother and more reliable image builds. Build checks automatically validate common issues in your Dockerfiles before the build process begins, catching errors like invalid syntax, unsupported instructions, or missing dependencies. By surfacing these issues upfront, Docker Build checks help developers save time and avoid costly build failures.

You can access Docker Build checks in the CLI and in the Docker Desktop Builds view. The feature also works seamlessly with Docker Build Cloud, both locally and through CI. Whether you’re optimizing your Dockerfiles or troubleshooting build errors, Docker Build checks let you create efficient, high-quality container images with confidence — streamlining your development workflow from start to finish.
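
For example, you can run the checks on their own, without executing a full build. This is a sketch and assumes a recent Buildx and Dockerfile syntax that support build checks:

$ docker build --check .    # validate the Dockerfile and report warnings without building the image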

Onboarding and learning resources for developer success  

To further reduce friction, Docker revamped its learning resources and integrated new tools to enhance developer onboarding. By adding beginner-friendly tutorials, Docker’s learning center makes it easier for developers to ramp up and quickly learn to use Docker tools, helping them spend more time coding and less time troubleshooting. 

As Docker continues to rank as a top developer tool globally, we’re dedicated to empowering our community with continuous learning support.

Built-in container security from code to production

In an era where software supply chain security is essential, Docker has raised the bar on container security. With integrated security measures across every phase of the development lifecycle, Docker helps teams build, test, and deploy confidently.

Proactive security insights with Docker Scout Health Scores

Docker Scout, launched in 2023, has become a cornerstone of Docker’s security ecosystem, empowering developer teams to identify and address vulnerabilities in container images early in the development lifecycle. By integrating with Docker Hub, Docker Desktop, and CI/CD workflows, Scout ensures that security is seamlessly embedded into every build. 

Addressing vulnerabilities during the inner loop — the development phase — is estimated to be up to 100 times less costly than fixing them in production. This underscores the critical importance of early risk visibility and remediation for engineering teams striving to deliver secure, production-ready software efficiently.
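A quick way to get that inner-loop visibility is the Docker Scout CLI (the image name here is a placeholder):

```
# Summarize the security posture of a local or remote image
docker scout quickview my-app:latest

# List known CVEs in the image, along with the affected packages
docker scout cves my-app:latest
```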

In 2024, we announced Docker Scout Health Scores (Figure 3), a feature designed to better communicate the security posture of the container images development teams use every day. Docker Scout Health Scores provide a clear letter-grade system (A to F) that evaluates common vulnerabilities and exposures (CVEs) for software components within Docker Hub. This feature allows developers to quickly assess and wisely choose trusted content for a secure software supply chain. 

Screenshot of Docker Scout health score page showing checks for high profile vulnerabilities, supply chain attestations, unapproved images, outdated images, and more.
Figure 3: Docker Scout health score.

For a deeper dive, check out our blog post on enhancing container security with Docker Scout and secure repositories.

Air-gapped containers: Enhanced security for isolated environments

Docker introduced support for air-gapped containers in Docker Desktop 4.31, addressing the unique needs of highly secure, offline environments. Air-gapped containers enable developers to build, run, and test containerized applications without requiring an active internet connection. 

This feature is crucial for organizations operating in industries with stringent compliance and security requirements, such as government, healthcare, and finance. By allowing developers to securely transfer container images and dependencies to air-gapped systems, Docker simplifies workflows and ensures that even isolated environments benefit from the power of containerization.
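The transfer itself relies on standard Docker commands; here’s a minimal sketch (image and archive names are placeholders):

```
# On a connected machine: export the image and its layers to a tar archive
docker save my-app:1.0 -o my-app.tar

# On the air-gapped machine: import the archive into the local image store
docker load -i my-app.tar
```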

Strengthening trust with SOC 2 Type 2 and ISO 27001 certifications

Docker also achieved two major milestones in its commitment to security and reliability: SOC 2 Type 2 attestation and ISO 27001 certification. These globally recognized standards validate Docker’s dedication to safeguarding customer data, maintaining robust operational controls, and adhering to stringent security practices. SOC 2 Type 2 attestation focuses on the effective implementation of security, availability, and confidentiality controls, while ISO 27001 certification ensures compliance with best practices for managing information security systems.

These certifications provide developers and organizations with increased confidence in Docker’s ability to support secure software supply chains and protect sensitive information. They also demonstrate Docker’s focus on aligning its services with the needs of modern enterprises.

Accelerating success for development teams and organizations

In 2024, Docker introduced a range of features and enhancements designed to empower development teams and streamline operations across organizations. From harnessing the potential of AI to simplifying deployment workflows and improving security, Docker’s advancements are focused on enabling teams to work smarter and build with confidence. By addressing key challenges in development, management, and security, Docker continues to drive meaningful outcomes for developers and businesses alike.

Docker Home: A central hub to access and manage Docker products

Docker introduced Docker Home (Figure 4), a central hub for users to access Docker products, manage subscriptions, adjust settings, and find resources — all in one place. This approach simplifies navigation for developers and admins. Docker Home allows admins to manage organizations, users, and onboarding processes, with access to dashboards for monitoring Docker usage.

Future updates will add personalized features for different roles, and business subscribers will gain access to tools like the Docker Support portal and organization-wide notifications.

Screenshot of Docker Home showing options to explore Docker products, Admin console, and more.
Figure 4: Docker Home.

Empowering AI innovation  

Docker’s ecosystem supports AI/ML workflows, helping developers work with these cutting-edge technologies while staying cloud-native and agile. Read the Docker Labs GenAI series to see how we’re innovating and experimenting in the open.

Through partnerships like those with NVIDIA and GitHub, Docker ensures seamless integration of AI tools, allowing teams to rapidly experiment, deploy, and iterate. This emphasis on enabling advanced tech aligns Docker with organizations looking to leverage AI and ML in containerized environments.

Optimizing AI application development with Docker Desktop and NVIDIA AI Workbench

Docker and NVIDIA partnered to integrate Docker Desktop with NVIDIA AI Workbench, streamlining AI development workflows. This collaboration simplifies setup by automatically installing Docker Desktop when selected as the container runtime in AI Workbench, allowing developers to focus on creating, testing, and deploying AI models without configuration hassles. By combining Docker’s containerization capabilities with NVIDIA’s advanced AI tools, this integration provides a seamless platform for model training and deployment, enhancing productivity and accelerating innovation in AI application development. 

Docker + GitHub Copilot: AI-powered developer productivity

We announced that Docker joined GitHub’s Partner Program and unveiled the Docker extension for GitHub Copilot (@docker). This extension is designed to assist developers in working with Docker directly within their GitHub workflows. This integration extends GitHub Copilot’s technology, enabling developers to generate Docker assets, learn about containerization, and analyze project vulnerabilities using Docker Scout, all from within the GitHub environment.

Accelerating AI development with the Docker AI catalog

Docker launched the AI Catalog, a curated collection of generative AI images and tools designed to simplify and accelerate AI application development. This catalog offers developers access to powerful models like IBM Granite, Llama, Mistral, Phi 2, and SolarLLM, as well as applications such as JupyterHub and H2O.ai. By providing essential tools for machine learning, model deployment, inference optimization, orchestration, ML frameworks, and databases, the AI Catalog enables developers to build and deploy AI solutions more efficiently. 

The Docker AI Catalog addresses common challenges in AI development, such as decision overload from the vast array of tools and frameworks, steep learning curves, and complex configurations. By offering a curated list of trusted content and container images, Docker simplifies the decision-making process, allowing developers to focus on innovation rather than setup. This initiative underscores Docker’s commitment to empowering developers and publishers in the AI space, fostering a more streamlined and productive development environment. 

Streamlining enterprise administration 

Simplified deployment and management with Docker’s MSI and PKG installers

Docker simplifies deploying and managing Docker Desktop with the new MSI Installer for Windows and PKG Installer for macOS. The MSI Installer enables silent installations, automated updates, and login enforcement, streamlining workflows for IT admins. Similarly, the PKG Installer offers macOS users easy deployment and management with standard tools. These installers enhance efficiency, making it easier for organizations to equip teams and maintain secure, compliant environments.
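As a sketch, a silent installation on Windows might look like the following (the MSI file name is a placeholder; check the installer documentation for the full set of supported options):

```
# Install Docker Desktop silently from an elevated command prompt, logging verbosely
msiexec /i "DockerDesktop.msi" /L*V ".\msi.log" /quiet
```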

These new installers also align with Docker’s commitment to simplifying the developer experience and improving organizational management. Whether you’re setting up a few machines or deploying Docker Desktop across an entire enterprise, these tools provide a reliable and efficient way to keep teams equipped and ready to build.

New sign-in enforcement options enhance security and help streamline IT administration 

Docker simplifies IT administration and strengthens organizational security with new sign-in enforcement options for Docker Desktop. These features allow organizations to ensure users are signed in while using Docker, aligning local software with modern security standards. With flexible deployment options — including macOS Config Profiles, Windows Registry Keys, and the cross-platform registry.json file — IT administrators can easily enforce policies that prevent tampering and enhance security. These tools empower organizations to manage development environments more effectively, providing a secure foundation for teams to build confidently.
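As a minimal sketch, the cross-platform registry.json approach comes down to placing a small JSON file where Docker Desktop can find it (the path shown is the macOS location, and "my-org" is a placeholder organization):

```
# Require users to sign in with an account that belongs to the "my-org" organization
sudo sh -c 'cat > "/Library/Application Support/com.docker.docker/registry.json" <<EOF
{ "allowedOrgs": ["my-org"] }
EOF'
```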

Desktop Insights: Unlocking performance and usage analytics

Docker introduced Desktop Insights, a powerful feature that provides developers and teams with actionable analytics to optimize their use of Docker Desktop. Accessible through the Docker Dashboard, Desktop Insights offers a detailed view of resource usage, build times, and performance metrics, helping users identify inefficiencies and fine-tune their workflows (Figure 5).

Whether you’re tracking the speed of container builds or understanding how resources like CPU and memory are being utilized, Desktop Insights empowers developers to make data-driven decisions. By bringing transparency to local development environments, this feature aligns with Docker’s mission to streamline container workflows and ensure developers have the tools to build faster and more effectively.

Screenshot of Desktop Insights within the Admin Console, showing data for Total active users, Users with license, Total Builds, Total Containers run, and more.
Figure 5: Desktop Insights dashboard.

New usage dashboards in Docker Hub

Docker introduced Usage dashboards in Docker Hub, giving organizations greater visibility into how they consume resources. These dashboards provide detailed insights into storage and image pull activity, helping teams understand their usage patterns at a granular level (Figure 6). 

By breaking down data by repository, tag, and even IP address, the dashboards make it easy to identify high-traffic images or repositories that might require optimization. With this added transparency, teams can better manage their storage, avoid unnecessary image pulls, and optimize workflows to control costs. 

Usage dashboards enhance accountability and empower organizations to fine-tune their Docker Hub usage, ensuring resources are used efficiently and effectively across all projects.

Screenshot of Docker Usage dashboard showing a graph of daily pulls over time.
Figure 6: Usage dashboard.

Enhancing security with organization access tokens

Docker introduced organization access tokens, which let teams manage access to Docker Hub repositories at an organizational level. Unlike personal access tokens tied to individual users, these tokens are associated with the organization itself, allowing for centralized control and reducing reliance on individual accounts. This approach enhances security by enabling fine-grained permissions and simplifying the management of automated processes and CI/CD pipelines. 

Organization access tokens offer several advantages, including the ability to set specific access permissions for each token, such as read or write access to selected repositories. They also support expiration dates, aligning with compliance requirements and bolstering security. By providing visibility into token usage and centralizing management within the Admin Console, these tokens streamline operations and improve governance for organizations of all sizes. 
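In a CI pipeline, an organization access token is used much like a personal one; a minimal sketch (the organization name and token variable are placeholders):

```
# Sign in with the organization name as the username and the token as the password
echo "$DOCKER_ORG_TOKEN" | docker login -u my-org --password-stdin
```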

Docker’s vision for 2025

Docker’s journey doesn’t end here. In 2025, Docker remains committed to expanding its support for cloud-native and AI/ML development, reinforcing its position as the go-to container platform. New integrations and expanded multi-cloud capabilities are on the horizon, promising a more connected and versatile Docker ecosystem.

As Docker continues to build for the future, we’re committed to empowering developers, supporting the open source community, and driving efficiency in software development at scale. 

2024 was a year of transformation for Docker and the developer community. With major advances in our product suite, continued focus on security, and streamlined experiences that deliver value, Docker is ready to help developer teams and organizations succeed in an evolving tech landscape. As we head into 2025, we invite you to explore Docker’s suite of tools and see how Docker can help your team build, innovate, and secure software faster than ever.

Learn more

How to Create and Use an AI Git Agent

16 December 2024 at 21:23

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

In our past experiments, we started our work from the assumption that we had a project ready to work on. That means someone like a UI tech writer would need to understand Git operations in order to use the tools we built for them. Naturally, because we have been touching on Git so frequently, we wanted to try getting a Git agent started. Then, we want to use this Git agent to understand PR branches for a variety of user personas — without anyone needing to know the ins and outs of Git.


Git as an agent

We are exploring the idea that tools are agents. So, what would a Git agent do? 

Let’s tackle our UI use case prompt. 

Previously:

You are at $PWD of /project, which is a git repo.
Force checkout {{branch}}
Run a three-dot diff of the files changed in {{branch}} compared to main using --name-only.

A drawback that isn’t shown here is that there is no authentication. So, if you haven’t fetched that branch or pulled commits already, this prompt at best will be unreliable and more than likely will fail (Figure 1):

Screenshot of Logs showing failure to authenticate.
Figure 1: No authentication occurs.

Now:

You are a helpful assistant that checks a PR for user-facing changes.
1. Fetch everything and get on latest main.
2. Checkout the PR branch and pull latest.
3. Run a three-dot git diff against main for just files. Write the output to /thread/diff.txt.

This time around, you can see that we are less explicit about the Git operations, that we can export outputs to the conversation thread, and, most importantly, that we have authentication with a new prompt!

Preparing GitHub authentication

Note: These prompts should be easily adaptable to other Git providers, but we use GitHub at Docker.

Before we can do anything with GitHub, we have to authenticate. There are several ways to do this, but for this post we’ll focus on SSH-based auth rather than using HTTPS through the CLI. Without getting too deep into the Git world, we will be authenticating with keys on our machine that are associated with our account. These keys and configurations are commonly located at ~/.ssh on Linux/macOS. Furthermore, users commonly maintain Git config at ~/.gitconfig.

The .gitconfig file is particularly useful because it lets us specify carriage return rules — something that can easily cause Git to fail when running in a Linux container. We will also need to modify our SSH config to remove UseKeychain. We found these changes are enough to authenticate using SSH with the alpine/git image. But we, of course, don’t want to modify any host configuration.

We came up with a fairly simple flow that lets us prepare to use Git in a container without messing with any host SSH configs.

  1. Readonly mounts: Git config and SSH keys are stored on specific folders on the host machine. We need to mount those in.
    a. Mount ~/.ssh into a container as /root/.ssh-base readonly.
    b. Mount ~/.gitconfig into the same container as /root/.gitconfig.
  2. Copy /root/.ssh-base to /root/.ssh and make the new directory read-write.
  3. Make the necessary changes to the config (see the sketch after this list).
  4. For the LLM, we also need it to verify the config is in the thread and the changes were made to it. In the event that it fails to make the right changes, the LLM can self-correct.
  5. Copy the .ssh directory and .gitconfig to /thread.
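Here’s a minimal sketch of steps 2 and 3 as shell commands run inside the container (the exact config edits are assumptions based on the changes described above; adapt them to your own setup):

```
# Step 2: copy the read-only mount to a writable location
cp -r /root/.ssh-base /root/.ssh && chmod -R u+w /root/.ssh

# Step 3: drop the macOS-only UseKeychain option and normalize line endings
sed -i '/UseKeychain/d' /root/.ssh/config
git config --global core.autocrlf input
```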

All of this is baked into a prompt you can find and run on GitHub. Simply replace <username> with your system username (Figure 2).

Screenshot of mounts, showing storage of Git config and SSH keys.
Figure 2: Readonly mounts.

If you’re using the default behavior in our extension for VSCode, then the /thread/ volume is ephemeral, so you would need to make sure your agent runs this prompt before any Git tools. However, if you use the Set prompt thread ID command in VSCode, you can keep the /thread volume around under your own tag (Figure 3). 

Tagging the thread persists everything after the prompt completes, so you would only need to run the prompt once before going on to develop your Git tools.

Screenshot of setting the prompt thread ID.
Figure 3: Set the prompt thread ID.

If you opt to persist the thread, then this prompt would only need to be run once and the Git credentials will stick around (Figure 4).

Screenshot showing the stored configuration info.
Figure 4: Stored credential data.

Now that we have our credentials ready, we need our Git tool to pick up on them.

container:
  image: alpine/git
  entrypoint:
    - /bin/sh
    - -c
  command:
    - "cp -r /thread/.ssh /root/.ssh && cp -r /thread/.gitconfig /root/.gitconfig && git --no-pager {{args|safe}}"

Our container definition copies the SSH credentials and .gitconfig files to the correct directory before running Git commands.

```
Unpacking objects:  90% (9/10)
Unpacking objects: 100% (10/10)
Unpacking objects: 100% (10/10), 4.14 KiB | 141.00 KiB/s, done.
From github.com:docker/repo
   abcd123..abcd123  main           -> origin/main
 * [new tag]           branch/v2982.0.0 -> branch/v2982.0.0

# ROLE assistant
I have fetched all origin branches. The latest updates have been applied to your local repository, and any new tags have been downloaded.
```

Now, we can see our Git tool is authenticated as if it were us. Running the agent now, it can:

```
## ROLE assistant
I have run the git fetch --all command, which retrieves updates from all remotes. If you need further assistance with your Git repository, feel free to ask!
```

# ROLE assistant
```json
{"request":"checkout main"}
```

## ROLE assistant
```json
{"args":"checkout main"}
```

## ROLE assistant
You are already on the 'main' branch, and it is up to date with 'origin/main'.

# ROLE assistant
```json
{"request":"checkout the branch branch and pull latest changes."}
```

...
## ROLE assistant
The branch branch has been successfully checked out, and it's already up to date with the latest changes.
...

# ROLE assistant
```json
{"request":"Run a three-dot git diff against main and write the output for just files to /thread/diff.txt."}
```

...

## ROLE assistant
```json
{"args":"diff main... --name-only","outfile":"/thread/diff.txt"}
```

*entering tool*

## ROLE tool (git)
file1.tsx
file2.tsx
...
## ROLE assistant

The three-dot Git diff against the main branch has been executed, and the output has been saved to /thread/diff.txt. This includes the names of files that have differences.

With the new Git agent, we have enabled our assistant to fetch new information from upstream, switch branches, and pull in new code to review. If we provide our agent with credentials limited to read-only access, it could even be used by someone who does not have a local Git installation.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

How AI Assistants Can Decode GitHub Repos for UI Writers

16 December 2024 at 21:19

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

Can an AI-powered assistant understand a GitHub repo enough to answer questions for UI writers?


Across many projects, user-facing content is rendered based on some sort of client-side code. Whether a website, a game, or a mobile app, it’s critical to nail the text copy displayed to the user.

So let’s take a sample question: Do any open PRs in this project need to be reviewed for UI copy? In other words, we want to scan a GitHub repo’s PRs and gain intelligence about the changes included.

Disclaimer: The best practice to accomplish this at a mature organization would be to implement internationalization (i18n), which would facilitate centralized user-facing text. However, in a world of AI-powered tools, we believe our assistants will help minimize friction for all projects, not just ones that have adopted i18n.

So, let’s start off by seeing what options we already have.

The first instinct someone might have is to open the new Copilot friend in the GitHub nav (Figure 1).

genai series 13 f1
Figure 1: Type / to search.

We tried to get it to answer basic questions, starting with: “How many PRs are open?”

genai series 13 f2
Figure 2: “How many PRs are open?” The answer doesn’t give a number.

Despite having access to the GitHub repo, the Copilot agent provides less helpful information than we might expect.

genai series 13 f3
Figure 3: Copilot is powered by AI, so mistakes are possible.

We don’t even get a number like we asked, despite GitHub surfacing that information on the repository’s main page. Following up our first query with the main query we want to ask effectively just gives us the same answer.

genai series 13 f4
Figure 4: The third PR is filesharing: add some missing contexts.

After inspecting the third PR in the list, we found that it doesn’t contain user-facing changes. One clear indicator for this web project is that no client-side code was modified. This was a backend change, so we didn’t want it surfaced.

genai series 13 f5
Figure 5: The PR doesn’t contain user-facing changes.

So let’s try to improve this:

First prompt file

---
functions:
  - name: bash
    description: Run a bash script in the utilities container.
    parameters:
      type: object
      properties:
        command:
          type: string
          description: The command to send to bash
    container:
      image: wbitt/network-multitool
      command:
        - "bash"
        - "-c"
        - "{{command|safe}}"
  - name: git
    description: Run a git command.
    parameters:
      type: object
      properties:
        command:
          type: string
          description: The git command to run, excluding the `git` command itself
    container:
      image: alpine/git
      entrypoint:
        - "/bin/sh"
      command:
        - "-c"
        - "git --no-pager {{command|safe}}"
---

# prompt system

You are a helpful assistant that helps the user to check if a PR contains any user-facing changes.

You are given a container to run bash in with the following tools:

  curl, wget, jq
and default alpine linux tools too.

# prompt user
You are at $PWD of /project, which is a git repo.

Checkout branch `{{branch}}`.

Diff the changes and report any containing user facing changes

This prompt was promising, but it ended up with a few blocking flaws. The reason is that using git to compare files is quite tricky for an LLM.

  • git diff uses a pager, and therefore needs the --no-pager arg to send stdout to the conversation.
  • The total number of files affected via git diff can be quite large.
  • Given each file, the raw diff output can be massive and difficult to parse.
  • The important files changed in a PR might be buried with many extra files in the diff output.
  • The container has many more tools than necessary, allowing the LLM to hallucinate.

The agent needs some understanding of the repo to determine the sorts of files that contain user-facing changes, and it needs to be capable of seeing just the important pieces of information.

Our next pass involves a few tweaks:

  • Switch to the alpine/git image and a file writer as the only tools necessary.
  • Use the --name-only and --no-pager args.

With these changes, the agent now reports:

# ROLE assistant

The following files are likely to contain user-facing changes as they mainly consist of UI components, hooks, and API functionalities.

```
file1.ts
fil2.tsx
file3.tsx
...
```

Remember that this isn’t a guarantee of whether there are user-facing changes, but just an indication of where they might be if there are any.

Giving the agent the run-javascript-sandbox tool allowed it to write a script to save the output for later.

genai series 13 f6
Figure 6: Folder called user-changes with files.txt.

To check out the final prompt here, use our Gist.

Expert knowledge

This is a great start; however, we now need to inspect the files themselves for user-facing changes. When we started this, we realized that user-facing changes could manifest in a diverse set of “diffs,” so we needed to include expert knowledge. We synced up with Mark Higson, a staff SWE currently working on the frontend platform here at Docker. Mark provided some key advice on what “user-facing” changes look like in many repos at Docker, so I baked the tips into the prompt.

Straightforward approaches

Looking for changes in text nodes found in a JSX tree is the easiest example.

JSX node with interpolation

<div>{functionReturningString()}</div>

If the result is a string, it is probably user-facing, but the components that create the string could live elsewhere, so look for:

Nuanced indicators

  • Standard user-facing components. Example: notifications. If a notification’s props change, we can likely infer that it is a user-facing change.
  • Constructors for commonly used components. Example: errors. If an Error() is constructed with a different argument, we know that error could show up differently.

The key for UI reviewers is whether the overall text changes, rather than the layout.

So, despite not being able to catch everything that could be rendered, we will be able to deliver value by focusing on these areas.

Diffing the files

There are a few approaches to finding changes in the files. To start, we’ll use git again.

Before we included expert advice, git diff runs led the LLM to hallucinate changes that weren’t actually represented in the output. After the expert advice, we are seeing much more focused outputs from the LLM.

Our second prompt just needs to read the user-facing files that we already saved, and then compare them to main. After a bit of trial and error with git, we found that a combination of --no-color and --minimal was able to give the LLM enough context without consuming too many tokens for each change.
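Concretely, the invocation the agent lands on looks roughly like this (the file paths are placeholders):

```
# Compare the saved user-facing files against main with compact, colorless output
git --no-pager diff main --no-color --minimal -- src/File1.tsx src/File2.ts
```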

There’s a lot of buzz around generating and reading diffs with LLMs. The teams at Aider and Cursor are both focusing on fine-tuning LLMs using speculative decoding.

For our use cases, however, we found that the average “big” PR at Docker was not a problem for traditional LLMs when given good advice and the right tools.

So here’s the second prompt:

## User-facing content

The following pieces of code are what we want to focus on for user-facing content:

Text Nodes in any JSX or TSX files.

Text Nodes in any HTML files.

Standard user-facing components. Example: Definitions/Constructors for notifications, toasts, and errors.

# prompt user

Can you tell me if there are any user-facing changes in {{branch}}?

To check out the full prompt here, use our Gist.

Output

Note: Some full paths, branches, and diffs are trimmed or replaced with placeholders for this article.

# ROLE assistant

```json
{
  "args": ["checkout", "branch"]
}
```

# ROLE tool (git)


Already on 'branch'
Your branch is up to date with 'origin/branch'.

# ROLE assistant

```json
{
  "path": "/thread/user-changes/files.txt"
}
```

*entering tool*

# ROLE tool (read-file)

path1
path2
path3
...rest of paths

# ROLE assistant

```json
{
  "args": ["diff", "main", "--no-color", "--minimal", path1 path2 path3...]
}
```

*entering tool*

# ROLE tool (git)

<Diff output truncated>
# ROLE assistant

The following user-facing changes were detected between the "main" branch and the "branch" branch:

1. In `File1.tsx` file, the text "Sign In" was modified to "Sign in".

2. In `File2.ts` file, titles "Databases & Storage", "Monitoring & Observability", "Developer Tools", "New Extensions", were changed to "Databases & storage", "Monitoring & observability", "Developer tools", "New extensions" respectively.

3. In `File3.rtl.test.tsx.snap` and `ImageHeader/index.tsx files`, the text "Compressed Size " was changed to "Compressed size".

4. In `File4.rtl.test.tsx` and `ImageDetail/index.tsx files`, a test-case previously checking for the existence of "Compressed Size" now looks for "Compressed size".

5. In `File5.tsx file`, the button text "Add Collaborator" has been changed to "Add collaborator".

6. In `route.tsx file`, the collaborators table title "Current Collaborators" was changed to "Current collaborators".

7. In `File6.tsx` file, the description was updated.

8. In `File7.tsx`, `RepositoryOverview.tsx`, `RepositoryReadMe.tsx`, `TagsAndScans.tsx` files, the description when there is no category or description was updated.

9. In `File8.rtl.test.tsx`, `DeleteRepo.tsx`, the text for confirming repo deletion and the delete button text was adjusted.

10. In `route.tsx file`, the filter label "Trusted Content" was changed to "Trusted content".

11. In `File9.tsx`, the text "Your Organizations" has been changed to "Your organizations".

12. In `Component/index.tsx`, the search group label "Trusted Content" has been changed to "Trusted content".

13. In `file10.test.ts` file, a test-case was updated to check for the new text "Sign in".

Overall, this was a positive result:

genai series 13 f7
Figure 7: Mark’s response “Damn, that’s nice output (assuming it’s accurate).”

Follow up:

genai series 13 f8
Figure 8: Mark’s response, “I checked, it’s accurate.”

Another example from a different PR:

The changes in the `branch` branch compared to `main` include several modifications that affect user-facing content, particularly in JSX and TSX files. Here is a summary of the key changes:

...

These changes primarily involve simplification and refactoring of the existing UI logic, possibly to streamline the user interaction by removing complexities related to feature flags and reducing the use of modals or conditional rendering for specific purchasing flows.

Try it yourself

Here is a markdown file that you can paste into VSCode to try these prompts on your own branch. In the last line, update my-branch to one of your local branches that you’d like to review: https://gist.github.com/ColinMcNeil/2e8f25e2d4092f3c7a0ce8992d2e197c#file-readme-md

Next steps

This is already a promising flow. For example, a tech writer could clone the git repo and run this prompt to inspect a branch for user-facing changes. From here, we might extend the functionality:

  • Allow user input for the PR to review, without needing to know the branch or use Git.
  • Automatic git clone & pull with auth.
  • Support for larger PRs (more than 15 files changed) by allowing agents to automate their tasks.
  • “Baking” the final flow into CI/CD so that it can automatically assign reviewers to relevant PRs.

If you’re interested in running this prompt on your own repo or just want to follow along with the code, watch our new public repo and reach out. We also appreciate your GitHub Stars.

Everything we’ve discussed in this blog post is available for you to try out on your own projects. 

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity

By: Yiwen Xu
13 December 2024 at 20:30

Modern application development has evolved dramatically. Gone are the days when a couple of developers, a few machines, and some pizza were enough to launch an app. As the industry grew, DevOps revolutionized collaboration, and Docker popularized containerization, simplifying workflows and accelerating delivery. 

Later, DevSecOps brought security into the mix. Fast forward to today, and the demand for software has never been greater, with more than 750 million cloud-native apps expected by 2025.

This explosion in demand has created a new challenge: complexity. Applications now span multiple programming languages, frameworks, and architectures, integrating both legacy and modern systems. Development workflows must navigate hybrid environments — local, cloud, and everything in between. This complexity makes it harder for companies to deliver innovation on time and stay competitive. 


To overcome these challenges, you need a development platform that’s as reliable and ubiquitous as electricity or Wi-Fi — a platform that works consistently across diverse applications, development tools, and environments. Whether you’re just starting to move toward microservices or fully embracing cloud-native development, Docker meets your team where they are, integrates seamlessly into existing workflows, and scales to meet the needs of individual developers, teams, and entire enterprises.

Docker: Simplifying the complex

The Docker suite of products provides the tools you need to accelerate development, modernize legacy applications, and empower your team to work efficiently and securely. With Docker, you can:

  • Modernize legacy applications: Docker makes it easy to containerize existing systems, bringing them closer to modern technology stacks without disrupting operations.
  • Boost productivity for cloud-native teams: Docker ensures consistent environments, integrates with CI/CD workflows, supports hybrid development environments, and enhances collaboration.

Consistent environments: Build once, run anywhere

Docker ensures consistency across development, testing, and production environments, eliminating the dreaded “works on my machine” problem. With Docker, your team can build applications in unified environments — whether on macOS, Windows, or Linux — for reliable code, better collaboration, and faster time to market.

With Docker Desktop, developers have a powerful GUI and CLI for managing containers locally. Integration with popular IDEs like Visual Studio Code allows developers to code, build, and debug within familiar tools. Built-in Kubernetes support enables teams to test and deploy applications on a local Kubernetes cluster, giving developers confidence that their code will perform in production as expected.

Integrated workflows for hybrid environments

Development today spans both local and cloud environments. Docker bridges the gap and provides flexibility with solutions like Docker Build Cloud, which speeds up build pipelines by up to 39x using cloud-based, multi-platform builders. This allows developers to focus more on coding and innovation, rather than waiting on builds.

Docker also integrates seamlessly with CI/CD tools like Jenkins, GitLab CI, and GitHub Actions. This automation reduces manual intervention, enabling consistent and reliable deployments. Whether you’re building in the cloud or locally, Docker ensures flexibility and productivity at every stage.

Team collaboration: Better together

Collaboration is central to Docker. With integrations like Docker Hub and other registries, teams can easily share container images and work together on builds. Docker Desktop features like Docker Debug and the Builds view dashboards empower developers to troubleshoot issues together, speeding up resolution and boosting team efficiency.

Docker Scout provides actionable security insights, helping teams identify and resolve vulnerabilities early in the development process. With these tools, Docker fosters a collaborative environment where teams can innovate faster and more securely.

Why Docker?

In today’s fast-paced development landscape, complexity can slow you down. Docker’s unified platform reduces complexity as it simplifies workflows, standardizes environments, and empowers teams to deliver software faster and more securely. Whether you’re modernizing legacy applications, bridging local and cloud environments, or building cutting-edge, cloud-native apps, Docker helps you achieve efficiency and scale at every stage of the development lifecycle.

Docker offers a unified platform that combines industry-leading tools — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud — into a seamless experience. Docker’s flexible plans ensure there’s a solution for every developer and every team, from individual contributors to large enterprises.

Get started today

Ready to simplify your development workflows? Start your Docker journey now and equip your team with the tools they need to innovate, collaborate, and deliver with confidence.

Looking for tips and tricks? Subscribe to Docker Navigator for the latest updates and insights delivered straight to your inbox.

Learn more

Tackle These Key Software Engineering Challenges to Boost Efficiency with Docker

13 December 2024 at 20:26

Software engineering is a dynamic, high-pressure field where development teams encounter a variety of challenges every day. As software development projects become increasingly complex, engineers must maintain high-quality code, meet time constraints, collaborate effectively, and prevent security vulnerabilities. At the same time, development teams can be held back by inefficiencies that can hinder productivity and speed.

Let’s explore some of the most common software engineering challenges and how Docker’s tools streamline the inner loop of cloud-native workflows. These tools help developers overcome pain points, boost productivity, and deliver better software faster.


Top 4 software engineering challenges developers face

Let’s be real — software development teams face a laundry list of challenges. From managing dependencies across teams to keeping up with the latest threats in an increasingly complex software ecosystem, these obstacles can quickly become roadblocks that stifle progress. Let’s dive into some of the most significant software engineering challenges that developers face today and how Docker can help:

1. Dependency management

One of the most common pain points in software engineering is managing dependencies. In any large development project, multiple teams might work on different parts of the codebase, often relying on various third-party libraries and services. The complexity increases when these dependencies span across different environments and versions.

The result? Version conflicts, broken builds, deployment failures, and hours spent troubleshooting. This process can become even more cumbersome when working with legacy code or when different teams work with conflicting versions.

Containerize your applications with their dependencies

Docker allows developers to package all their apps and dependencies into neat, lightweight containers. Think of these containers as “time capsules” that hold everything your app needs to run smoothly, from libraries and tools to configurations. And because these containers are portable, you get the same app behavior on your laptop, your testing server, or in production — no more hoping that “it worked on my machine” when it’s go-time.

No more version conflict drama. No more hours spent trying to figure out which version of the library your coworker’s been using. Docker ensures that everyone on the team works with the same setup. Consistent environments, happy devs, and no more dependency issues!
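As a simple illustration, pinning an exact base image tag means every teammate runs an identical environment (the image tag here is just an example):

```
# Everyone on the team gets the exact same interpreter, down to the patch level
docker run --rm -it python:3.12-slim python --version
```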

2. Testing complexities

Testing presents another significant challenge for developers. In an ideal world, tests would run in an environment that perfectly mirrors production; however, this is rarely the case. Developers often encounter problems when testing code in isolated environments that don’t reflect real-world conditions. As a result, bugs that might have been caught early in development are only discovered later, leading to costly fixes and delays.

Moreover, when multiple developers work in different environments or use different tools, the quality of tests can be inconsistent, and issues might be missed altogether. This leads to inefficiencies and makes it harder to ensure that your software is functional and reliable.

Leverage cloud-native testing environments that match production

One of Docker’s most significant benefits is its ability to create cloud-native testing environments. With Testcontainers Cloud, you can define test dependencies as code and run them in containers, giving you consistent, reliable testing environments that scale and match production. This ensures that bugs and issues are caught earlier in the development cycle, reducing the time spent on troubleshooting and improving the overall quality of the software. 

Docker Hub offers a repository of pre-configured images and environments, enabling developers to quickly share and collaborate on testing setups. This eliminates inconsistencies between test environments, ensuring all teams work with the same configurations and tools.

3. Lack of visibility and collaboration

Software development today often involves many developers working on different parts of a project simultaneously. This collaborative approach has obvious benefits, but can also lead to significant challenges. In a multi-developer environment, tracking changes, ensuring consistency, and maintaining smooth collaboration across teams can be hard.

Without proper visibility into the software development process, identifying issues in real-time and keeping everyone aligned becomes difficult. In many cases, teams end up working in silos, each using their own tools and systems. This lack of coherence can lead to misunderstandings, duplication of efforts, and delays in achieving milestones.

Accelerate teamwork with shared images, caches, and insights

Docker fosters collaboration by offering an integrated ecosystem where developers can seamlessly share images, cache, templates, and more. For example, Docker Hub and Hardened Docker Desktop allow teams to push, pull, and share secure images, making it easier to get started quickly using all the right configurations. Meanwhile, teams can also cut down on time-consuming builds and resolve failed builds with the Docker Build Cloud shared cache and Build insights.

Docker’s streamlined workflows provide greater visibility into the development process. With this improved collaboration and integrated workflows, software developers can enjoy faster development cycles and more time to innovate.

4. Security risks

Security is often a major concern in software development, yet it’s a challenge that many teams struggle to address consistently. Developers are constantly working under tight deadlines to release new features and fixes, which can sometimes push security considerations to the sidelines. As a result, vulnerabilities can be unintentionally introduced into the codebase through outdated libraries, insecure configurations, and even simple coding oversights.

The main challenge with security lies in identifying and managing risks across all development stages and environments. Developers must follow security protocols diligently and vulnerabilities need to be patched quickly, especially when building software for organizations with strict security regulations. This becomes increasingly difficult when multiple teams work on separate components, each potentially introducing its own security concerns.

Embed security into every phase of the development lifecycle

Docker solves these challenges by integrating security and compliance from build to production, without sacrificing speed or flexibility. For example, Docker Scout offers continuous vulnerability scanning and actionable insights, enabling teams to identify and address risks early. And with increased visibility into dependencies, images, and remediation recommendations, developers can be set up to prevent outdated libraries and insecure configurations from reaching production.

With tools like Hardened Docker Desktop, Image Access Management (IAM), and Registry Access Management (RAM), Docker reduces the complexity of security oversight while ensuring compliance. These features help organizations avoid costly vulnerabilities, safeguard intellectual property, and maintain customer trust without slowing development speed. This simplified security management allows developers to deliver faster without compromising security.

Adopt Docker to overcome key challenges in software development

From dependency management to security risks, software developers face numerous challenges on their journey to deliver high-quality, secure applications. Docker’s unified development suite streamlines every stage of the inner loop, combining Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud into one powerful, cloud-native workflow ecosystem.

By streamlining workflows, enhancing collaboration, embedding security into every stage of development, and providing consistent testing environments, Docker empowers teams to build, test, and ship cloud-native applications with unparalleled speed and reliability. Whether you’re tackling legacy code or scaling modern applications, Docker ensures your development process remains efficient, secure, and ready for the demands of today’s fast-paced software landscape.

Docker’s subscription plans offer flexible, scalable access to a unified inner-loop suite, allowing teams of any size to accelerate workflows, ensure consistency, and build better software faster. It’s more than a set of tools — it offers a cohesive platform designed to transform your development lifecycle and keep your team competitive, efficient, and secure.

Ready to unlock your team’s full potential? Check out our white paper, Reducing Every-Day Complexities for More Efficient Software Development with Docker, to discover more about empowering developers to work more efficiently with simplified workflows, enhanced collaboration, and integrated security.

Explore the Docker suite of products to access the full power of the unified development suite and accelerate your team’s workflows today.

Let’s Get Containerized: Simplifying Complexity for Modern Businesses

12 December 2024 at 20:42

Did you know that enterprise companies that implemented Docker saw a 126% return on investment (ROI) over three years? In today’s rapidly evolving business landscape, companies face relentless pressure to innovate while managing costs and complexity. Traditional software development methods often struggle to keep pace with technological advancements, leading to inconsistent environments, high operational costs, and slow deployment cycles. That’s where containerization comes in as a smart solution.


Rising technology costs are a concern

Businesses today are navigating a complex environment filled with evolving market demands and economic pressures. A recent survey revealed that 70% of executives expect economic conditions to worsen, driving concerns about inflation and cash flow. Another survey found that 50% of businesses have raised prices to combat rising costs, reflecting broader financial pressures. In this context, traditional software deployment methods often fall short, resulting in rigid, inconsistent environments that impede agility and delay feature releases.

As cloud services costs surge, expected to surpass $1 trillion in 2024, businesses face heightened financial and operational challenges. Outdated deployment methods struggle with modern applications’ complexity, leading to persistent issues and inefficiencies. This underscores the need for a more agile, cost-effective solution.

As the adoption of cloud and hybrid cloud environments accelerates, businesses need solutions that ensure seamless integration and portability across their entire IT ecosystem. Containers provide a key to achieving this, offering unmatched agility, scalability, and security. By embracing containers, organizations can create more adaptable, resilient, and future-proof software solutions.

The solution is a container-first approach

Containerization simplifies the development and deployment of applications by encapsulating them into self-contained units known as containers. Each container includes everything an application needs to run — its code, libraries, and dependencies — ensuring consistent performance across different environments, from development to production.

Similar to how shipping containers transformed the packaging and transport industry, containerization revolutionized development. Using containers, development teams can reduce errors, optimize resources, accelerate time to market, and more.  

Key benefits of containerization

  • Improved consistency: Containers guarantee that applications perform identically regardless of where they are deployed, eliminating the notorious “it works on my machine” problem.
  • Cost efficiency: Containers reduce infrastructure costs by optimizing resource utilization. Unlike traditional virtual machines that require separate operating systems, containers share the same operating system (OS) kernel, leading to significant savings and better scalability.
  • Faster time to market: Containers accelerate development and deployment cycles, allowing businesses to bring products and updates to market more quickly.
  • Enhanced security: Containers provide isolation between applications, which helps manage vulnerabilities and prevent breaches from spreading, thereby enhancing overall security.

Seeing a true impact

A Forrester Consulting study found that enterprises using Docker experienced a three-month faster time to market for revenue-generating applications, along with notable gains in efficiency and speed. These organizations reduced their data center footprint, enhanced application delivery speeds, and saved on infrastructure costs, showcasing containerization’s tangible benefits.

For instance, Cloudflare, a company operating one of the world’s largest cloud networks, needed to address the complexities of managing a growing infrastructure and supporting over 1,000 developers. By adopting Docker’s containerization technology and leveraging innovations like manifest lists, Cloudflare successfully streamlined its development and deployment processes. Docker’s support for multi-architecture images and continuous improvements, such as IPv6 networking capabilities, allowed Cloudflare to manage complex application stacks more efficiently, ensuring consistency across diverse environments and enhancing overall agility.

Stepping into a brighter future

Containerization offers a powerful solution to modern business challenges, providing consistency, cost savings, and enhanced security. As companies face increasing complexity and market pressures, adopting a container-first approach can streamline development, improve operational efficiency, and maintain a competitive edge.

Ready to explore how containerization can drive operational excellence for your business? Our white paper Unlocking the Container: Enhancing Operational Performance through Containerization provides an in-depth analysis and actionable insights on leveraging containers to enhance your software development and deployment processes. Need containerization? Chat with us or explore more resources.

Are you navigating the ever-evolving world of developer tools and container technology? The Docker Newsletter is your essential resource, curated for Docker users like you. Keep your finger on the pulse of the Docker ecosystem. Subscribe now!

A Beginner’s Guide to Building Outdoor Light Shows Synchronized to Music with Open Source Tools

2 December 2024 at 21:37

Outdoor light displays are a fun holiday tradition — from simple light strings hung from the eaves to elaborate scenes that bring out your competitive spirit. If using open source tools, thousands of feet of electrical cables, custom controllers, and your favorite music to build complex projects appeals to you, then the holiday season offers the perfect opportunity to indulge your creative passion. 

I personally run home light shows at Halloween and Christmas that feature up to 30,000 individually addressable LED lights synchronized with dozens of different songs. It’s been an interesting learning journey over the past five years, but it is also one that almost anyone can pursue, regardless of technical ability. Read on for tips on how to make a display that’s the highlight of your neighborhood. 

A blue cube covered in a green string of holiday lights that are red and yellow.

Getting started with outdoor light shows

As you might expect, light shows are built using a combination of hardware and software. The hardware includes the lights, props, controllers, and cabling. On the software side, there are different tools for the programming, also called sequencing, of the lights as well as the playback of the show. 

coleman holiday lights f1
Figure 1: Light show hardware includes the lights, props, controllers, and cabling.

Hardware requirements

Lights

Let’s look more closely at the hardware behind the scenes starting with the lights. Multiple types of lights can be used in displays, but I’ll keep it simple and focus on the most popular choice. Most shows are built around 12mm RGB LED lights that support the WS2811 protocol, often referred to as pixels or nodes. Generally, these are not available at retail stores. That means you’ll need to order them online, and I recommend choosing a vendor that specializes in light displays. I have purchased lights from a few different vendors, but recently I’ve been using Wally’s Lights, Visionary Light Shows, and Your Pixel Store.  

Props

The lights are mounted into different props — such as a spider for Halloween or a snowflake for the winter holidays. You can either purchase these props, which are usually made out of the same plastic cardboard material used in yard signs, or you can make them yourself. Very few vendors sell pre-built props, so be ready to push the pixels by hand — yes, in my display either I or someone in my family pushed each of the 30,000 lights into place when we initially built the props. I get most of my props from EFL Designs, Gilbert Engineering, or Boscoyo Studio.

coleman holiday lights f2
Figure 2: The lights are mounted into different props, which you can purchase or make yourself.

Controllers

Once your props are ready to go, you’ll need something to drive them. This is where controllers come in (Figure 3). Like the props and lights, you can get your controllers from various specialized vendors and, to a large extent, you can mix and match different brands in the same show because they all speak the same protocols to control the pixels (usually E1.31 or DDP). 

You can purchase controllers that are ready to run, or you can buy the individual components and build your own boxes — I grew up building PCs, so I love this degree of flexibility. However, I do tend to buy pre-configured controllers, because I like having a warranty from the manufacturer. My controllers all come from HolidayCoro, but Falcon controllers are also popular.

Figure 3: Once your props are ready to go, you’ll need a controller.

The number of controllers you need depends on the number of lights in your show. Most controllers have multiple outputs, and each output can drive a certain number of lights. I typically plan for about 400 lights per output, so a 30,000-light display like mine needs roughly 75 outputs. My display uses three main controllers and four receiver boxes. Note that long-range receivers are a way of extending the distance you can place lights from the main controller, but this is more of an advanced topic and not one I’ll cover in this introductory article.

Cables

Although controllers are powered by standard household outlets, the connection from the controllers to the lights happens over specialized cabling. These extension cables contain three wires: two send power to the lights (either 5V or 12V), and the third sends data. Basically, the data wire carries instructions like “light 1,232: turn green for 0.5 seconds, then fade to off over 0.25 seconds.” You can get these extension cables from any vendor that sells pixels. 

Additionally, all of the controllers need to be on the same Ethernet network. Many folks run their shows on wireless networks, but I prefer a wired setup for increased performance and reliability. 

Software and music

At this point, you have a bunch of props with lights connected to networked controllers via specialized cabling. But, how do you make them dance? That’s where the software comes in.

xLights

Many hobbyists use xLights to program their lights. This software is open source and available for Mac, Windows, and Linux, and it works with three basic primitives: props, effects, and time. You can choose what effect you want to apply to a given prop at a given time (Figure 4). The timing of the effect is almost always aligned with the song you’ve chosen. For example, you might flash snowflakes off and on in synchronization with the drum beat of a song. 

Figure 4: Programming lights in xLights, with options for applying effects to specific props at specific times.

Music

If this step sounds overwhelming to you, you’re not alone. In fact, I don’t sequence my own songs. I purchase them from different vendors, who create sequences for generic setups with a wide variety of props. I then import them and map them to the different elements that I actually use in my show. In terms of time, professionals can spend many hours animating one minute of a song. I generally spend about two hours mapping an existing sequence to my show’s layout. My favorite sequence vendors include BF Light Shows, xTreme Sequences, and Magical Light Shows.

Falcon Player

Once you have a sequence built, you use another piece of software to send that sequence to your show controllers. Some controllers have this software built in, but most people I know use another open source application, Falcon Player (FPP), to perform this task. Not only can FPP run on a Raspberry Pi, but it is also shipped as a Docker image! FPP can play back your sequences, build playlists, and run a show schedule for automated playback. 
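
If you already run Docker at home, trying FPP can be as easy as starting a container. The sketch below is illustrative only: the image name and media path are assumptions, so check the FPP project’s documentation for the published image and any required port or device mappings.

```bash
# Illustrative sketch, not official FPP instructions:
# the image name and media path below are assumptions.
docker run -d \
  --name fpp \
  --network host \
  -v fpp-media:/home/fpp/media \
  falconchristmas/fpp
```

Host networking is used here because FPP needs to reach the controllers on your show’s LAN over protocols such as E1.31 and DDP, and the named volume keeps sequences, music, and settings across container restarts.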

Put it all together and flip the switch

When everything is put together, you should have a system similar to Figure 5:

Figure 5: System overview, showing the flow from xLights to FPP to controllers to lights.

For an example of a finished display in action, see the video linked at the end of this article. 

xLights community support

Although building your own light show may seem like a daunting task, fear not; you are not alone. I have yet to mention the most important part of this whole process: the community. The xLights community is one of the most helpful I’ve ever been part of. You can get questions answered via the official Facebook group as well as through other groups dedicated to specific sequence and controller vendors. Additionally, a Zoom support meeting runs 24×7 and is staffed by hobbyists from across the globe. So, what are you waiting for? Go ahead and start planning your first holiday light show!

Learn more

Video: John Legend’s “What Christmas Means to Me” synchronized to over 32,000 lights. Original sequence by @xTremeSequences.

Enhancing Container Security with Docker Scout and Secure Repositories

25 November 2024 at 21:43

Docker Scout simplifies the integration with container image repositories, improving the efficiency of container image approval workflows without disrupting or replacing current processes. Positioned outside the repository’s stringent validation framework, Docker Scout serves as a proactive measure to significantly reduce the time needed for an image to gain approval. 

By shifting security checks left and integrating Docker Scout into the early stages of the development cycle, issues are identified and addressed directly on the developer’s machine.
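
In practice, that feedback loop can be as simple as pointing the Docker Scout CLI at a locally built image. A minimal sketch (the image name myorg/myapp:latest is a placeholder):

```bash
# Build the image locally
docker build -t myorg/myapp:latest .

# High-level summary of vulnerabilities and base image status
docker scout quickview myorg/myapp:latest

# Detailed CVE listing, filtered to the most urgent findings
docker scout cves --only-severity critical,high myorg/myapp:latest
```

Running these checks before a commit ever reaches the pipeline is what keeps the feedback immediate and the developer in flow.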


Minimizing vulnerabilities 

This leftward shift in security accelerates the development process by keeping developers in flow, providing immediate feedback on policy violations at the point of development. As a result, images are secured and reviewed for compliance before being pushed into the continuous integration/continuous deployment (CI/CD) pipeline, reducing reliance on resource-heavy, consumption-based scans (Figure 1). By resolving issues earlier, Docker Scout minimizes the number of vulnerabilities detected during the CI/CD process, freeing up the security team to focus on higher-priority tasks.

Figure 1: Sample secure repository pipeline.

Additionally, the Docker Scout console allows the security team to define custom security policies and manage VEX (Vulnerability Exploitability eXchange) statements. VEX is a standard that allows vendors and other parties to communicate the exploitability status of vulnerabilities, making it possible to document justifications for shipping software that contains known Common Vulnerabilities and Exposures (CVEs).

This feature enables seamless collaboration between development and security teams, ensuring that developers are working with up-to-date compliance guidelines. The Docker Scout console can also feed critical data into existing security tooling, enriching the organization’s security posture with more comprehensive insights and enhancing overall protection (Figure 2).

Figure 2: Sample secure repository pipeline with Docker Scout.

How to secure image repositories

A secure container image repository provides digitally signed, OCI-compliant images that are rebuilt and rescanned nightly. These repositories are typically used in highly regulated or security-conscious environments, offering a wide range of container images, from open source software to commercial off-the-shelf (COTS) products. Each image in the repository undergoes rigorous security assessments to ensure compliance with strict security standards before being deployed in restricted or sensitive environments.

Key components of the repository include a hardened source code repository and an OCI-compliant registry (Figure 3). All images are continuously scanned for vulnerabilities, stored secrets, problematic code, and compliance with various standards. Each image is assigned a score upon rebuild, determining its compliance and suitability for use. Scanning reports and justifications for any potential issues are typically handled using the VEX format.

Figure 3: Key components of the repository include a hardened source code repository and an OCI-compliant registry.

Why use a hardened image repository?

A hardened image repository mitigates the security risks associated with deploying containers in sensitive or mission-critical environments. Traditional software deployment can expose organizations to vulnerabilities and misconfigurations that attackers can exploit. By enforcing a strict set of requirements for container images, the hardened image repository ensures that images meet the necessary security standards before deployment. Rebuilding and rescanning each image daily allows for continuous monitoring of new vulnerabilities and emerging attack vectors.

Using pre-vetted images from a hardened repository also streamlines the development process, reducing the load on development teams and enabling faster, safer deployment.

In addition to addressing security risks, the repository also ensures software supply chain security by incorporating software bills of materials (SBOMs) with each image. The SBOM of a container image provides an inventory of all the components used to build the image, including operating system packages, application-specific dependencies and their versions, and license information. By maintaining a robust vetting process, the repository guarantees that all software components are traceable, verifiable, and tamper-free — essential for ensuring the integrity and reliability of deployed software.
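
Docker Scout can produce that inventory directly from an image. A quick sketch, again with a placeholder image name:

```bash
# Print the software bill of materials for an image
docker scout sbom myorg/myapp:latest
```

The output can then be cross-checked against the repository’s own scanning results or fed into other supply chain tooling.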

Who uses a hardened image repository?

The main users of a hardened container image repository include internal developers responsible for creating applications, developers working on utility images, and those responsible for building base images for other containerized applications. Note that the titles for these roles can vary by organization.

  • Application developers use the repository to ensure that the images their applications are built upon meet the required security and compliance standards.
  • DevOps engineers are responsible for building and maintaining the utility images that support various internal operations within the organization.
  • Platform developers create and maintain secure base images that other teams can use as a foundation for their containerized applications.

Daily builds

One challenge with using a hardened image repository is the time needed to approve images. Daily rebuilds are conducted to assess each image for vulnerabilities and policy violations, but issues can emerge, requiring developers to make repeated passes through the pipeline. Because rebuilds are typically done at night, this process can result in delays for development teams, as they must wait for the next rebuild cycle to resolve issues.

Enter Docker Scout

Integrating Docker Scout into the pre-submission phase can reduce the number of issues that enter the pipeline. This proactive approach helps speed up the submission and acceptance process, allowing development teams to catch issues before the nightly scans. 

Vulnerability detection and management

  • Requirement: Images must be free of known vulnerabilities at the time of submission to avoid delays in acceptance.
  • Docker Scout contribution:
    • Early detection: Docker Scout can scan Docker images during development to detect vulnerabilities early, allowing developers to resolve issues before submission.
    • Continuous analysis: Docker Scout continually reviews uploaded SBOMs, providing early warnings for new critical CVEs and ensuring issues are addressed outside of the nightly rebuild process.
    • Justification handling: Docker Scout supports VEX for handling exceptions. This can streamline the justification process, enabling developers to submit justifications for potential vulnerabilities more easily (a sketch of a VEX document follows this list).
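
For illustration, below is a minimal OpenVEX document that marks a single CVE as not exploitable in a product. The CVE identifier and package URL are placeholders, and the --vex-location flag is used under the assumption that it matches your CLI version; consult the Docker Scout documentation for the current way to attach or consume VEX data.

```bash
# Sketch of a minimal OpenVEX statement (placeholder CVE and package URL)
cat > vex.json <<'EOF'
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2024-0001",
  "author": "Example Security Team",
  "timestamp": "2024-11-25T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-0000" },
      "products": [ { "@id": "pkg:docker/myorg/myapp@latest" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
EOF

# Ask Docker Scout to take local VEX documents into account when reporting CVEs
docker scout cves --vex-location . myorg/myapp:latest
```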

Security best practices and configuration management

  • Requirement: Images must follow security best practices and configuration guidelines, such as using secure base images and minimizing the attack surface.
  • Docker Scout contribution:
    • Security posture enhancement: Docker Scout allows teams to set policies that align with repository guidelines, checking for policy violations such as disallowed software or unapproved base images (see the sketch below).
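
Developers can also check an image against those centrally defined policies from the CLI. A brief sketch, assuming the organization (the placeholder myorg here) is already enrolled with Docker Scout:

```bash
# Evaluate an image against the organization's Docker Scout policies
docker scout policy --org myorg myorg/myapp:latest
```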

Compliance with dependency management

  • Requirement: All dependencies must be declared, and internet access during the build process is usually prohibited.
  • Docker Scout contribution:
    • Dependency scanning: Docker Scout identifies outdated or vulnerable libraries included in the image.
    • Automated reports: Docker Scout generates security reports for each dependency, which can be used to cross-check the repository’s own scanning results.

Documentation and provenance

  • Requirement: Images must include detailed documentation on their build process, dependencies, and configurations for auditing purposes.
  • Docker Scout contribution:
    • Documentation support: Docker Scout contributes to security documentation by providing data on the scanned image, which can be used as part of the official documentation submitted with the image.

Continuous compliance

  • Requirement: Even after an image is accepted into the repository, it must remain compliant with new security standards and vulnerability disclosures.
  • Docker Scout contribution:
    • Ongoing monitoring: Docker Scout continuously monitors images, identifying new vulnerabilities as they emerge, ensuring that images in the repository remain compliant with security policies.

By utilizing Docker Scout in these areas, developers can ensure their images meet the repository’s rigorous standards, thereby reducing the time and effort required for submission and review. This approach helps align development practices with organizational security objectives, enabling faster deployment of secure, compliant containers.

Integrating Docker Scout into the CI/CD pipeline

Integrating Docker Scout into an organization’s CI/CD pipeline can enhance image security from the development phase through to deployment. By incorporating Docker Scout into the CI/CD process, the organization can automate vulnerability scanning and policy checks before images are pushed into production, significantly reducing the risk of deploying insecure or non-compliant images.

  • Integration with build pipelines: During the build stage of the CI/CD pipeline, Docker Scout can be configured to automatically scan Docker images for vulnerabilities and adherence to security policies. The integration typically involves adding a Docker Scout scan as a step in the build job, for example through a GitHub Action (a sketch of such a gating step follows this list). If Docker Scout detects issues such as outdated dependencies, vulnerabilities, or policy violations, the build can be halted and feedback provided to developers immediately. This early detection helps resolve issues long before images are pushed to the hardened image repository.
  • Validation in the deployment pipeline: As images move from development to production, Docker Scout can be used to perform final validation checks. This step ensures that any security issues that might have arisen since the initial build have been addressed and that the image is compliant with the latest security policies. The deployment process can be gated based on Docker Scout’s reports, preventing insecure images from being deployed. Additionally, Docker Scout’s continuous analysis of SBOMs means that even after deployment, images can be monitored for new vulnerabilities or compliance issues, providing ongoing protection throughout the image lifecycle.
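
As a sketch of what that gating step can look like, the command below returns a non-zero exit code when critical or high CVEs are found, which is enough to fail most CI jobs; the image name is a placeholder, and the same command can be wrapped in a GitHub Actions step or any other runner:

```bash
# Fail the CI job if critical or high severity CVEs are present
docker scout cves \
  --only-severity critical,high \
  --exit-code \
  myorg/myapp:latest
```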

By embedding Docker Scout directly into the CI/CD pipeline (Figure 1), the organization can maintain a proactive approach to security, shifting left in the development process while ensuring that each image deployed is safe, compliant, and up-to-date.

Defense in depth and Docker Scout’s role

In any organization that values security, adopting a defense-in-depth strategy is essential. Defense in depth is a multi-layered approach to security, ensuring that if one layer of defense is compromised, additional safeguards are in place to prevent or mitigate the impact. This strategy is especially important in environments that handle sensitive data or mission-critical operations, where even a single vulnerability can have significant consequences.

Docker Scout plays a vital role in this defense-in-depth strategy by providing a proactive layer of security during the development process. Rather than relying solely on post-submission scans or production monitoring, Docker Scout integrates directly into the development and CI/CD workflows (Figure 2), allowing teams to catch and resolve security issues early. This early detection prevents issues from escalating into more significant risks later in the pipeline, reducing the burden on the SecOps team and speeding up the deployment process.

Furthermore, Docker Scout’s continuous monitoring capabilities mean that images are not only secure at the time of deployment but remain compliant with evolving security standards and new vulnerabilities that may arise after deployment. This ongoing vigilance forms a crucial layer in a defense-in-depth approach, ensuring that security is maintained throughout the entire lifecycle of the container image.

By integrating Docker Scout into the organization’s security processes, teams can build a more resilient, secure, and compliant software environment, ensuring that security is deeply embedded at every stage from development to deployment and beyond.

Learn more
