Collaboration and security are essential for delivering high-quality applications in modern software development, especially in cloud-native environments. Developers navigate intricate workflows, connect diverse systems, and safeguard applications against emerging threats — all while maintaining velocity and efficiency.
Think of development as preparing a multi-course meal in a high-pressure, professional kitchen, where precision, timing, and communication are critical. Each developer is a chef working on different parts of the dish, passing ingredients (code) along the way. When one part of the system encounters delays, it can ripple across the process, impacting the final result. Similarly, poor collaboration or security gaps can derail a project, causing delays and inefficiencies.
Docker serves as the kitchen manager, ensuring everything flows smoothly, ingredients are passed securely, and security is integrated from start to finish.
Seamless collaboration with Docker Hub and Testcontainers Cloud
Success in a professional kitchen depends on clear communication and coordination. In development, it’s no different. Docker’s collaboration tools, like Docker Hub and Testcontainers Cloud, simplify how teams work together, share resources, and test efficiently.
Docker Hub can be thought of as a kitchen’s “prepped ingredients station.” It’s where some of the most essential ingredients are always ready to go. With a vast selection of curated, trusted images, developers can quickly access high-quality, pre-configured containers, ensuring consistency and reducing the chance for mistakes.
Testcontainers Cloud is like the kitchen’s test station, providing on-demand, production-like environments for testing. Developers can spin up these environments quickly, reducing setup time and ensuring code performs in a real-world setting.
Effective coordination is critical whether you’re in a kitchen or on a development team, especially when projects involve distributed or hybrid teams. Clear communication ensures everyone is aligned and productive. The Docker suite of products provides the tools that make it possible for companies to more easily break down silos, share resources seamlessly, and ensure alignment — no matter how large your team is or where they work!
By streamlining collaboration, Docker reduces complexity and allows teams to move forward with confidence. With Docker Hub, Testcontainers Cloud, and integrated security features, teams can share resources, track progress, and catch issues early, enabling them to deliver high-quality results on time.
These tools improve efficiency, reduce errors, and help teams move faster through the development inner loop by making collaboration seamless and resource sharing simple.
Integrated security from code to production
Embedding security into every step of development is essential to maintaining speed and delivering high-quality software. Docker builds security into the development process itself, so teams can identify and fix issues earlier than ever.
Docker Scout monitors container images in real time, identifying vulnerabilities early to ensure your software is production-ready. By identifying and resolving risks early, developers can maintain high-quality standards and accelerate time to market.
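For example, a quick check from the CLI might look like the following sketch (the image name is a placeholder, and flags reflect the current Docker Scout CLI and may evolve):

```bash
# Summarize the security posture of an image
docker scout quickview myorg/web-app:latest

# List CVEs in that image, filtered to critical and high severity
docker scout cves --only-severity critical,high myorg/web-app:latest
```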
Docker also integrates additional security features that work behind the scenes:
Hardened Docker Desktop (HDD) strengthens container isolation, protecting against software supply chain risks like malware.
Trusted Content offers developers secure, reliable base images, reducing vulnerabilities in the app’s foundation.
By building security into the workflow, Docker helps teams identify risks earlier, improve code quality, and maintain momentum without compromising safety.
Efficiency in action with Docker
Speed, collaboration, and security are paramount in today’s development landscape. Docker simplifies and secures the development process, helping teams collaborate efficiently and deliver secure, high-quality software faster.
Just as a well-managed kitchen runs smoothly, Docker helps development teams stay coordinated, ensuring security and productivity work together in perfect harmony. Docker removes complexity, accelerates delivery, and embeds security, enabling teams to create efficient, secure applications on time.
Ready to boost efficiency and collaboration in your development process? Explore the Docker suite of products to see how they can streamline your workflow and improve your team’s productivity today.
Within software development, security and compliance are more than simple boxes to check. Each attestation and compliance check is backed by a well-considered risk assessment that aims to avoid ever-changing vulnerabilities and attack vectors. Software development teams don’t want to worry about vulnerabilities when they are focused on building something remarkable.
In this article, we explain how Docker Hub and Docker Scout can help development teams ensure a more secure and compliant software supply chain.
Security starts with trusted foundations
Every structure needs a strong foundation. A weak base is where cracks begin to show. Using untrusted or outdated software is like building a skyscraper on sand, and security issues can derail progress, leading to costly fixes and delayed releases. By “shifting security left” — addressing vulnerabilities early in the development process — teams can avoid these setbacks down the road.
Modern development demands a secure and compliant software supply chain. Unverified software or vulnerabilities buried deep within base images can become costly compliance issues, disrupting development timelines and eroding customer trust. One weak link in the supply chain can snowball into more significant issues, affecting product delivery and customer satisfaction. Without security and compliance checks, organizations will lack the credibility their customers rely on.
How Docker Hub and Scout help teams shift left
Software developers are like a construction crew building a skyscraper. The process requires specialized components — windows, elevators, wiring, concrete, and so on — which are found at a single supply depot and which work in harmony with each other. This idea is similar to microservices, which are pieced together to create modern applications. In this analogy, Docker Hub acts as the supply depot for a customer’s software supply chain, stocked with trusted container images that help developer teams streamline development.
Docker Hub is more than a container registry. It is the most widely trusted content distribution platform built on secure, verified, and dependable container images. Docker Official Images (DOI) and Docker Verified Publisher (DVP) programs provide a rock-solid base to help minimize risks and let development teams focus on creating their projects.
Docker Hub simplifies supply chain security by ensuring developers start with trusted components. Its library of official and verified publisher images offers secure, up-to-date resources vetted for compliance and reliability, eliminating the risk of untrusted or outdated components.
Proactive risk management is critical to software development
To avoid breaking production environments, organizations need to plan ahead by catching and tracking common vulnerabilities and exposures (CVEs) early in the development process. Docker Scout enables proactive risk management by integrating security checks early in the development lifecycle. Scout reduces the likelihood of security incidents and streamlines the development process.
Additionally, Docker Scout Health Scores provide a straightforward framework for evaluating the security posture of container images used daily by development teams. Using an easy-to-understand alphabetical grading system (A to F), these scores assess CVEs in software components within Docker Hub. This feature lets developers quickly evaluate and select trusted content, ensuring a secure software supply chain.
Avoid shadow changes with IAM and RBAC for secure collaboration
Compliance is not glamorous, but it is essential to running a business. Development teams don’t want to have to worry about whether they are meeting industry standards — they want to know they are. Docker Hub makes compliance simple with pre-certified images and many features that take the guesswork out of governance. That means you can stay compliant while your teams keep growing and innovating.
The biggest challenge to scaling a team or growing your development operations is not about adding people — it’s about maintaining control without losing momentum. Tracking, reducing, and managing shadow changes means your team keeps its flow state and its development velocity.
Docker Hub’s Image Access Management (IAM) enforces precise permissions to ensure that only authorized people have access to modify sensitive information in repositories. Additionally, with role-based access control (RBAC), you’re not just delegating; you’re empowering your team with predefined roles that streamline onboarding, reduce mistakes, and keep everyone moving in harmony.
Docker Hub’s activity logs provide another layer of confidence as they let you track changes, enforce compliance, and build trust. These capabilities enhance security and boost collaboration by creating an environment where team members can focus on delivering high-quality applications.
Built-in trust
Without verified components, development teams can end up playing whack-a-mole with vulnerabilities. Time is lost. Money is spent. Trust is damaged. Now, picture a team working with trusted content and images that integrate security measures from the start. They deliver on time, on budget, and with confidence.
Building security into your applications doesn’t slow you down; it’s your superpower. Docker weaves trust and security into every part of your development process. Your applications are safeguarded, your delivery is accelerated, and your team is free to focus on what matters most — creating value.
Start your journey today. With Docker, you’re not just developing applications but building trust. Learn how trusted components help simplify compliance, enhance security, and empower your team to innovate fearlessly.
The Docker Desktop 4.37 release brings incremental improvements that make developers’ lives easier by addressing common challenges in modern software development. With a focus on integrating AI resources and streamlining operational workflows, this update ensures developers can work faster, smarter, and more effectively.
Unlocking AI-driven development with Docker AI Catalog integration
AI/ML development is exploding, but many developers face hurdles accessing prebuilt AI models and tools. They often need to search across multiple platforms, wasting valuable time piecing together resources and overcoming compatibility issues. This fragmentation slows down innovation and makes it harder for teams to bring AI-driven features into their applications.
With Docker Desktop 4.37, the AI Catalog in Docker Hub is now accessible directly through Docker Desktop. This seamless integration enables developers to discover, pull, and integrate AI models into their workflows effortlessly. Whether you’re incorporating pretrained machine learning models or exploring generative AI tools, Docker Desktop ensures these resources are just a click away.
Key benefits:
Streamlined discovery: You don’t need to leave your development environment to find AI tools. The AI Catalog is built into Docker Hub and can be immediately accessed from Docker Desktop.
Faster prototyping: By eliminating friction in accessing AI resources, teams can focus on building and iterating faster.
Enhanced compatibility: Docker’s containerized approach ensures AI models run consistently across environments, reducing setup headaches.
Whether you’re developing cutting-edge AI/ML applications or just beginning to experiment with AI tools, this integration empowers developers to innovate without distraction.
Command-line operations: Control Docker Desktop your way
For developers who automate workflows or work heavily in terminal environments, relying solely on graphical user interfaces (GUIs) can be limiting. Starting, stopping, or troubleshooting Docker Desktop often requires GUI navigation, which can disrupt automation pipelines and slow down power users.
Docker Desktop 4.37 introduces robust command-line capabilities for managing Docker Desktop itself. Developers can now perform essential tasks such as starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line.
Faster troubleshooting: Check the status and restart Docker Desktop without leaving the terminal, streamlining issue resolution.
Developer flexibility: A smoother, distraction-free experience for developers who prefer terminal-based workflows.
This new feature bridges the gap between GUI and command-line preferences, allowing developers to tailor their workflows to their needs.
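As a rough sketch, a script or CI step could now manage the application directly (exact command availability may vary slightly by version and platform):

```bash
# Check whether Docker Desktop is running
docker desktop status

# Start it if needed, then restart after changing settings
docker desktop start
docker desktop restart

# Shut it down when the job is done
docker desktop stop
```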
Upgraded components: Keeping developers ahead
Docker Desktop 4.37 includes significant upgrades to its underlying components, bringing enhanced performance, security, and feature sets such as GPU-accelerated workflows.
Here’s what’s new:
Docker Buildx v0.19.2: Optimized multiplatform builds for faster, more efficient pipelines.
At Docker, we aim to provide a stable and dependable development platform so developer teams can focus on creating, not troubleshooting. Docker Desktop 4.37 also addresses several key bugs and usability concerns:
Default disk usage limit: New installations now default to a 1TB disk limit, offering additional flexibility for developers with large containerized applications.
Loopback AF_VSOCK connections: Fixed to ensure container communication reliability.
Dashboard synchronization: Ensures consistent behavior between the Docker Desktop Dashboard and the Docker daemon after engine restarts.
Resource Saver mode stability: Resolves issues with mode reengagement, improving power efficiency for resource-conscious users.
Wrapping up
Docker Desktop 4.37 offers a step forward in enabling developers to innovate. With a focus on AI-driven development and automation-friendly operations, this release aligns with the evolving needs of modern software teams.
Learn more
Authenticate and update to receive your subscription level’s newest Docker Desktop features.
Containerized application development has revolutionized modern software delivery, but slow image builds in CI/CD pipelines can bring developer productivity to a halt. Even with AWS CodeBuild automating application testing and building, teams face challenges like resource constraints, inefficient caching, and complex multi-architecture builds that lead to delays, lower release frequency, and prolonged recovery times.
Enter Docker Build Cloud, a high-performance cloud service designed to streamline image builds, integrate seamlessly with AWS CodeBuild, and reduce build times dramatically. With Docker Build Cloud, you gain powerful cloud-based builders, shared caching, and native multi-architecture support — all while keeping your CI/CD pipelines efficient and your developers focused on delivering value faster.
In this post, we’ll explore how AWS CodeBuild combined with Docker Build Cloud tackles common bottlenecks, boosts build performance, and simplifies workflows, enabling teams to ship more quickly and reliably.
By using AWS CodeBuild, you can automate the build and testing of container applications, enabling the construction of efficient CI/CD workflows. AWS CodeBuild is also integrated with AWS Identity and Access Management (IAM), allowing detailed configuration of access permissions for build processes and control over AWS resources.
Container images built with AWS CodeBuild can be stored in Amazon Elastic Container Registry (Amazon ECR) and deployed to various AWS services, such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, or AWS Lambda (Figure 1). Additionally, these services can leverage AWS Graviton, which adopts Arm-based architectures, to improve price performance for compute workloads.
Challenges of container image builds with AWS CodeBuild
Regardless of the tool used, building container images in a CI pipeline often takes a significant amount of time. This can lead to the following issues:
Reduced development productivity
Lower release frequency
Longer recovery time in case of failures
The main reasons why build times can be extended include:
1. Machines for building
Building container images requires substantial resources (CPU, RAM). If the machine specifications used in the CI pipeline are inadequate, build times can increase.
For simple container image builds, the impact may be minimal, but in cases of multi-stage builds or builds with many dependencies, the effect can be significant.
AWS CodeBuild allows changing instance types to improve these situations. However, such changes can apply to parts of the pipeline beyond container image builds, and they also increase costs.
Developers need to balance cost and build speed to optimize the pipeline.
2. Container image cache
In local development environments, Docker’s build cache can shorten rebuild times significantly by reusing previously built layers, avoiding redundant processing for unchanged parts of the Dockerfile. However, in cloud-based CI services, clean environments are used by default, so cache cannot be utilized, resulting in longer build times.
Although there are ways to use storage or container registries to leverage caching, these often are not employed because they introduce complexity in configuration and overhead from uploading and downloading cache data.
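For reference, registry-backed caching with Buildx typically looks something like this sketch (registry and repository names are placeholders, and it assumes a builder that supports cache export, such as the docker-container driver):

```bash
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  --tag registry.example.com/myapp:latest \
  --push .
```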
3. Multi-architecture builds (AMD64, Arm64)
To use Arm-based architectures like AWS Graviton in Amazon EKS or Amazon ECS, Arm64-compatible container image builds are required.
With changes in local environments, such as Apple Silicon, cases requiring multi-architecture support for AMD64 and Arm64 have increased. However, building images for different architectures (for example, building x86 on Arm, or vice versa) often requires emulation, which can further increase build times (Figure 2).
Although AWS CodeBuild provides both AMD64 and Arm64 instances, running them as separate pipelines is necessary, leading to more complex configurations and operations.
Accelerating container image builds with Docker Build Cloud
The Docker Build Cloud service executes the Docker image build process in the cloud, significantly reducing build time and improving developer productivity (Figure 3).
Particularly in CI pipelines, Docker Build Cloud enables faster container image builds without the need for significant changes or migrations to existing pipelines.
Docker Build Cloud includes the following features:
High-performance cloud builders: Cloud builders equipped with 16 vCPUs and 32GB RAM are available. This allows for faster builds compared to local environments or resource-constrained CI services.
Shared cache utilization: Cloud builders come with 200 GiB of shared cache, significantly reducing build times for subsequent builds. This cache is available without additional configuration, and Docker Build Cloud handles the cache maintenance for you.
Multi-architecture support (AMD64, Arm64): Docker Build Cloud supports native builds for multi-architecture with a single command. By specifying --platform linux/amd64,linux/arm64 in the docker buildx build command or using Bake, images for both Arm64 and AMD64 can be built simultaneously. This approach eliminates the need to split the pipeline for different architectures.
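Put together, a single multi-architecture build on a cloud builder might look like this (the builder name, registry, and tag are placeholders):

```bash
docker buildx build \
  --builder cloud-myorg-default \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push .
```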
Architecture of AWS CodeBuild + Docker Build Cloud
Figure 4 shows an example of how to use Docker Build Cloud to accelerate container image builds in AWS CodeBuild:
The AWS CodeBuild pipeline is triggered from a commit to the source code repository (AWS CodeCommit, GitHub, GitLab).
Preparations for running Docker Build Cloud are made in AWS CodeBuild (Buildx installation, specifying Docker Build Cloud builders).
Container images are built on Docker Build Cloud’s AMD64 and Arm64 cloud builders.
The built AMD64 and Arm64 container images are pushed to Amazon ECR.
Setting up Docker Build Cloud
First, set up Docker Build Cloud. (Note that new Docker subscriptions already include a free tier for Docker Build Cloud.)
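One way to create a cloud builder from the CLI is sketched below (the organization and builder names are placeholders; the builder can also be created from the Docker Build Cloud dashboard):

```bash
# Sign in with an account that has access to Docker Build Cloud
docker login

# Create a cloud builder named "default" under the "myorg" organization
docker buildx create --driver cloud myorg/default
```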
Once the builder is successfully created, a guide is displayed for using it in local environments (Docker Desktop, CLI) or CI/CD environments (Figure 5).
Additionally, to use Docker Build Cloud from AWS CodeBuild, a Docker personal access token (PAT) is required. Store this token in AWS Secrets Manager for secure access.
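As a sketch, storing the token with the AWS CLI could look like this (the secret name is arbitrary and the token value is a placeholder):

```bash
aws secretsmanager create-secret \
  --name dockerhub/build-cloud-pat \
  --secret-string "dckr_pat_xxxxxxxxxxxxxxxx"
```

The buildspec can then expose the secret to the build as an environment variable through its env/secrets-manager section.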
Setting up the AWS CodeBuild pipeline
Next, set up the AWS CodeBuild pipeline. You should prepare an Amazon ECR repository to store the container images beforehand.
The following settings are used to create the AWS CodeBuild pipeline:
AMD64 instance with 3GB memory and 2 vCPUs.
Service role with permissions to push to Amazon ECR and access the Docker personal access token from AWS Secrets Manager.
In the install phase, Buildx, which is necessary for using Docker Build Cloud, is installed.
Although Buildx may already be installed in AWS CodeBuild, it might be an unsupported version for Docker Build Cloud. Therefore, it is recommended to install the latest version.
In the pre_build phase, the following steps are performed:
Log in to Amazon ECR.
Log in to Docker (Build Cloud).
Specify the cloud builder.
In the build phase, the image tag is specified, and the container image is built and pushed to Amazon ECR.
Instead of separating the build and push commands, using --push to directly push the image to Amazon ECR helps avoid unnecessary file transfers, contributing to faster builds.
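Taken together, the commands behind those phases might look roughly like this sketch (region, account ID, repository, and builder names are placeholders; it assumes the PAT is exposed to the build as DOCKER_PAT and the tag as IMAGE_TAG, with the actual buildspec wiring these into the install, pre_build, and build phases):

```bash
# pre_build: log in to Amazon ECR and Docker, then select the cloud builder
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
echo "$DOCKER_PAT" | docker login --username mydockeruser --password-stdin
docker buildx create --use --driver cloud myorg/default

# build: build both architectures and push straight to Amazon ECR
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:"$IMAGE_TAG" \
  --push .
```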
Results comparison
To make a comparison, an AWS CodeBuild pipeline without Docker Build Cloud is created. The same instance type (AMD64, 3GB memory, 2 vCPUs) is used, and the build is limited to AMD64 container images.
Additionally, Docker login is used to avoid the pull rate limit imposed by Docker Hub.
Figure 7 shows the execution result of the AWS CodeBuild pipeline using Docker Build Cloud:
The results may vary depending on the container images being built and the state of the cache, but it was possible to build container images much faster and achieve multi-architecture builds (AMD64 and Arm64) within a single pipeline.
Conclusion
Integrating Docker Build Cloud into a CI/CD pipeline using AWS CodeBuild can dramatically reduce build times and improve release frequency. This allows developers to maximize productivity while delivering value to users more quickly.
As mentioned previously, the new Docker subscription already includes a free tier for Docker Build Cloud. Take advantage of this opportunity to test how much faster you can build container images for your current projects.
In 2024, as developers and engineering teams focused on delivering high-quality, secure software faster, Docker continued to evolve with impactful updates and a streamlined user experience. This commitment to empowering developers was recognized in the annual Stack Overflow Developer Survey, where Docker ranked as one of the most loved and widely used tools for yet another year. Here’s a look back at Docker’s 2024 milestones and how we helped teams build, test, and deploy with greater ease, security, and control than ever.
Streamlining the developer experience
Docker focused heavily on streamlining workflows, creating efficiencies, and reducing the complexities often associated with managing multiple tools. One big announcement in 2024 was our upgraded Docker plans. With the launch of updated Docker subscriptions, developers now have access to the entire suite of Docker products under their existing subscription.
The all-in-one subscription model enables seamless integration of Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud, giving developers everything they need to build efficiently. By providing easy access to the suite of products and flexibility to scale, Docker allows developers to focus on what matters most — building and innovating without unnecessary distractions.
For more details on Docker’s all-in-one subscription approach, check out our Docker plans announcement.
Build up to 39x faster with Docker Build Cloud
Docker Build Cloud, introduced in 2024, brings the best of two worlds, local development and the cloud, to developers and engineering teams worldwide. It offloads resource-intensive build processes to the cloud, ensuring faster, more consistent builds while freeing up local machines for other tasks.
A standout feature is shared build caches, which dramatically improve efficiency for engineering teams working on large-scale projects. Shared caches allow teams to avoid redundant rebuilds by reusing intermediate layers of images across builds, accelerating iteration cycles and reducing resource consumption. This approach is especially valuable for collaborative teams working on shared codebases, as it minimizes duplicated effort and enhances productivity.
Docker Build Cloud also offers native support for multi-architecture builds, eliminating the need for setting up and maintaining multiple native builders. This support removes the challenges associated with emulation, further improving build efficiency.
We’ve designed Docker Build Cloud to be easy to set up wherever you run your builds, without requiring a massive lift-and-shift effort. Docker Build Cloud also works well with Docker Compose, GitHub Actions, and other CI solutions. This means you can seamlessly incorporate Docker Build Cloud into your existing development tools and services and immediately start reaping the benefits of enhanced speed and efficiency.
Optimizing development workflows with performance enhancements
In 2024, Docker Desktop introduced a series of enterprise-grade performance enhancements designed to streamline development workflows at scale. These updates cater to the unique needs of development teams operating in diverse, high-performance environments.
One notable feature is the Virtual Machine Manager (VMM) in Docker Desktop for Mac, which provides a robust alternative to the Apple Virtualization Framework. Available since Docker Desktop 4.35, VMM significantly boosts performance for native Arm-based images, delivering faster and more efficient workflows for M1 and M2 Mac users. For development teams relying on Apple’s latest hardware, this enhancement translates into reduced build times and a smoother experience when working with containerized applications.
Additionally, Docker Desktop expanded its platform support to include Red Hat Enterprise Linux (RHEL) and Windows on Arm architectures, enabling organizations to maintain a consistent Docker Desktop experience across a wide array of operating systems. This flexibility ensures that development teams can optimize their workflows regardless of the underlying platform, leveraging platform-specific optimizations while maintaining uniformity in their tooling.
These advancements reflect Docker’s unwavering commitment to speed, reliability, and cross-platform support, ensuring that development teams can scale their operations without bottlenecks. By minimizing downtime and enhancing performance, Docker Desktop empowers developers to focus on innovation, improving productivity across even the most demanding enterprise environments.
More options to improve file operations for large projects
We enhanced Docker Desktop with synchronized file shares (Figure 1), a feature that can significantly improve file operation speeds by 2-10x. This enhancement brings fast and flexible host-to-VM file sharing, offering a performance boost for developers dealing with extensive codebases.
Synchronized file sharing is ideal for developers who:
Develop on projects that consist of a significant number of files (such as PHP or Node projects).
Develop using large repositories or monorepos with more than 100,000 files, totaling significant storage.
Utilize virtual file systems (such as VirtioFS, gRPC FUSE, or osxfs) and face scalability issues with their workflows.
Encounter performance limitations and want a seamless file-sharing solution without worrying about ownership conflicts.
This integration streamlines workflows, allowing developers to focus more on coding and less on managing file synchronization issues and slow file read times.
Enhancing developer productivity with Docker Debug
Docker Debug enhances the ability of developer teams to debug any container, especially those without a shell (that is, distroless or scratch images). The ability to peek into “secure” images significantly improves the debugging experience for both local and remote containerized applications.
Docker Debug does this by attaching a dedicated debugging toolkit to any image and allows developers to easily install additional tools for quick issue identification and resolution. Docker Debug not only streamlines debugging for both running and stopped containers but also is accessible directly from both the Docker Desktop CLI and GUI (Figure 2).
Being able to troubleshoot images without modifying them is crucial for maintaining the security and performance of containerized applications, especially those images that traditionally have been hard to debug. Docker Debug offers:
Streamlined debugging process: Easily debug local and remote containerized applications, even those not running, directly from Docker Desktop.
Cross-device and cloud compatibility: Initiate debugging effortlessly from any device, whether local or in the cloud, enhancing flexibility and productivity.
Docker Debug improves productivity and integrates seamlessly into existing workflows. The docker debug command simplifies attaching a shell to any container or image. This capability reduces the cognitive load on developers, allowing them to focus on solving problems rather than configuring their environment.
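For example (the container and image names below are placeholders):

```bash
# Open a debug shell inside a running container, even if the image has no shell
docker debug my-distroless-app

# Attach a debug session to an image that isn't running as a container yet
docker debug nginx:latest
```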
Ensuring reliable image builds with Docker Build checks
Docker Desktop 4.33 was a big release because, in addition to including the GA release of Docker Debug, it included the GA release of Docker Build checks, a new feature that ensures smoother and more reliable image builds. Build checks automatically validate common issues in your Dockerfiles before the build process begins, catching errors like invalid syntax, unsupported instructions, or missing dependencies. By surfacing these issues upfront, Docker Build checks help developers save time and avoid costly build failures.
You can access Docker Build checks in the CLI and in the Docker Desktop Builds view. The feature also works seamlessly with Docker Build Cloud, both locally and through CI. Whether you’re optimizing your Dockerfiles or troubleshooting build errors, Docker Build checks let you create efficient, high-quality container images with confidence — streamlining your development workflow from start to finish.
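A minimal example, assuming a Dockerfile in the current directory:

```bash
# Validate the Dockerfile without actually building the image
docker build --check .

# In CI, checks can be promoted to hard failures by adding this directive
# to the top of the Dockerfile:
#   # check=error=true
```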
Onboarding and learning resources for developer success
To further reduce friction, Docker revamped its learning resources and integrated new tools to enhance developer onboarding. By adding beginner-friendly tutorials, Docker’s learning center makes it easier for developers to ramp up and quickly learn to use Docker tools, helping them spend more time coding and less time troubleshooting.
As Docker continues to rank as a top developer tool globally, we’re dedicated to empowering our community with continuous learning support.
Built-in container security from code to production
In an era where software supply chain security is essential, Docker has raised the bar on container security. With integrated security measures across every phase of the development lifecycle, Docker helps teams build, test, and deploy confidently.
Proactive security insights with Docker Scout Health Scores
Docker Scout, launched in 2023, has become a cornerstone of Docker’s security ecosystem, empowering developer teams to identify and address vulnerabilities in container images early in the development lifecycle. By integrating with Docker Hub, Docker Desktop, and CI/CD workflows, Scout ensures that security is seamlessly embedded into every build.
Addressing vulnerabilities during the inner loop — the development phase — is estimated to be up to 100 times less costly than fixing them in production. This underscores the critical importance of early risk visibility and remediation for engineering teams striving to deliver secure, production-ready software efficiently.
In 2024, we announced Docker Scout Health Scores (Figure 3), a feature designed to better communicate the security posture of container images development teams use every day. Docker Scout Health Scores provide a clear, alphabetical grading system (A to F) that evaluates common vulnerabilities and exposures (CVEs) for software components within Docker Hub. This feature allows developers to quickly assess and wisely choose trusted content for a secure software supply chain.
Air-gapped containers: Enhanced security for isolated environments
Docker introduced support for air-gapped containers in Docker Desktop 4.31, addressing the unique needs of highly secure, offline environments. Air-gapped containers enable developers to build, run, and test containerized applications without requiring an active internet connection.
This feature is crucial for organizations operating in industries with stringent compliance and security requirements, such as government, healthcare, and finance. By allowing developers to securely transfer container images and dependencies to air-gapped systems, Docker simplifies workflows and ensures that even isolated environments benefit from the power of containerization.
Strengthening trust with SOC 2 Type 2 and ISO 27001 certifications
Docker also achieved two major milestones in its commitment to security and reliability: SOC 2 Type 2 attestation and ISO 27001 certification. These globally recognized standards validate Docker’s dedication to safeguarding customer data, maintaining robust operational controls, and adhering to stringent security practices. SOC 2 Type 2 attestation focuses on the effective implementation of security, availability, and confidentiality controls, while ISO 27001 certification ensures compliance with best practices for managing information security systems.
These certifications provide developers and organizations with increased confidence in Docker’s ability to support secure software supply chains and protect sensitive information. They also demonstrate Docker’s focus on aligning its services with the needs of modern enterprises.
Accelerating success for development teams and organizations
In 2024, Docker introduced a range of features and enhancements designed to empower development teams and streamline operations across organizations. From harnessing the potential of AI to simplifying deployment workflows and improving security, Docker’s advancements are focused on enabling teams to work smarter and build with confidence. By addressing key challenges in development, management, and security, Docker continues to drive meaningful outcomes for developers and businesses alike.
Docker Home: A central hub to access and manage Docker products
Docker introduced Docker Home (Figure 4), a central hub for users to access Docker products, manage subscriptions, adjust settings, and find resources — all in one place. This approach simplifies navigation for developers and admins. Docker Home allows admins to manage organizations, users, and onboarding processes, with access to dashboards for monitoring Docker usage.
Future updates will add personalized features for different roles, and business subscribers will gain access to tools like the Docker Support portal and organization-wide notifications.
Empowering AI innovation
Docker’s ecosystem supports AI/ML workflows, helping developers work with these cutting-edge technologies while staying cloud-native and agile. Read the Docker Labs GenAI series to see how we’re innovating and experimenting in the open.
Through partnerships like those with NVIDIA and GitHub, Docker ensures seamless integration of AI tools, allowing teams to rapidly experiment, deploy, and iterate. This emphasis on enabling advanced tech aligns Docker with organizations looking to leverage AI and ML in containerized environments.
Optimizing AI application development with Docker Desktop and NVIDIA AI Workbench
Docker and NVIDIA partnered to integrate Docker Desktop with NVIDIA AI Workbench, streamlining AI development workflows. This collaboration simplifies setup by automatically installing Docker Desktop when selected as the container runtime in AI Workbench, allowing developers to focus on creating, testing, and deploying AI models without configuration hassles. By combining Docker’s containerization capabilities with NVIDIA’s advanced AI tools, this integration provides a seamless platform for model training and deployment, enhancing productivity and accelerating innovation in AI application development.
We announced that Docker joined GitHub’s Partner Program and unveiled the Docker extension for GitHub Copilot (@docker). This extension is designed to assist developers in working with Docker directly within their GitHub workflows. This integration extends GitHub Copilot’s technology, enabling developers to generate Docker assets, learn about containerization, and analyze project vulnerabilities using Docker Scout, all from within the GitHub environment.
Accelerating AI development with the Docker AI catalog
Docker launched the AI Catalog, a curated collection of generative AI images and tools designed to simplify and accelerate AI application development. This catalog offers developers access to powerful models like IBM Granite, Llama, Mistral, Phi 2, and SolarLLM, as well as applications such as JupyterHub and H2O.ai. By providing essential tools for machine learning, model deployment, inference optimization, orchestration, ML frameworks, and databases, the AI Catalog enables developers to build and deploy AI solutions more efficiently.
The Docker AI Catalog addresses common challenges in AI development, such as decision overload from the vast array of tools and frameworks, steep learning curves, and complex configurations. By offering a curated list of trusted content and container images, Docker simplifies the decision-making process, allowing developers to focus on innovation rather than setup. This initiative underscores Docker’s commitment to empowering developers and publishers in the AI space, fostering a more streamlined and productive development environment.
Streamlining enterprise administration
Simplified deployment and management with Docker’s MSI and PKG installers
Docker simplifies deploying and managing Docker Desktop with the new MSI Installer for Windows and PKG Installer for macOS. The MSI Installer enables silent installations, automated updates, and login enforcement, streamlining workflows for IT admins. Similarly, the PKG Installer offers macOS users easy deployment and management with standard tools. These installers enhance efficiency, making it easier for organizations to equip teams and maintain secure, compliant environments.
These new installers also align with Docker’s commitment to simplifying the developer experience and improving organizational management. Whether you’re setting up a few machines or deploying Docker Desktop across an entire enterprise, these tools provide a reliable and efficient way to keep teams equipped and ready to build.
New sign-in enforcement options enhance security and help streamline IT administration
Docker simplifies IT administration and strengthens organizational security with new sign-in enforcement options for Docker Desktop. These features allow organizations to ensure users are signed in while using Docker, aligning local software with modern security standards. With flexible deployment options — including macOS Config Profiles, Windows Registry Keys, and the cross-platform registry.json file — IT administrators can easily enforce policies that prevent tampering and enhance security. These tools empower organizations to manage development environments more effectively, providing a secure foundation for teams to build confidently.
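As a rough illustration, the cross-platform registry.json simply lists the organizations whose members are allowed to use Docker Desktop. On macOS it could be created like this (the path shown is the macOS location and the org name is a placeholder; check the sign-in enforcement docs for the Windows and Linux paths):

```bash
sudo mkdir -p "/Library/Application Support/com.docker.docker"
echo '{"allowedOrgs": ["myorg"]}' \
  | sudo tee "/Library/Application Support/com.docker.docker/registry.json"
```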
Desktop Insights: Unlocking performance and usage analytics
Docker introduced Desktop Insights, a powerful feature that provides developers and teams with actionable analytics to optimize their use of Docker Desktop. Accessible through the Docker Dashboard, Desktop Insights offers a detailed view of resource usage, build times, and performance metrics, helping users identify inefficiencies and fine-tune their workflows (Figure 5).
Whether you’re tracking the speed of container builds or understanding how resources like CPU and memory are being utilized, Desktop Insights empowers developers to make data-driven decisions. By bringing transparency to local development environments, this feature aligns with Docker’s mission to streamline container workflows and ensure developers have the tools to build faster and more effectively.
New usage dashboards in Docker Hub
Docker introduced Usage dashboards in Docker Hub, giving organizations greater visibility into how they consume resources. These dashboards provide detailed insights into storage and image pull activity, helping teams understand their usage patterns at a granular level (Figure 6).
By breaking down data by repository, tag, and even IP address, the dashboards make it easy to identify high-traffic images or repositories that might require optimization. With this added transparency, teams can better manage their storage, avoid unnecessary pull requests, and optimize workflows to control costs.
Usage dashboards enhance accountability and empower organizations to fine-tune their Docker Hub usage, ensuring resources are used efficiently and effectively across all projects.
Enhancing security with organization access tokens
Docker introduced organization access tokens, which let teams manage access to Docker Hub repositories at an organizational level. Unlike personal access tokens tied to individual users, these tokens are associated with the organization itself, allowing for centralized control and reducing reliance on individual accounts. This approach enhances security by enabling fine-grained permissions and simplifying the management of automated processes and CI/CD pipelines.
Organization access tokens offer several advantages, including the ability to set specific access permissions for each token, such as read or write access to selected repositories. They also support expiration dates, aligning with compliance requirements and bolstering security. By providing visibility into token usage and centralizing management within the Admin Console, these tokens streamline operations and improve governance for organizations of all sizes.
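In practice, an automated pipeline can authenticate with an organization access token instead of a personal account, roughly like this (the organization name is a placeholder, and the token is assumed to be available as an environment variable):

```bash
# The organization name is used as the username; the token is the password
echo "$ORG_ACCESS_TOKEN" | docker login --username myorg --password-stdin
```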
Docker’s vision for 2025
Docker’s journey doesn’t end here. In 2025, Docker remains committed to expanding its support for cloud-native and AI/ML development, reinforcing its position as the go-to container platform. New integrations and expanded multi-cloud capabilities are on the horizon, promising a more connected and versatile Docker ecosystem.
As Docker continues to build for the future, we’re committed to empowering developers, supporting the open source community, and driving efficiency in software development at scale.
2024 was a year of transformation for Docker and the developer community. With major advances in our product suite, continued focus on security, and streamlined experiences that deliver value, Docker is ready to help developer teams and organizations succeed in an evolving tech landscape. As we head into 2025, we invite you to explore Docker’s suite of tools and see how Docker can help your team build, innovate, and secure software faster than ever.
This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.
In our past experiments, we started our work from the assumption that we had a project ready to work on. That means someone like a UI tech writer would need to understand Git operations in order to use the tools we built for them. Naturally, because we have been touching on Git so frequently, we wanted to try getting a Git agent started. Then, we want to use this Git agent to understand PR branches for a variety of user personas — without anyone needing to know the ins and outs of Git.
Git as an agent
We are exploring the idea that tools are agents. So, what would a Git agent do?
Let’s tackle our UI use case prompt.
Previously:
You are at $PWD of /project, which is a git repo.
Force checkout {{branch}}
Run a three-dot diff of the files changed in {{branch}} compared to main using --name-only.
A drawback that isn’t shown here is that there is no authentication. So, if you haven’t fetched that branch or pulled commits already, this prompt at best will be unreliable and more than likely will fail (Figure 1):
Now:
You are a helpful assistant that checks a PR for user-facing changes.
1. Fetch everything and get on latest main.
2. Checkout the PR branch and pull latest.
3. Run a three-dot git diff against main for just files. Write the output to /thread/diff.txt.
This time around, you can see that we are being less explicit about the Git operations; we have the ability to export outputs to the conversation thread; and, most importantly, we have authentication with a new prompt!
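For reference, the Git operations the agent ends up performing are roughly equivalent to the following (the branch name is a placeholder, and these commands require the authentication set up in the next section):

```bash
git fetch --all
git checkout main && git pull
git checkout my-pr-branch && git pull

# Three-dot diff against main, file names only, saved for the conversation thread
git diff main... --name-only > /thread/diff.txt
```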
Preparing GitHub authentication
Note: These prompts should be easily adaptable to other Git providers, but we use GitHub at Docker.
Before we can do anything with GitHub, we have to authenticate. There are several ways to do this, but for this post we’ll focus on SSH-based auth rather than using HTTPS through the CLI. Without getting too deep into the Git world, we will be authenticating with keys on our machine that are associated with our account. These keys and configurations are commonly located at ~/.ssh on Linux/Mac. Furthermore, users commonly maintain Git config at ~/.gitconfig.
The .gitconfig file is particularly useful because it lets us specify carriage return rules — something that can easily cause Git to fail when running in a Linux container. We will also need to modify our SSH config to remove UseKeychain. We found these changes are enough to authenticate using SSH in Alpine/Git. But we, of course, don’t want to modify any host configuration.
We came up with a fairly simple flow that lets us prepare to use Git in a container without messing with any host SSH configs.
1. Readonly mounts: Git config and SSH keys are stored in specific folders on the host machine. We need to mount those in.
   a. Mount ~/.ssh into a container as /root/.ssh-base, readonly.
   b. Mount ~/.gitconfig into the same container as /root/.gitconfig.
2. Copy /root/.ssh-base to /root/.ssh and make the new copy readwrite.
3. Make the necessary changes to the config.
4. For the LLM, we also need it to verify that the config is in the thread and that the changes were made to it. In the event that it fails to make the right changes, the LLM can self-correct.
5. Copy the .ssh directory and .gitconfig to /thread.
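Outside of the prompt, the same preparation can be sketched as a one-off docker run against the alpine/git image (paths and the thread volume name are illustrative):

```bash
docker run --rm \
  -v "$HOME/.ssh:/root/.ssh-base:ro" \
  -v "$HOME/.gitconfig:/root/.gitconfig:ro" \
  -v thread:/thread \
  --entrypoint /bin/sh \
  alpine/git -c '
    # copy the read-only mounts to writable locations
    cp -r /root/.ssh-base /root/.ssh
    chmod -R go-rwx /root/.ssh
    # drop the macOS-only UseKeychain option so ssh on Linux accepts the config
    [ -f /root/.ssh/config ] && sed -i "/UseKeychain/d" /root/.ssh/config
    # persist the prepared credentials for later Git tools
    cp -r /root/.ssh /root/.gitconfig /thread/
  '
```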
All of this is baked in a prompt you can find and run on GitHub. Simply replace <username> with your system username (Figure 2).
If you’re using the default behavior in our extension for VSCode, then the /thread/ volume is ephemeral, so you would need to make sure your agent runs this prompt before any Git tools. However, if you use the Set prompt thread ID command in VSCode, you can keep the /thread volume around under your own tag (Figure 3).
Tagging the thread persists everything after the prompt completes, so you would only need to run the prompt once before going on to develop your Git tools.
If you opt to persist the thread, then this prompt would only need to be run once and the Git credentials will stick around (Figure 4).
Now that we have our credentials ready, we need our Git tool to pick up on them.
Our container definition copies the SSH credentials and .gitconfig files to the correct directory before running Git commands.
```
Unpacking objects: 90% (9/10)
Unpacking objects: 100% (10/10)
Unpacking objects: 100% (10/10), 4.14 KiB | 141.00 KiB/s, done.
From github.com:docker/repo
abcd123..abcd123 main -> origin/main
* [new tag] branch/v2982.0.0 -> branch/v2982.0.0

# ROLE assistant
I have fetched all origin branches. The latest updates have been applied to your local repository, and any new tags have been downloaded.
```
Now, we can see our Git tool is authenticated as if it were us.
Running the agent now, it can:
```
## ROLE assistant
I have run the git fetch --all command, which retrieves updates from all remotes. If you need further assistance with your Git repository, feel free to ask!
```
# ROLE assistant
```json
{"request":"checkout main"}
```
## ROLE assistant
```json
{"args":"checkout main"}
```
```
## ROLE assistant
You are already on the 'main' branch, and it is up to date with 'origin/main'.
```
# ROLE assistant
```json
{"request":"checkout the branch branch and pull latest changes."}
```
...
## ROLE assistant
The branch branch has been successfully checked out, and it's already up to date with the latest changes.
...
# ROLE assistant
```json
{"request":"Run a three-dot git diff against main and write the output for just files to /thread/diff.txt."}
```
...
## ROLE assistant
```json
{"args":"diff main... --name-only","outfile":"/thread/diff.txt"}
```
*entering tool*
```
## ROLE tool (git)
file1.tsx
file2.tsx
...

## ROLE assistant
The three-dot Git diff against the main branch has been executed, and the output has been saved to /thread/diff.txt. This includes the names of files that have differences.
```
With the new Git agent, we have enabled our assistant to fetch new information from upstream, switch branches, and pull in new code to review. If we provide our agent with credentials that are limited to read-only access, this agent could even be used by someone who doesn’t have Git installed locally.
For more on what we’re doing at Docker, subscribe to our newsletter.
This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.
Can an AI-powered assistant understand a GitHub repo enough to answer questions for UI writers?
Across many projects, user-facing content is rendered based on some sort of client-side code. Whether a website, a game, or a mobile app, it’s critical to nail the text copy displayed to the user.
So let’s take a sample question: Do any open PRs in this project need to be reviewed for UI copy? In other words, we want to scan a GitHub repo’s PRs and gain intelligence about the changes included.
Disclaimer: The best practice to accomplish this at a mature organization would be to implement internationalization (i18n), which would centralize user-facing text. However, in a world of AI-powered tools, we believe our assistants will help minimize friction for all projects, not just the ones that have adopted i18n.
So, let’s start off by seeing what options we already have.
The first instinct someone might have is to open the new Copilot friend in the GitHub nav.
We tried to get it to answer basic questions first: “How many PRs are open?”
Despite having access to the GitHub repo, the Copilot agent provides less helpful information than we might expect.
We don’t even get a number like we asked, despite GitHub surfacing that information on the repository’s main page. Following up our first query with the main query we want to ask effectively just gives us the same answer.
And, after inspecting the third PR in the list, it doesn’t contain user-facing changes. One great indicator for this web project is the lack of any client-side code being modified. This was a backend change, so we didn’t want to see this one.
So let’s try to improve this:
First prompt file
```
---
functions:
  - name: bash
    description: Run a bash script in the utilities container.
    parameters:
      type: object
      properties:
        command:
          type: string
          description: The command to send to bash
    container:
      image: wbitt/network-multitool
      command:
        - "bash"
        - "-c"
        - "{{command|safe}}"
  - name: git
    description: Run a git command.
    parameters:
      type: object
      properties:
        command:
          type: string
          description: The git command to run, excluding the `git` command itself
    container:
      image: alpine/git
      entrypoint:
        - "/bin/sh"
      command:
        - "-c"
        - "git --no-pager {{command|safe}}"
---

# prompt system

You are a helpful assistant that helps the user to check if a PR contains any user-facing changes.

You are given a container to run bash in with the following tools:

curl, wget, jq
and default alpine linux tools too.

# prompt user

You are at $PWD of /project, which is a git repo.

Checkout branch `{{branch}}`.

Diff the changes and report any containing user facing changes
```
This prompt was promising, but it ended up with a few blocking flaws. The reason is that using git to compare files is quite tricky for an LLM.
git diff uses a pager, and therefore needs the --no-pager arg to send stdout to the conversation.
The total number of files affected via git diff can be quite large.
Given each file, the raw diff output can be massive and difficult to parse.
The important files changed in a PR might be buried with many extra files in the diff output.
The container has many more tools than necessary, allowing the LLM to hallucinate.
The agent needs some understanding of the repo to determine the sorts of files that contain user-facing changes, and it needs to be capable of seeing just the important pieces of information.
Our next pass involves a few tweaks:
Switch to the alpine/git image and a file writer as the only tools necessary.
Use the --name-only and --no-pager args.
# ROLE assistant
The following files are likely to contain user-facing changes as they mainly consist of UI components, hooks, and API functionalities.
```
file1.ts
file2.tsx
file3.tsx
...
```
Remember that this isn't a guarantee of whether there are user-facing changes, but just an indication of where they might be if there are any.
Giving the agent the tool run-javascript-sandbox allowed our agent to write a script to save the output for later.
This is a great start; however, we now need to inspect the files themselves for user-facing changes. When we started this, we realized that user-facing changes could manifest in a diverse set of “diff”s so we needed to include expert knowledge. We synced up with Mark Higson, a staff SWE currently working on the frontend platform here at Docker. Mark was able to help provide some key advice for what “user-facing” changes look like in many repos at Docker, so I baked the tips into the prompt.
Straightforward approaches
Looking for changes in text nodes found in a JSX tree is the easiest example.
JSX node with interpolation
<div>{functionReturningString()}</div>
If the result is a string, the result is probably user-facing, but the components that create the string could be elsewhere, so look for:
Nuanced indicators
Standard user-facing components. Example: notifications. If a notification’s props change, we can likely infer that it is a user-facing change.
Constructors for commonly used components. Example: errors. If an Error() is constructed with a different argument, we know that error could show up differently.
The key for UI reviewers is how much of the text changes, rather than the layout.
So, despite not being able to catch everything that could be rendered, we will be able to deliver value by focusing on these areas.
Diffing the files
There are a few approaches to finding changes in the files. To start, we’ll use git again.
Before including expert advice, git diff was hallucinating changes that weren’t actually represented in the output. After the expert advice, we are seeing much more focused outputs from the LLM.
Our second prompt just needs to read the user-facing files that we already saved, and then compare them to main. After a bit of trial and error with git, we found that a combination of --no-color and --minimal was able to give the LLM enough context without consuming too many tokens for each change.
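Concretely, the command the agent ends up issuing is shaped roughly like this (the branch and file paths are placeholders rather than real values from our runs):
```bash
git --no-pager diff main --no-color --minimal -- src/File1.tsx src/File2.ts
```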
There’s a lot of buzz around generating and reading diffs with LLMs. The teams at Aider and Cursor are both focusing on fine-tuning LLMs using speculative decoding.
For our use cases, however, we found that the average “big” PR at Docker was not a problem for traditional LLMs when given good advice and the right tools.
So here’s the second prompt:
## User-facing content
The following pieces of code are what we want to focus on for user-facing content:
Text Nodes in any JSX or TSX files.
Text Nodes in any HTML files.
Standard user-facing components. Example: Definitions/Constructors for notifications, toasts, and errors.
# prompt user
Can you tell me if there are any user-facing changes in {{branch}}?
Note: Some full paths, branches, and diffs are trimmed or replaced with placeholders for this article.
# ROLE assistant
```json
{
"args": ["checkout", "branch"]
}
```
# ROLE tool (git)
Already on 'branch'
Your branch is up to date with 'origin/branch'.
# ROLE assistant
```json
{
"path": "/thread/user-changes/files.txt"
}
```
*entering tool*
# ROLE tool (read-file)
path1
path2
path3
...rest of paths
# ROLE assistant
```json
{
"args": ["diff", "main", "--no-color", "--minimal", path1 path2 path3...]
}
```
*entering tool*
# ROLE tool (git)
<Diff output truncated>
# ROLE assistant
The following user-facing changes were detected between the "main" branch and the "branch" branch:
1. In `File1.tsx` file, the text "Sign In" was modified to "Sign in".
2. In `File2.ts` file, titles "Databases & Storage", "Monitoring & Observability", "Developer Tools", "New Extensions", were changed to "Databases & storage", "Monitoring & observability", "Developer tools", "New extensions" respectively.
3. In `File3.rtl.test.tsx.snap` and `ImageHeader/index.tsx files`, the text "Compressed Size " was changed to "Compressed size".
4. In `File4.rtl.test.tsx` and `ImageDetail/index.tsx files`, a test-case previously checking for the existence of "Compressed Size" now looks for "Compressed size".
5. In `File5.tsx file`, the button text "Add Collaborator" has been changed to "Add collaborator".
6. In `route.tsx file`, the collaborators table title "Current Collaborators" was changed to "Current collaborators".
7. In `File6.tsx` file, the description was updated.
8. In `File7.tsx`, `RepositoryOverview.tsx`, `RepositoryReadMe.tsx`, `TagsAndScans.tsx` files, the description when there is no category or description was updated.
9. In `File8.rtl.test.tsx`, `DeleteRepo.tsx`, the text for confirming repo deletion and the delete button text was adjusted.
10. In `route.tsx file`, the filter label "Trusted Content" was changed to "Trusted content".
11. In `File9.tsx`, the text "Your Organizations" has been changed to "Your organizations".
12. In `Component/index.tsx`, the search group label "Trusted Content" has been changed to "Trusted content".
13. In `file10.test.ts` file, a test-case was updated to check for the new text "Sign in".
Overall, this was a positive result.
Another example from a different PR:
The changes in the `branch` branch compared to `main` include several modifications that affect user-facing content, particularly in JSX and TSX files. Here is a summary of the key changes:
...
These changes primarily involve simplification and refactoring of the existing UI logic, possibly to streamline the user interaction by removing complexities related to feature flags and reducing the use of modals or conditional rendering for specific purchasing flows.
This is already a promising flow. For example, a tech writer could clone the git repo and run this prompt to inspect a branch for user-facing changes. From here, we might extend the functionality:
Allow the user to specify a PR to review without needing to know the branch or use git directly.
Automatic git clone & pull with auth.
Support larger PRs (>15 files changed) by allowing agents to automate more of their tasks.
“Baking” the final flow into CI/CD so that it can automatically assign reviewers to relevant PRs.
If you’re interested in running this prompt on your own repo or just want to follow along with the code, watch our new public repo and reach out. We also appreciate your GitHub Stars.
Everything we’ve discussed in this blog post is available for you to try out on your own projects.
For more on what we’re doing at Docker, subscribe to our newsletter.
Modern application development has evolved dramatically. Gone are the days when a couple of developers, a few machines, and some pizza were enough to launch an app. As the industry grew, DevOps revolutionized collaboration, and Docker popularized containerization, simplifying workflows and accelerating delivery.
Later, DevSecOps brought security into the mix. Fast forward to today, and the demand for software has never been greater, with more than 750 million cloud-native apps expected by 2025.
This explosion in demand has created a new challenge: complexity. Applications now span multiple programming languages, frameworks, and architectures, integrating both legacy and modern systems. Development workflows must navigate hybrid environments — local, cloud, and everything in between. This complexity makes it harder for companies to deliver innovation on time and stay competitive.
To overcome these challenges, you need a development platform that’s as reliable and ubiquitous as electricity or Wi-Fi — a platform that works consistently across diverse applications, development tools, and environments. Whether you’re just starting to move toward microservices or fully embracing cloud-native development, Docker meets your team where they are, integrates seamlessly into existing workflows, and scales to meet the needs of individual developers, teams, and entire enterprises.
Docker: Simplifying the complex
The Docker suite of products provides the tools you need to accelerate development, modernize legacy applications, and empower your team to work efficiently and securely. With Docker, you can:
Modernize legacy applications: Docker makes it easy to containerize existing systems, bringing them closer to modern technology stacks without disrupting operations.
Boost productivity for cloud-native teams: Docker ensures consistent environments, integrates with CI/CD workflows, supports hybrid development environments, and enhances collaboration.
Consistent environments: Build once, run anywhere
Docker ensures consistency across development, testing, and production environments, eliminating the dreaded “works on my machine” problem. With Docker, your team can build applications in unified environments — whether on macOS, Windows, or Linux — for reliable code, better collaboration, and faster time to market.
With Docker Desktop, developers have a powerful GUI and CLI for managing containers locally. Integration with popular IDEs like Visual Studio Code allows developers to code, build, and debug within familiar tools. Built-in Kubernetes support enables teams to test and deploy applications on a local Kubernetes cluster, giving developers confidence that their code will perform in production as expected.
Integrated workflows for hybrid environments
Development today spans both local and cloud environments. Docker bridges the gap and provides flexibility with solutions like Docker Build Cloud, which speeds up build pipelines by up to 39x using cloud-based, multi-platform builders. This allows developers to focus more on coding and innovation, rather than waiting on builds.
Docker also integrates seamlessly with CI/CD tools like Jenkins, GitLab CI, and GitHub Actions. This automation reduces manual intervention, enabling consistent and reliable deployments. Whether you’re building in the cloud or locally, Docker ensures flexibility and productivity at every stage.
Team collaboration: Better together
Collaboration is central to Docker. With integrations like Docker Hub and other registries, teams can easily share container images and work together on builds. Docker Desktop features like Docker Debug and the Builds view dashboards empower developers to troubleshoot issues together, speeding up resolution and boosting team efficiency.
Docker Scout provides actionable security insights, helping teams identify and resolve vulnerabilities early in the development process. With these tools, Docker fosters a collaborative environment where teams can innovate faster and more securely.
Why Docker?
In today’s fast-paced development landscape, complexity can slow you down. Docker’s unified platform reduces complexity as it simplifies workflows, standardizes environments, and empowers teams to deliver software faster and more securely. Whether you’re modernizing legacy applications, bridging local and cloud environments, or building cutting-edge, cloud-native apps, Docker helps you achieve efficiency and scale at every stage of the development lifecycle.
Ready to simplify your development workflows? Start your Docker journey now and equip your team with the tools they need to innovate, collaborate, and deliver with confidence.
Looking for tips and tricks? Subscribe to Docker Navigator for the latest updates and insights delivered straight to your inbox.
Software engineering is a dynamic, high-pressure field where development teams encounter a variety of challenges every day. As software development projects become increasingly complex, engineers must maintain high-quality code, meet time constraints, collaborate effectively, and prevent security vulnerabilities. At the same time, development teams can be held back by inefficiencies that can hinder productivity and speed.
Let’s explore some of the most common software engineering challenges and how Docker’s tools streamline the inner loop of cloud-native workflows. These tools help developers overcome pain points, boost productivity, and deliver better software faster.
Top 4 software engineering challenges developers face
Let’s be real — software development teams face a laundry list of challenges. From managing dependencies across teams to keeping up with the latest threats in an increasingly complex software ecosystem, these obstacles can quickly become roadblocks that stifle progress. Let’s dive into some of the most significant software engineering challenges that developers face today and how Docker can help:
1. Dependency management
One of the most common pain points in software engineering is managing dependencies. In any large development project, multiple teams might work on different parts of the codebase, often relying on various third-party libraries and services. The complexity increases when these dependencies span across different environments and versions.
The result? Version conflicts, broken builds, deployment failures, and hours spent troubleshooting. This process can become even more cumbersome when working with legacy code or when different teams work with conflicting versions.
Containerize your applications with their dependencies
Docker allows developers to package all their apps and dependencies into neat, lightweight containers. Think of these containers as “time capsules” that hold everything your app needs to run smoothly, from libraries and tools to configurations. And because these containers are portable, you get the same app behavior on your laptop, your testing server, or in production — no more hoping that “it worked on my machine” when it’s go-time.
No more version conflict drama. No more hours spent trying to figure out which version of the library your coworker’s been using. Docker ensures that everyone on the team works with the same setup. Consistent environments, happy devs, and no more dependency issues!
2. Testing complexities
Testing presents another significant challenge for developers. In an ideal world, tests would run in an environment that perfectly mirrors production; however, this is rarely the case. Developers often encounter problems when testing code in isolated environments that don’t reflect real-world conditions. As a result, bugs that might have been caught early in development are only discovered later, leading to costly fixes and delays.
Moreover, when multiple developers work in different environments or use different tools, the quality of tests can be inconsistent, and issues might be missed altogether. This leads to inefficiencies and makes it harder to ensure that your software is functional and reliable.
Leverage cloud-native testing environments that match production
One of Docker’s most significant benefits is its ability to create cloud-native testing environments. With Testcontainers Cloud, you can define test dependencies as code and run them in real containers, giving you consistent, reliable testing environments that scale and closely match production. Testing in these environments ensures that bugs and issues are caught earlier in the development cycle, reducing the time spent on troubleshooting and improving the overall quality of the software.
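As a rough sketch of what "defining test dependencies as code" can look like, here is a Jest test using the Testcontainers library for Node.js; the Redis image, port, and test body are illustrative assumptions, not part of the article:
```typescript
import { GenericContainer, StartedTestContainer } from "testcontainers";

let redis: StartedTestContainer;

// Start a throwaway Redis container before the test suite runs.
beforeAll(async () => {
  redis = await new GenericContainer("redis:7-alpine")
    .withExposedPorts(6379)
    .start();
});

// Tear the container down when the suite finishes.
afterAll(async () => {
  await redis.stop();
});

test("the app can reach its cache", async () => {
  // Host and port are resolved at runtime, whether the container runs locally or remotely.
  const url = `redis://${redis.getHost()}:${redis.getMappedPort(6379)}`;
  expect(url).toContain("redis://");
});
```
With Testcontainers Cloud enabled, the same test code runs its containers on demand in the cloud instead of on the local Docker engine.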
Docker Hub offers a repository of pre-configured images and environments, enabling developers to quickly share and collaborate on testing setups. This eliminates inconsistencies between test environments, ensuring all teams work with the same configurations and tools.
3. Lack of visibility and collaboration
Software development today often involves many developers working on different parts of a project simultaneously. This collaborative approach has obvious benefits, but can also lead to significant challenges. In a multi-developer environment, tracking changes, ensuring consistency, and maintaining smooth collaboration across teams can be hard.
Without proper visibility into the software development process, identifying issues in real-time and keeping everyone aligned becomes difficult. In many cases, teams end up working in silos, each using their own tools and systems. This lack of coherence can lead to misunderstandings, duplication of efforts, and delays in achieving milestones.
Accelerate teamwork with shared images, caches, and insights
Docker fosters collaboration by offering an integrated ecosystem where developers can seamlessly share images, cache, templates, and more. For example, Docker Hub and Hardened Docker Desktop allow teams to push, pull, and share secure images, making it easier to get started quickly using all the right configurations. Meanwhile, teams can also cut down on time-consuming builds and resolve failed builds with the Docker Build Cloud shared cache and Build insights.
Docker’s streamlined workflows provide greater visibility into the development process. With this improved collaboration and integrated workflows, software developers can enjoy faster development cycles and more time to innovate.
4. Security risks
Security is often a major concern in software development, yet it’s a challenge that many teams struggle to address consistently. Developers are constantly working under tight deadlines to release new features and fixes, which can sometimes push security considerations to the sidelines. As a result, vulnerabilities can be unintentionally introduced into the codebase through outdated libraries, insecure configurations, and even simple coding oversights.
The main challenge with security lies in identifying and managing risks across all development stages and environments. Developers must follow security protocols diligently and vulnerabilities need to be patched quickly, especially when building software for organizations with strict security regulations. This becomes increasingly difficult when multiple teams work on separate components, each potentially introducing its own security concerns.
Embed security into every phase of the development lifecycle
Docker solves these challenges by integrating security and compliance from build to production, without sacrificing speed or flexibility. For example, Docker Scout offers continuous vulnerability scanning and actionable insights, enabling teams to identify and address risks early. And with increased visibility into dependencies, images, and remediation recommendations, developers can be set up to prevent outdated libraries and insecure configurations from reaching production.
With tools like Hardened Docker Desktop, IAM, and RAM, Docker reduces the complexity of security oversight while ensuring compliance. These features help organizations avoid costly vulnerabilities, safeguard intellectual property, and maintain customer trust without slowing development speed. This simplified security management allows developers to deliver faster without compromising security.
Adopt Docker to overcome key challenges in software development
From dependency management to security risks, software developers face numerous challenges on their journey to deliver high-quality, secure applications. Docker’s unified development suite streamlines every stage of the inner loop, combining Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud into one powerful, cloud-native workflow ecosystem.
By streamlining workflows, enhancing collaboration, embedding security into every stage of development, and providing consistent testing environments, Docker empowers teams to build, test, and ship cloud-native applications with unparalleled speed and reliability. Whether you’re tackling legacy code or scaling modern applications, Docker ensures your development process remains efficient, secure, and ready for the demands of today’s fast-paced software landscape.
Docker’s subscription plans offer flexible, scalable access to a unified inner-loop suite, allowing teams of any size to accelerate workflows, ensure consistency, and build better software faster. It’s more than a set of tools — it offers a cohesive platform designed to transform your development lifecycle and keep your team competitive, efficient, and secure.
Did you know that enterprise companies that implemented Docker saw a 126% return on investment (ROI) over three years? In today’s rapidly evolving business landscape, companies face relentless pressure to innovate while managing costs and complexity. Traditional software development methods often struggle to keep pace with technological advancements, leading to inconsistent environments, high operational costs, and slow deployment cycles. That’s where containerization comes in as a smart solution.
Rising technology costs are a concern
Businesses today are navigating a complex environment filled with evolving market demands and economic pressures. A recent survey revealed that 70% of executives expect economic conditions to worsen, driving concerns about inflation and cash flow. Another survey found that 50% of businesses have raised prices to combat rising costs, reflecting broader financial pressures. In this context, traditional software deployment methods often fall short, resulting in rigid, inconsistent environments that impede agility and delay feature releases.
As cloud services costs surge, expected to surpass $1 trillion in 2024, businesses face heightened financial and operational challenges. Outdated deployment methods struggle with modern applications’ complexity, leading to persistent issues and inefficiencies. This underscores the need for a more agile, cost-effective solution.
As the adoption of cloud and hybrid cloud environments accelerates, businesses need solutions that ensure seamless integration and portability across their entire IT ecosystem. Containers provide a key to achieving this, offering unmatched agility, scalability, and security. By embracing containers, organizations can create more adaptable, resilient, and future-proof software solutions.
The solution is a container-first approach
Containerization simplifies the development and deployment of applications by encapsulating them into self-contained units known as containers. Each container includes everything an application needs to run — its code, libraries, and dependencies — ensuring consistent performance across different environments, from development to production.
Similar to how shipping containers transformed the packaging and transport industry, containerization revolutionized development. Using containers, development teams can reduce errors, optimize resources, accelerate time to market, and more.
Key benefits of containerization
Improved consistency: Containers guarantee that applications perform identically regardless of where they are deployed, eliminating the notorious “it works on my machine” problem.
Cost efficiency: Containers reduce infrastructure costs by optimizing resource utilization. Unlike traditional virtual machines that require separate operating systems, containers share the same operating system (OS) kernel, leading to significant savings and better scalability.
Faster time to market: Containers accelerate development and deployment cycles, allowing businesses to bring products and updates to market more quickly.
Enhanced security: Containers provide isolation between applications, which helps manage vulnerabilities and prevent breaches from spreading, thereby enhancing overall security.
Seeing a true impact
A Forrester Consulting study found that enterprises using Docker experienced a three-month faster time to market for revenue-generating applications, along with notable gains in efficiency and speed. These organizations reduced their data center footprint, enhanced application delivery speeds, and saved on infrastructure costs, showcasing containerization’s tangible benefits.
For instance, Cloudflare, a company operating one of the world’s largest cloud networks, needed to address the complexities of managing a growing infrastructure and supporting over 1,000 developers. By adopting Docker’s containerization technology and leveraging innovations like manifest lists, Cloudflare successfully streamlined its development and deployment processes. Docker’s support for multi-architecture images and continuous improvements, such as IPv6 networking capabilities, allowed Cloudflare to manage complex application stacks more efficiently, ensuring consistency across diverse environments and enhancing overall agility.
Stepping into a brighter future
Containerization offers a powerful solution to modern business challenges, providing consistency, cost savings, and enhanced security. As companies face increasing complexity and market pressures, adopting a container-first approach can streamline development, improve operational efficiency, and maintain a competitive edge.
Are you navigating the ever-evolving world of developer tools and container technology? The Docker Newsletter is your essential resource, curated for Docker users like you. Keep your finger on the pulse of the Docker ecosystem. Subscribe now!
If you’re anything like me, you love crafting sleek and responsive user interfaces with React. But, setting up consistent development environments and ensuring smooth deployments can also get complicated. That’s where Docker can help save the day.
As a Senior DevOps Engineer and Docker Captain, I’ve navigated the seas of containerization and witnessed firsthand how Docker can revolutionize your workflow. In this guide, I’ll share how you can dockerize a React app to streamline your development process, eliminate those pesky “it works on my machine” problems, and impress your colleagues with seamless deployments.
Let’s dive into the world of Docker and React!
Why containerize your React application?
You might be wondering, “Why should I bother containerizing my React app?” Great question! Containerization offers several compelling benefits that can elevate your development and deployment game, such as:
Streamlined CI/CD pipelines: By packaging your React app into a Docker container, you create a consistent environment from development to production. This consistency simplifies continuous integration and continuous deployment (CI/CD) pipelines, reducing the risk of environment-specific issues during builds and deployments.
Simplified dependency management: Docker encapsulates all your app’s dependencies within the container. This means you won’t have to deal with the infamous “works on my machine” dilemma anymore. Every team member and deployment environment uses the same setup, ensuring smooth collaboration.
Better resource management: Containers are lightweight and efficient. Unlike virtual machines, Docker containers share the host system’s kernel, which means you can run more containers on the same hardware. This efficiency is crucial when scaling applications or managing resources in a production environment.
Isolated environment without conflict: Docker provides isolated environments for your applications. This isolation prevents conflicts between different projects’ dependencies or configurations on the same machine. You can run multiple applications, each with its own set of dependencies, without them stepping on each other’s toes.
Getting started with React and Docker
Before we go further, let’s make sure you have everything you need to start containerizing your React app.
React app: Use an existing project or create a new one using create-react-app.
A quick introduction to Docker
Docker offers a comprehensive suite of enterprise-ready tools, cloud services, trusted content, and a collaborative community that helps streamline workflows and maximize development efficiency. The Docker productivity platform allows developers to package applications into containers — standardized units that include everything the software needs to run. Containers ensure that your application runs the same, regardless of where it’s deployed.
How to dockerize your React project
Now let’s get down to business. We’ll go through the process step by step and, by the end, you’ll have your React app running inside a Docker container.
Step 1: Set up the React app
If you already have a React app, you can skip this step. If not, let’s create one:
npx create-react-app my-react-app
cd my-react-app
This command initializes a new React application in a directory called my-react-app.
Step 2: Create a Dockerfile
In the root directory of your project, create a file named Dockerfile (no extension). This file will contain instructions for building your Docker image.
Dockerfile for development
For development purposes, you can create a simple Dockerfile:
# Use the latest LTS version of Node.js
FROM node:18-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of your application files
COPY . .
# Expose the port your app runs on
EXPOSE 3000
# Define the command to run your app
CMD ["npm", "start"]
What’s happening here?
FROM node:18-alpine: We’re using the latest LTS version of Node.js based on Alpine Linux.
WORKDIR /app: Sets the working directory inside the container.
COPY package*.json ./: Copies package.json and package-lock.json to the working directory.
RUN npm install: Installs the dependencies specified in package.json.
COPY . .: Copies all the files from your local directory into the container.
EXPOSE 3000: Exposes port 3000 on the container (React’s default port).
CMD ["npm", "start"]: Tells Docker to run npm start when the container launches.
Production Dockerfile with multi-stage build
For a production-ready image, we’ll use a multi-stage build to optimize the image size and enhance security.
# Build Stage
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production Stage
FROM nginx:stable-alpine AS production
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Explanation
Build stage:
FROM node:18-alpine AS build: Uses Node.js 18 for building the app.
RUN npm run build: Builds the optimized production files.
Production stage:
FROM nginx:stable-alpine AS production: Uses Nginx to serve the static files.
COPY --from=build /app/build /usr/share/nginx/html: Copies the build output from the previous stage.
EXPOSE 80: Exposes port 80.
CMD ["nginx", "-g", "daemon off;"]: Runs Nginx in the foreground.
Benefits
Smaller image size: The final image contains only the production build and Nginx.
Enhanced security: Excludes development dependencies and Node.js runtime from the production image.
Step 3: Create a .dockerignore file
Just like .gitignore helps Git ignore certain files, .dockerignore tells Docker which files or directories to exclude when building the image. Create a .dockerignore file in your project’s root directory:
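For a typical React project, the contents might look something like this (treat it as a starting point and adjust the entries to your own project):
```
node_modules
build
dist
.git
.gitignore
.env
Dockerfile
.dockerignore
npm-debug.log
README.md
```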
Excluding unnecessary files reduces the image size and speeds up the build process.
Step 4: Build and run your dockerized React app
Navigate to your project’s root directory and run:
docker build -t my-react-app .
This command tags the image with the name my-react-app and specifies the build context (current directory). By default, this will build the final production stage from your multi-stage Dockerfile, resulting in a smaller, optimized image.
If you have multiple stages in your Dockerfile and need to target a specific build stage (such as the build stage), you can use the --target option. For example:
docker build -t my-react-app-dev --target build .
Note: Building with --target build creates a larger image because it includes the build tools and dependencies needed to compile your React app. The production image (built using --target production), on the other hand, is much smaller because it only contains the final build files.
Running the Docker container
For the development image:
docker run -p 3000:3000 my-react-app-dev
For the production image:
docker run -p 80:80 my-react-app
Accessing your application
Next, open your browser and go to:
http://localhost:3000 (for development)
http://localhost (for production)
You should see your React app running inside a Docker container.
Step 5: Use Docker Compose for multi-container setups
Here’s an example of how a React frontend app can be configured as a service using Docker Compose.
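Here is a minimal sketch (the frontend service name and the use of a .env file are illustrative assumptions; the build uses the development Dockerfile shown earlier):
```yaml
services:
  frontend:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app              # mirror local source for hot-reloading
      - /app/node_modules   # keep dependencies installed in the image
    env_file:
      - .env                # local-only variables; see the security note below
```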
Security note: Ensure your .env file is added to .gitignore and .dockerignore to prevent it from being committed to version control or included in your Docker image.
To start all services defined in a compose.yml in detached mode, the command is:
docker compose up -d
Passing environment variables at runtime
Alternatively, you can pass variables when running the container:
docker run -p 3000:3000 -e REACT_APP_API_URL=https://api.example.com my-react-app-dev
Using Docker Secrets (advanced)
For sensitive data in a production environment, consider using Docker Secrets to manage confidential information securely.
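A minimal sketch of that approach with Docker Compose might look like the following (the secret name and file path are placeholders):
```yaml
services:
  frontend:
    image: my-react-app
    secrets:
      - api_key          # available inside the container at /run/secrets/api_key

secrets:
  api_key:
    file: ./secrets/api_key.txt   # keep this file out of version control
```
The application then reads the value from the mounted file at runtime rather than from an environment variable baked into the image.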
Production Dockerfile with multi-stage builds
When preparing your React app for production, multi-stage builds keep things lean and focused. They let you separate the build process from the final runtime environment, so you only ship what you need to serve your app. This not only reduces image size but also helps prevent unnecessary packages or development dependencies from sneaking into production.
The following is an example that goes one step further: We’ll create a dedicated build stage, a development environment stage, and a production stage. This approach ensures you can develop comfortably while still ending up with a streamlined, production-ready image.
# Stage 1: Build the React app
FROM node:18-alpine AS build
WORKDIR /app
# Leverage caching by installing dependencies first
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the application code and build for production
COPY . ./
RUN npm run build
# Stage 2: Development environment
FROM node:18-alpine AS development
WORKDIR /app
# Install dependencies again for development
COPY package.json package-lock.json ./
RUN npm ci
# Copy the full source code
COPY . ./
# Expose port for the development server
EXPOSE 3000
CMD ["npm", "start"]
# Stage 3: Production environment
FROM nginx:alpine AS production
# Copy the production build artifacts from the build stage
COPY --from=build /app/build /usr/share/nginx/html
# Expose the default NGINX port
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
What’s happening here?
build stage: The first stage uses the official Node.js image to install dependencies, run the build, and produce an optimized, production-ready React build. By copying only your package.json and package-lock.json before installing dependencies, you leverage Docker’s layer caching, which speeds up rebuilds when your code changes but your dependencies don’t.
development stage: Need a local environment with hot-reloading for rapid iteration? This second stage sets up exactly that. It installs dependencies again (using the same caching trick) and starts the development server on port 3000, giving you the familiar npm start experience inside Docker.
production stage: Finally, the production stage uses a lightweight NGINX image to serve your static build artifacts. This stripped-down image doesn’t include Node.js or unnecessary development tools — just your optimized app and a robust web server. It keeps things clean, secure, and efficient.
This structured approach makes it a breeze to switch between development and production environments. You get fast feedback loops while coding, plus a slim, optimized final image ready for deployment. It’s a best-of-both-worlds solution that will streamline your React development workflow.
Troubleshooting common issues with Docker and React
Even with the best instructions, issues can arise. Here are common problems and how to fix them.
Issue: “Port 3000 is already in use”
Solution: Either stop the service using port 3000 or map your app to a different port when running the container.
docker run -p 4000:3000 my-react-app
Access your app at http://localhost:4000.
Issue: Changes aren’t reflected during development
Solution: Use Docker volumes to enable hot-reloading. In your compose.yml, ensure you have the following under volumes:
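For example (the frontend service name is an assumption; use whatever your compose.yml defines):
```yaml
services:
  frontend:
    volumes:
      - .:/app              # sync local source into the container
      - /app/node_modules   # don't let the bind mount hide installed dependencies
```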
This setup allows your local changes to be mirrored inside the container.
Issue: Slow build times
Solution: Optimize your Dockerfile to leverage caching. Copy only package.json and package-lock.json before running npm install. This way, Docker caches the layer unless these files change.
COPY package*.json ./
RUN npm install
COPY . .
Issue: Container exits immediately
Cause: The React development server may not keep the container running by default.
Solution: Ensure you’re running the container interactively:
docker run -it -p 3000:3000 my-react-app
Issue: File permission errors
Solution: Adjust file permissions or specify a user in the Dockerfile using the USER directive.
# Add before CMD
USER node
Issue: Performance problems on macOS and Windows
File-sharing mechanisms between the host system and Docker containers introduce significant overhead on macOS and Windows, especially when working with large repositories or projects containing many files. Traditional methods like osxfs and gRPC FUSE often struggle to scale efficiently in these environments.
Solutions:
Enable synchronized file shares (Docker Desktop 4.27+): Docker Desktop 4.27+ introduces synchronized file shares, which significantly enhance bind mount performance by creating a high-performance, bidirectional cache of host files within the Docker Desktop VM.
Key benefits:
Optimized for large projects: Handles monorepos or repositories with thousands of files efficiently.
Performance improvement: Resolves bottlenecks seen with older file-sharing mechanisms.
Real-time synchronization: Automatically syncs filesystem changes between the host and container in near real-time.
Reduced file ownership conflicts: Minimizes issues with file permissions between host and container.
How to enable:
Open Docker Desktop and go to Settings > Resources > File Sharing.
In the Synchronized File Shares section, select the folder to share and click Initialize File Share.
Use bind mounts in your compose.yml or Docker CLI commands that point to the shared directory.
Optimize with .syncignore: Create a .syncignore file in the root of your shared directory to exclude unnecessary files (e.g., node_modules, .git/) for better performance.
Leverage WSL 2 on Windows: For Windows users, Docker’s WSL 2 backend offers near-native Linux performance by running the Docker engine in a lightweight Linux VM.
How to enable WSL 2 backend:
Ensure Windows 10 version 2004 or higher is installed.
Install the Windows Subsystem for Linux 2.
In Docker Desktop, go to Settings > General and enable Use the WSL 2 based engine.
Use updated caching options in volume mounts: Although legacy options like :cached and :delegated are deprecated, consistency modes still allow optimization:
consistent: Strict consistency (default).
cached: Allows the host to cache contents.
delegated: Allows the container to cache contents.
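As a quick illustration of the consistency modes listed above, the suffix goes at the end of the bind mount (reusing the hypothetical image name from earlier; recent Docker Desktop releases accept these suffixes but may treat them as no-ops):
```bash
docker run -it -p 3000:3000 -v "$(pwd)":/app:cached my-react-app-dev
```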
Let’s enhance our setup with some advanced techniques.
Reducing image size
Every megabyte counts, especially when deploying to cloud environments.
Use smaller base images: Alpine-based images are significantly smaller.
Clean up after installing dependencies:
RUN npm install && npm cache clean --force
Avoid copying unnecessary files: Use .dockerignore effectively.
Leveraging Docker build cache
Ensure that you’re not invalidating the cache unnecessarily. Only copy files that are required for each build step.
Using Docker layers wisely
Each command in your Dockerfile creates a new layer. Combine commands where appropriate to reduce the number of layers.
RUN npm install && npm cache clean --force
Conclusion
Dockerizing your React app is a game-changer. It brings consistency, efficiency, and scalability to your development workflow. By containerizing your application, you eliminate environment discrepancies, streamline deployments, and make collaboration a breeze.
So, the next time you’re setting up a React project, give Docker a shot. It will make your life as a developer significantly easier. Welcome to the world of containerization!
As organizations strive to stay competitive in an increasingly complex digital world, the pressure to innovate quickly and securely is at an all-time high. Development teams face challenges that range from complex workflows and growing security concerns to ensuring seamless collaboration across distributed environments. Addressing these challenges requires tools that optimize every stage of the CI/CD pipeline, from the developer’s inner loop to production.
This is where Docker comes in. Initially known for revolutionizing containerization, Docker has evolved far beyond its roots to become a powerful suite of products that supports cloud-native development workflows. It’s not just about containers anymore; it’s about empowering developers to build and ship high-quality applications faster and more efficiently. Docker is about automating repetitive tasks, securing applications throughout the entire development lifecycle, and enabling collaboration at scale. By providing the right tools for developers, DevOps teams, and enterprise decision-makers, Docker drives innovation, streamlines processes, and creates measurable value for businesses.
What does Docker do?
At its core, Docker provides a suite of software development tools that enhance productivity, improve security, and seamlessly integrate with your existing CI/CD pipeline. While still closely associated with containers, Docker has evolved into much more than just a containerization solution. Its products support the entire development lifecycle, empowering teams to automate key tasks, improve the consistency of their work, and ship applications faster and more securely.
Automation: Docker automates repetitive tasks within the development process, allowing developers to focus on what matters most: writing code. Whether they’re building images, managing dependencies, or testing applications, developers can use Docker to streamline their workflows and accelerate development cycles.
Security: Security is built into Docker from the start. Docker provides features like proactive vulnerability monitoring with Docker Scout and robust access control mechanisms. These built-in security features help ensure your applications are secure, reducing risks from malicious actors, CVEs, or other vulnerabilities.
CI/CD integration: Docker’s seamless integration with existing CI/CD pipelines offers profound enhancements to ensure that teams can smoothly pass high-quality applications from local development through testing and into production.
Multi-cloud compatibility: Docker supports flexible, multi-cloud development, allowing teams to build applications in one environment and migrate them to the cloud with minimized risk. This flexibility is key for businesses looking to scale, increase cloud adoption, and even upgrade from legacy apps.
The impact on team-based efficiency and enterprise value
Docker is designed not only to empower individual developers but also to elevate the entire team’s productivity while delivering tangible business value. By streamlining workflows, enhancing collaboration, and ensuring security, Docker makes it easier for teams to scale operations and deliver high-impact software with speed.
Streamlined development processes
One of Docker’s primary goals is to simplify development processes. Repetitive tasks such as environment setup, debugging, and dependency management have historically eaten up a lot of developers’ time. Docker removes these inefficiencies, allowing teams to focus on what really matters: building great software. Tools like Docker Desktop, Docker Hub, and Docker Build Cloud help accelerate build processes, while standardized environments ensure that developers spend less time dealing with system inconsistencies and more time coding.
Enterprise-level security and governance
For enterprise decision-makers, security and governance are top priorities. Docker addresses these concerns by providing comprehensive security features that span the entire development lifecycle. Docker Scout proactively monitors for vulnerabilities, ensuring that potential security threats are identified early, before they make their way into production. Additionally, Docker offers fine-grained control over who can access resources within the platform, with features like Image Access Management (IAM) and Resource Access Management (RAM) that ensure the security of developer environments without impairing productivity.
Measurable impact on business value
The value Docker delivers isn’t just in improved developer experience — it directly impacts the bottom line. By automating repetitive tasks in the developer’s inner loop and enhancing integration with the CI/CD pipeline, Docker reduces operational costs while accelerating the delivery of high-quality applications. Developers are able to move faster, iterate quickly, and deliver more reliable software, all of which contribute to lower operational expenses and higher developer satisfaction.
In fact, Docker’s ability to simplify workflows and secure applications means that developers can spend less time troubleshooting and more time building new features. For businesses, this translates to higher productivity and, ultimately, greater profitability.
Collaboration at scale: Empowering teams to work together more effectively
In modern development environments, teams are often distributed across different locations, sometimes even in different time zones. Docker enables effective collaboration at scale by providing standardized tools and environments that help teams work seamlessly together, regardless of where they are. Docker’s suite also helps ensure that teams are all on the same page when it comes to development, security, testing, and more.
Consistent environments for team workflows
One of Docker’s most powerful features is the ability to ensure consistency across different development environments. A Docker container encapsulates everything needed to run an application, including the code, libraries, and dependencies so that applications run the same way on every system. This means developers can work in a standardized environment, reducing the likelihood of errors caused by environment inconsistencies and making collaboration between team members smoother and more reliable.
Simplified CI/CD pipelines
Docker enhances the developer’s inner loop by automating workflows and providing consistent environments, creating efficiencies that ripple through the entire software delivery pipeline. This ripple effect of efficiency can be seen in features like advanced caching with Docker Build Cloud, on-demand and consistent test environments with Testcontainers Cloud, embedded security with Docker Scout, and more. These tools, combined with Docker’s standardized environments, allow developers to collaborate effectively to move from code to production faster and with fewer errors.
GenAI and innovative development
Docker equips developers to meet the demands of today while exploring future possibilities, including streamlining workflows for emerging AI/ML and GenAI applications. By simplifying the adoption of new tools for AI/ML development, Docker empowers organizations to meet present-day demands while also tapping into emerging technologies. These innovations help developers write better code faster while reducing the complexity of their workflows, allowing them to focus more on innovation.
A suite of tools for growth and innovation
Docker isn’t just a containerization tool — it’s a comprehensive suite of software development tools that empower cloud-native teams to streamline workflows, boost productivity, and deliver secure, scalable applications faster. Whether you’re an enterprise scaling workloads securely or a development team striving for speed and consistency, Docker’s integrated suite provides the tools to accelerate innovation while maintaining control.
Ready to unlock the full potential of Docker? Start by exploring our range of solutions and discover how Docker can transform your development processes today. If you’re looking for hands-on guidance, our experts are here to help — contact us to see how Docker can drive success for your team.
Take the next step toward building smarter, more efficient applications. Let’s scale, secure, and simplify your workflows together.
Outdoor light displays are a fun holiday tradition — from simple light strings hung from the eaves to elaborate scenes that bring out your competitive spirit. If using open source tools, thousands of feet of electrical cables, custom controllers, and your favorite music to build complex projects appeals to you, then the holiday season offers the perfect opportunity to indulge your creative passion.
I personally run home light shows at Halloween and Christmas that feature up to 30,000 individually addressable LED lights synchronized with dozens of different songs. It’s been an interesting learning journey over the past five years, but it is also one that almost anyone can pursue, regardless of technical ability. Read on for tips on how to make a display that’s the highlight of your neighborhood.
Getting started with outdoor light shows
As you might expect, light shows are built using a combination of hardware and software. The hardware includes the lights, props, controllers, and cabling. On the software side, there are different tools for the programming, also called sequencing, of the lights as well as the playback of the show.
Hardware requirements
Lights
Let’s look more closely at the hardware behind the scenes starting with the lights. Multiple types of lights can be used in displays, but I’ll keep it simple and focus on the most popular choice. Most shows are built around 12mm RGB LED lights that support the WS2811 protocol, often referred to as pixels or nodes. Generally, these are not available at retail stores. That means you’ll need to order them online, and I recommend choosing a vendor that specializes in light displays. I have purchased lights from a few different vendors, but recently I’ve been using Wally’s Lights, Visionary Light Shows, and Your Pixel Store.
Props
The lights are mounted into different props — such as a spider for Halloween or a snowflake for the winter holidays. You can either purchase these props, which are usually made out of the same plastic cardboard material used in yard signs, or you can make them yourself. Very few vendors sell pre-built props, so be ready to push the pixels by hand — yes, in my display either I or someone in my family pushed each of the 30,000 lights into place when we initially built the props. I get most of my props from EFL Designs, Gilbert Engineering, or Boscoyo Studio.
Controllers
Once your props are ready to go, you’ll need something to drive them. This is where controllers come in (Figure 3). Like the props and lights, you can get your controllers from various specialized vendors and, to a large extent, you can mix and match different brands in the same show because they all speak the same protocols to control the pixels (usually E1.31 or DDP).
You can purchase controllers that are ready to run, or you can buy the individual components and build your own boxes — I grew up building PCs, so I love this degree of flexibility. However, I do tend to buy pre-configured controllers, because I like having a warranty from the manufacturer. My controllers all come from HolidayCoro, but Falcon controllers are also popular.
The number of controllers you need depends on the number of lights in your show. Most controllers have multiple outputs, and each output can drive a certain number of lights. I typically plan for about 400 lights per output. Plus, I use about three main controllers and four receiver boxes. Note that long-range receivers are a way of extending the distance you can place lights from the main controller, but this is more of an advanced topic and not one I’ll cover in this introductory article.
Cables
Although controllers are powered by standard household outlets, the connection from the controllers to the lights happens over specialized cabling. These extension cables contain three wires. Two are used to send power to the lights (either 5v or 12v), and a third is used to send data. Basically, this third wire sends instructions like “light 1,232 turn green for .5 seconds then fade to off over .25 seconds.” You can get these extension cables from any vendor that sells pixels.
Additionally, all of the controllers need to be on the same Ethernet network. Many folks run their shows on wireless networks, but I prefer a wired setup for increased performance and reliability.
Software and music
At this point, you have a bunch of props with lights connected to networked controllers via specialized cabling. But, how do you make them dance? That’s where the software comes in.
xLights
Many hobbyists use xLights to program their lights. This software is open source and available for Mac, Windows, and Linux, and it works with three basic primitives: props, effects, and time. You can choose what effect you want to apply to a given prop at a given time (Figure 4). The timing of the effect is almost always aligned with the song you’ve chosen. For example, you might flash snowflakes off and on in synchronization with the drum beat of a song.
Music
If this step sounds overwhelming to you, you’re not alone. In fact, I don’t sequence my own songs. I purchase them from different vendors, who create sequences for generic setups with a wide variety of props. I then import them and map them to the different elements that I actually use in my show. In terms of time, professionals can spend many hours to animate one minute of a song. I generally spend about two hours mapping an existing sequence to my show’s layout. My favorite sequence vendors include BF Light Shows, xTreme Sequences, and Magical Light Shows.
Falcon Player
Once you have a sequence built, you use another piece of software to send that sequence to your show controllers. Some controllers have this software built in, but most people I know use another open source application, Falcon Player (FPP), to perform this task. Not only can FPP be run on a Raspberry Pi, but it also is shipped as a Docker image! FPP includes the ability to play back your sequence as well as to build playlists and set up a show schedule for automated playback.
Put it all together and flip the switch
When everything is put together, you should have a system similar to Figure 5:
Although building your own light show may seem like a daunting task, fear not; you are not alone. I have yet to mention the most important part of this whole process: the community. The xLights community is one of the most helpful I’ve ever been part of. You can get questions answered via the official Facebook group as well as through other groups dedicated to specific sequence and controller vendors. Additionally, a Zoom support meeting runs 24×7 and is staffed by hobbyists from across the globe. So, what are you waiting for? Go ahead and start planning your first holiday light show!
Docker Scout simplifies the integration with container image repositories, improving the efficiency of container image approval workflows without disrupting or replacing current processes. Positioned outside the repository’s stringent validation framework, Docker Scout serves as a proactive measure to significantly reduce the time needed for an image to gain approval.
By shifting security checks left and integrating Docker Scout into the early stages of the development cycle, issues are identified and addressed directly on the developer’s machine.
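For example, a developer can check an image on their own machine before it ever reaches the pipeline (the image name below is a placeholder):
```bash
docker scout quickview myorg/my-app:latest   # at-a-glance summary of known vulnerabilities
docker scout cves myorg/my-app:latest        # detailed CVE listing for the image
```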
Minimizing vulnerabilities
This leftward shift in security accelerates the development process by keeping developers in flow, providing immediate feedback on policy violations at the point of development. As a result, images are secured and reviewed for compliance before being pushed into the continuous integration/continuous deployment (CI/CD) pipeline, reducing reliance on resource-heavy, consumption-based scans (Figure 1). By resolving issues earlier, Docker Scout minimizes the number of vulnerabilities detected during the CI/CD process, freeing up the security team to focus on higher-priority tasks.
Additionally, the Docker Scout console allows the security team to define custom security policies and manage VEX (Vulnerability Exploitability eXchange) statements. VEX is a standard that lets vendors and other parties communicate the exploitability status of vulnerabilities, making it possible to document justifications for including software that has been tied to Common Vulnerabilities and Exposures (CVEs).
This feature enables seamless collaboration between development and security teams, ensuring that developers are working with up-to-date compliance guidelines. The Docker Scout console can also feed critical data into existing security tooling, enriching the organization’s security posture with more comprehensive insights and enhancing overall protection (Figure 2).
How to secure image repositories
A secure container image repository provides digitally signed, OCI-compliant images that are rebuilt and rescanned nightly. These repositories are typically used in highly regulated or security-conscious environments, offering a wide range of container images, from open source software to commercial off-the-shelf (COTS) products. Each image in the repository undergoes rigorous security assessments to ensure compliance with strict security standards before being deployed in restricted or sensitive environments.
Key components of the repository include a hardened source code repository and an OCI-compliant registry (Figure 3). All images are continuously scanned for vulnerabilities, stored secrets, problematic code, and compliance with various standards. Each image is assigned a score upon rebuild, determining its compliance and suitability for use. Scanning reports and justifications for any potential issues are typically handled using the VEX format.
Why use a hardened image repository?
A hardened image repository mitigates the security risks associated with deploying containers in sensitive or mission-critical environments. Traditional software deployment can expose organizations to vulnerabilities and misconfigurations that attackers can exploit. By enforcing a strict set of requirements for container images, the hardened image repository ensures that images meet the necessary security standards before deployment. Rebuilding and rescanning each image daily allows for continuous monitoring of new vulnerabilities and emerging attack vectors.
Using pre-vetted images from a hardened repository also streamlines the development process, reducing the load on development teams and enabling faster, safer deployment.
In addition to addressing security risks, the repository also ensures software supply chain security by incorporating software bills of materials (SBOMs) with each image. The SBOM of a container image provides an inventory of all the components used to build it, including operating system packages, application-specific dependencies and their versions, and license information. By maintaining a robust vetting process, the repository guarantees that all software components are traceable, verifiable, and tamper-free — essential for ensuring the integrity and reliability of deployed software.
Who uses a hardened image repository?
The main users of a hardened container image repository include internal developers responsible for creating applications, developers working on utility images, and those responsible for building base images for other containerized applications. Note that the titles for these roles can vary by organization.
Application developers use the repository to ensure that the images their applications are built upon meet the required security and compliance standards.
DevOps engineers are responsible for building and maintaining the utility images that support various internal operations within the organization.
Platform developers create and maintain secure base images that other teams can use as a foundation for their containerized applications.
Daily builds
One challenge with using a hardened image repository is the time needed to approve images. Daily rebuilds are conducted to assess each image for vulnerabilities and policy violations, but issues can emerge, requiring developers to make repeated passes through the pipeline. Because rebuilds are typically done at night, this process can result in delays for development teams, as they must wait for the next rebuild cycle to resolve issues.
Enter Docker Scout
Integrating Docker Scout into the pre-submission phase can reduce the number of issues that enter the pipeline. This proactive approach helps speed up the submission and acceptance process, allowing development teams to catch issues before the nightly scans.
Vulnerability detection and management
Requirement: Images must be free of known vulnerabilities at the time of submission to avoid delays in acceptance.
Docker Scout contribution:
Early detection: Docker Scout can scan Docker images during development to detect vulnerabilities early, allowing developers to resolve issues before submission.
Continuous analysis: Docker Scout continually reviews uploaded SBOMs, providing early warnings for new critical CVEs and ensuring issues are addressed outside of the nightly rebuild process.
Justification handling: Docker Scout supports VEX for handling exceptions. This can streamline the justification process, enabling developers to submit justifications for potential vulnerabilities more easily.
Security best practices and configuration management
Requirement: Images must follow security best practices and configuration guidelines, such as using secure base images and minimizing the attack surface.
Docker Scout contribution:
Security posture enhancement: Docker Scout allows teams to set policies that align with repository guidelines, checking for policy violations such as disallowed software or unapproved base images.
Compliance with dependency management
Requirement: All dependencies must be declared, and internet access during the build process is usually prohibited.
Docker Scout contribution:
Dependency scanning: Docker Scout identifies outdated or vulnerable libraries included in the image.
Automated reports: Docker Scout generates security reports for each dependency, which can be used to cross-check the repository’s own scanning results.
Documentation and provenance
Requirement: Images must include detailed documentation on their build process, dependencies, and configurations for auditing purposes.
Docker Scout contribution:
Documentation support: Docker Scout contributes to security documentation by providing data on the scanned image, which can be used as part of the official documentation submitted with the image.
Continuous compliance
Requirement: Even after an image is accepted into the repository, it must remain compliant with new security standards and vulnerability disclosures.
Docker Scout contribution:
Ongoing monitoring: Docker Scout continuously monitors images, identifying new vulnerabilities as they emerge, ensuring that images in the repository remain compliant with security policies.
By utilizing Docker Scout in these areas, developers can ensure their images meet the repository’s rigorous standards, thereby reducing the time and effort required for submission and review. This approach helps align development practices with organizational security objectives, enabling faster deployment of secure, compliant containers.
Integrating Docker Scout into the CI/CD pipeline
Integrating Docker Scout into an organization’s CI/CD pipeline can enhance image security from the development phase through to deployment. By incorporating Docker Scout into the CI/CD process, the organization can automate vulnerability scanning and policy checks before images are pushed into production, significantly reducing the risk of deploying insecure or non-compliant images.
Integration with build pipelines: During the build stage of the CI/CD pipeline, Docker Scout can be configured to automatically scan Docker images for vulnerabilities and adherence to security policies. The integration would typically involve adding a Docker Scout scan as a step in the build job, for example through a GitHub action. If Docker Scout detects any issues such as outdated dependencies, vulnerabilities, or policy violations, the build can be halted, and feedback is provided to developers immediately. This early detection helps resolve issues long before images are pushed to the hardened image repository.
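As a rough sketch of such a step, the build job could invoke the Docker Scout CLI directly after the image is built. The image reference is a placeholder, and the exact flags should be confirmed against your CLI version:

# Fail the build if the freshly built image carries critical or high CVEs
docker scout cves registry.example.com/team/app:latest \
  --only-severity critical,high \
  --exit-code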
Validation in the deployment pipeline: As images move from development to production, Docker Scout can be used to perform final validation checks. This step ensures that any security issues that might have arisen since the initial build have been addressed and that the image is compliant with the latest security policies. The deployment process can be gated based on Docker Scout’s reports, preventing insecure images from being deployed. Additionally, Docker Scout’s continuous analysis of SBOMs means that even after deployment, images can be monitored for new vulnerabilities or compliance issues, providing ongoing protection throughout the image lifecycle.
By embedding Docker Scout directly into the CI/CD pipeline (Figure 1), the organization can maintain a proactive approach to security, shifting left in the development process while ensuring that each image deployed is safe, compliant, and up-to-date.
Defense in depth and Docker Scout’s role
In any organization that values security, adopting a defense-in-depth strategy is essential. Defense in depth is a multi-layered approach to security, ensuring that if one layer of defense is compromised, additional safeguards are in place to prevent or mitigate the impact. This strategy is especially important in environments that handle sensitive data or mission-critical operations, where even a single vulnerability can have significant consequences.
Docker Scout plays a vital role in this defense-in-depth strategy by providing a proactive layer of security during the development process. Rather than relying solely on post-submission scans or production monitoring, Docker Scout integrates directly into the development and CI/CD workflows (Figure 2), allowing teams to catch and resolve security issues early. This early detection prevents issues from escalating into more significant risks later in the pipeline, reducing the burden on the SecOps team and speeding up the deployment process.
Furthermore, Docker Scout’s continuous monitoring capabilities mean that images are not only secure at the time of deployment but remain compliant with evolving security standards and new vulnerabilities that may arise after deployment. This ongoing vigilance forms a crucial layer in a defense-in-depth approach, ensuring that security is maintained throughout the entire lifecycle of the container image.
By integrating Docker Scout into the organization’s security processes, teams can build a more resilient, secure, and compliant software environment, ensuring that security is deeply embedded at every stage from development to deployment and beyond.
Docker Desktop 4.36 introduces powerful updates to simplify enterprise administration and enhance security. This release features streamlined macOS sign-in enforcement via configuration profiles, enabling IT administrators to deploy tamper-proof policies at scale, alongside a new PKG installer for efficient, consistent deployments. Enhancements like the unified WSL 2 mono distribution improve startup speeds and workflows, while updates to Enhanced Container Isolation (ECI) and Desktop Settings Management allow for greater flexibility and centralized policy enforcement. These innovations empower organizations to maintain compliance, boost productivity, and streamline Docker Desktop management across diverse enterprise environments.
Sign-in enforcement: A streamlined alternative for organizations on macOS
Recognizing the need for streamlined and secure ways to enforce sign-in protocols, Docker is introducing a new sign-in enforcement mechanism for macOS: configuration profiles. This Early Access update delivers significant business benefits by enabling IT administrators to enforce sign-in policies quickly, ensuring compliance and maximizing the value of Docker subscriptions.
Key benefits
Fast deployment and rollout: Configuration profiles can be rapidly deployed across a fleet of devices using Mobile Device Management (MDM) solutions, making it easy for IT admins to enforce sign-in requirements and other policies without manual intervention.
Tamper-proof enforcement: Configuration profiles ensure that enforced policies, such as sign-in requirements, cannot be bypassed or disabled by users, providing a secure and reliable way to manage access to Docker Desktop (Figure 1).
Support for multiple organizations: More than one organization can now be defined in the allowedOrgs field, offering flexibility for users who need access to Docker Desktop under multiple organizational accounts (Figure 2).
How it works
macOS configuration profiles are XML files that contain specific settings to control and manage macOS device behavior. These profiles allow IT administrators to:
Restrict access to Docker Desktop unless the user is authenticated.
Prevent users from disabling or bypassing sign-in enforcement.
By distributing these profiles through MDM solutions, IT admins can manage large device fleets efficiently and consistently enforce organizational policies.
Docker now supports enforcing sign-in for more than one organization at a time, providing greater flexibility for users working across multiple teams or enterprises. The allowedOrgs field now accepts multiple strings, enabling IT admins to define more than one organization via any supported configuration method, including:
registry.json
Windows Registry key
macOS plist
macOS configuration profile
This enhancement makes it easier to enforce login policies across diverse organizational setups, streamlining access management while maintaining security (Figure 3).
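For example, with the registry.json method, allowedOrgs is simply an array of organization names; the names below are placeholders:

{
  "allowedOrgs": ["first-org", "second-org"]
}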
Deploy Docker Desktop for macOS in bulk with the PKG installer
Managing large-scale Docker Desktop deployments on macOS just got easier with the new PKG installer. Designed for enterprises and IT admins, the PKG installer offers significant advantages over the traditional DMG installer, streamlining the deployment process and enhancing security.
Ease of use: Automate installations and reduce manual steps, minimizing user error and IT support requests.
Consistency: Deliver a professional and predictable installation experience that meets enterprise standards.
Streamlined deployment: Simplify software rollouts for macOS devices, saving time and resources during bulk installations.
Enhanced security: Benefit from improved security measures that reduce the risk of tampering and ensure compliance with enterprise policies.
You can download the PKG installer via Admin Console > Security and Access > Deploy Docker Desktop > macOS. Options for both Intel and Arm architectures are also available for macOS and Windows, ensuring compatibility across devices.
Start deploying Docker Desktop more efficiently and securely today via the Admin Console (Figure 4).
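Once the package is downloaded, a typical unattended rollout runs the standard macOS installer command, usually pushed through your MDM tooling; the package path below is illustrative:

# Silent, system-wide install of Docker Desktop from the PKG
sudo installer -pkg "/path/to/Docker.pkg" -target /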
Desktop Settings Management (Early Access)
Managing Docker Desktop settings at scale is now easier than ever with the new Desktop Settings Management, available in Early Access for Docker Business customers. Admins can centrally deploy and enforce settings policies for Docker Desktop directly from the cloud via the Admin Console, ensuring consistency and efficiency across their organization.
Here’s what’s available now:
Admin Console policies: Configure and enforce default Docker Desktop settings from the Admin Console.
Quick import: Import existing configurations from an admin-settings.json file for seamless migration.
Export and share: Export policies as JSON files to easily share with security and compliance teams.
Targeted testing: Roll out policies to a smaller group of users for testing before deploying globally.
What’s next?
Although the Desktop Settings Management feature is in Early Access, we’re actively building additional functionality to enhance it, such as compliance reporting and automated policy enforcement capabilities. Stay tuned for more!
This is just the beginning of a powerful new way to simplify Docker Desktop management and ensure organizational compliance. Try it out now and help shape the future of settings management: Admin Console > Security and Access > Desktop Settings Management (Figure 5).
Streamlining data workflow with WSL 2 mono distribution
Docker Desktop 4.36 simplifies the Windows Subsystem for Linux (WSL 2) setup by eliminating the need to maintain two separate Docker Desktop WSL distributions: the previously required dual distributions are now consolidated into a single distribution.
The simplification of Docker Desktop’s WSL 2 setup is designed to make the codebase easier to understand and maintain. This enhances the ability to handle failures more effectively and increases the startup speed of Docker Desktop on WSL 2, allowing users to begin their work more quickly.
Streamlining data workflows and relocating data to a different drive (for example, with Docker Desktop’s WSL 2 backend on Windows) delivers value in these key areas:
Improved performance: By separating data and system files, I/O contention between system operations and data operations is reduced, leading to faster access and processing.
Enhanced storage management: Separating data from the main system drives allows for more efficient use of space.
Increased flexibility with cross-platform compatibility: Ensuring consistent data workflows across different operating systems (macOS and Windows), especially when using Docker Desktop with WSL 2.
Enhanced Docker performance: Docker performs better when processing data on a drive optimized for such tasks, reducing latency and improving container performance.
By implementing these practices, organizations can achieve more efficient, flexible, and high-performing data workflows, leveraging Docker Desktop’s capabilities on both macOS and Windows platforms.
Enhanced Container Isolation (ECI) improvements
Allow any container to mount the Docker socket: Admins can now configure permissions to allow all containers to mount the Docker socket by adding * or *:* to the ECI Docker socket mount permission image list. This simplifies scenarios where broad access is required while maintaining security configuration through centralized control. Learn more in the advanced configuration documentation.
Improved support for derived image permissions: The Docker socket mount permissions for derived images feature now supports wildcard tags (e.g., alpine:*), enabling admins to grant permissions for all versions of an image. Previously, specific tags like alpine:latest had to be listed, which was restrictive and required ongoing maintenance. Learn more about managing derived image permissions.
These enhancements reduce administrative overhead while maintaining a high level of security and control, making it easier to manage complex environments.
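For admins who manage these permissions through admin-settings.json, the relevant fragment might look roughly like the sketch below. The key names reflect my reading of the ECI advanced configuration documentation and should be verified against the docs for your Docker Desktop version; the image references are placeholders, and replacing the list with "*:*" would allow any container to mount the socket:

"enhancedContainerIsolation": {
  "locked": true,
  "value": true,
  "dockerSocketMount": {
    "imageList": {
      "images": [
        "docker.io/library/alpine:*"
      ]
    }
  }
}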
Upgrade now
The Docker Desktop 4.36 release introduces a suite of features designed to simplify enterprise administration, improve security, and enhance operational efficiency. From enabling centralized policy enforcement with Desktop Settings Management to streamlining deployments with the macOS PKG installer, Docker continues to empower IT administrators with the tools they need to manage Docker Desktop at scale.
The improvements in Enhanced Container Isolation (ECI) and WSL 2 workflows further demonstrate Docker’s commitment to innovation, providing solutions that optimize performance, reduce complexity, and ensure compliance across diverse enterprise environments.
As businesses adopt increasingly complex development ecosystems, these updates highlight Docker’s focus on meeting the unique needs of enterprise teams, helping them stay agile, secure, and productive. Whether you’re managing access for multiple organizations, deploying tools across platforms, or leveraging enhanced image permissions, Docker Desktop 4.36 sets a new standard for enterprise administration.
Start exploring these powerful new features today and unlock the full potential of Docker Desktop for your organization.
At Docker, we’re continuously enhancing Docker Desktop to meet the evolving needs of enterprise users. Since Docker Desktop 4.23, where we reduced startup time by 75%, we’ve made significant investments in both performance and stability. These improvements are designed to deliver a faster, more reliable experience for developers across industries. (Read more about our previous performance milestones.)
In this post, we walk through the latest performance enhancements.
Latest performance enhancements
Boost performance with Docker VMM on Apple Silicon Mac
Apple Silicon Mac users, we’re excited to introduce Docker Virtual Machine Manager (Docker VMM) — a powerful new virtualization option designed to enhance performance for Docker Desktop on M1 and M2 Macs. Currently in beta, Docker VMM gives developers a faster, more efficient alternative to the existing Apple Virtualization Framework for many workflows (Figure 1). Docker VMM is available starting in the Docker Desktop 4.35 release.
Why try Docker VMM?
If you’re running native ARM-based images on Docker Desktop, Docker VMM offers a performance boost that could make your development experience smoother and more efficient. With Docker VMM, you can:
Experience faster operations: Docker VMM shows improved speeds on essential commands like git status and others, especially when caches are built up. In our benchmarks, Docker VMM eliminates certain slowdowns that can occur with the Apple Virtualization framework.
Enjoy flexibility: Not sure if Docker VMM is the right fit? No problem! Docker VMM is still in beta, so you can switch back to the Apple Virtualization framework at any time and try Docker VMM again in future releases as we continue optimizing it.
What about emulated Intel images?
If you’re using Rosetta to emulate Intel images, Docker VMM may not be the ideal choice for now, as it currently doesn’t support Rosetta. For workflows requiring Intel emulation, the Apple Virtualization framework remains the best option, as Docker VMM is optimized for native Arm binaries.
Key benchmarks: Real-world speed gains
Our testing reveals significant improvements when using Docker VMM for common commands, including git status:
Initial git status: Docker VMM outperforms, with the first run significantly faster compared to the Apple Virtualization framework (Figure 2).
Subsequent git status: With Docker VMM, subsequent runs are also speedier due to more efficient caching (Figure 3).
With Docker VMM, you can say goodbye to frustrating delays and get a faster, more responsive experience right out of the gate.
Say goodbye to QEMU
For users who may have relied on QEMU, note that we’re transitioning it to legacy support. Docker VMM and Apple Virtualization Framework now provide superior performance options, optimized for the latest Apple hardware.
Docker Desktop for Windows on Arm
For specific workloads, particularly those involving parallel computing or Arm-optimized tasks, Arm64 devices can offer significant performance benefits. With Docker Desktop now supporting Windows on Arm, developers can take advantage of these performance boosts while maintaining the familiar Docker Desktop experience, ensuring smooth, efficient operations on this architecture.
Synchronized file shares
Unlike traditional file-sharing mechanisms that can suffer from performance degradation with large projects or frequent file changes, the synchronized file shares feature offers a more stable and performant alternative. It uses efficient synchronization processes to ensure that changes made to files on the host are rapidly reflected in the container, and vice versa, without the bottlenecks or slowdowns experienced with older methods.
This feature is a major performance upgrade for developers who work with shared files between the host and container. It reduces the performance issues related to intensive file system operations and enables smoother, more responsive development workflows. Whether you’re dealing with frequent file changes or working on large, complex projects, synchronized file sharing improves efficiency and ensures that your containers and host remain in sync without delays or excessive resource usage.
Key highlights of synchronized file sharing include:
Selective syncing: Developers can choose specific directories to sync, avoiding unnecessary overhead from syncing unneeded files or directories.
Faster file changes: It significantly reduces the time it takes for changes made in the host environment to be recognized and applied within containers.
Improved performance with large projects: This feature is especially beneficial for large projects with many files, as it minimizes the file-sharing latency that often accompanies such setups.
Cross-platform support: Synchronized file sharing is supported on both macOS and Windows, making it versatile across platforms and providing consistent performance.
The synchronized file shares feature is available in Docker Desktop 4.27 and newer releases.
GA for Docker Desktop on Red Hat Enterprise Linux (RHEL)
Red Hat Enterprise Linux (RHEL) is known for its high-performance capabilities and efficient resource utilization, which is essential for developers working with resource-intensive applications. Docker Desktop on RHEL enables enterprises to fully leverage these optimizations, providing a smoother, faster experience from development through to production. Moreover, RHEL’s robust security framework ensures that Docker containers run within a highly secure, certified operating system, maintaining strict security policies, patch management, and compliance standards — vital for industries like finance, healthcare, and government.
Continuous performance improvements in every Docker Desktop release
At Docker, we are committed to delivering continuous performance improvements with every release. Recent updates to Docker Desktop have introduced the following optimizations across file sharing and network performance:
Advanced VirtioFS optimizations: The performance journey continued in Docker Desktop 4.33 with further fine-tuning of VirtioFS. We increased the directory cache timeout, optimized host change notifications, and removed extra FUSE operations related to security.capability attributes. Additionally, we introduced an API to clear caches after container termination, enhancing overall file-sharing efficiency and container lifecycle management.
Faster read and write operations on bind mounts: In Docker Desktop 4.32, we further enhanced VirtioFS performance by optimizing read and write operations on bind mounts. These changes improved I/O throughput, especially when dealing with large files or high-frequency file operations, making Docker Desktop more responsive and efficient for developers.
Enhanced caching for faster performance: Continuing with performance gains, Docker Desktop 4.31 brought significant improvements to VirtioFS file sharing by extending attribute caching timeouts and improving invalidation processes. This reduced the overhead of constant file revalidation, speeding up containerized applications that rely on shared files.
Why these updates matter for you
Each update to Docker Desktop is focused on improving speed and reliability, ensuring it scales effortlessly with your infrastructure. Whether you’re using RHEL, Apple Silicon, or Windows Arm, these performance optimizations help you work faster, reduce downtime, and boost productivity. Stay current with the latest updates to keep your development environment running at peak efficiency.
Share your feedback and help us improve
We’re always looking for ways to enhance Docker Desktop and make it the best tool for your development needs. If you have feedback on performance, ideas for improvement, or issues you’d like to discuss, we’d love to hear from you. Feel free to reach out and schedule time to chat directly with a Docker Desktop Product Manager via Calendly.
This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.
We recently met up with developers at GitHub Universe at the Docker booth. In addition to demonstrating the upcoming Docker agent extension for GitHub Copilot, we also hosted a “Hack with Docker Labs” session.
To facilitate these sessions, we created a VSCode extension to explore the relationship between agents and tools. We encouraged attendees to think about how agents can change how we interact with tools by mixing tool definitions (anything you can package in a Docker container) with prompts using a simple Markdown-based canvas.
Many of these sessions followed a simple pattern.
Choose a tool and describe what you want it to do.
Let the agent interact with that tool.
Ask the agent to explain what it did or adjust the strategy and try again.
It was great to facilitate these discussions and learn more about how agents are challenging us to interact with tools in new ways.
Figure 1 shows a short example of a session where we generated a QR code (qrencode). We start by defining both a tool and a prompt in the Markdown file. Then, we pass control over to the agent and let it interact with that tool (the output from the agent pops up on the right-hand side).
Feel free to create an issue in our repo if you want to learn more.
Editors
This year’s trip to GitHub Universe also felt like an opportunity to reflect on how developer workflows are changing with the introduction of coding assistants. Developers have had language services in the editor for a long time now, but coding assistants that can predict the next most likely tokens have taught us something new: we were all writing more or less the same programs (Figure 2).
Other agents
Tools like Cursor, GitHub Copilot Chat, and others are also teaching us new ways in which coding assistants are expanding beyond simple predictions. In this series, we’ve been highlighting tools that typically work in the background. Agents armed with these tools will track other kinds of issues, such as build problems, outdated dependencies, fixable linting violations, and security remediations.
Extending the previous diagram, we can imagine an updated picture where agents send diagnostics and propose code actions, while still offering chat interfaces for other kinds of user input (Figure 3). If the ecosystem has felt closed, get ready for it to open up to new kinds of custom agents.
More to come
In the next few posts, we’ll take this series in a new direction and look at how agents are able to use LSPs to interact with developers in new ways. An agent that represents background tasks, such as updating dependencies, or fixing linting violations, can now start to use language services and editors as tools! We think this will be a great way for agents to start helping developers better understand the changes they’re making and open up these platforms to input from new kinds of tools.
GitHub Universe was a great opportunity to check in with developers, and we were excited to learn how many more tools developers wanted to bring to their workflows. As always, to follow along with this effort, check out the GitHub repository for this project.
For more on what we’re doing at Docker, subscribe to our newsletter.
Navigating the complex world of containerized testing environments can be challenging, especially when dealing with Docker-in-Docker (DinD). As a senior DevOps engineer and Docker Captain, I’ve seen firsthand the hurdles that teams face with DinD, and here I’ll share why Testcontainers Cloud is a transformative alternative that’s reshaping the way we handle container-based testing.
Understanding Docker-in-Docker
Docker-in-Docker allows you to run Docker within a Docker container. It’s like Inception for containers — a Docker daemon running inside a Docker container, capable of building and running other containers.
How Docker-in-Docker works
Nested Docker daemons: In a typical Docker setup, the Docker daemon runs on the host machine, managing containers directly on the host’s operating system. With DinD, you start a Docker daemon inside a container. This inner Docker daemon operates independently, enabling the container to build and manage its own set of containers.
Privileged mode and access to host resources: To run Docker inside a Docker container, the container needs elevated privileges. This is achieved by running the container in privileged mode using the --privileged flag:
docker run --privileged -d docker:dind
The --privileged flag grants the container almost all the capabilities of the host machine, including access to device files and the ability to perform system administration tasks. Although this setup enables the inner Docker daemon to function, it poses significant security risks, as it can potentially allow the container to affect the host system adversely.
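In practice, the inner daemon needs a moment to start before it can serve requests. A minimal sketch of exercising a DinD container (the container name is arbitrary) looks like this:

# Start a DinD container in privileged mode and give the inner daemon time to boot
docker run --privileged -d --name dind docker:dind
sleep 5

# Run a container inside the DinD container, using its nested Docker daemon
docker exec dind docker run --rm hello-world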
Filesystem considerations: The inner Docker daemon stores images and containers within the file system of the DinD container, typically under /var/lib/docker. Because Docker uses advanced file system features like copy-on-write layers, running an inner Docker daemon within a containerized file system (which may itself use such features) can lead to complex interactions and potential conflicts.
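A common mitigation, often recommended for the docker:dind image, is to back the inner daemon’s storage with a named volume so its layers do not sit on top of the outer container’s copy-on-write filesystem:

# Keep the inner daemon's images and containers on a dedicated volume
docker run --privileged -d --name dind \
  -v dind-storage:/var/lib/docker \
  docker:dind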
Cgroups and namespace isolation: Docker relies on Linux kernel features like cgroups and namespaces for resource isolation and management. When running Docker inside a container, these features must be correctly configured to allow nesting. This process can introduce additional complexity in ensuring that resource limits and isolation behave as expected.
Why teams use Docker-in-Docker
Isolated build environments: DinD allows each continuous integration (CI) job to run in a clean, isolated Docker environment, ensuring that builds and tests are not affected by residual state from previous jobs or other jobs running concurrently.
Consistency across environments: By encapsulating the Docker daemon within a container, teams can replicate the same Docker environment across different stages of the development pipeline, from local development to CI/CD systems.
Challenges with DinD
Although DinD provides certain benefits, it also introduces significant challenges, such as:
Security risks: Running containers in privileged mode can expose the host system to security vulnerabilities, as the container gains extensive access to host resources.
Stability issues: Nested containers can lead to storage driver conflicts and other instability issues, causing unpredictable build failures.
Complex debugging: Troubleshooting issues in a nested Docker environment can be complicated, as it involves multiple layers of abstraction and isolation.
Real-world challenges
Although Docker-in-Docker might sound appealing, it often introduces more problems than it solves. Before diving into those challenges, let’s briefly discuss Testcontainers and its role in modern testing practices.
What is Testcontainers?
Testcontainers is a popular open source library designed to support integration testing by providing lightweight, disposable instances of common databases, web browsers, or any service that can run in a Docker container. It allows developers to write tests that interact with real instances of external resources, rather than relying on mocks or stubs.
Key features of Testcontainers
Realistic testing environment: By using actual services in containers, tests are more reliable and closer to real-world scenarios.
Isolation: Each test session, or even each individual test, can run in a clean environment, reducing flakiness due to shared state.
Easy cleanup: Containers are ephemeral and are automatically cleaned up after tests, preventing resource leaks.
Dependency on the Docker daemon
A core component of Testcontainers’ functionality lies in its interaction with the Docker daemon. Testcontainers orchestrates Docker resources by starting and stopping containers as needed for tests. This tight integration means that access to a Docker environment is essential wherever the tests are run.
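Concretely, Testcontainers discovers a Docker endpoint much like the Docker CLI does, honoring environment variables such as DOCKER_HOST; the socket path and test command below are illustrative:

# Point the test run at a reachable Docker daemon, then execute the suite
export DOCKER_HOST=unix:///var/run/docker.sock
./mvnw test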
The DinD challenge with Testcontainers in CI
When teams try to include Testcontainers-based integration testing in their CI/CD pipelines, they often face the challenge of providing Docker access within the CI environment. Because Testcontainers requires communication with the Docker daemon, many teams resort to using Docker-in-Docker to emulate a Docker environment inside the CI job.
However, this approach introduces significant challenges, especially when trying to scale Testcontainers usage across the organization.
Case study: The CI pipeline nightmare
We had a Jenkins CI pipeline that utilized Testcontainers for integration tests. To provide the necessary Docker environment, we implemented DinD. Initially, it seemed to work fine, but soon we encountered:
Unstable builds: Random failures due to storage driver conflicts and issues with nested container layers. The nested Docker environment sometimes clashed with the host, causing unpredictable behavior.
Security concerns: Running containers in privileged mode raised red flags during security audits. Because DinD requires privileged mode to function correctly, it posed significant security risks, potentially allowing containers to access the host system.
Performance bottlenecks: Builds were slow, and resource consumption was high. The overhead of running Docker within Docker led to longer feedback loops, hindering developer productivity.
Complex debugging: Troubleshooting nested containers became time-consuming. Logs and errors were difficult to trace through the multiple layers of containers, making issue resolution challenging.
We spent countless hours trying to patch these issues, but it felt like playing a game of whack-a-mole.
Why Testcontainers Cloud is a better choice
Testcontainers Cloud is a cloud-based service designed to simplify and enhance your container-based testing. By offloading container execution to the cloud, it provides a secure, scalable, and efficient environment for your integration tests.
How Testcontainers Cloud addresses DinD’s shortcomings
Enhanced security
No more privileged mode: Eliminates the need for running containers in privileged mode, reducing the attack surface.
Isolation: Tests run in isolated cloud environments, minimizing risks to the host system.
Compliance-friendly: Easier to pass security audits without exposing the Docker socket or granting elevated permissions.
Improved performance
Scalability: Leverage cloud resources to run tests faster and handle higher loads.
Resource efficiency: Offloading execution frees up local and CI/CD resources.
Simplified configuration
Plug-and-play integration: Minimal changes are required to switch from local Docker to Testcontainers Cloud.
No nested complexity: Avoid the intricacies and pitfalls of nested Docker daemons.
Better observability and debugging
Detailed logs: Access comprehensive logs through the Testcontainers Cloud dashboard.
Real-time monitoring: Monitor containers and resources in real time with enhanced visibility.
Getting started with Testcontainers Cloud
Let’s dive into how you can get the most out of Testcontainers Cloud.
Switching to Testcontainers Cloud allows you to run tests without needing a local Docker daemon:
No local Docker required: Testcontainers Cloud handles container execution in the cloud.
Consistent environment: Ensures that your tests run in the same environment across different machines.
Additionally, you can easily integrate Testcontainers Cloud into your CI pipeline to run the same tests without scaling your CI infrastructure.
Using Testcontainers Cloud with GitHub Actions
Here’s how you can set up Testcontainers Cloud in your GitHub Actions workflow.
1. Create a new service account
Log in to Testcontainers Cloud dashboard.
Navigate to Service Accounts:
Create a new service account dedicated to your CI environment.
Generate an access token:
Copy the access token. Remember, you can only view it once, so store it securely.
2. Set the TC_CLOUD_TOKEN environment variable
In GitHub Actions:
Go to your repository’s Settings > Secrets and variables > Actions.
Add a new Repository Secret named TC_CLOUD_TOKEN and paste the access token.
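If your team scripts repository setup, the same secret can be added with the GitHub CLI, assuming gh is installed and authenticated:

# Store the Testcontainers Cloud token as an Actions secret for this repository
gh secret set TC_CLOUD_TOKEN
# gh prompts for the value; paste the access token when asked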
3. Add Testcontainers Cloud to your workflow
Update your GitHub Actions workflow (.github/workflows/ci.yml) to include the Testcontainers Cloud setup.
Example workflow:
name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # ... other preparation steps (dependencies, compilation, etc.) ...

      - name: Set up Java
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Testcontainers Cloud Client
        uses: atomicjar/testcontainers-cloud-setup-action@v1
        with:
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      # ... steps to execute your tests ...

      - name: Run Tests
        run: ./mvnw test
Notes:
The atomicjar/testcontainers-cloud-setup-action GitHub Action automates the installation and authentication of the Testcontainers Cloud Agent in your CI environment.
Ensure that your TC_CLOUD_TOKEN is kept secure using GitHub’s encrypted secrets.
Clarifying the components: Testcontainers Cloud Agent and Testcontainers Cloud
To make everything clear:
Testcontainers Cloud Agent (CLI in CI environments): In CI environments like GitHub Actions, you use the Testcontainers Cloud Agent (installed via the GitHub Action or command line) to connect your CI jobs to Testcontainers Cloud.
Testcontainers Cloud: The cloud service that runs your containers, offloading execution from your CI environment.
In CI environments:
Use the Testcontainers Cloud Agent (CLI) within your CI jobs.
Authenticate using the TC_CLOUD_TOKEN.
Tests executed in the CI environment will use Testcontainers Cloud.
Monitoring and debugging
Take advantage of the Testcontainers Cloud dashboard:
Session logs: View logs for individual test sessions.
Container details: Inspect container statuses and resource usage.
Debugging: Access container logs and output for troubleshooting.
Why developers prefer Testcontainers Cloud over DinD
Real-world impact
After integrating Testcontainers Cloud, our team observed the following:
Faster build times: Tests ran significantly faster due to optimized resource utilization.
Reduced maintenance: Less time spent on debugging and fixing CI pipeline issues.
Enhanced security: Eliminated the need for privileged mode, satisfying security audits.
Better observability: Improved logging and monitoring capabilities.
Addressing common concerns
Security and compliance
Data isolation: Each test runs in an isolated environment.
Encrypted communication: Secure data transmission.
Cost considerations
Efficiency gains: Time saved on maintenance offsets the cost.
Resource optimization: Reduces the need for expensive CI infrastructure.
Compatibility
Multi-language support: Works with Java, Node.js, Python, Go, .NET, and more.
Seamless integration: Minimal changes required to existing test code.
Conclusion
Switching to Testcontainers Cloud, with the help of the Testcontainers Cloud Agent, has been a game-changer for our team and many others in the industry. It addresses the key pain points associated with Docker-in-Docker and offers a secure, efficient, and developer-friendly alternative.
Key takeaways
Security: Eliminates the need for privileged containers and Docker socket exposure.
Performance: Accelerates test execution with scalable cloud resources.
Simplicity: Simplifies configuration and reduces maintenance overhead.
Observability: Enhances debugging with detailed logs and monitoring tools.
As someone who has navigated these challenges, I recommend trying Testcontainers Cloud. It’s time to move beyond the complexities of DinD and adopt a solution designed for modern development workflows.
Effective infrastructure management is crucial for organizations using Docker Hub. Without a clear understanding of resource consumption, unexpected usage can emerge and skyrocket. This is particularly true if pulls and storage needs are not budgeted and forecasted correctly. By implementing proactive cost controls and monitoring usage patterns, development teams can sustain their Docker Hub usage while keeping expenses under control.
To support these goals, we’ve introduced new Docker Hub Usage dashboards, offering organizations the ability to access and analyze their usage patterns for storage and pulls.
Docker Hub’s Usage dashboards put you in control, giving visibility into every pull and image your Docker systems request. Each pull and cached image becomes a deliberate choice — not a random event — so you can make every byte count. With clear insights into what’s happening and why, you can design more efficient, optimized systems.
Reclaim control and manage technical resources by kicking bad habits
The Docker Hub Usage dashboards (Figure 1) provide valuable insights, allowing teams to track peaks and valleys, detect high usage periods, and identify the images and repositories driving the most consumption. This visibility not only aids in managing usage but also strengthens continuous improvement efforts across your software supply chain, helping teams build applications more efficiently and sustainably.
This information helps development teams to stay on top of challenges, such as:
Redundant pulls and misconfigured repositories: These can quickly and quietly drive up technical expenses while falling outside the scope of your most relevant or critical use cases. Docker Hub’s Usage dashboards can help development teams identify patterns and optimize accordingly. They let you view usage trends across IPs and users as well, which helps with pinpointing high-consumption areas and ensuring accountability in an organization when it comes to resource management.
Poor caching management: Repository insights and image tagging help teams assess internal usage patterns, such as frequently accessed images, where there might be an opportunity to improve caching. With proper governance models, organizations can also establish policies and processes that reduce the variability of resource usage as a whole. This goal goes beyond keeping track of seasonal usage patterns to help you design more predictable usage patterns so you can budget accordingly.
Accidental automation: Accidental automated system activities can quietly inflate your usage. Let’s say a CI/CD pipeline or automated script is configured to pull images more often than it should, for example pulling on every build rather than only when the version actually changes.
Usage dashboards can help you identify these inefficiencies by showing detailed pull data associated with automated tooling. This information can help your teams quickly identify and adjust misconfigured systems, fine-tune automations to only pull when needed, and ultimately focus on the most relevant use cases for your organization, avoiding accidental overuse of resources.
Docker Hub’s Usage dashboards offer a comprehensive view of your usage data, including downloadable CSV reports that include metrics such as pull counts, repository names, IP addresses, and version checks (Figure 2). This granular approach allows your organization to gain valuable insights and trend data to help optimize your team’s workflows and inform policies.
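As a small illustration of fine-tuning automation, a pipeline script can skip pulls when a pinned tag is already present locally; the image name here is a placeholder:

# Pull only when the pinned tag is not already in the local cache
IMAGE="myorg/app:1.4.2"
docker image inspect "$IMAGE" >/dev/null 2>&1 || docker pull "$IMAGE"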
Integrate robust operational principles into your development pipeline by leveraging these data-driven reports and maintain control over resource consumption and operational efficiency with Docker Hub.
Docker Desktop’s single sign-on (SSO) and sign-in enforcement (also called login enforcement) features work together to enhance security and ease of use. SSO allows users to log in with corporate credentials, whereas login enforcement ensures every user is authenticated, giving IT tighter control over compliance. In this post, we’ll define each of these features, explain their unique benefits, and show how using them together streamlines management and improves your Docker Desktop experience.
Before diving into the benefits of login alongside SSO, let’s clarify three related terms: login, single sign-on (SSO), and enforced login.
Login: Logging in connects users to Docker’s suite of tools, enabling access to personalized settings, team resources, and features like Docker Scout and Docker Build Cloud. By default, members of an organization can use Docker Desktop without signing in. Logging in can be done through SSO or by using Docker-specific credentials.
Single sign-on (SSO): SSO allows users to access Docker using their organization’s central authentication system, letting teams streamline access across multiple platforms with one set of credentials. SSO standardizes and secures login and supports automation around provisioning but does not automatically log in users unless enforced.
Enforced login: This policy, configured by administrators, ensures users are logged in by requiring login credentials before accessing Docker Desktop and associated tools. With enforced login, teams gain consistent access to Docker’s productivity and security features, minimizing gaps in visibility and control.
With these definitions in mind, here’s why being logged in matters, how SSO simplifies login, and how login enforcement ensures your team gets the full benefit of Docker’s powerful development tools.
Why logging in matters for admins and compliance teams
Enforcing sign-in with corporate credentials ensures that all users accessing Docker Desktop are verified and utilizing the benefits of your Docker Business subscription while adding a layer of security to safeguard your software supply chain. This policy strengthens your organization’s security posture and enables Docker to provide detailed usage insights, helping compliance teams track engagement and adoption.
Enforced login will support cloud-based control over settings, allowing admins to manage application configurations across the organization more effectively. By requiring login, your organization benefits from greater transparency, control, and alignment with compliance standards.
When everyone in your organization signs in with proper credentials:
Access controls for shared resources become more reliable, allowing administrators to enforce policies and permissions consistently.
Developers stay connected to their workspaces and resources, minimizing disruptions.
Desktop Insights Dashboard provides admins actionable insights into usage, from feature adoption to image usage trends and login activity, helping administrators optimize team performance and security.
Teams gain full visibility and access to Docker Scout’s security insights, which only function with logged-in accounts.
Docker provides three options to help administrators enforce sign-in.
Registry key method (Windows Only): Integrates seamlessly with Windows, letting IT enforce login policies within familiar registry settings, saving time on configuration.
Plist or config profiles method (Mac): Provides an easy way for IT to manage access on macOS, ensuring policy consistency across Apple devices without additional tools.
Registry.json method (all platforms): Works across Windows, macOS, and Linux, allowing IT to enforce login on all platforms with a single, flexible configuration file, streamlining policy management for diverse environments.
Each method helps IT secure access, restrict Docker Desktop to authorized users, and maintain compliance across all systems. You can enforce login without setting up SSO. Read the documentation to learn more about Docker’s sign-in enforcement methods.
Single sign-on (SSO)
Docker Desktop’s SSO capabilities allow organizations to streamline access by integrating with corporate identity providers, ensuring that only authorized team members can access Docker resources using their work credentials. This integration enhances security by eliminating the need for separate Docker-specific passwords, reducing the risk of unauthorized access to critical development tools. With SSO, admins can enforce consistent login policies across teams, simplify user management, and gain greater control over who accesses Docker Desktop. Additionally, SSO enables compliance teams to track access and usage better, aligning with organizational security standards and improving overall security posture.
Docker Desktop supports SSO integrations with a variety of identity providers (IdPs), including Okta, OneLogin, Auth0, and Microsoft Entra ID. By integrating with these IdPs, organizations can streamline user authentication, enhance security, and maintain centralized access control across their Docker environments.
Differences between SSO enforcement and SSO enablement
SSO and SCIM give your company more control over how users log in and attach themselves to your organization and Docker subscription, but they do not require users to sign in to your organization when using Docker Desktop. Without sign-in enforcement, users can continue to use Docker Desktop without logging in, or by signing in with personal Docker IDs or subscriptions, preventing Docker from providing you with insights into their usage and control over the application.
SSO enforcement usually applies to identity management across multiple applications, enforcing a single, centralized login for a suite of apps or services. However, a registry key or other local login enforcement mechanism typically applies only to that specific application (e.g., Docker Desktop) and doesn’t control access across different services.
Better together: Sign-in enforcement and SSO
While SSO enables seamless access to Docker for those who choose to log in, enforcing login ensures that users fully benefit from Docker’s productivity and security features.
Docker’s SSO integration is designed to simplify enterprise user management, allowing teams to access Docker with their organization’s centralized credentials. This streamlines onboarding and minimizes password management overhead, enhancing security across the board. However, SSO alone doesn’t require users to log in — it simply makes it more convenient and secure. Without enforced login, users might bypass the sign-in process, missing out on Docker’s full benefits, particularly in areas of security and control.
By coupling SSO with login enforcement, organizations strengthen their Registry Access Management (RAM), ensuring access is restricted to approved registries, boosting image compliance, and centralizing control. Encouraging login alongside SSO ensures teams enjoy a seamless experience while unlocking Docker’s complete suite of features.