ESP32-based Waveshare DDSM Driver HAT (B) for Raspberry Pi supports DDSM400 hub motors


Waveshare has recently launched the DDSM Driver HAT (B), a compact Raspberry Pi DDSM (Direct Drive Servo Motor) driver board designed specifically to drive DDSM400 hub motors. The board is built around an ESP32 MCU and supports wired (USB and UART) and wireless (2.4 GHz WiFi) communication. It also features a physical toggle switch for choosing between ESP32 control and USB control modes. In ESP32 control mode, you can control the device through a built-in web application; in USB control mode, the motor driver is controlled from a host computer sending JSON commands over USB. An XT60 connector powers the board, and programming is done through a USB-C port connected to the ESP32. The board is suitable for robotics projects, especially mobile robots in 6×6 or 4×4 configurations. Waveshare DDSM Driver HAT (B) specifications: Wireless MCU – Espressif Systems ESP32-WROOM-32E ESP32 [...]

The post ESP32-based Waveshare DDSM Driver HAT (B) for Raspberry Pi supports DDSM400 hub motors appeared first on CNX Software - Embedded Systems News.

How to Dockerize a Django App: Step-by-Step Guide for Beginners

One of the best ways to make sure your web apps work well in different environments is to containerize them. Containers let you work in a more controlled way, which makes development and deployment easier. This guide will show you how to containerize a Django web app with Docker and explain why it’s a good idea.

We will walk through creating a Docker container for your Django application. Docker gives you a standardized environment, which makes it easier to get up and running and more productive. This tutorial is aimed at those new to Docker who already have some experience with Django. Let’s get started!


Why containerize your Django application?

Django apps can be put into containers to help you work more productively and consistently. Here are the main reasons why you should use Docker for your Django project:

  • Creates a stable environment: Containers provide a stable environment with all dependencies installed, so you don’t have to worry about “it works on my machine” problems. This ensures that you can reproduce the app and use it on any system or server. Docker makes it simple to set up local environments for development, testing, and production.
  • Ensures reproducibility and portability: A Dockerized app bundles all the environment variables, dependencies, and configurations, so it always runs the same way. This makes it easier to deploy, especially when you’re moving apps between environments.
  • Facilitates collaboration between developers: Docker lets your team work in the same environment, so there’s less chance of conflicts from different setups. Shared Docker images make it simple for your team to get started with fewer setup requirements.
  • Speeds up deployment processes: Docker makes it easier for developers to get started with a new project quickly. It removes the hassle of setting up development environments and ensures everyone is working in the same place, which makes it easier to merge changes from different developers.

Getting started with Django and Docker

Setting up a Django app in Docker is straightforward. You don’t need to do much more than add in the basic Django project files.

Tools you’ll need

To follow this guide, make sure you have Docker installed and a working Python and Django setup.

If you need help with the installation, you can find detailed instructions on the Docker and Django websites.

How to Dockerize your Django project

The following six steps include code snippets to guide you through the process.

Step 1: Set up your Django project

1. Initialize a Django project. 

If you don’t have a Django project set up yet, you can create one with the following commands:

django-admin startproject my_docker_django_app
cd my_docker_django_app

2. Create a requirements.txt file. 

In your project, create a requirements.txt file to store dependencies:

pip freeze > requirements.txt
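
At this stage the file will usually contain little more than Django and its direct dependencies, for example (versions will vary with your environment):

Django==5.1.3
asgiref==3.8.1
sqlparse==0.5.2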

3. Update key environment settings.

You need to change a few settings in the settings.py file so they are read from environment variables when the container starts. This lets you vary them between development, testing, and production environments.

  # settings.py
  import os

  # The secret key is read from the environment
  SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")

  # Any non-empty value (for example DEBUG=1) enables debug mode; leave it unset to disable
  DEBUG = bool(os.environ.get("DEBUG", default=0))

  ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "127.0.0.1").split(",")

Step 2: Create a Dockerfile

A Dockerfile is a script that tells Docker how to build your Docker image. Put it in the root directory of your Django project. Here’s a basic Dockerfile setup for Django:

# Use the official Python runtime image
FROM python:3.13  

# Create the app directory
RUN mkdir /app

# Set the working directory inside the container
WORKDIR /app

# Set environment variables
# Prevents Python from writing pyc files to disk
ENV PYTHONDONTWRITEBYTECODE=1
# Prevents Python from buffering stdout and stderr
ENV PYTHONUNBUFFERED=1

# Upgrade pip
RUN pip install --upgrade pip 

# Copy the requirements file first (for better layer caching)
COPY requirements.txt /app/

# Install all Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Django project to the container
COPY . /app/

# Expose the Django port
EXPOSE 8000

# Run Django’s development server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Each line in the Dockerfile serves a specific purpose:

  • FROM: Selects the image with the Python version you need.
  • WORKDIR: Sets the working directory of the application within the container.
  • ENV: Sets the environment variables needed to build the application
  • RUN and COPY commands: Install dependencies and copy project files.
  • EXPOSE and CMD: Expose the Django server port and define the startup command.

You can build the Django Docker container with the following command:

docker build -t django-docker .

To see your image, you can run:

docker image list

The result will look something like this:

REPOSITORY      TAG       IMAGE ID       CREATED          SIZE
django-docker   latest    ace73d650ac6   20 seconds ago   1.55GB
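
If you want a quick sanity check before moving on, you can also run the image directly. This is only a local smoke test; the variable names assume the settings.py changes from Step 1:

docker run --rm -p 8000:8000 \
  -e DJANGO_SECRET_KEY=dev-only-secret \
  -e DEBUG=1 \
  -e DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1 \
  django-docker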

Although this is a great start in containerizing the application, you’ll need to make a number of improvements to get it ready for production.

  • The CMD runs manage.py runserver, which is only meant for development and should be replaced with a production WSGI server.
  • Reduce the size of the image by using a smaller base image.
  • Optimize the image by using a multi-stage build process.

Let’s get started with these improvements.

Update requirements.txt

Make sure to add gunicorn to your requirements.txt. It should look like this:

asgiref==3.8.1
Django==5.1.3
sqlparse==0.5.2
gunicorn==23.0.0
psycopg2-binary==2.9.10

Make improvements to the Dockerfile

The Dockerfile below has changes that solve the three items on the list. The changes to the file are as follows:

  • Updated the FROM python:3.13 image to FROM python:3.13-slim. This change reduces the size of the image considerably, as the image now only contains what is needed to run the application.
  • Added a multi-stage build process to the Dockerfile. When you build applications, there are usually many files left on the file system that are only needed during build time and are not needed once the application is built and running. By adding a build stage, you use one image to build the application and then move the built files to the second image, leaving only the built code. Read more about multi-stage builds in the documentation.
  • Added the Gunicorn WSGI server to enable a production-ready deployment of the application.

# Stage 1: Base build stage
FROM python:3.13-slim AS builder

# Create the app directory
RUN mkdir /app

# Set the working directory
WORKDIR /app 

# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1 

# Upgrade pip and install dependencies
RUN pip install --upgrade pip 

# Copy the requirements file first (better caching)
COPY requirements.txt /app/

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: Production stage
FROM python:3.13-slim

RUN useradd -m -r appuser && \
   mkdir /app && \
   chown -R appuser /app

# Copy the Python dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.13/site-packages/ /usr/local/lib/python3.13/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/

# Set the working directory
WORKDIR /app

# Copy application code
COPY --chown=appuser:appuser . .

# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1 

# Switch to non-root user
USER appuser

# Expose the application port
EXPOSE 8000 

# Start the application using Gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "my_docker_django_app.wsgi:application"]

Build the Docker container image again.

docker build -t django-docker .

After making these changes, we can run docker image list again:

REPOSITORY      TAG       IMAGE ID       CREATED         SIZE
django-docker   latest    3c62f2376c2c   6 seconds ago   299MB

You can see a significant improvement in the size of the container.

The size was reduced from about 1.55 GB to 299 MB, which leads to a faster deployment process when images are downloaded and lower storage costs when storing images.

You can also use the docker init command to generate a Dockerfile and compose.yml for your application as a starting point.

Step 3: Configure the Docker Compose file

A compose.yml file allows you to manage multi-container applications. Here, we’ll define both a Django container and a PostgreSQL database container.

The compose file makes use of an environment file called .env, which will make it easy to keep the settings separate from the application code. The environment variables listed here are standard for most applications:

services:
 db:
   image: postgres:17
   environment:
     POSTGRES_DB: ${DATABASE_NAME}
     POSTGRES_USER: ${DATABASE_USERNAME}
     POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
   ports:
     - "5432:5432"
   volumes:
     - postgres_data:/var/lib/postgresql/data
   env_file:
     - .env

 django-web:
   build: .
   container_name: django-docker
   ports:
     - "8000:8000"
   depends_on:
     - db
   environment:
     DJANGO_SECRET_KEY: ${DJANGO_SECRET_KEY}
     DEBUG: ${DEBUG}
     DJANGO_LOGLEVEL: ${DJANGO_LOGLEVEL}
     DJANGO_ALLOWED_HOSTS: ${DJANGO_ALLOWED_HOSTS}
     DATABASE_ENGINE: ${DATABASE_ENGINE}
     DATABASE_NAME: ${DATABASE_NAME}
     DATABASE_USERNAME: ${DATABASE_USERNAME}
     DATABASE_PASSWORD: ${DATABASE_PASSWORD}
     DATABASE_HOST: ${DATABASE_HOST}
     DATABASE_PORT: ${DATABASE_PORT}
   env_file:
     - .env
volumes:
   postgres_data:

And the example .env file:

DJANGO_SECRET_KEY=your_secret_key
DEBUG=True
DJANGO_LOGLEVEL=info
DJANGO_ALLOWED_HOSTS=localhost
DATABASE_ENGINE=postgresql_psycopg2
DATABASE_NAME=dockerdjango
DATABASE_USERNAME=dbuser
DATABASE_PASSWORD=dbpassword
DATABASE_HOST=db
DATABASE_PORT=5432
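
For anything beyond local experiments, replace your_secret_key with a randomly generated value. One way to generate one is with Django's own helper:

python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"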

Step 4: Update Django settings and configuration files

1. Configure database settings. 

Update settings.py to use PostgreSQL:

  DATABASES = {
       'default': {
           'ENGINE': 'django.db.backends.{}'.format(
               os.getenv('DATABASE_ENGINE', 'sqlite3')
           ),
           'NAME': os.getenv('DATABASE_NAME', 'polls'),
           'USER': os.getenv('DATABASE_USERNAME', 'myprojectuser'),
           'PASSWORD': os.getenv('DATABASE_PASSWORD', 'password'),
           'HOST': os.getenv('DATABASE_HOST', '127.0.0.1'),
           'PORT': os.getenv('DATABASE_PORT', 5432),
       }
   }

2. Set ALLOWED_HOSTS to read from environment files. 

In settings.py, set ALLOWED_HOSTS to:

   # 'DJANGO_ALLOWED_HOSTS' should be a single string of hosts separated by commas.
   # For example: 'DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1,[::1]'
   ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",")

3. Set the SECRET_KEY to read from environment files.

In settings.py, set SECRET_KEY to:

   # SECURITY WARNING: keep the secret key used in production secret!
   SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")

4. Set DEBUG to read from environment files.

In settings.py, set DEBUG to:

 # SECURITY WARNING: don't run with debug turned on in production!
 DEBUG = bool(os.environ.get("DEBUG", default=0))

Step 5: Build and run your new Django project

To build and start your containers, run:

docker compose up --build

This command will download any necessary Docker images, build the project, and start the containers. Once complete, your Django application should be accessible at http://localhost:8000.

Step 6: Test and access your application

Once the app is running, you can test it by navigating to http://localhost:8000. You should see Django’s welcome page, indicating that your app is up and running. To verify the database connection, try running a migration:

docker compose run django-web python manage.py migrate
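
Other one-off management commands work the same way, for example creating an admin user for the Django admin:

docker compose run django-web python manage.py createsuperuser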

Troubleshooting common issues with Docker and Django

Here are some common issues you might encounter and how to solve them:

  • Database connection errors: If Django can’t connect to PostgreSQL, verify that DATABASE_HOST in your .env file matches the database service name (db) in compose.yml and that the credentials match your settings.
  • File synchronization issues: Use the volumes directive in compose.yml to sync changes from your local files into the container, as shown in the sketch after this list.
  • Container restart loops or crashes: Use docker compose logs to inspect container errors and determine the cause of the crash.
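
For development, a bind mount keeps the container’s /app directory in sync with your working copy. Here is a minimal sketch based on the compose.yml above (you would typically remove this in production and rely on the code baked into the image):

 django-web:
   build: .
   volumes:
     - .:/app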

Optimizing your Django web application

To improve your Django Docker setup, consider these optimization tips:

  • Automate and secure builds: Use Docker’s multi-stage builds to create leaner images, removing unnecessary files and packages for a more secure and efficient build.
  • Optimize database access: Configure persistent database connections and caching to reduce connection overhead and boost performance; see the sketch after this list.
  • Efficient dependency management: Regularly update and audit dependencies listed in requirements.txt to ensure efficiency and security.
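
As a starting point, persistent connections and a simple cache can be configured directly in settings.py. This is a minimal sketch: CONN_MAX_AGE and the local-memory cache backend are standard Django settings, but the values shown are illustrative, and you would swap in a shared backend such as Redis or Memcached for multi-worker deployments:

   # Reuse database connections for up to 60 seconds instead of opening one per request
   DATABASES["default"]["CONN_MAX_AGE"] = 60

   # In-process cache; replace with a shared backend when running multiple workers
   CACHES = {
       "default": {
           "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
       }
   }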

Take the next step with Docker and Django

Containerizing your Django application with Docker is an effective way to simplify development, ensure consistency across environments, and streamline deployments. By following the steps outlined in this guide, you’ve learned how to set up a Dockerized Django app, optimize your Dockerfile for production, and configure Docker Compose for multi-container setups.

Docker not only helps reduce “it works on my machine” issues but also fosters better collaboration within development teams by standardizing environments. Whether you’re deploying a small project or scaling up for enterprise use, Docker equips you with the tools to build, test, and deploy reliably.

Ready to take the next step? Explore Docker’s powerful tools, like Docker Hub and Docker Scout, to enhance your containerized applications with scalable storage, governance, and continuous security insights.

Learn more 

Unlocking Efficiency with Docker for AI and Cloud-Native Development

By: Yiwen Xu

The need for secure and high quality software becomes more critical every day as the impact of vulnerabilities increases and related costs continue to rise. For example, flawed software cost the U.S. economy $2.08 trillion in 2020 alone, according to the Consortium for Information and Software Quality (CISQ). And, a software defect that might cost $100 to fix if found early in the development process can grow exponentially to $10,000 if discovered later in production. 

Docker helps you deliver secure, efficient applications by providing consistent environments and fast, reliable container management, building on best practices that let you discover and resolve issues earlier in the software development life cycle (SDLC).


Shifting left to ensure fewer defects

In a previous blog post, we talked about using the right tools, including Docker’s suite of products to boost developer productivity. Besides having the right tools, you also need to implement the right processes to optimize your software development and improve team productivity. 

The software development process is typically broken into two distinct loops, the inner and the outer loops. At Docker, we believe that investing in the inner loop is crucial. This means shifting security left and identifying problems as soon as you can. This approach improves efficiency and reduces costs by helping teams find and fix software issues earlier.

Using Docker tools to adopt best practices

Docker’s products help you adopt these best practices — we are focused on enhancing the software development lifecycle, especially around refining the inner loop. Products like Docker Desktop allow your dev team in the inner loop to run, test, code, and build everything fast and consistently. This consistency eliminates the “it works on my machine” issue, meaning applications behave the same in both development and production.  

Shifting left lets your dev team identify problems earlier in your software project lifecycle. When you detect issues sooner, you increase efficiency and help ensure secure builds and compliance. By shifting security left with Docker Scout, your dev teams can identify vulnerabilities sooner and help avoid issues down the road. 

Another example of shifting left involves testing — doing testing earlier in the process leads to more robust software and faster release cycles. This is when Testcontainers Cloud comes in handy because it enables developers to run reliable integration tests, with real dependencies defined in code. 

Accelerate development within the hybrid inner loop

We see more and more companies adopting the so-called hybrid inner loop, which combines the best of two worlds — local and cloud. The results provide greater flexibility for your dev teams and encourage better collaboration. For example, Docker Build Cloud uses the power of the cloud to speed up build time without sacrificing the local development experience that developers love. 

By using these Docker products across the software development life cycle, teams get quick feedback loops and faster issue resolution, ensuring a smooth development flow from inception to deployment. 

Simplifying AI application development

When you’re using the right tools and processes to accelerate your application delivery and maximize efficiency throughout your SDLC, processes that were once cumbersome become your new baseline, freeing up time for true innovation. 

Docker also helps accelerate innovation by simplifying AI/ML development. We are continually investing in AI to help your developers deliver AI-backed applications that differentiate your business and enhance competitiveness.

Docker AI tools

Docker’s GenAI Stack accelerates the incorporation of large language models (LLMs) and AI/ML into your code, enabling the delivery of AI-backed applications. All containers work harmoniously and are managed directly from Docker Desktop, allowing your team to monitor and adjust components without leaving their development environment. Deploying the GenAI Stack is quick and easy, and leveraging Docker’s containerization technology helps speed setup and simplify scaling as applications grow.

Earlier this year, we announced the preview of Docker Extension for GitHub Copilot. By standardizing best practices and enabling integrations with tools like GitHub Copilot, Docker empowers developers to focus on innovation, closing the gap from the first line of code to production.

And, more recently, we launched the Docker AI Catalog in Docker Hub. This new feature simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. Your dev team will benefit from shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Wrapping up

Docker products help you establish sound processes and practices related to shifting left and discovering issues earlier to avoid headaches down the road. This approach ultimately unlocks developer productivity, giving your dev team more time to code and innovate. Docker also allows you to quickly use AI to close knowledge gaps and offers trusted tools to build AI/ML applications and accelerate time to market. 

To see how Docker continues to empower developers with the latest innovations and tools, check out our Docker 2024 Highlights.

Learn about Docker’s updated subscriptions and find the ideal plan for your team’s needs.

Learn more

Building Trust into Your Software with Verified Components

Within software development, security and compliance are more than simple boxes to check. Each attestation and compliance check is backed by a well-considered risk assessment that aims to avoid ever-changing vulnerabilities and attack vectors. Software development teams don’t want to worry about vulnerabilities when they are focused on building something remarkable.

In this article, we explain how Docker Hub and Docker Scout can help development teams ensure a more secure and compliant software supply chain. 


Security starts with trusted foundations

Every structure needs a strong foundation. A weak base is where cracks begin to show. Using untrusted or outdated software is like building a skyscraper on sand, and security issues can derail progress, leading to costly fixes and delayed releases. By “shifting security left” — addressing vulnerabilities early in the development process — teams can avoid these setbacks down the road.  

Modern development demands a secure and compliant software supply chain. Unverified software or vulnerabilities buried deep within base images can become costly compliance issues, disrupting development timelines and eroding customer trust. One weak link in the supply chain can snowball into more significant issues, affecting product delivery and customer satisfaction. Without security and compliance checks, organizations will lack the credibility their customers rely on.

How Docker Hub and Scout help teams shift left

Software developers are like a construction crew building a skyscraper. The process requires specialized components — windows, elevators, wiring, concrete, and so on — which are found at a single supply depot and which work in harmony with each other. This idea is similar to microservices, which are pieced together to create modern applications. In this analogy, Docker Hub acts as the supply depot for a customer’s software supply chain, stocked with trusted container images that help developer teams streamline development.

Docker Hub is more than a container registry. It is the most widely trusted content distribution platform built on secure, verified, and dependable container images. Docker Official Images (DOI) and Docker Verified Publisher (DVP) programs provide a rock-solid base to help minimize risks and let development teams focus on creating their projects. 

Docker Hub simplifies supply chain security by ensuring developers start with trusted components. Its library of official and verified publisher images offers secure, up-to-date resources vetted for compliance and reliability, eliminating the risk of untrusted or outdated components.

Proactive risk management is critical to software development

To avoid breaking production environments, organizations need to plan ahead by catching and tracking common vulnerabilities and exposures (CVEs) early in the development process. Docker Scout enables proactive risk management by integrating security checks early in the development lifecycle. Scout reduces the likelihood of security incidents and streamlines the development process.

Additionally, Docker Scout Health Scores provide a straightforward framework for evaluating the security posture of container images used daily by development teams. Using an easy-to-understand alphabetical grading system (A to F), these scores assess CVEs in software components within Docker Hub. This feature lets developers quickly evaluate and select trusted content, ensuring a secure software supply chain.
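
For example, a quick check with the Docker Scout CLI surfaces the same kind of information locally (the image name below is a hypothetical placeholder):

docker scout quickview yourorg/app:latest   # summary of vulnerabilities and base image status
docker scout cves yourorg/app:latest        # detailed CVE listing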

Avoid shadow changes with IAM and RBAC for secure collaboration

Compliance is not glamorous, but it is essential to running a business. Development teams don’t want to have to worry about whether they are meeting industry standards — they want to know they are. Docker Hub makes compliance simple with pre-certified images and many features that take the guesswork out of governance. That means you can stay compliant while your teams keep growing and innovating.

The biggest challenge to scaling a team or growing your development operations is not about adding people — it’s about maintaining control without losing momentum. Tracking, reducing, and managing shadow changes means that your team does not lose the flow state in development velocity. 

Docker Hub’s Image Access Management (IAM) enforces precise permissions to ensure that only authorized people have access to modify sensitive information in repositories. Additionally, with role-based access control (RBAC), you’re not just delegating; you’re empowering your team with predefined roles that streamline onboarding, reduce mistakes, and keep everyone moving in harmony.

Docker Hub’s activity logs provide another layer of confidence as they let you track changes, enforce compliance, and build trust. These capabilities enhance security and boost collaboration by creating an environment where team members can focus on delivering high-quality applications.

Built-in trust

Without verified components, development teams can end up playing whack-a-mole with vulnerabilities. Time is lost. Money is spent. Trust is damaged. Now, picture a team working with trusted content and images that integrate security measures from the start. They deliver on time, on budget, and with confidence.

Building security into your applications doesn’t slow you down; it’s your superpower. Docker weaves trust and security into every part of your development process. Your applications are safeguarded, your delivery is accelerated, and your team is free to focus on what matters most — creating value.

Start your journey today. With Docker, you’re not just developing applications but building trust. Learn how trusted components help simplify compliance, enhance security, and empower your team to innovate fearlessly. 

Learn more

Docker Desktop 4.37: AI Catalog and Command-Line Efficiency

By: Yiwen Xu

Key features of the Docker Desktop 4.37 release include Docker AI Catalog integration, command-line operations for managing Docker Desktop itself, upgraded components, and a set of bug fixes and stability improvements.

The Docker Desktop 4.37 release brings incremental improvements that make developers’ lives easier by addressing common challenges in modern software development. With a focus on integrating AI resources and streamlining operational workflows, this update ensures developers can work faster, smarter, and more effectively.


Unlocking AI-driven development with Docker AI Catalog integration

AI/ML development is exploding, but many developers face hurdles accessing prebuilt AI models and tools. They often need to search across multiple platforms, wasting valuable time piecing together resources and overcoming compatibility issues. This fragmentation slows down innovation and makes it harder for teams to bring AI-driven features into their applications.

With Docker Desktop 4.37, the AI Catalog in Docker Hub is now accessible directly through Docker Desktop. This seamless integration enables developers to discover, pull, and integrate AI models into their workflows effortlessly. Whether you’re incorporating pretrained machine learning models or exploring generative AI tools, Docker Desktop ensures these resources are just a click away.

Figure 1: AI Catalog in Docker Hub is now accessible directly through Docker Desktop.

Key benefits:

  • Streamlined discovery: You don’t need to leave your development environment to find AI tools. The AI Catalog is built into Docker Hub and can be immediately accessed from Docker Desktop.
  • Faster prototyping: By eliminating friction in accessing AI resources, teams can focus on building and iterating faster.
  • Enhanced compatibility: Docker’s containerized approach ensures AI models run consistently across environments, reducing setup headaches.

Whether you’re developing cutting-edge AI/ML applications or just beginning to experiment with AI tools, this integration empowers developers to innovate without distraction.

Command-line operations: Control Docker Desktop your way

For developers who automate workflows or work heavily in terminal environments, relying solely on graphical user interfaces (GUIs) can be limiting. Starting, stopping, or troubleshooting Docker Desktop often requires GUI navigation, which can disrupt automation pipelines and slow down power users.

Docker Desktop 4.37 introduces robust command-line capabilities for managing Docker Desktop itself. Developers can now perform essential tasks such as starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line.
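
In practice, the new operations follow the pattern below (a sketch based on the capabilities described above; run docker desktop --help to see the exact commands available in your installation):

docker desktop status     # check whether Docker Desktop is running
docker desktop stop       # stop Docker Desktop
docker desktop start      # start it again
docker desktop restart    # restart Docker Desktop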

Key benefits:

  • Improved automation: Script Docker Desktop operations into CI/CD workflows, eliminating manual intervention.
  • Faster troubleshooting: Check the status and restart Docker Desktop without leaving the terminal, streamlining issue resolution.
  • Developer flexibility: A smoother, distraction-free experience for developers who prefer terminal-based workflows.

This new feature bridges the gap between GUI and command-line preferences, allowing developers to tailor their workflows to their needs.

Upgraded components: Keeping developers ahead

Docker Desktop 4.37 includes significant upgrades to its underlying components, bringing enhanced performance, security, and feature sets such as GPU-accelerated workflows.

Here’s what’s new:

Bug fixes and stability improvements

At Docker, we aim to provide a stable and dependable development platform so developer teams can focus on creating, not troubleshooting. Docker Desktop 4.37 also addresses several key bugs and usability concerns:

  • Default disk usage limit: New installations now default to a 1TB disk limit, offering additional flexibility for developers with large containerized applications.
  • Loopback AF_VSOCK connections: Fixed to ensure container communication reliability.
  • CLI context reset fixes: Prevent unintended resets when restoring default settings.
  • Dashboard synchronization: Ensures consistent behavior between the Docker Desktop Dashboard and the Docker daemon after engine restarts.
  • Resource Saver mode stability: Resolves issues with mode reengagement, improving power efficiency for resource-conscious users.

Wrapping up 

Docker Desktop 4.37 offers a step forward in enabling developers to innovate. With a focus on AI-driven development and automation-friendly operations, this release aligns with the evolving needs of modern software teams.

Learn more

Docker 2024 Highlights: Innovations in AI, Security, and Empowering Development Teams

In 2024, as developers and engineering teams focused on delivering high-quality, secure software faster, Docker continued to evolve with impactful updates and a streamlined user experience. This commitment to empowering developers was recognized in the annual Stack Overflow Developer Survey, where Docker ranked as one of the most loved and widely used tools for yet another year. Here’s a look back at Docker’s 2024 milestones and how we helped teams build, test, and deploy with greater ease, security, and control than ever.


Streamlining the developer experience

Docker focused heavily on streamlining workflows, creating efficiencies, and reducing the complexities often associated with managing multiple tools. One big announcement in 2024 is our upgraded Docker plans. With the launch of updated Docker subscriptions, developers now have access to the entire suite of Docker products under their existing subscription. 

The all-in-one subscription model enables seamless integration of Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud, giving developers everything they need to build efficiently. By providing easy access to the suite of products and flexibility to scale, Docker allows developers to focus on what matters most — building and innovating without unnecessary distractions.

For more details on Docker’s all-in-one subscription approach, check out our Docker plans announcement.

Build up to 39x faster with Docker Build Cloud

Docker Build Cloud, introduced in 2024, brings the best of two worlds — local development and the cloud to developers and engineering teams worldwide. It offloads resource-intensive build processes to the cloud, ensuring faster, more consistent builds while freeing up local machines for other tasks.

A standout feature is shared build caches, which dramatically improve efficiency for engineering teams working on large-scale projects. Shared caches allow teams to avoid redundant rebuilds by reusing intermediate layers of images across builds, accelerating iteration cycles and reducing resource consumption. This approach is especially valuable for collaborative teams working on shared codebases, as it minimizes duplicated effort and enhances productivity.

Docker Build Cloud also offers native support for multi-architecture builds, eliminating the need for setting up and maintaining multiple native builders. This support removes the challenges associated with emulation, further improving build efficiency.

We’ve designed Docker Build Cloud to be easy to set up wherever you run your builds, without requiring a massive lift-and-shift effort. Docker Build Cloud also works well with Docker Compose, GitHub Actions, and other CI solutions. This means you can seamlessly incorporate Docker Build Cloud into your existing development tools and services and immediately start reaping the benefits of enhanced speed and efficiency.

Check out our build time savings calculator to estimate your potential savings in hours and dollars. 

Optimizing development workflows with performance enhancements

In 2024, Docker Desktop introduced a series of enterprise-grade performance enhancements designed to streamline development workflows at scale. These updates cater to the unique needs of development teams operating in diverse, high-performance environments.

One notable feature is the Virtual Machine Manager (VMM) in Docker Desktop for Mac, which provides a robust alternative to the Apple Virtualization Framework. Available since Docker Desktop 4.35, VMM significantly boosts performance for native Arm-based images, delivering faster and more efficient workflows for M1 and M2 Mac users. For development teams relying on Apple’s latest hardware, this enhancement translates into reduced build times and a smoother experience when working with containerized applications.

Additionally, Docker Desktop expanded its platform support to include Red Hat Enterprise Linux (RHEL) and Windows on Arm architectures, enabling organizations to maintain a consistent Docker Desktop experience across a wide array of operating systems. This flexibility ensures that development teams can optimize their workflows regardless of the underlying platform, leveraging platform-specific optimizations while maintaining uniformity in their tooling.

These advancements reflect Docker’s unwavering commitment to speed, reliability, and cross-platform support, ensuring that development teams can scale their operations without bottlenecks. By minimizing downtime and enhancing performance, Docker Desktop empowers developers to focus on innovation, improving productivity across even the most demanding enterprise environments.

More options to improve file operations for large projects

We enhanced Docker Desktop with synchronized file shares (Figure 1), a feature that can significantly improve file operation speeds by 2-10x. This enhancement brings fast and flexible host-to-VM file sharing, offering a performance boost for developers dealing with extensive codebases.

Synchronized file sharing is ideal for developers who:

  • Develop on projects that consist of a significant number of files (such as PHP or Node projects).
  • Develop using large repositories or monorepos with more than 100,000 files, totaling significant storage.
  • Utilize virtual file systems (such as VirtioFS, gRPC FUSE, or osxfs) and face scalability issues with their workflows.
  • Encounter performance limitations and want a seamless file-sharing solution without worrying about ownership conflicts.

This integration streamlines workflows, allowing developers to focus more on coding and less on managing file synchronization issues and slow file read times. 

Screenshot of Docker Desktop showing Synchronized file shares within Resources.
Figure 1: Synchronized file shares.

Enhancing developer productivity with Docker Debug 

Docker Debug enhances the ability of developer teams to debug any container, especially those without a shell (that is, distroless or scratch images). The ability to peek into “secure” images significantly improves the debugging experience for both local and remote containerized applications. 

Docker Debug does this by attaching a dedicated debugging toolkit to any image and allows developers to easily install additional tools for quick issue identification and resolution. Docker Debug not only streamlines debugging for both running and stopped containers but also is accessible directly from both the Docker Desktop CLI and GUI (Figure 2). 

Screenshot of Docker Desktop showing Docker Debug.
Figure 2: Docker Debug.

Being able to troubleshoot images without modifying them is crucial for maintaining the security and performance of containerized applications, especially those images that traditionally have been hard to debug. Docker Debug offers:

  • Streamlined debugging process: Easily debug local and remote containerized applications, even those not running, directly from Docker Desktop.
  • Cross-device and cloud compatibility: Initiate debugging effortlessly from any device, whether local or in the cloud, enhancing flexibility and productivity.

Docker Debug improves productivity and seamless integration. The docker debug command simplifies attaching a shell to any container or image. This capability reduces the cognitive load on developers, allowing them to focus on solving problems rather than configuring their environment. 
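
For example, attaching a debug shell to a running container is a single command (the container name here is a hypothetical placeholder):

docker debug my-app-container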

Ensuring reliable image builds with Docker Build checks

Docker Desktop 4.33 was a big release because, in addition to including the GA release of Docker Debug, it included the GA release of Docker Build checks, a new feature that ensures smoother and more reliable image builds. Build checks automatically validate common issues in your Dockerfiles before the build process begins, catching errors like invalid syntax, unsupported instructions, or missing dependencies. By surfacing these issues upfront, Docker Build checks help developers save time and avoid costly build failures.

You can access Docker Build checks in the CLI and in the Docker Desktop Builds view. The feature also works seamlessly with Docker Build Cloud, both locally and through CI. Whether you’re optimizing your Dockerfiles or troubleshooting build errors, Docker Build checks let you create efficient, high-quality container images with confidence — streamlining your development workflow from start to finish.
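
From the CLI, build checks can be run without producing an image; this sketch assumes a recent Buildx release that supports the --check flag:

docker build --check .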

Onboarding and learning resources for developer success  

To further reduce friction, Docker revamped its learning resources and integrated new tools to enhance developer onboarding. By adding beginner-friendly tutorials, Docker’s learning center makes it easier for developers to ramp up and quickly learn to use Docker tools, helping them spend more time coding and less time troubleshooting. 

As Docker continues to rank as a top developer tool globally, we’re dedicated to empowering our community with continuous learning support.

Built-in container security from code to production

In an era where software supply chain security is essential, Docker has raised the bar on container security. With integrated security measures across every phase of the development lifecycle, Docker helps teams build, test, and deploy confidently.

Proactive security insights with Docker Scout Health Scores

Docker Scout, launched in 2023,  has become a cornerstone of Docker’s security ecosystem, empowering developer teams to identify and address vulnerabilities in container images early in the development lifecycle. By integrating with Docker Hub, Docker Desktop, and CI/CD workflows, Scout ensures that security is seamlessly embedded into every build. 

Addressing vulnerabilities during the inner loop — the development phase — is estimated to be up to 100 times less costly than fixing them in production. This underscores the critical importance of early risk visibility and remediation for engineering teams striving to deliver secure, production-ready software efficiently.

In 2024, we announced Docker Scout Health Scores (Figure 3), a feature designed to better communicate the security posture of container images development teams use every day. Docker Scout Health Scores provide a clear, alphabetical grading system (A to F) that evaluates common vulnerabilities and exposures (CVEs) for software components within Docker Hub. This feature allows developers to quickly assess and wisely choose trusted content for a secure software supply chain. 

Screenshot of Docker Scout health score page showing checks for high-profile vulnerabilities, supply chain attestations, unapproved images, outdated images, and more.
Figure 3: Docker Scout health score.

For a deeper dive, check out our blog post on enhancing container security with Docker Scout and secure repositories.

Air-gapped containers: Enhanced security for isolated environments

Docker introduced support for air-gapped containers in Docker Desktop 4.31, addressing the unique needs of highly secure, offline environments. Air-gapped containers enable developers to build, run, and test containerized applications without requiring an active internet connection. 

This feature is crucial for organizations operating in industries with stringent compliance and security requirements, such as government, healthcare, and finance. By allowing developers to securely transfer container images and dependencies to air-gapped systems, Docker simplifies workflows and ensures that even isolated environments benefit from the power of containerization.

Strengthening trust with SOC 2 Type 2 and ISO 27001 certifications

Docker also achieved two major milestones in its commitment to security and reliability: SOC 2 Type 2 attestation and ISO 27001 certification. These globally recognized standards validate Docker’s dedication to safeguarding customer data, maintaining robust operational controls, and adhering to stringent security practices. SOC 2 Type 2 attestation focuses on the effective implementation of security, availability, and confidentiality controls, while ISO 27001 certification ensures compliance with best practices for managing information security systems.

These certifications provide developers and organizations with increased confidence in Docker’s ability to support secure software supply chains and protect sensitive information. They also demonstrate Docker’s focus on aligning its services with the needs of modern enterprises.

Accelerating success for development teams and organizations

In 2024, Docker introduced a range of features and enhancements designed to empower development teams and streamline operations across organizations. From harnessing the potential of AI to simplifying deployment workflows and improving security, Docker’s advancements are focused on enabling teams to work smarter and build with confidence. By addressing key challenges in development, management, and security, Docker continues to drive meaningful outcomes for developers and businesses alike.

Docker Home: A central hub to access and manage Docker products

Docker introduced Docker Home (Figure 4), a central hub for users to access Docker products, manage subscriptions, adjust settings, and find resources — all in one place. This approach simplifies navigation for developers and admins. Docker Home allows admins to manage organizations, users, and onboarding processes, with access to dashboards for monitoring Docker usage.

Future updates will add personalized features for different roles, and business subscribers will gain access to tools like the Docker Support portal and organization-wide notifications.

Screenshot of Docker Home showing options to explore Docker products, Admin console, and more.
Figure 4: Docker Home.

Empowering AI innovation  

Docker’s ecosystem supports AI/ML workflows, helping developers work with these cutting-edge technologies while staying cloud-native and agile. Read the Docker Labs GenAI series to see how we’re innovating and experimenting in the open.

Through partnerships like those with NVIDIA and GitHub, Docker ensures seamless integration of AI tools, allowing teams to rapidly experiment, deploy, and iterate. This emphasis on enabling advanced tech aligns Docker with organizations looking to leverage AI and ML in containerized environments.

Optimizing AI application development with Docker Desktop and NVIDIA AI Workbench

Docker and NVIDIA partnered to integrate Docker Desktop with NVIDIA AI Workbench, streamlining AI development workflows. This collaboration simplifies setup by automatically installing Docker Desktop when selected as the container runtime in AI Workbench, allowing developers to focus on creating, testing, and deploying AI models without configuration hassles. By combining Docker’s containerization capabilities with NVIDIA’s advanced AI tools, this integration provides a seamless platform for model training and deployment, enhancing productivity and accelerating innovation in AI application development. 

Docker + GitHub Copilot: AI-powered developer productivity

We announced that Docker joined GitHub’s Partner Program and unveiled the Docker extension for GitHub Copilot (@docker). This extension is designed to assist developers in working with Docker directly within their GitHub workflows. This integration extends GitHub Copilot’s technology, enabling developers to generate Docker assets, learn about containerization, and analyze project vulnerabilities using Docker Scout, all from within the GitHub environment.

Accelerating AI development with the Docker AI catalog

Docker launched the AI Catalog, a curated collection of generative AI images and tools designed to simplify and accelerate AI application development. This catalog offers developers access to powerful models like IBM Granite, Llama, Mistral, Phi 2, and SolarLLM, as well as applications such as JupyterHub and H2O.ai. By providing essential tools for machine learning, model deployment, inference optimization, orchestration, ML frameworks, and databases, the AI Catalog enables developers to build and deploy AI solutions more efficiently. 

The Docker AI Catalog addresses common challenges in AI development, such as decision overload from the vast array of tools and frameworks, steep learning curves, and complex configurations. By offering a curated list of trusted content and container images, Docker simplifies the decision-making process, allowing developers to focus on innovation rather than setup. This initiative underscores Docker’s commitment to empowering developers and publishers in the AI space, fostering a more streamlined and productive development environment. 

Streamlining enterprise administration 

Simplified deployment and management with Docker’s MSI and PKG installers

Docker simplifies deploying and managing Docker Desktop with the new MSI Installer for Windows and PKG Installer for macOS. The MSI Installer enables silent installations, automated updates, and login enforcement, streamlining workflows for IT admins. Similarly, the PKG Installer offers macOS users easy deployment and management with standard tools. These installers enhance efficiency, making it easier for organizations to equip teams and maintain secure, compliant environments.

These new installers also align with Docker’s commitment to simplifying the developer experience and improving organizational management. Whether you’re setting up a few machines or deploying Docker Desktop across an entire enterprise, these tools provide a reliable and efficient way to keep teams equipped and ready to build.

New sign-in enforcement options enhance security and help streamline IT administration 

Docker simplifies IT administration and strengthens organizational security with new sign-in enforcement options for Docker Desktop. These features allow organizations to ensure users are signed in while using Docker, aligning local software with modern security standards. With flexible deployment options — including macOS Config Profiles, Windows Registry Keys, and the cross-platform registry.json file — IT administrators can easily enforce policies that prevent tampering and enhance security. These tools empower organizations to manage development environments more effectively, providing a secure foundation for teams to build confidently.

Desktop Insights: Unlocking performance and usage analytics

Docker introduced Desktop Insights, a powerful feature that provides developers and teams with actionable analytics to optimize their use of Docker Desktop. Accessible through the Docker Dashboard, Desktop Insights offers a detailed view of resource usage, build times, and performance metrics, helping users identify inefficiencies and fine-tune their workflows (Figure 5).

Whether you’re tracking the speed of container builds or understanding how resources like CPU and memory are being utilized, Desktop Insights empowers developers to make data-driven decisions. By bringing transparency to local development environments, this feature aligns with Docker’s mission to streamline container workflows and ensure developers have the tools to build faster and more effectively.

Screenshot of Docker Insights within Admin console, showing data for Total active users, Users with license, Total Builds, Total Containers run, and more
Figure 5: Desktop Insights dashboard.

New usage dashboards in Docker Hub

Docker introduced Usage dashboards in Docker Hub, giving organizations greater visibility into how they consume resources. These dashboards provide detailed insights into storage and image pull activity, helping teams understand their usage patterns at a granular level (Figure 6). 

By breaking down data by repository, tag, and even IP address, the dashboards make it easy to identify high-traffic images or repositories that might require optimization. With this added transparency, teams can better manage their storage, avoid unnecessary pull requests, and optimize workflows to control costs. 

Usage dashboards enhance accountability and empower organizations to fine-tune their Docker Hub usage, ensuring resources are used efficiently and effectively across all projects.

Screenshot of Docker Usage dashboard showing a graph of daily pulls over time.
Figure 6: Usage dashboard.

Enhancing security with organization access tokens

Docker introduced organization access tokens, which let teams manage access to Docker Hub repositories at an organizational level. Unlike personal access tokens tied to individual users, these tokens are associated with the organization itself, allowing for centralized control and reducing reliance on individual accounts. This approach enhances security by enabling fine-grained permissions and simplifying the management of automated processes and CI/CD pipelines. 

Organization access tokens offer several advantages, including the ability to set specific access permissions for each token, such as read or write access to selected repositories. They also support expiration dates, aligning with compliance requirements and bolstering security. By providing visibility into token usage and centralizing management within the Admin Console, these tokens streamline operations and improve governance for organizations of all sizes. 

Docker’s vision for 2025

Docker’s journey doesn’t end here. In 2025, Docker remains committed to expanding its support for cloud-native and AI/ML development, reinforcing its position as the go-to container platform. New integrations and expanded multi-cloud capabilities are on the horizon, promising a more connected and versatile Docker ecosystem.

As Docker continues to build for the future, we’re committed to empowering developers, supporting the open source community, and driving efficiency in software development at scale. 

2024 was a year of transformation for Docker and the developer community. With major advances in our product suite, continued focus on security, and streamlined experiences that deliver value, Docker is ready to help developer teams and organizations succeed in an evolving tech landscape. As we head into 2025, we invite you to explore Docker’s suite of tools and see how Docker can help your team build, innovate, and secure software faster than ever.

Learn more

From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity

By: Yiwen Xu

Modern application development has evolved dramatically. Gone are the days when a couple of developers, a few machines, and some pizza were enough to launch an app. As the industry grew, DevOps revolutionized collaboration, and Docker popularized containerization, simplifying workflows and accelerating delivery. 

Later, DevSecOps brought security into the mix. Fast forward to today, and the demand for software has never been greater, with more than 750 million cloud-native apps expected by 2025.

This explosion in demand has created a new challenge: complexity. Applications now span multiple programming languages, frameworks, and architectures, integrating both legacy and modern systems. Development workflows must navigate hybrid environments — local, cloud, and everything in between. This complexity makes it harder for companies to deliver innovation on time and stay competitive. 


To overcome these challenges, you need a development platform that’s as reliable and ubiquitous as electricity or Wi-Fi — a platform that works consistently across diverse applications, development tools, and environments. Whether you’re just starting to move toward microservices or fully embracing cloud-native development, Docker meets your team where they are, integrates seamlessly into existing workflows, and scales to meet the needs of individual developers, teams, and entire enterprises.

Docker: Simplifying the complex

The Docker suite of products provides the tools you need to accelerate development, modernize legacy applications, and empower your team to work efficiently and securely. With Docker, you can:

  • Modernize legacy applications: Docker makes it easy to containerize existing systems, bringing them closer to modern technology stacks without disrupting operations.
  • Boost productivity for cloud-native teams: Docker ensures consistent environments, integrates with CI/CD workflows, supports hybrid development environments, and enhances collaboration.

Consistent environments: Build once, run anywhere

Docker ensures consistency across development, testing, and production environments, eliminating the dreaded “works on my machine” problem. With Docker, your team can build applications in unified environments — whether on macOS, Windows, or Linux — for reliable code, better collaboration, and faster time to market.

With Docker Desktop, developers have a powerful GUI and CLI for managing containers locally. Integration with popular IDEs like Visual Studio Code allows developers to code, build, and debug within familiar tools. Built-in Kubernetes support enables teams to test and deploy applications on a local Kubernetes cluster, giving developers confidence that their code will perform in production as expected.

Integrated workflows for hybrid environments

Development today spans both local and cloud environments. Docker bridges the gap and provides flexibility with solutions like Docker Build Cloud, which speeds up build pipelines by up to 39x using cloud-based, multi-platform builders. This allows developers to focus more on coding and innovation, rather than waiting on builds.
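
As a rough sketch of what that looks like in practice (assuming a Docker Build Cloud builder named default-builder already exists in a hypothetical myorg organization; the locally registered builder name may differ in your setup):

# Register the cloud builder locally, then run a multi-platform build on it instead of your machine
docker buildx create --driver cloud myorg/default-builder
docker buildx build --builder cloud-myorg-default-builder --platform linux/amd64,linux/arm64 -t myorg/app:1.0.0 --push .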

Docker also integrates seamlessly with CI/CD tools like Jenkins, GitLab CI, and GitHub Actions. This automation reduces manual intervention, enabling consistent and reliable deployments. Whether you’re building in the cloud or locally, Docker ensures flexibility and productivity at every stage.

Team collaboration: Better together

Collaboration is central to Docker. With integrations like Docker Hub and other registries, teams can easily share container images and work together on builds. Docker Desktop features like Docker Debug and the Builds view dashboards empower developers to troubleshoot issues together, speeding up resolution and boosting team efficiency.

Docker Scout provides actionable security insights, helping teams identify and resolve vulnerabilities early in the development process. With these tools, Docker fosters a collaborative environment where teams can innovate faster and more securely.

Why Docker?

In today’s fast-paced development landscape, complexity can slow you down. Docker’s unified platform reduces complexity as it simplifies workflows, standardizes environments, and empowers teams to deliver software faster and more securely. Whether you’re modernizing legacy applications, bridging local and cloud environments, or building cutting-edge, cloud-native apps, Docker helps you achieve efficiency and scale at every stage of the development lifecycle.

Docker offers a unified platform that combines industry-leading tools — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud — into a seamless experience. Docker’s flexible plans ensure there’s a solution for every developer and every team, from individual contributors to large enterprises.

Get started today

Ready to simplify your development workflows? Start your Docker journey now and equip your team with the tools they need to innovate, collaborate, and deliver with confidence.

Looking for tips and tricks? Subscribe to Docker Navigator for the latest updates and insights delivered straight to your inbox.

Learn more

Learn How to Optimize Docker Hub Costs With Our Usage Dashboards

Effective infrastructure management is crucial for organizations using Docker Hub. Without a clear understanding of resource consumption, unexpected usage can creep in and quickly skyrocket, particularly if pulls and storage needs are not budgeted and forecasted correctly. By implementing proactive cost controls and monitoring usage patterns, development teams can sustain their Docker Hub usage while keeping expenses under control. 

To support these goals, we’ve introduced new Docker Hub Usage dashboards, offering organizations the ability to access and analyze their usage patterns for storage and pulls. 

Docker Hub’s Usage dashboards put you in control, giving visibility into every pull and image your Docker systems request. Each pull and cache becomes a deliberate choice — not a random event — so you can make every byte count. With clear insights into what’s happening and why, you can design more efficient, optimized systems.

Reclaim control and manage technical resources by kicking bad habits

Figure 1: Docker Hub Usage dashboards.

The Docker Hub Usage dashboards (Figure 1) provide valuable insights, allowing teams to track peaks and valleys, detect high usage periods, and identify the images and repositories driving the most consumption. This visibility not only aids in managing usage but also strengthens continuous improvement efforts across your software supply chain, helping teams build applications more efficiently and sustainably. 

This information helps development teams to stay on top of challenges, such as: 

  • Redundant pulls and misconfigured repositories: These can quickly and quietly drive up technical expenses while falling out of scope of the most relevant or critical use cases. Docker Hub’s Usage dashboards can help development teams identify patterns and optimize accordingly. They let you view usage trends across IPs and users as well, which helps with pinpointing high consumption areas and ensuring accountability in an organization when it comes to resource management. 
  • Poor caching management: Repository insights and image tagging help customers assess internal usage patterns, such as frequently accessed images, where there might be an opportunity to improve caching. With proper governance models, organizations can also establish policies and processes that reduce the variability of resource usage as a whole. This goal goes beyond keeping track of seasonal usage patterns to help you design more predictable usage patterns so you can budget accordingly. 
  • Accidental automation: Accidental automated system activities can quietly inflate your usage. Say a CI/CD pipeline or automated script is configured to pull images more often than it should, for example on every build rather than only when the version actually changes. A minimal guard against this is sketched below.
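
As a minimal sketch (the image name and version are purely illustrative), a CI step can skip the pull entirely when the exact version is already in the local cache:

IMAGE="myorg/myapp:1.4.2"
# Only pull when this exact image is not already present locally
if ! docker image inspect "$IMAGE" >/dev/null 2>&1; then
  docker pull "$IMAGE"
fi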

Usage dashboards can help you identify these inefficiencies by showing detailed pull data associated with automated tooling. This information can help your teams quickly identify and adjust misconfigured systems, fine-tune automations to only pull when needed, and ultimately focus on the most relevant use cases for your organization, avoiding accidental overuse of resources:

Figure 2: Details from the Usage dashboards, with columns for date/hour, username, repository, IPs, version checks, pulls, and more.

Docker Hub’s Usage dashboards offer a comprehensive view of your usage data, including downloadable CSV reports that include metrics such as pull counts, repository names, IP addresses, and version checks (Figure 2). This granular approach allows your organization to gain valuable insights and trend data to help optimize your team’s workflows and inform policies. 

Integrate robust operational principles into your development pipeline by leveraging these data-driven reports and maintain control over resource consumption and operational efficiency with Docker Hub. 

Learn more

Accelerating AI Development with the Docker AI Catalog

Developers are increasingly expected to integrate AI capabilities into their applications, but they also face many challenges. Namely, the steep learning curve, coupled with an overwhelming array of tools and frameworks, makes the process tedious. Docker aims to bridge this gap with the Docker AI Catalog, a curated experience designed to simplify AI development and empower both developers and publishers.

Why Docker for AI?

Docker and container technology have been key tools for developers at the forefront of AI applications for the past few years. Now, Docker is doubling down on that effort with our AI Catalog. Developers using Docker’s suite of products are often responsible for building, deploying, and managing complex applications — and, now, they must also navigate generative AI (GenAI) technologies, such as large language models (LLMs), vector databases, and GPU support.

For developers, the AI Catalog simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. This approach removes the hassle of evaluating numerous tools and configurations, allowing developers to focus on building innovative AI applications.

Key benefits for development teams

The Docker AI Catalog is tailored to help users overcome common hurdles in the evolving AI application development landscape, such as:

  • Decision overload: The GenAI ecosystem is crowded with new tools and frameworks. The Docker AI Catalog simplifies the decision-making process by offering a curated list of trusted content and container images, so developers don’t have to wade through endless options.
  • Steep learning curve: With the rise of new technologies like LLMs and retrieval-augmented generation (RAG), the learning curve can be overwhelming. Docker provides an all-in-one resource to help developers quickly get up to speed.
  • Complex configurations preventing production readiness: Running AI applications often requires specialized hardware configurations, especially with GPUs. Docker’s AI stacks make this process more accessible, ensuring that developers can harness the full power of these resources without extensive setup.

The result? Shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Empowering publishers

For Docker verified publishers, the AI Catalog provides a platform to differentiate themselves in a crowded market. Independent software vendors (ISVs) and open source contributors can promote their content, gain insights into adoption, and improve visibility to a growing community of AI developers.

Key features for publishers include:

  • Increased discoverability: Publishers can highlight their AI content within a trusted ecosystem used by millions of developers worldwide.
  • Metrics and insights: Verified publishers gain valuable insights into the performance of their content, helping them optimize strategies and drive engagement.

Unified experience for AI application development

The AI Catalog is more than just a repository of AI tools. It’s a unified ecosystem designed to foster collaboration between developers and publishers, creating a path forward for more innovative approaches to building applications supported by AI capabilities. Developers get easy access to essential AI tools and content, while publishers gain the visibility and feedback they need to thrive in a competitive marketplace.

With Docker’s trusted platform, development teams can build AI applications confidently, knowing they have access to the most relevant and reliable tools available.

The road ahead: What’s next?

Docker will launch the AI Catalog in preview on November 12, 2024, alongside a joint webinar with MongoDB. This initiative will further Docker’s role as a leader in AI application development, ensuring that developers and publishers alike can take full advantage of the opportunities presented by AI tools.

Stay tuned for more updates and prepare to dive into a world of possibilities with the Docker AI Catalog. Whether you’re an AI developer seeking to streamline your workflows or a publisher looking to grow your audience, Docker has the tools and support you need to succeed.

Ready to simplify your AI development process? Explore the AI Catalog and get access to trusted content that will accelerate your development journey. Start building smarter, faster, and more efficiently.

For publishers, now is the perfect time to join the AI Catalog and gain visibility for your content. Become a trusted source in the AI development space and connect with millions of developers looking for the right tools to power their next breakthrough.

Learn more

Raspberry Pi USB 3 Hub on sale now at $12

Most Raspberry Pi single-board computers, with the exception of the Raspberry Pi Zero and A+ form factors, incorporate an on-board USB hub to fan out a single USB connection from the core silicon, and provide multiple downstream USB Type-A ports. But no matter how many ports we provide, sometimes you just need more peripherals than we have ports. And with that in mind, today we’re launching the official Raspberry Pi USB 3 Hub, a high-quality four-way USB 3.0 hub for use with your Raspberry Pi or other, lesser, computer.

Key features include:

  • A single upstream USB 3.0 Type-A connector on an 8 cm captive cable
  • Four downstream USB 3.0 Type-A ports
  • Aggregate data transfer speeds up to 5 Gbps
  • USB-C socket for optional external 3A power supply (sold separately)

Race you to the bottom

Why design our own hub? Well, we’d become frustrated with the quality and price of the hubs available online. Either you pay a lot of money for a nicely designed and reliable product, which works well with a broad range of hosts and peripherals; or you cheap out and get something much less compatible, or unreliable, or ugly, or all three. Sometimes you spend a lot of money and still get a lousy product.

It felt like we were trapped in a race to the bottom, where bad quality drives out good, and marketplaces like Amazon end up dominated by the cheapest thing that can just about answer to the name “hub”.

So, we worked with our partners at Infineon to source a great piece of hub silicon, CYUSB3304, set Dominic to work on the electronics and John to work on the industrial design, and applied our manufacturing and distribution capabilities to make it available at the lowest possible price. The resulting product works perfectly with all models of Raspberry Pi computer, and it bears our logo because we’re proud of it: we believe it’s the best USB 3.0 hub on the market today.

Grab one and have a play: we think you’ll like it.

The post Raspberry Pi USB 3 Hub on sale now at $12 appeared first on Raspberry Pi.

Announcing IBM Granite AI Models Now Available on Docker Hub

We are thrilled to announce that Granite models, IBM’s family of open source and proprietary models built for business, as well as Red Hat InstructLab model alignment tools, are now available on Docker Hub.

Now, developer teams can easily access, deploy, and scale applications using IBM’s AI models specifically designed for developers.

This news will be officially announced during the AI track of the keynote at IBM TechXchange on October 22. Attendees will get an exclusive look at how IBM’s Granite models on Docker Hub accelerate AI-driven application development across multiple programming languages.

Why Granite on Docker Hub?

With a principled approach to data transparency, model alignment, and security, IBM’s open source Granite models represent a significant leap forward in natural language processing. The models are available under an Apache 2.0 license, empowering developer teams to bring generative AI into mission-critical applications and workflows. 

Granite models deliver superior performance in coding and targeted language tasks at lower latencies, all while requiring a fraction of the compute resources and reducing the cost of inference. This efficiency allows developers to experiment, build, and scale generative AI applications both on-premises and in the cloud, all within departmental budgetary limits.

Here’s what this means for you:

  • Simplified deployment: Pull the Granite image from Docker Hub and get up and running in minutes.
  • Scalability: Docker offers a lightweight and efficient method for scaling artificial intelligence and machine learning (AI/ML) applications. It allows you to run multiple containers on a single machine or distribute them across different machines in a cluster, enabling horizontal scalability.
  • Flexibility: Customize and extend the model to suit your specific needs without worrying about underlying infrastructure.
  • Portability: By creating Docker images once and deploying them anywhere, you eliminate compatibility problems and reduce the need for configurations. 
  • Community support: Leverage the vast Docker and IBM communities for support, extensions, and collaborations.

In addition to the IBM Granite models, Red Hat also made the InstructLab model alignment tools available on Docker Hub. Developers using InstructLab can adapt pre-trained LLMs using far less real-world data and computing resources than alternative methodologies. InstructLab is model-agnostic and can be used to fine-tune any LLM of your choice by providing additional skills and knowledge.

With IBM Granite AI models and InstructLab available on Docker Hub, Docker and IBM enable easy integration into existing environments and workflows.

Getting started with Granite

You can find the following images available on Docker Hub:

  • InstructLab: Ideal for desktop or Mac users looking to explore InstructLab, this image provides a simple introduction to the platform without requiring specialized hardware. It’s perfect for prototyping and testing before scaling up.
  • Granite-7b-lab: This image is optimized for model serving and inference on desktop or Mac environments, using the Granite-7B model. It allows for efficient and scalable inference tasks without needing a GPU, perfect for smaller-scale deployments or local testing.

How to pull and run IBM Granite images from Docker Hub 

IBM Granite provides a toolset for building and managing cloud-native applications. Follow these steps to pull and run an IBM Granite image using Docker and the CLI. You can follow similar steps for the Red Hat InstructLab images.

Authenticate to Docker Hub

Enter your Docker username and password when prompted.
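
From the command line, that is simply:

docker login   # prompts for your Docker ID and password (a personal access token also works)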

Pull the IBM Granite image

Pull the IBM Granite image from Docker Hub.  

  • redhat/granite-7b-lab-gguf: For Mac/desktop users with no GPU support
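
For example, pulling the Mac/desktop image listed above looks like this:

docker pull redhat/granite-7b-lab-gguf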

Run the image in a container

Start a container with the IBM Granite image. The container can be started in two modes: CLI (default) and server.

To start the container in CLI mode, run the following:
docker run --ipc=host -it redhat/granite-7b-lab-gguf 

This command opens an interactive bash session within the container, allowing you to use the tools.

To run the container in server mode, run the following command:

docker run --ipc=host -it redhat/granite-7b-lab-gguf -s

You can check IBM Granite’s documentation for details on using IBM Granite Models.

Join us at IBM TechXchange

Granite on Docker Hub will be officially announced at the IBM TechXchange Conference, which will be held October 21-24 in Las Vegas. Our head of technical alliances, Eli Aleyner, will show a live demonstration at the AI track of the keynote during IBM TechXchange. Oleg Šelajev, Docker’s staff developer evangelist, will show how app developers can test their GenAI apps with local models. Additionally, you’ll learn how Docker’s collaboration with Red Hat is improving developer productivity.

The availability of Granite on Docker Hub marks a significant milestone in making advanced AI models accessible to all. We’re excited to see how developer teams will harness the power of Granite to innovate and solve complex challenges.

Stay anchored for more updates, and as always, happy coding!

Learn more

New Docker Terraform Provider: Automate, Secure, and Scale with Ease

We’re excited to announce the launch of the Docker Terraform Provider, designed to help users and organizations automate and securely manage their Docker-hosted resources. This includes repositories, teams, organization settings, and more, all using Terraform’s infrastructure-as-code approach. This provider brings a unified, scalable, and secure solution for managing Docker resources in an automated fashion — whether you’re managing a single repository or a large-scale organization.

A new way of working with Docker Hub

The Docker Terraform Provider introduces a new way of working with Docker Hub, enabling infrastructure-as-code best practices that are already widely adopted across cloud-native environments. By integrating Docker Hub with Terraform, organizations can streamline resource management, improve security, and collaborate more effectively, all while ensuring Docker resources remain in sync with other infrastructure components.

The problem

Managing Docker Hub resources manually can become cumbersome and error-prone, especially as teams grow and projects scale. Without a streamlined, version-controlled system, maintaining configurations by hand leads to inconsistencies, weaker security, and poor collaboration between teams. The Docker Terraform Provider solves this by letting you manage Docker Hub resources the same way you manage your other cloud resources, ensuring consistency, auditability, and automation across the board.

The solution

The Docker Terraform Provider offers:

  • Unified management: With this provider, you can manage Docker repositories, teams, users, and organizations in a consistent workflow, using the same code and structure across environments.
  • Version control: Changes to Docker Hub resources are captured in your Terraform configuration, providing a version-controlled, auditable way to manage your Docker infrastructure.
  • Collaboration and automation: Teams can now collaborate seamlessly, automating the provisioning and management of Docker Hub resources with Terraform, enhancing productivity and ensuring best practices are followed.
  • Scalability: Whether you’re managing a few repositories or an entire organization, this provider scales effortlessly to meet your needs.

Example

At Docker, even we faced challenges managing our Docker Hub resources; adding repositories without owner permissions was a frustrating, manual process. With the Terraform provider, anyone in the company can create a new repository without holding elevated Docker Hub permissions. Employees at every level can now write code instead of tracking down coworkers, which streamlines developer workflows with familiar tooling and reduces the permissions each employee needs. Security and developers are both happy!

Here’s an example where we manage a repository, an org team, the permissions for the created repo, and a personal access token (PAT):

terraform {
  required_providers {
    docker = {
      source  = "docker/docker"
      version = "~> 0.2"
    }
  }
}

# Initialize provider
provider "docker" {}

# Define local variables for customization
locals {
  namespace        = "my-docker-namespace"
  repo_name        = "my-docker-repo"
  org_name         = "my-docker-org"
  team_name        = "my-team"
  my_team_users    = ["user1", "user2"]
  token_label      = "my-pat-token"
  token_scopes     = ["repo:read", "repo:write"]
  permission       = "admin"
}

# Create repository
resource "docker_hub_repository" "org_hub_repo" {
  namespace        = local.namespace
  name             = local.repo_name
  description      = "This is a generic Docker repository."
  full_description = "Full description for the repository."
}

# Create team
resource "docker_org_team" "team" {
  org_name         = local.org_name
  team_name        = local.team_name
  team_description = "Team description goes here."
}

# Team association
resource "docker_org_team_member" "team_membership" {
  for_each = toset(local.my_team_users)

  org_name  = local.org_name
  team_name = docker_org_team.team.team_name
  user_name = each.value
}

# Create repository team permission
resource "docker_hub_repository_team_permission" "repo_permission" {
  repo_id    = docker_hub_repository.org_hub_repo.id
  team_id    = docker_org_team.team.id
  permission = local.permission
}

# Create access token
resource "docker_access_token" "access_token" {
  token_label = local.token_label
  scopes      = local.token_scopes
}
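
With a configuration like the one above saved in a file such as main.tf (the file name is just an example), the standard Terraform workflow applies:

terraform init    # downloads the docker/docker provider
terraform plan    # previews the repository, team, membership, permission, and token changes
terraform apply   # creates the resources in Docker Hub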

Future work

We’re just getting started with the Docker Terraform Provider, and there’s much more to come. Future work will expand support to other products in Docker’s suite, including Docker Scout, Docker Build Cloud, and Testcontainers Cloud. Stay tuned as we continue to evolve and enhance the provider with new features and integrations.

For feedback and issue tracking, visit the official Docker Terraform Provider repository or submit feedback via our issue tracker.

We’re confident this new provider will enhance how teams work with Docker Hub, making it easier to manage, secure, and scale their infrastructure while focusing on what matters most — building great software.

Learn more

Docker Best Practices: Using Tags and Labels to Manage Docker Image Sprawl

With many organizations moving to container-based workflows, keeping track of the different versions of your images can become a problem. Even smaller organizations can have hundreds of container images spanning from one-off development tests, through emergency variants to fix problems, all the way to core production images. This leads us to the question: How can we tame our image sprawl while still rapidly iterating our images?

A common misconception is that by using the “latest” tag, you are guaranteed to be pulling the latest version of the image. Unfortunately, this assumption is wrong: all latest means is “the most recent image pushed to this registry without an explicit tag.”

Read on to learn more about how to avoid this pitfall when using Docker and how to get a handle on your Docker images.

Using tags

One way to address this issue is to use tags when creating an image. Adding one or more tags to an image helps you remember what it is intended for and helps others as well. One approach is always to tag images with their semantic versioning (semver), which lets you know what version you are deploying. This sounds like a great approach, and, to some extent, it is, but there is a wrinkle.
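
Before we get to the wrinkle, here is what semver tagging looks like in practice (the image name and versions are purely illustrative):

docker build -t myorg/my-great-app:1.0.0 .
docker tag myorg/my-great-app:1.0.0 myorg/my-great-app:1.0
docker push myorg/my-great-app:1.0.0
docker push myorg/my-great-app:1.0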

Unless you’ve configured your registry for immutable tags, tags can be changed. For example, you could tag my-great-app as v1.0.0 and push the image to the registry. However, nothing stops your colleague from pushing their updated version of the app with tag v1.0.0 as well. Now that tag points to their image, not yours. If you add in the convenience tag latest, things get a bit more murky.

Let’s look at an example:

FROM busybox:stable-glibc

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.0\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

We build the above with docker build -t tagexample:1.0.0 . and run it.

$ docker run --rm tagexample:1.0.0
This is version 1.0.0

What if we run it without a tag specified?

$ docker run --rm tagexample
Unable to find image 'tagexample:latest' locally
docker: Error response from daemon: pull access denied for tagexample, repository does not exist or may require 'docker login'.
See 'docker run --help'.

Now we build with docker build -t tagexample . without specifying a version, and run it.

$ docker run --rm tagexample
This is version 1.0.0

When no tag is specified, Docker applies the latest tag by default, so latest always points to the most recent build or push that did not specify one. In our first test, we had one image in the repository with a tag of 1.0.0, but because we had not yet built or pushed anything without an explicit version, the latest tag did not point to an image. However, once we build or push an image without specifying a version, the latest tag is automatically applied to it.

Although it is tempting to always pull the latest tag, it’s rarely a good idea. The logical assumption — that this points to the most recent version of the image — is flawed. For example, another developer can update the application to version 1.0.1, build it with the tag 1.0.1, and push it. This results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.1

$ docker run --rm tagexample:latest
This is version 1.0.0

If you made the assumption that latest pointed to the highest version, you’d now be running an out-of-date version of the image.

The other issue is that there is no mechanism in place to prevent someone from inadvertently pushing with the wrong tag. For example, we could create another update to our code bringing it up to 1.0.2. We update the code, build the image, and push it — but we forget to change the tag to reflect the new version. Although it’s a small oversight, this action results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.2

Unfortunately, this happens all too frequently.

Using labels

Because we can’t trust tags, how should we ensure that we are able to identify our images? This is where the concept of adding metadata to our images becomes important.

The first attempt at using metadata to help manage images was the MAINTAINER instruction. This instruction sets the “Author” field (org.opencontainers.image.authors) in the generated image. However, this instruction has been deprecated in favor of the more powerful LABEL instruction. Unlike MAINTAINER, the LABEL instruction allows you to set arbitrary key/value pairs that can then be read with docker inspect as well as other tooling.

Unlike with tags, labels become part of the image, and when implemented properly, can provide a much better way to determine the version of an image. To return to our example above, let’s see how the use of a label would have made a difference.

To do this, we add the LABEL instruction to the Dockerfile, along with the key version and value 1.0.2.

FROM busybox:stable-glibc

LABEL version="1.0.2"

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.2\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

Now, even if we make the same mistake above where we mistakenly tag the image as version 1.0.1, we have a way to check that does not involve running the container to see which version we are using.

$ docker inspect --format='{{json .Config.Labels}}' tagexample:1.0.1
{"version":"1.0.2"}

Best practices

Although you can use any key/value as a LABEL, there are recommendations. The OCI provides a set of suggested labels within the org.opencontainers.image namespace:

  • org.opencontainers.image.created: The date and time on which the image was built (string, RFC 3339 date-time).
  • org.opencontainers.image.authors: Contact details of the people or organization responsible for the image (freeform string).
  • org.opencontainers.image.url: URL to find more information on the image (string).
  • org.opencontainers.image.documentation: URL to get documentation on the image (string).
  • org.opencontainers.image.source: URL to the source code for building the image (string).
  • org.opencontainers.image.version: Version of the packaged software (string).
  • org.opencontainers.image.revision: Source control revision identifier for the image (string).
  • org.opencontainers.image.vendor: Name of the distributing entity, organization, or individual (string).
  • org.opencontainers.image.licenses: License(s) under which contained software is distributed (string, SPDX License List).
  • org.opencontainers.image.ref.name: Name of the reference for a target (string).
  • org.opencontainers.image.title: Human-readable title of the image (string).
  • org.opencontainers.image.description: Human-readable description of the software packaged in the image (string).

Because LABEL takes any key/value, it is also possible to create custom labels. For example, labels specific to a team within a company could use the com.myorg.myteam namespace. Isolating these to a specific namespace ensures that they can easily be related back to the team that created the label.
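
As an illustrative sketch (the values and the com.myorg.myteam namespace are made up), a Dockerfile can combine the OCI labels with team-specific ones in a single LABEL instruction:

LABEL org.opencontainers.image.version="1.0.2" \
      org.opencontainers.image.source="https://github.com/myorg/my-great-app" \
      org.opencontainers.image.licenses="Apache-2.0" \
      com.myorg.myteam.owner="platform-team"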

Final thoughts

Image sprawl is a real problem for organizations, and, if not addressed, it can lead to confusion, rework, and potential production problems. By using tags and labels in a consistent manner, it is possible to eliminate these issues and provide a well-documented set of images that make work easier and not harder.

Learn more
