Why Testcontainers Cloud Is a Game-Changer Compared to Docker-in-Docker for Testing Scenarios

14 November 2024 at 22:39

Navigating the complex world of containerized testing environments can be challenging, especially when dealing with Docker-in-Docker (DinD). As a senior DevOps engineer and Docker Captain, I’ve seen firsthand the hurdles that teams face with DinD, and here I’ll share why Testcontainers Cloud is a transformative alternative that’s reshaping the way we handle container-based testing.

Understanding Docker-in-Docker

Docker-in-Docker allows you to run Docker within a Docker container. It’s like Inception for containers — a Docker daemon running inside a Docker container, capable of building and running other containers.

How Docker-in-Docker works

  • Nested Docker daemons: In a typical Docker setup, the Docker daemon runs on the host machine, managing containers directly on the host’s operating system. With DinD, you start a Docker daemon inside a container. This inner Docker daemon operates independently, enabling the container to build and manage its own set of containers.
  • Privileged mode and access to host resources: To run Docker inside a Docker container, the container needs elevated privileges. This is achieved by running the container in privileged mode using the --privileged flag:

    docker run --privileged -d docker:dind

    The --privileged flag grants the container almost all the capabilities of the host machine, including access to device files and the ability to perform system administration tasks. Although this setup enables the inner Docker daemon to function, it poses significant security risks, as it can potentially allow the container to affect the host system adversely.
  • Filesystem considerations: The inner Docker daemon stores images and containers within the file system of the DinD container, typically under /var/lib/docker. Because Docker uses advanced file system features like copy-on-write layers, running an inner Docker daemon within a containerized file system (which may itself use such features) can lead to complex interactions and potential conflicts.
  • Cgroups and namespace isolation: Docker relies on Linux kernel features like cgroups and namespaces for resource isolation and management. When running Docker inside a container, these features must be correctly configured to allow nesting. This process can introduce additional complexity in ensuring that resource limits and isolation behave as expected.

Why teams use Docker-in-Docker

  • Isolated build environments: DinD allows each continuous integration (CI) job to run in a clean, isolated Docker environment, ensuring that builds and tests are not affected by residual state from previous jobs or other jobs running concurrently.
  • Consistency across environments: By encapsulating the Docker daemon within a container, teams can replicate the same Docker environment across different stages of the development pipeline, from local development to CI/CD systems.

Challenges with DinD

Although DinD provides certain benefits, it also introduces significant challenges, such as:

  • Security risks: Running containers in privileged mode can expose the host system to security vulnerabilities, as the container gains extensive access to host resources.
  • Stability issues: Nested containers can lead to storage driver conflicts and other instability issues, causing unpredictable build failures.
  • Complex debugging: Troubleshooting issues in a nested Docker environment can be complicated, as it involves multiple layers of abstraction and isolation.

Real-world challenges

Although Docker-in-Docker might sound appealing, it often introduces more problems than it solves. Before diving into those challenges, let’s briefly discuss Testcontainers and its role in modern testing practices.

What is Testcontainers?

Testcontainers is a popular open source library designed to support integration testing by providing lightweight, disposable instances of common databases, web browsers, or any service that can run in a Docker container. It allows developers to write tests that interact with real instances of external resources, rather than relying on mocks or stubs.

Key features of Testcontainers

  • Realistic testing environment: By using actual services in containers, tests are more reliable and closer to real-world scenarios.
  • Isolation: Each test session, or even each individual test, can run in a clean environment, reducing flakiness due to shared state.
  • Easy cleanup: Containers are ephemeral and are automatically cleaned up after tests, preventing resource leaks.

Dependency on the Docker daemon

A core component of Testcontainers’ functionality lies in its interaction with the Docker daemon. Testcontainers orchestrates Docker resources by starting and stopping containers as needed for tests. This tight integration means that access to a Docker environment is essential wherever the tests are run.
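
To make this concrete, here is a minimal sketch of a Testcontainers-based test in Java (the class name and database choice are illustrative, not taken from a specific project). Testcontainers talks to the Docker daemon to start a real PostgreSQL container before the test and removes it afterward:

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Minimal illustrative sketch: Testcontainers needs a reachable Docker daemon to start this container.
@Testcontainers
class CustomerRepositoryTest {

    // A real, disposable PostgreSQL instance managed by Testcontainers
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine");

    @Test
    void databaseIsReachable() {
        // The container was started via the Docker daemon before this test ran
        assertTrue(postgres.isRunning());
        System.out.println("JDBC URL: " + postgres.getJdbcUrl());
    }
}

If there is no Docker daemon to talk to, whether locally, in CI, or through a remote endpoint, a test like this simply cannot start its containers, which is exactly what pushes teams toward DinD in CI pipelines.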

The DinD challenge with Testcontainers in CI

When teams try to include Testcontainers-based integration testing in their CI/CD pipelines, they often face the challenge of providing Docker access within the CI environment. Because Testcontainers requires communication with the Docker daemon, many teams resort to using Docker-in-Docker to emulate a Docker environment inside the CI job.

However, this approach introduces significant challenges, especially when trying to scale Testcontainers usage across the organization.

Case study: The CI pipeline nightmare

We had a Jenkins CI pipeline that utilized Testcontainers for integration tests. To provide the necessary Docker environment, we implemented DinD. Initially, it seemed to work fine, but soon we encountered:

  • Unstable builds: Random failures due to storage driver conflicts and issues with nested container layers. The nested Docker environment sometimes clashed with the host, causing unpredictable behavior.
  • Security concerns: Running containers in privileged mode raised red flags during security audits. Because DinD requires privileged mode to function correctly, it posed significant security risks, potentially allowing containers to access the host system.
  • Performance bottlenecks: Builds were slow, and resource consumption was high. The overhead of running Docker within Docker led to longer feedback loops, hindering developer productivity.
  • Complex debugging: Troubleshooting nested containers became time-consuming. Logs and errors were difficult to trace through the multiple layers of containers, making issue resolution challenging.

We spent countless hours trying to patch these issues, but it felt like playing a game of whack-a-mole.

Why Testcontainers Cloud is a better choice

Testcontainers Cloud is a cloud-based service designed to simplify and enhance your container-based testing. By offloading container execution to the cloud, it provides a secure, scalable, and efficient environment for your integration tests.

How Testcontainers Cloud addresses DinD’s shortcomings

Enhanced security

  • No more privileged mode: Eliminates the need for running containers in privileged mode, reducing the attack surface.
  • Isolation: Tests run in isolated cloud environments, minimizing risks to the host system.
  • Compliance-friendly: Easier to pass security audits without exposing the Docker socket or granting elevated permissions.

Improved performance

  • Scalability: Leverage cloud resources to run tests faster and handle higher loads.
  • Resource efficiency: Offloading execution frees up local and CI/CD resources.

Simplified configuration

  • Plug-and-play integration: Minimal changes are required to switch from local Docker to Testcontainers Cloud.
  • No nested complexity: Avoid the intricacies and pitfalls of nested Docker daemons.

Better observability and debugging

  • Detailed logs: Access comprehensive logs through the Testcontainers Cloud dashboard.
  • Real-time monitoring: Monitor containers and resources in real time with enhanced visibility.

Getting started with Testcontainers Cloud

Let’s dive into how you can get the most out of Testcontainers Cloud.

Switching to Testcontainers Cloud allows you to run tests without needing a local Docker daemon:

  • No local Docker required: Testcontainers Cloud handles container execution in the cloud.
  • Consistent environment: Ensures that your tests run in the same environment across different machines.

Additionally, you can easily integrate Testcontainers Cloud into your CI pipeline to run the same tests without scaling your CI infrastructure.

Using Testcontainers Cloud with GitHub Actions

Here’s how you can set up Testcontainers Cloud in your GitHub Actions workflow.

1. Create a new service account

  • Log in to the Testcontainers Cloud dashboard.
  • Navigate to Service Accounts:
    • Create a new service account dedicated to your CI environment.
  • Generate an access token:
    • Copy the access token. Remember, you can only view it once, so store it securely.

2. Set the TC_CLOUD_TOKEN environment variable

  • In GitHub Actions:
    • Go to your repository’s Settings > Secrets and variables > Actions.
    • Add a new Repository Secret named TC_CLOUD_TOKEN and paste the access token.

3. Add Testcontainers Cloud to your workflow

Update your GitHub Actions workflow (.github/workflows/ci.yml) to include the Testcontainers Cloud setup.

Example workflow:

name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # ... other preparation steps (dependencies, compilation, etc.) ...

      - name: Set up Java
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Testcontainers Cloud Client
        uses: atomicjar/testcontainers-cloud-setup-action@v1
        with:
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      # ... steps to execute your tests ...
      - name: Run Tests
        run: ./mvnw test

Notes:

  • The atomicjar/testcontainers-cloud-setup-action GitHub Action automates the installation and authentication of the Testcontainers Cloud Agent in your CI environment.
  • Ensure that your TC_CLOUD_TOKEN is kept secure using GitHub’s encrypted secrets.

Clarifying the components: Testcontainers Cloud Agent and Testcontainers Cloud

To make everything clear:

  • Testcontainers Cloud Agent (CLI in CI environments): In CI environments like GitHub Actions, you use the Testcontainers Cloud Agent (installed via the GitHub Action or command line) to connect your CI jobs to Testcontainers Cloud.
  • Testcontainers Cloud: The cloud service that runs your containers, offloading execution from your CI environment.

In CI environments:

  • Use the Testcontainers Cloud Agent (CLI) within your CI jobs.
  • Authenticate using the TC_CLOUD_TOKEN.
  • Tests executed in the CI environment will use Testcontainers Cloud.

Monitoring and debugging

Take advantage of the Testcontainers Cloud dashboard:

  • Session logs: View logs for individual test sessions.
  • Container details: Inspect container statuses and resource usage.
  • Debugging: Access container logs and output for troubleshooting.

Why developers prefer Testcontainers Cloud over DinD

Real-world impact

After integrating Testcontainers Cloud, our team observed the following:

  • Faster build times: Tests ran significantly faster due to optimized resource utilization.
  • Reduced maintenance: Less time spent on debugging and fixing CI pipeline issues.
  • Enhanced security: Eliminated the need for privileged mode, satisfying security audits.
  • Better observability: Improved logging and monitoring capabilities.

Addressing common concerns

Security and compliance

  • Data isolation: Each test runs in an isolated environment.
  • Encrypted communication: Secure data transmission.
  • Compliance: Meets industry-standard security practices.

Cost considerations

  • Efficiency gains: Time saved on maintenance offsets the cost.
  • Resource optimization: Reduces the need for expensive CI infrastructure.

Compatibility

  • Multi-language support: Works with Java, Node.js, Python, Go, .NET, and more.
  • Seamless integration: Minimal changes required to existing test code.

Conclusion

Switching to Testcontainers Cloud, with the help of the Testcontainers Cloud Agent, has been a game-changer for our team and many others in the industry. It addresses the key pain points associated with Docker-in-Docker and offers a secure, efficient, and developer-friendly alternative.

Key takeaways

  • Security: Eliminates the need for privileged containers and Docker socket exposure.
  • Performance: Accelerates test execution with scalable cloud resources.
  • Simplicity: Simplifies configuration and reduces maintenance overhead.
  • Observability: Enhances debugging with detailed logs and monitoring tools.

As someone who has navigated these challenges, I recommend trying Testcontainers Cloud. It’s time to move beyond the complexities of DinD and adopt a solution designed for modern development workflows.

Model-Based Testing with Testcontainers and Jqwik

23 October 2024 at 20:31

When testing complex systems, the more edge cases you can identify, the better your software performs in the real world. But how do you efficiently generate hundreds or thousands of meaningful tests that reveal hidden bugs? Enter model-based testing (MBT), a technique that automates test case generation by modeling your software’s expected behavior.

In this demo, we’ll explore the model-based testing technique to perform regression testing on a simple REST API.

We’ll use the jqwik test engine on JUnit 5 to run property and model-based tests. Additionally, we’ll use Testcontainers to spin up Docker containers with different versions of our application.

Model-based testing

Model-based testing is a method for testing stateful software by comparing the tested component with a model that represents the expected behavior of the system. Instead of manually writing test cases, we’ll use a testing tool that:

  • Takes a list of possible actions supported by the application
  • Automatically generates test sequences from these actions, targeting potential edge cases
  • Executes these tests on the software and the model, comparing the results

In our case, the actions are simply the endpoints exposed by the application’s API. For the demo’s code examples, we’ll use a basic service with a CRUD REST API that allows us to:

  • Find an employee by their unique employee number
  • Update an employee’s name
  • Get a list of all the employees from a department
  • Register a new employee

Figure 1: Finding an employee, updating their name, finding their department, and registering a new employee.
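
The demo’s controller code is not reproduced here, but based on these endpoints, the API surface might look roughly like the following Spring-style sketch (the class, method, and parameter names are assumptions for illustration, not the actual demo code):

// Hypothetical sketch of the CRUD API surface, assuming a Spring Boot REST controller.
// Names and signatures are illustrative; method bodies are elided.
@RestController
@RequestMapping("/api/employees")
class EmployeeController {

    @GetMapping("/{employeeNo}")
    EmployeeDto findByEmployeeNo(@PathVariable String employeeNo) { /* ... */ }

    @PutMapping("/{employeeNo}")
    void updateName(@PathVariable String employeeNo, @RequestParam String name) { /* ... */ }

    @GetMapping
    List<EmployeeDto> findByDepartment(@RequestParam String department) { /* ... */ }

    @PostMapping
    EmployeeDto register(@RequestBody EmployeeDto newEmployee) { /* ... */ }
}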

Once everything is configured and we finally run the test, we can expect to see a rapid sequence of hundreds of requests being sent to the two stateful services:

Figure 2: New requests being sent to the two stateful services.

Docker Compose

Let’s assume we need to switch the database from Postgres to MySQL and want to ensure the service’s behavior remains consistent. To test this, we can run both versions of the application, send identical requests to each, and compare the responses.

We can set up the environment using a Docker Compose file that will run two versions of the app:

  • Model (mbt-demo:postgres): The current live version and our source of truth.
  • Tested version (mbt-demo:mysql): The new feature branch under test.

services:
  ## MODEL
  app-model:
      image: mbt-demo:postgres
      # ...
      depends_on:
          - postgres
  postgres:
      image: postgres:16-alpine
      # ...
      
  ## TESTED
  app-tested:
    image: mbt-demo:mysql
    # ...
    depends_on:
      - mysql
  mysql:
    image: mysql:8.0
    # ...

Testcontainers

At this point, we could start the application and databases manually for testing, but this would be tedious. Instead, let’s use Testcontainers’ ComposeContainer to automate this with our Docker Compose file during the testing phase.

In this example, we’ll use jqwik as our JUnit 5 test runner. First, let’s add the jqwik, jqwik-testcontainers, and testcontainers dependencies to our pom.xml:

<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik</artifactId>
    <version>1.9.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik-testcontainers</artifactId>
    <version>0.5.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.20.1</version>
    <scope>test</scope>
</dependency>

As a result, we can now instantiate a ComposeContainer and pass our test docker-compose file as an argument:

@Testcontainers
class ModelBasedTest {

    @Container
    static ComposeContainer ENV = new ComposeContainer(new File("src/test/resources/docker-compose-test.yml"))
       .withExposedService("app-tested", 8080, Wait.forHttp("/api/employees").forStatusCode(200))
       .withExposedService("app-model", 8080, Wait.forHttp("/api/employees").forStatusCode(200));

    // tests
}

Test HTTP client

Now, let’s create a small test utility that will help us execute the HTTP requests against our services:

class TestHttpClient {
  ApiResponse<EmployeeDto> get(String employeeNo) { /* ... */ }
  
  ApiResponse<Void> put(String employeeNo, String newName) { /* ... */ }
  
  ApiResponse<List<EmployeeDto>> getByDepartment(String department) { /* ... */ }
  
  ApiResponse<EmployeeDto> post(String employeeNo, String name) { /* ... */ }

    
  record ApiResponse<T>(int statusCode, @Nullable T body) { }
    
  record EmployeeDto(String employeeNo, String name) { }
}
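
The HTTP plumbing is elided above. One hypothetical way to implement get(), assuming Java’s built-in java.net.http.HttpClient and a Jackson ObjectMapper (neither is shown in the original post), could look like this:

// Hypothetical implementation sketch; httpClient, objectMapper, and baseUrl are assumed fields of TestHttpClient.
ApiResponse<EmployeeDto> get(String employeeNo) {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(baseUrl + "/" + employeeNo))
        .GET()
        .build();
    try {
        HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
        // Only deserialize the body on success; otherwise keep it null and let the caller compare status codes.
        EmployeeDto body = response.statusCode() == 200
            ? objectMapper.readValue(response.body(), EmployeeDto.class)
            : null;
        return new ApiResponse<>(response.statusCode(), body);
    } catch (IOException | InterruptedException e) {
        throw new RuntimeException(e);
    }
}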

Additionally, in the test class, we can declare another method that helps us create TestHttpClients for the two services started by the ComposeContainer:

static TestHttpClient testClient(String service) {
  int port = ENV.getServicePort(service, 8080);
  String url = "http://localhost:%s/api/employees".formatted(port);
  return new TestHttpClient(service, url);
}

jqwik

Jqwik is a property-based testing framework for Java that integrates with JUnit 5, automatically generating test cases to validate properties of code across diverse inputs. By using generators to create varied and random test inputs, jqwik enhances test coverage and uncovers edge cases.

If you’re new to jqwik, you can explore their API in detail by reviewing the official user guide. While this tutorial won’t cover all the specifics of the API, it’s essential to know that jqwik allows us to define a set of actions we want to test.

To begin with, we’ll use jqwik’s @Property annotation — instead of the traditional @Test — to define a test:

@Property
void regressionTest() {
  TestHttpClient model = testClient("app-model");
  TestHttpClient tested = testClient("app-tested");
  // ...
}

Next, we’ll define the actions, which are the HTTP calls to our APIs and can also include assertions.

For instance, the GetOneEmployeeAction will try to fetch a specific employee from both services and compare the responses:

record ModelVsTested(TestHttpClient model, TestHttpClient tested) {}

record GetOneEmployeeAction(String empNo) implements Action<ModelVsTested> {
  @Override
  public ModelVsTested run(ModelVsTested apps) {
    ApiResponse<EmployeeDto> actual = apps.tested.get(empNo);
    ApiResponse<EmployeeDto> expected = apps.model.get(empNo);

    assertThat(actual)
      .satisfies(hasStatusCode(expected.statusCode()))
      .satisfies(hasBody(expected.body()));
    return apps;
  }
}
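
The hasStatusCode and hasBody helpers used above are not shown in the post; assuming AssertJ, they could simply wrap assertions in Consumer objects, roughly like this:

// Hypothetical helpers (not shown in the original), assuming AssertJ's assertThat and java.util.function.Consumer.
static Consumer<ApiResponse<?>> hasStatusCode(int expectedStatus) {
    return response -> assertThat(response.statusCode()).isEqualTo(expectedStatus);
}

static Consumer<ApiResponse<?>> hasBody(Object expectedBody) {
    return response -> assertThat(response.body()).isEqualTo(expectedBody);
}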

Additionally, we’ll need to wrap these actions within Arbitrary objects. We can think of Arbitraries as objects implementing the factory design pattern that can generate a wide variety of instances of a type, based on a set of configured rules.

For instance, the Arbitrary returned by employeeNos() can generate employee numbers by choosing a random department from the configured list and concatenating a number between 1 and 200:

static Arbitrary<String> employeeNos() {
  Arbitrary<String> departments = Arbitraries.of("Frontend", "Backend", "HR", "Creative", "DevOps");
  Arbitrary<Long> ids = Arbitraries.longs().between(1, 200);
  return Combinators.combine(departments, ids).as("%s-%s"::formatted);
}

Similarly, getOneEmployeeAction() returns an Arbitrary action based on a given Arbitrary employee number:

static Arbitrary<GetOneEmployeeAction> getOneEmployeeAction() {
  return employeeNos().map(GetOneEmployeeAction::new);
}

After declaring all the other Actions and Arbitraries, we’ll create an ActionSequence:

@Provide
Arbitrary<ActionSequence<ModelVsTested>> mbtJqwikActions() {
  return Arbitraries.sequences(
    Arbitraries.oneOf(
      MbtJqwikActions.getOneEmployeeAction(),
      MbtJqwikActions.getEmployeesByDepartmentAction(),
      MbtJqwikActions.createEmployeeAction(),
      MbtJqwikActions.updateEmployeeNameAction()
  ));
}


static Arbitrary<Action<ModelVsTested>> getOneEmployeeAction() { /* ... */ }
static Arbitrary<Action<ModelVsTested>> getEmployeesByDepartmentAction() { /* ... */ }
// same for the other actions
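
As a hypothetical example of one of those omitted factories, an update action’s Arbitrary could combine employeeNos() with a generated name (the record and generator bounds below are assumptions, not the demo’s actual code):

// Hypothetical sketch; assumes an UpdateEmployeeNameAction(empNo, newName) record analogous to GetOneEmployeeAction.
static Arbitrary<Action<ModelVsTested>> updateEmployeeNameAction() {
    Arbitrary<String> newNames = Arbitraries.strings().alpha().ofMinLength(1).ofMaxLength(12);
    return Combinators.combine(employeeNos(), newNames).as(UpdateEmployeeNameAction::new);
}
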

Now, we can write our test and leverage jqwik to use the provided actions to test various sequences. Let’s create the ModelVsTested tuple and use it to execute the sequence of actions against it:

@Property
void regressionTest(@ForAll("mbtJqwikActions") ActionSequence<ModelVsTested> actions) {
  ModelVsTested testVsModel = new ModelVsTested(
    testClient("app-model"),
    testClient("app-tested")
  );
  actions.run(testVsModel);
}

That’s it — we can finally run the test! The test will generate a sequence of thousands of requests trying to find inconsistencies between the model and the tested service:

INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employeesFrontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employeesFrontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-model] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employeesFrontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employeesFrontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees?department=ٺ⯟桸
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees?department=ٺ⯟桸
        ...

Catching errors

If we run the test and check the logs, we’ll quickly spot a failure. It appears that when searching for employees by department with the argument ٺ⯟桸, the tested version returns an internal server error, while the model returns 200 OK:

Original Sample
---------------
actions:
ActionSequence[FAILED]: 8 actions run [
    UpdateEmployeeAction[empNo=Creative-13, newName=uRhplM],
    CreateEmployeeAction[empNo=Backend-184, name=aGAYQ],
    UpdateEmployeeAction[empNo=Backend-3, newName=aWCxzg],
    UpdateEmployeeAction[empNo=Frontend-93, newName=SrJTVwMvpy],
    UpdateEmployeeAction[empNo=Frontend-129, newName=v],
    CreateEmployeeAction[empNo=Frontend-91, name=sdxToS],
    UpdateEmployeeAction[empNo=Frontend-4, newName=PZbmodNLNwX],
    GetEmployeesByDepartmentAction[department=ٺ⯟桸]
]
    final currentModel: ModelVsTested[model=com.etr.demo.utils.TestHttpClient@5dc0ff7d, tested=com.etr.demo.utils.TestHttpClient@64920dc2]
Multiple Failures (1 failure)
    -- failure 1 --
    expected: 200
    but was: 500

Upon investigation, we find that the issue arises from a native SQL query using Postgres-specific syntax to retrieve data. While this was a simple issue in our small application, model-based testing can help uncover unexpected behavior that may only surface after a specific sequence of repetitive steps pushes the system into a particular state.

Wrap up

In this post, we provided hands-on examples of how model-based testing works in practice. From defining models to generating test cases, we’ve seen a powerful approach to improving test coverage and reducing manual effort. Now that you’ve seen the potential of model-based testing to enhance software quality, it’s time to dive deeper and tailor it to your own projects.

Clone the repository to experiment further, customize the models, and integrate this methodology into your testing strategy. Start building more resilient software today!

Thank you to Emanuel Trandafir for contributing this post.

Leveraging Testcontainers for Complex Integration Testing in Mattermost Plugins

8 October 2024 at 20:22

This post was contributed by Jesús Espino, Principal Engineer at Mattermost.

In the ever-evolving software development landscape, ensuring robust and reliable plugin integration is no small feat. For Mattermost, relying solely on mocks for plugin testing became a limitation, leading to brittle tests and overlooked integration issues. Enter Testcontainers, an open source tool that provides isolated Docker environments, making complex integration testing not only feasible but efficient. 

In this blog post, we dive into how Mattermost has embraced Testcontainers to overhaul its testing strategy, achieving greater automation, improved accuracy, and seamless plugin integration with minimal overhead.

The previous approach

In the past, Mattermost relied heavily on mocks to test plugins. While this approach had its merits, it also had significant drawbacks. The tests were brittle, meaning they would often break when changes were made to the codebase. This made the tests challenging to develop and maintain, as developers had to constantly update the mocks to reflect the changes in the code.

Furthermore, the use of mocks meant that the integration aspect of testing was largely overlooked. The tests did not account for how the different components of the system interacted with each other, which could lead to unforeseen issues in the production environment. 

The previous approach additionally did not allow for proper integration testing in an automated way. The lack of automation made the testing process time-consuming and prone to human error. These challenges necessitated a shift in Mattermost’s testing strategy, leading to the adoption of Testcontainers for complex integration testing.

Mattermost’s approach to integration testing

Testcontainers for Go

Mattermost uses Testcontainers for Go to create an isolated testing environment for our plugins. This testing environment includes the Mattermost server, the PostgreSQL server, and, in certain cases, an API mock server. The plugin is then installed on the Mattermost server, and through regular API calls or end-to-end testing frameworks like Playwright, we perform the required testing.

We have created a specialized Testcontainers module for the Mattermost server. This module uses PostgreSQL as a dependency, ensuring that the testing environment closely mirrors the production environment. Our module allows developers to easily install and configure any plugin they want on the Mattermost server.

To improve the system’s isolation, the Mattermost module includes a container for the server and a container for the PostgreSQL database, which are connected through an internal Docker network.

Additionally, the Mattermost module exposes utility functionality that allows direct access to the database, to the Mattermost API through the Go client, and some utility functions that enable admins to create users, channels, teams, and change the configuration, among other things. This functionality is invaluable for performing complex operations during testing, including API calls, users/teams/channel creation, configuration changes, or even SQL query execution. 

This approach provides a powerful set of tools with which to set up our tests and prepare everything for verifying the behavior that we expect. Combined with the disposable nature of the test container instances, this makes the system easy to understand while remaining isolated.

This comprehensive approach to testing ensures that all aspects of the Mattermost server and its plugins are thoroughly tested, thereby increasing their reliability and functionality. But let’s look at a code example of how it’s used.

We can start setting up our Mattermost environment with a plugin like this:

pluginConfig := map[string]any{}
options := []mmcontainer.MattermostCustomizeRequestOption{
  mmcontainer.WithPlugin("sample.tar.gz", "sample", pluginConfig),
}
mattermost, err := mmcontainer.RunContainer(context.Background(), options...)
defer mattermost.Terminate(context.Background())

Once your Mattermost instance is initialized, you can create a test like this:

func TestSample(t *testing.T) {
    client, err := mattermost.GetClient()
    require.NoError(t, err)
    reqURL := client.URL + "/plugins/sample/sample-endpoint"
    resp, err := client.DoAPIRequest(context.Background(), http.MethodGet, reqURL, "", "")
    require.NoError(t, err, "cannot fetch url %s", reqURL)
    defer resp.Body.Close()
    bodyBytes, err := io.ReadAll(resp.Body)
    require.NoError(t, err)
    require.Equal(t, 200, resp.StatusCode)
    assert.Contains(t, string(bodyBytes), "sample-response") 
}

Here, you can decide when you tear down your Mattermost instance and recreate it. Once per test? Once per a set of tests? It is up to you and depends strictly on your needs and the nature of your tests.

Testcontainers for Node.js

In addition to using Testcontainers for Go, Mattermost leverages Testcontainers for Node.js to set up our testing environment. In case you’re unfamiliar, Testcontainers for Node.js is a Node.js library that provides similar functionality to Testcontainers for Go. Using Testcontainers for Node.js, we can set up our environment in the same way we did with Testcontainers for Go. This allows us to write Playwright tests using JavaScript and run them in the isolated Mattermost environment created by Testcontainers, enabling us to perform integration testing that interacts directly with the plugin user interface. The code is available on GitHub.  

This approach provides the same advantages as Testcontainers for Go, and it allows us to use a more interface-based testing tool — like Playwright in this case. Let me show a bit of code with the Node.js and Playwright implementation:

We start and stop the containers for each test:

test.beforeAll(async () => { mattermost = await RunContainer() })
test.afterAll(async () => { await mattermost.stop(); })

Then we can use our Mattermost instance like any other running server to execute our Playwright tests:

test.describe('sample slash command', () => {
  test('try to run a sample slash command', async ({ page }) => {
    const url = mattermost.url()
    await login(page, url, "regularuser", "regularuser")
    await expect(page.getByLabel('town square public channel')).toBeVisible();
    await page.getByTestId('post_textbox').fill("/sample run")
    await page.getByTestId('SendMessageButton').click();
    await expect(page.getByText('Sample command result', { exact: true })).toBeVisible();
    await logout(page)
  });  
});

With these two approaches, we can create integration tests covering the API and the interface without having to mock or use any other synthetic environment. Also, we can test things in absolute isolation because we consciously decide whether we want to reuse the Testcontainers instances. We can also reach a high degree of isolation and thereby avoid the flakiness induced by contaminated environments when doing end-to-end testing.

Examples of usage

Currently, we are using this approach for two plugins.

1. Mattermost AI Copilot

This integration helps users in their daily tasks using AI large language models (LLMs), providing things like thread and meeting summarization and context-based interrogation.

This plugin has a rich interface, so we used the Testcontainers for Node.js and Playwright approach to ensure we could properly test the system through the interface. Also, this plugin needs to call the AI LLM through an API. To avoid that resource-heavy task, we use an API mock, another container that simulates any API.

This approach gives us confidence not only in the server-side code but in the interface as well, because we can ensure that we aren’t breaking anything during development.

2. Mattermost MS Teams plugin

This integration is designed to connect MS Teams and Mattermost in a seamless way, synchronizing messages between both platforms.

For this plugin, we mainly need to make API calls, so we used Testcontainers for Go and directly hit the API using a client written in Go. In this case, again, our plugin depends on a third-party service: the Microsoft Graph API. For that, we also use an API mock, enabling us to test the whole plugin without depending on the third-party service.

We still have some integration tests with the real Teams API using the same Testcontainers infrastructure to ensure that we are properly handling the Microsoft Graph calls.

Benefits of using Testcontainers libraries

Using Testcontainers for integration testing offers benefits such as:

  • Isolation: Each test runs in its own Docker container, which means that tests are completely isolated from each other. This approach prevents tests from interfering with one another and ensures that each test starts with a clean slate.
  • Repeatability: Because the testing environment is set up automatically, the tests are highly repeatable. This means that developers can run the tests multiple times and get the same results, which increases the reliability of the tests.
  • Ease of use: Testcontainers is easy to use, as it handles all the complexities of setting up and tearing down Docker containers. This allows developers to focus on writing tests rather than managing the testing environment.

Testing made easy with Testcontainers

Mattermost’s use of Testcontainers libraries for complex integration testing in their plugins is a testament to the power and versatility of Testcontainers.

By creating a well-isolated and repeatable testing environment, Mattermost ensures that our plugins are thoroughly tested and highly reliable.

Exploring Docker for DevOps: What It Is and How It Works

30 September 2024 at 21:11

DevOps aims to dramatically improve the software development lifecycle by bringing together the formerly separated worlds of development and operations using principles that strive to make software creation more efficient. DevOps practices form a useful roadmap to help developers in every phase of the development lifecycle, from code planning to building, task automation, testing, monitoring, releasing, and deploying applications.

As DevOps use continues to expand, many developers and organizations find that the Docker containerization platform integrates well as a crucial component of DevOps practices. Using Docker, developers have the advantage of being able to collaborate in standardized environments using local containers and remote container tools where they can write their code, share their work, and collaborate. 

In this blog post, we will explore the use of Docker within DevOps practices and explain how the combination can help developers create more efficient and powerful workflows.

What is DevOps?

DevOps practices are beneficial in the world of developers and code creation because they encourage smart planning, collaboration, and orderly processes and management throughout the software development pipeline. Without unified DevOps principles, code is typically created in individual silos that can hamper creativity, efficient management, speed, and quality.

Bringing software developers, operations teams, and processes together under DevOps principles can improve both developer and organizational efficiency through increased collaboration, agility, and innovation. DevOps brings these positive changes to organizations by constantly integrating user feedback regarding application features, shortcomings, and code glitches and — by making changes as needed on the fly — reducing operational and security risks in production code.

CI/CD

In addition to collaboration, DevOps principles are built around procedures for continuous integration (CI) and continuous deployment/delivery (CD) of code, shortening the cycle between development and production. This CI/CD approach lets teams more quickly adapt to feedback and thus build better applications from code conception all the way through to end-user experiences.

Using CI, developers can frequently and automatically integrate their changes into the source code as they create new code, while the CD side tests and delivers those vetted changes to the production environment. By integrating CI/CD practices, developers can create cleaner and safer code and resolve bugs ahead of production through automation, collaboration, and strong QA pipelines. 

What is Docker?

The Docker containerization platform is a suite of tools, standards, and services that enable DevOps practices for application developers. Docker is used to develop, ship, and run applications within lightweight containers. This approach allows developers to separate their applications from their business infrastructure, giving them the power to deliver better code more quickly. 

The Docker platform enables developers to package and run their application code in lightweight, local, standardized containers, which provide a loosely isolated environment that contains everything needed to run the application — including tools, packages, and libraries. By using Docker containers on a Docker client, developers can run an application without worrying about what is installed on the host, giving them huge flexibility, security, and collaborative advantages over virtual machines. 

In this controlled environment, developers can use Docker to create, monitor, and push their applications into a test environment, run automated and manual tests as needed, correct bugs, and then validate the code before deploying it for use in production. 

Docker also allows developers to run many containers simultaneously on a host, while allowing those same containers to be shared with others. Such a collaborative workspace can foster healthy and direct communications between developers, allowing development processes to become easier, more accurate, and more secure. 

Containers vs. virtualization

Containers are an abstraction that packages application code and dependencies together. Instances of the container can then be created, started, stopped, moved, or deleted using the Docker API or command-line interface (CLI). Containers can be connected to one or more networks, be attached to storage, or create new images based on their current states. 

Containers differ from virtual machines, which use a software abstraction layer on top of computer hardware, allowing the hardware to be shared more efficiently in multiple instances that will run individual applications. Docker containers require fewer physical hardware resources than virtual machines, and they also offer faster startup times and lower overhead. This makes Docker ideal for high-velocity environments, where rapid software development cycles and scalability are crucial. 

Basic components of Docker 

The basic components of Docker include:

  • Docker images: Docker images are the blueprints for your containers. They are read-only templates that contain the instructions for creating a Docker container. You can think of a container image as a snapshot of a specific state of your application.
  • Containers: Containers are the instances of Docker images. They are lightweight and portable, encapsulating your application along with its dependencies. Containers can be created, started, stopped, moved, and deleted using simple Docker commands.
  • Dockerfiles: A Dockerfile is a text document containing a series of instructions on how to build a Docker image. It includes commands for specifying the base image, copying files, installing dependencies, and setting up the environment. 
  • Docker Engine: Docker Engine is the core component of Docker. It’s a client-server application that includes a server with a long-running daemon process, APIs for interacting with the daemon, and a CLI client.
  • Docker Desktop: Docker Desktop is a commercial product sold and supported by Docker, Inc. It includes the Docker Engine and other open source components, proprietary components, and features like an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, security features, and administrative settings management. 
  • Docker Hub: Docker Hub is a public registry where you can store and share Docker images. It serves as a central place to find official Docker images and user-contributed images. You can also use Docker Hub to automate your workflows by connecting it to your CI/CD pipelines.

Basic Docker commands

Docker commands are simple and intuitive. For example:

  • docker run: Runs a Docker container from a specified image. For example, docker run hello-world will run a container from the “hello-world” image.
  • docker build: Builds an image from a Dockerfile. For example, docker build -t my-app . will build an image named “my-app” from the Dockerfile in the current directory.
  • docker pull: Pulls an image from Docker Hub. For example, docker pull nginx will download the latest NGINX image from Docker Hub.
  • docker ps: Lists all running containers. For example, docker ps -a will list all containers, including stopped ones.
  • docker stop: Stops a running Docker container. For example, docker stop <container_id> will stop the container with the specified ID.
  • docker rm: Removes a stopped container. For example, docker rm <container_id> will remove the container with the specified ID.

How Docker is used in DevOps

One of Docker’s most important benefits for developers is its critical role in facilitating CI/CD in the application development process. This makes it easier and more seamless for developers to work together to create better code.

Docker provides a build environment where developers can build and test their applications inside Docker containers, producing consistent, reproducible results that are harder to achieve in other development environments. Developers can use Dockerfiles to define the exact requirements needed for their build environments, including programming runtimes, operating systems, binaries, and more.

Using Docker as a build environment also makes application maintenance easier. For example, you can update to a new version of a programming runtime by just changing a tag or digest in a Dockerfile. That is easier than the process required on a virtual machine to manually reinstall a newer version and update the related configuration files.

Automated testing is also easier using Docker Hub, which can automatically test changes to source code repositories using containers or push applications into a test environment and run automated and manual tests.

Docker can be integrated with DevOps tools including Jenkins, GitLab, Kubernetes, and others, simplifying DevOps processes by automating pipelines and scaling operations as needed. 

Benefits of using Docker for DevOps 

Because the Docker containers used for development are the same ones that are moved along for testing and production, the Docker platform provides consistency across environments and delivers big benefits to developer teams and operations managers. Each Docker container is isolated from others being run, eliminating conflicting dependencies. Developers are empowered to build, run, and test their code while collaborating with others and using all the resources available to them within the Docker platform environment. 

Other benefits to developers include speed and agility, resource efficiency, error reduction, integrated version control, standardization, and the ability to write code once and run it on any system. Additionally, applications built on Docker can be pushed easily to customers in any computing environment, assuring a quick, easy, and consistent delivery and deployment process.

4 Common Docker challenges in DevOps

Implementing Docker in a DevOps environment can offer numerous benefits, but it also presents several challenges that teams must navigate:

1. Learning curve and skills gap

Docker introduces new concepts and technologies that require teams to acquire new skills. This can be a significant hurdle, especially if the team lacks experience with containerization. Docker’s robust documentation, guides, and international community can help new users ramp up quickly.

2. Security concerns

Ensuring the security of containerized applications involves addressing vulnerabilities in container images, managing secrets, and implementing network policies. Misconfigurations and running containers with root privileges can lead to security risks. Docker does, however, provide security guardrails for both administrators and developers.

The Docker Business subscription provides security and management at scale. For example, administrators can enforce sign-ins across Docker products for developers and efficiently manage, scale, and secure Docker Desktop instances using DevOps security controls like Enhanced Container Isolation and Registry Access Management.

Additionally, Docker offers security-focused tools, like Docker Scout, which helps administrators and developers secure the software supply chain by proactively monitoring image vulnerabilities and implementing remediation strategies. Introduced in 2024, Docker Scout health scores rate the security and compliance status of container images within Docker Hub, providing a single, quantifiable metric to represent the “health” of an image. This feature addresses one of the key friction points in developer-led software security — the lack of security expertise — and makes it easier for developers to turn critical insights from tools into actionable steps.

3. Microservice architectures

Containers and the ecosystem around them are specifically geared towards microservice architectures. You can run a monolith in a container, but you will not be able to leverage all of the benefits and paradigms of containers in that way. Instead, containers can be a useful gateway to microservices. Users can start pulling out individual pieces from a monolith into more containers over time.

4. Image management

Image management in Docker can also be a challenge for developers and teams as they search private registries and community repositories for images to use in building their applications. Docker Image Access Management can help with this challenge as it gives administrators control over which types of images — such as Docker Official Images, Docker Verified Publisher Images, or community images — their developers can pull for use from Docker Hub. Docker Hub tries to help by publishing only official images and verifying content from trusted partners. 

Using Image Access Management controls helps prevent developers from accidentally using an untrusted, malicious community image as a component of their application. Note that Docker Image Access Management is available only to customers of the company’s top Docker Business services offering.

Another important tool here is Docker Scout. It is built to help organizations better protect their software supply chain security when using container images, which consist of layers and software packages that may be susceptible to security vulnerabilities. Docker Scout helps with this issue by proactively analyzing container images and compiling a Software Bill of Materials (SBOM), which is a detailed inventory of code included in an application or container. That SBOM is then matched against a continuously updated vulnerability database to pinpoint and correct security weaknesses to make the code more secure.

More information and help about using Docker can be found on the Docker Trainings page, which offers training webcasts and other resources to help developers and teams navigate their Docker environments and learn new skills to solve their technical challenges.

Examples of DevOps using Docker

Improving DevOps workflows is a major goal for many enterprises as they struggle to improve operations and developer productivity and to produce cleaner, more secure, and better code.

The Warehouse Group

At The Warehouse Group, New Zealand’s largest retail store chain with some 300 stores, Docker was introduced in 2016 to revamp its systems and processes after previous VMware deployments resulted in long setup times, inconsistent environments, and slow deployment cycles. 

“One of the key benefits we have seen from using Docker is that it enables a very flexible work environment,” said Matt Law, the chapter lead of DevOps for the company. “Developers can build and test applications locally on their own machines with consistency across environments, thanks to Docker’s containerization approach.”

Docker brought new autonomy to the company’s developers so they could test ideas and find new and better ways to solve bottlenecks, said Law. “That is a key philosophy that we have here — enabling developers to experiment with tooling to help them prove or disprove their philosophies or theories.”

Ataccama Corporation

Another Docker customer, Ataccama Corp., a Toronto-based data management software vendor, adopted Docker and DevOps practices when it moved to scale its business by moving from physical servers to cloud platforms like AWS and Azure to gain agility, scalability, and cost efficiencies using containerization. 

For Ataccama, Docker delivered rapid deployment, simplified application management, and seamless portability between environments, which brought accelerated feature development, increased efficiency and performance, valuable microservices capabilities, and required security and high availability. To boost the value of Docker for its developers and IT managers, Ataccama provided container and DevOps skills training and promoted collaboration to make Docker an integral tool and platform for the company and its operations.

“What makes Docker a class apart is its support for open standards like Open Container Initiative (OCI) and its amazing flexibility,” said Vladimir Mikhalev, senior DevOps engineer at Ataccama. “It goes far beyond just running containers. With Docker, we can build, share, and manage containerized apps seamlessly across infrastructure in a way that most tools can’t match.”

The most impactful feature of Docker is its ability to bundle an app, configuration, and dependencies into a single standardized unit, said Mikhalev. “This level of encapsulation has been a game-changer for eliminating environment inconsistencies.”

Wrapping up

Docker provides a transformative impact for enterprises that have adopted DevOps practices. The Docker platform enables developers to create, collaborate, test, monitor, ship, and run applications within lightweight containers, giving them the power to deliver better code more quickly. 

Docker simplifies and empowers development processes, enhancing productivity and improving the reliability of applications across different environments. 

Find the right Docker subscription to bolster your DevOps workflow. 

10 Docker Myths Debunked

19 September 2024 at 20:59

Containers might seem like a relatively recent technological breakthrough, but their origins trace back to the 1970s when Unix systems first used container-like concepts to isolate applications. Fast-forward to 2013, and Docker revolutionized this idea by introducing a portable, user-friendly container platform, sparking widespread adoption. In 2015, Docker was instrumental in creating the Open Container Initiative (OCI) to promote open standards within the container ecosystem. With the stability provided by the OCI, container technology spread throughout the tech world.

Although Docker Desktop is the leading tool for creating containerized applications, Docker remains surrounded by numerous misconceptions. In this article, we’ll debunk the top Docker myths and explain the capabilities and benefits of this transformative technology.

Myth #1: Docker is no longer open source

Docker consists of multiple components, most of which are open source. The core Docker Engine is open source and licensed under the Apache 2.0 license, so developers can continue to use and contribute to it freely. Other vital parts of the Docker ecosystem, like the Docker CLI and Docker Compose, also remain open source. This allows the community to maintain transparency, contribute improvements, and customize their container solutions.

Docker’s commitment to open source is best illustrated by the Moby Project. In 2017, Moby was spun out of the then-monolithic Docker codebase to provide a set of “building blocks” to create containerized solutions and platforms. Docker uses the Moby project for the free Docker Engine project and our commercial Docker Desktop.

Users can also find Trusted Open Source Content on Docker Hub. These Docker-Sponsored Open Source and Docker Official Images offer trusted versions of open source projects and reliable building blocks for better development.

Docker is a founder and remains a crucial contributor to the OCI, which defines container standards. This initiative ensures that Docker and other container technologies remain interoperable and maintain a commitment to open source principles.

Myth #2: Docker containers are virtual machines 

Docker containers are often mistaken for virtual machines (VMs), but the technologies operate quite differently. Unlike VMs, Docker containers don’t include an entire operating system (OS). Instead, they share the host operating system kernel, making them more lightweight and efficient. VMs require a hypervisor to create virtual hardware for the guest OS, which introduces significant overhead. Docker only packages the application and its dependencies, allowing for faster startup times and minimal performance overhead.

By utilizing the host operating system’s resources efficiently, Docker containers use fewer resources overall than VMs, which need substantial resources to run multiple operating systems concurrently. Docker’s architecture efficiently runs numerous isolated applications on a single host, optimizing infrastructure and development workflows. Understanding this distinction is crucial for maximizing Docker’s lightweight and scalable potential.

However, when running on non-Linux systems, Docker needs to emulate a Linux environment. For example, Docker Desktop uses a fully managed VM to provide a consistent experience across Windows, Mac, and Linux by running its Linux components inside this VM.

Myth #3: Docker Engine vs. Docker Desktop vs. Docker Enterprise Edition — They’re all the same

Considerable confusion surrounds the different Docker options that are available, which include:

  • Mirantis Container Runtime: Docker Enterprise Edition (Docker EE) was sold to Mirantis in 2019 and rebranded as Mirantis Container Runtime. This software, which is managed and sold by Mirantis, is designed for production container deployments and offers a lightweight alternative to existing orchestration tools.
  • Docker Engine: Docker Engine is the fully open source version built from the Moby Project, providing the Docker Engine and CLI.
  • Docker Desktop: Docker Desktop is a commercial offering sold by Docker that combines Docker Engine with additional features to enhance developer productivity. The Docker Business subscription includes advanced security and governance features for enterprises.

All of these variants are OCI-compliant, differing mainly in features and experiences. Docker Engine caters to the open source community, Docker Desktop elevates developer workflows with a comprehensive suite of tools for building and scaling applications, and Mirantis Container Runtime provides a specialized solution for enterprise production environments with advanced management and support. Understanding these distinctions is crucial for selecting the appropriate Docker variant to meet specific project requirements and organizational goals.

Myth #4: Docker is the same thing as Kubernetes

This myth arises from the fact that both Docker and Kubernetes are associated with containerized environments. Although they are both key players in the container ecosystem, they serve different roles.

Kubernetes (K8s) is an orchestration system for managing container instances at scale. This container orchestration tool automates the deployment, scaling, and operations of multiple containers across clusters of hosts. Other orchestration technologies include Nomad, serverless frameworks, Docker’s Swarm mode, and Apache Mesos. Each offers different features for managing containerized workloads.

Docker is primarily a platform for developing, shipping, and running containerized applications. It focuses on packaging applications and their dependencies in a portable container and is often used for local development where scaling is not required. Docker Desktop includes Docker Compose, which is designed to orchestrate multi-container deployments locally.

In many organizations, Docker is used to develop applications, and the resulting Docker images are then deployed to Kubernetes for production. To support this workflow, Docker Desktop includes an embedded Kubernetes installation and the Compose Bridge tool for translating Compose format into Kubernetes-friendly code.
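
To make that division of labor concrete, here is a hedged sketch of the handoff: Docker builds and pushes an image (the registry and image names below are placeholders), and a minimal Kubernetes Deployment then runs replicas of it.

$ docker build -t registry.example.com/web-app:1.0 .
$ docker push registry.example.com/web-app:1.0

# deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0
          ports:
            - containerPort: 80

Applying this manifest with kubectl apply -f deployment.yaml asks Kubernetes to keep three copies of the Docker-built image running.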

Myth #5: Docker is not secure

The belief that Docker is not secure is often a result of misunderstandings around how security is implemented within Docker. To help reduce security vulnerabilities and minimize the attack surface, Docker offers the following measures:

Opt-in security configuration 

Except for a few core components, Docker’s security hardening operates on an opt-in basis. This approach removes friction for new users, while still allowing Docker to be configured more strictly for enterprise requirements and for security-conscious users handling sensitive data.

“Rootless” mode capabilities 

Docker Engine can run in rootless mode, where the Docker daemon runs without root permissions. This capability reduces the potential blast radius of malicious code escaping a container and gaining root permissions on the host. Docker Desktop takes security further by offering Enhanced Container Isolation (ECI), which provides advanced isolation features beyond what rootless mode can offer.
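
As a rough sketch, enabling rootless mode on a typical Linux host follows Docker’s rootless installation flow. Package names and paths vary by distribution, so treat this as illustrative rather than a definitive recipe:

$ sudo apt-get install -y docker-ce-rootless-extras   # Debian/Ubuntu package for rootless support
$ dockerd-rootless-setuptool.sh install               # set up a per-user daemon without root
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
$ docker run --rm hello-world                         # containers now run under your user account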

Built-in security features

Additionally, Docker security includes built-in features such as namespaces, control groups (cgroups), and seccomp profiles that provide isolation and limit the capabilities of containers.
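
These primitives are exposed directly on the command line. The following is an illustrative hardening example; my-seccomp-profile.json is a hypothetical local profile you would supply yourself:

# Drop all Linux capabilities, block privilege escalation, apply a custom seccomp
# profile (my-seccomp-profile.json is a hypothetical file), and mount the root
# filesystem read-only:
$ docker run --rm \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    --security-opt seccomp=./my-seccomp-profile.json \
    --read-only \
    alpine:3.20 id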

SOC 2 Type 2 Attestation and ISO 27001 Certification

It’s important to note that, as an open source tool, Docker Engine is not in scope for SOC 2 Type 2 Attestation or ISO 27001 Certification. These certifications pertain to Docker, Inc.’s paid products, which offer additional enterprise-grade security and compliance features. These paid features, outlined in a Docker security blog post, focus on enhancing security and simplifying compliance for SOC 2, ISO 27001, FedRAMP, and other standards.  

Along with these security measures, Docker also provides best practices in the Docker documentation and training materials to help users learn how to secure their containers effectively. Recognizing and implementing these features reduces security risks and ensures that Docker can be a secure platform for containerized applications.

Myth #6: Docker is dead

This myth stems from the rapid growth and changes within the container ecosystem over the past decade. To keep pace with these changes, Docker is actively developed and is also widely adopted. In fact, the Stack Overflow community chose Docker as the most-used and most-desired developer tool in the 2024 Developer Survey for the second year in a row and recognized it as the most-admired developer tool. 

Docker Hub is one of the world’s largest repositories of container images. According to the 2024 Docker State of Application Development Report, tools like Docker Desktop, Docker Scout, Docker Build Cloud, and Docker Debug are integral to more than two-thirds of container development workflows. And, as a founding member of the OCI and steward of the Moby project, Docker continues to play a guiding role in containerization.

In the automation space, Docker is crucial for building OCI images and creating lightweight runners for build queues. With the rise of data science and AI/ML, Docker images facilitate the exchange of models, notebooks, and applications, supported by GPU workload capabilities in Docker Desktop. Additionally, Docker is widely used for quickly and cost-effectively mocking up test scenarios as an alternative to deploying actual hardware or VMs.

Myth #7: Docker is hard to learn

The belief that Docker is difficult to learn often comes from the perceived complexity of container concepts and Docker’s many features. However, Docker is a foundational technology used by more than 20 million developers worldwide, and countless resources are available to make learning Docker accessible.

Docker, Inc. is committed to the developer experience, creating intuitive and user-friendly product design for Docker Desktop and supporting products. Documentation, workshops, training, and examples are accessible through Docker Desktop, the Docker website and blog, and the Docker Navigator newsletter. Additionally, the Docker documentation site offers comprehensive guides and learning paths, and Udemy courses co-produced with Docker help new users understand containerization and Docker usage.

The thriving Docker community also contributes a wealth of content and resources, including video tutorials, how-tos, and in-person talks.

Myth #8: Docker and container technology are only for developers

The idea that Docker is only for developers is a common misconception. Docker and containers are used across various fields beyond development. Docker Desktop’s ability to run containerized workloads on Windows, macOS, or Linux requires minimal technical knowledge from users. Its integration features — synchronized host filesystems, network proxy support, air-gapped containers, and resource controls — ensure administrators can enforce governance and security.

  • Data science: Docker provides consistent environments, enabling data scientists to share models, datasets, and development setups seamlessly.
  • Healthcare: Docker deploys scalable applications for managing patient data and running simulations, such as medical imaging software across different hospital systems.
  • Education: Educators and students use Docker to create reproducible research environments, which facilitate collaboration and simplify coding project setups.

Docker’s versatility extends beyond development, providing consistent, scalable, and secure environments for various applications.

Myth #9: Docker Desktop is just a GUI

The myth that Docker Desktop is merely a graphical user interface (GUI) overlooks its extensive features designed to enhance developer experience, streamline container management, and accelerate productivity, such as:

Cross-platform support

Docker is Linux-based, but most developer workstations run Windows or macOS. Docker Desktop enables these platforms to run Docker tooling inside a fully managed VM integrated with the host system’s networking, filesystem, and resources.

Developer tools

Docker Desktop includes built-in Kubernetes, Docker Scout for supply chain management, Docker Build Cloud for faster builds, and Docker Debug for container debugging.

Security and governance

For administrators, Docker Desktop offers Registry Access Management and Image Access Management, Enhanced Container Isolation, single sign-on (SSO) for authorization, and Settings Management, making it an essential tool for enterprise deployment and management.

Myth #10: Docker containers are for microservices only

Although Docker containers are popular for microservices architectures, they can be used for any type of application. For example, monolithic applications can be containerized, allowing them and their dependencies to be isolated into a versioned image that can run across different environments. This approach enables gradual refactoring into microservices if desired.

Additionally, Docker is excellent for rapid prototyping, allowing quick deployment of minimum viable products (MVPs). Containerized prototypes are easier to manage and refactor compared to those deployed on VMs or bare metal.

Now you know

Now that you have the facts, it’s clear that adopting Docker can significantly enhance productivity, scalability, and security for a variety of use cases. Docker’s versatility, combined with extensive learning resources and robust security features, makes it an indispensable tool in modern software development and deployment.

For more detailed insights, refer to the 2024 Docker State of Application Development Report, or dive into Docker Desktop now to start your Docker journey today.

Learn more

Docker for Web Developers: Getting Started with the Basics

17 September 2024 at 20:38

Docker is known worldwide as a popular application containerization platform. But it also has a lesser-known and intriguing alter ego. It’s a popular go-to platform among web developers for its speed, flexibility, broad user base, and collaborative capabilities. 

Docker has been growing as a modern solution that brings innovation to web development using containerization. With containers, developers and web development projects can become more efficient, save time, and drive fresh creativity. Web developers use Docker for development because it ensures consistency across different environments, eliminating the “it works on my machine” problem. Docker also simplifies dependency management, enhances resource efficiency, supports scalable microservices architectures, and allows for rapid deployment and rollback, making it an indispensable tool for modern web development projects.

In this post, we dive into the benefits of using Docker in businesses from small to large, and review Docker’s broad capabilities, strengths, and features for bolstering web development and developer productivity. 

2400x1260 docker for web developers

What is Docker?

Docker is secure, out-of-the-box containerization software offering developers and teams a robust, hybrid toolkit to develop, test, monitor, ship, deploy, and run enterprise and web applications. Containerization lets developers separate their applications from infrastructure so they can run them without worrying about what is installed on the host, giving development teams flexibility and collaborative advantages over virtual machines, while delivering better source code faster. 

The Docker suite enables developers to package and run their application code in lightweight, local, standardized containers that have everything needed to run the application — including the runtime, system libraries, and any required services. Docker allows developers to run many containers simultaneously on a host, while also allowing the containers to be shared with others. By working within this collaborative workspace, productive and direct communications can thrive and development processes become easier, more accurate, and more secure. Many of the components in Docker are open source, including Docker Compose, BuildKit, the Docker command-line interface (Docker CLI), containerd, and more.

As the #1 containerization software for developers and teams, Docker is well-suited for all flavors of development. Highlights include: 

  • Docker Hub: The world’s largest repository of container images, which helps developers and open source contributors find, use, and share their Docker-inspired container images.
  • Docker Compose: A tool for defining and running multi-container applications.
  • Docker Engine: An open source containerization technology for building and containerizing applications.
  • Docker Desktop: Includes the Docker Engine and other open source components; proprietary components; and features such as an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, and security features that support Enhanced Container Isolation (ECI), air-gapped containers, and administrative settings management.
  • Docker Build Cloud: A Docker service that lets developers build their container images on a cloud infrastructure that ensures fast builds anywhere for all team members. 

What is a container?

Containers are lightweight, standalone, executable packages of software that include everything needed to run an application: code, runtime, libraries, environment variables, and configuration files. Containers are isolated from each other and can be connected to networks or storage and can be used to create new images based on their current states. 

Docker containers are faster and more efficient for software creation than virtualization, which uses a resource-heavy software abstraction layer on top of computer hardware. Additionally, Docker containers require fewer physical hardware resources than virtual machines and communicate with their host systems through well-defined channels.

Why use Docker for web applications?

Docker is a popular choice for developers building enterprise applications for various reasons, including consistent environments, efficient resource usage, speed, container isolation, scalability, flexibility, and portability. And, Docker is popular for web development for these same reasons. 

Consistent environments

Using Docker containers, web developers can build web applications that provide consistent environments from their development all the way through to production. By including all the components needed to run an application within an isolated container, Docker addresses those issues by allowing developers to produce and package their containers and then run them through various development, testing, and production environments to ensure their quality, security, and performance. This approach helps developers prevent the common and frustrating “but it works on my machine” conundrum, assuring that the code will run and perform well anywhere, from development through deployment.
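
For instance, here is a minimal Dockerfile sketch for a hypothetical Node.js web app. It pins an exact base image and installs locked dependencies so every environment, from a laptop to production, builds the same thing:

# Pin an exact base image so every environment starts from the same foundation
FROM node:20-alpine
WORKDIR /app

# Install dependencies exactly as recorded in package-lock.json
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and define how it runs
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]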

Efficiency in using resources

With its lightweight architecture, Docker uses system resources more efficiently than virtual machines, allowing developers to run more applications on the same hardware. Docker containers allow multiple containers to run on a single host and gain resource efficiency due to the isolation and allocation features that containers incorporate. Additionally, containers require less memory and disk space to perform their tasks, saving on hardware costs and making resource management easier. Docker also saves development time by allowing container images to be reused as needed. 

Speed

Docker’s design and components also give developers significant speed advantages in setting up and tearing down container environments, allowing needed processes to be completed in seconds due to its lightweight and flexible application architecture. This allows developers to rapidly iterate their containerized applications, increasing their productivity for writing, building, testing, monitoring, and deploying their creations.  

Isolation

Docker’s application isolation capabilities provide huge benefits for developers, allowing them to write code and build their containers and applications simultaneously, with changes made in one not affecting the others. For developers, these capabilities allow them to find and isolate any bad code before using it elsewhere, improving security and manageability.

Scalability, flexibility, and portability

Docker’s flexible platform design also gives developers broad capabilities to easily scale applications up or down based on demand, while also allowing them to be deployed across different servers. These features give developers the ability to manage different workloads and system resources as needed. And, its portability features mean that developers can create their applications once and then use them in any environment, further ensuring their reliability and proper operation through the development cycle to production.
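
As a small illustration, Docker Compose can scale a service locally with a single flag, assuming a compose.yaml that defines a web service which does not pin a fixed host port:

$ docker compose up -d --scale web=3   # start three replicas of the web service
$ docker compose ps                    # list the running replicas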

How web developers use Docker

There is a wide range of Docker use cases for today’s web developers, including its flexibility as a local development environment that can be quickly set up to match desired production environments; as an important partner for microservices architectures, where each service can be developed, tested, and deployed independently; or as an integral component in continuous integration and continuous deployment (CI/CD) pipelines for automated testing and deployment.

Other important Docker use cases include the availability of a strong and knowledgeable user community to help drive developer experiences and skills around containerization; its importance and suitability for vital cross-platform production and testing; and deep resources and availability for container images that are usable for a wide range of application needs. 
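
As one hedged example of the CI/CD use case, a GitHub Actions workflow can build and push an image using Docker’s official actions; the repository secrets, action versions, and image tag below are placeholders to adapt to your own setup:

# .github/workflows/docker-build.yml (sketch)
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: myorg/web-app:latest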

Get started with Docker for web development (in 6 steps)

So, you want to get a Docker container up and running quickly? Let’s dive in using the Docker Desktop GUI. In this example, we will use the Docker version for Microsoft Windows, but there are also Docker versions for Mac and many flavors of Linux.

Step 1: Install Docker Desktop

Start by downloading the installer from the docs or from the release notes.

Double-click Docker Desktop for Windows Installer.exe to run the installer. By default, Docker Desktop is installed at C:\Program Files\Docker\Docker.

When prompted, choose either the WSL 2 or the Hyper-V option on the configuration page, depending on your preferred backend. If your system only supports one of the two options, you will not be able to select which backend to use.

Follow the instructions on the installation wizard to authorize the installer and proceed with the installation. When the installation is successful, select Close to complete the installation process.

Step 2: Create a Dockerfile

A Dockerfile is a text-based file containing the instructions that Docker follows, step by step, to build a container image. The file is named Dockerfile, with no file extension. In Docker’s getting-started tutorial, for example, it is created in the getting-started-app directory, which is also where the package.json file is found.

A Dockerfile contains details about the container’s operating system, file locations, environment, dependencies, configuration, and more. Check out the useful Docker best practices documentation for creating quality Dockerfiles. 

Here is a basic Dockerfile example for setting up an Apache web server.

Create a Dockerfile in your project:

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

Next, run the commands to build and run the Docker image:

$ docker build -t my-apache2 .
$ docker run -dit --name my-running-app -p 8080:80 my-apache2

Visit http://localhost:8080 to see it working.

Step 3: Build your Docker image

The Dockerfile that was just created allows us to start building our first Docker container image. The docker build command initiated in the previous step builds the new Docker image using the Dockerfile and its related “context,” which is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. Docker images begin with a base image, which is pulled from a registry the first time a new image is built.
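
For reference, the build step from the Apache example looks like this; the trailing dot tells Docker to use the current directory as the build context, and you can then confirm the image exists:

$ docker build -t my-apache2 .     # "." is the build context
$ docker image ls my-apache2       # verify the image was created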

Step 4: Run your Docker container

To run a new container, start with the docker run command, which runs a command in a new container. The command pulls the image if needed and then starts the container. By default, when you create or run a container using docker create or docker run, the container does not expose any of its ports to the outside world. To make a port available to services outside of Docker, you must use the --publish or -p flag. This creates a firewall rule on the host, mapping a container port to a port on the Docker host so it can be reached from the outside world.
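
Continuing the Apache example, here is a hedged sketch of running the image and checking on the resulting container:

$ docker run -dit --name my-running-app -p 8080:80 my-apache2   # detached, host port 8080 mapped to container port 80
$ docker ps --filter "name=my-running-app"                      # confirm the container is running
$ docker logs my-running-app                                    # inspect the server output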

Step 5: Access your web application

To access a web application running inside a Docker container, you need to publish the container’s port to the host. This can be done using the docker run command with the --publish or -p flag, in the format [host_port]:[container_port].

Here is an example of how to run a container and publish its port using the Docker CLI:

$ docker run -d -p 8080:80 docker/welcome-to-docker

In this command, the first 8080 refers to the host port. This is the port on your local machine that will be used to access the application running inside the container. The second 80 refers to the container port, which is the port the application inside the container listens on for incoming connections. Hence, the command binds port 8080 on the host to port 80 in the container.

After running the container with the published port, you can access the web application by opening a web browser and visiting http://localhost:8080.

You can also use Docker Compose to run the container and publish its port. Here is an example of a compose.yaml file that does this:

services:
 app:
   image: docker/welcome-to-docker
   ports:
     - 8080:80

After creating this file, you can start the application with the docker compose up command. Then, you can access the web application at http://localhost:8080.

Step 6: Make changes and update

Updating a Docker application in a container takes several steps. From the command-line interface, use the docker stop command to stop the container, then remove it with the docker rm (remove) command. Next, start a new, updated container with a fresh docker run command. The old container must be stopped and removed before it can be replaced because it is still bound to its host port (port 3000 in Docker’s getting-started example), and only one process on the machine, containers included, can listen on a specific port at a time.
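
Using the Apache example from the earlier steps, the update cycle looks roughly like this:

$ docker stop my-running-app                                     # stop the old container, freeing its host port
$ docker rm my-running-app                                       # remove it
$ docker build -t my-apache2 .                                   # rebuild the image with your changes
$ docker run -dit --name my-running-app -p 8080:80 my-apache2    # start the updated container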

Conclusion

In this blog post, we learned about how Docker brings valuable benefits to web developers to speed up and improve their operations and creativity, and we touched on how web developers can get started with the platform on Day One, including basic instructions on setting up Docker quickly to start using it for web development.

Docker delivers streamlined workflows for web development due to its lightweight architecture and broad collaboration, application design, scalability, and other benefits. Docker expands the capabilities of web application developers, giving them flexible tools for everything from building better code to testing, monitoring, and deploying reliable code more quickly. 

Subscribe to our newsletter to stay up-to-date about Docker and its latest uses and innovations. 

Learn more
