
Accelerating AI Development with the Docker AI Catalog

12 November 2024 at 21:38

Developers are increasingly expected to integrate AI capabilities into their applications, but they also face many challenges: the steep learning curve, coupled with an overwhelming array of tools and frameworks, makes the process tedious. Docker aims to bridge this gap with the Docker AI Catalog, a curated experience designed to simplify AI development and empower both developers and publishers.

Why Docker for AI?

Docker and container technology have been key tools for developers at the forefront of AI applications for the past few years. Now, Docker is doubling down on that effort with our AI Catalog. Developers using Docker’s suite of products are often responsible for building, deploying, and managing complex applications, and now they must also navigate generative AI (GenAI) technologies, such as large language models (LLMs), vector databases, and GPU support.

For developers, the AI Catalog simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. This approach removes the hassle of evaluating numerous tools and configurations, allowing developers to focus on building innovative AI applications.

Key benefits for development teams

The Docker AI Catalog is tailored to help users overcome common hurdles in the evolving AI application development landscape, such as:

  • Decision overload: The GenAI ecosystem is crowded with new tools and frameworks. The Docker AI Catalog simplifies the decision-making process by offering a curated list of trusted content and container images, so developers don’t have to wade through endless options.
  • Steep learning curve: With the rise of new technologies like LLMs and retrieval-augmented generation (RAG), the learning curve can be overwhelming. Docker provides an all-in-one resource to help developers quickly get up to speed.
  • Complex configurations preventing production readiness: Running AI applications often requires specialized hardware configurations, especially with GPUs. Docker’s AI stacks make this process more accessible, ensuring that developers can harness the full power of these resources without extensive setup.

The result? Shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Empowering publishers

For Docker verified publishers, the AI Catalog provides a platform to differentiate themselves in a crowded market. Independent software vendors (ISVs) and open source contributors can promote their content, gain insights into adoption, and improve visibility to a growing community of AI developers.

Key features for publishers include:

  • Increased discoverability: Publishers can highlight their AI content within a trusted ecosystem used by millions of developers worldwide.
  • Metrics and insights: Verified publishers gain valuable insights into the performance of their content, helping them optimize strategies and drive engagement.

Unified experience for AI application development

The AI Catalog is more than just a repository of AI tools. It’s a unified ecosystem designed to foster collaboration between developers and publishers, creating a path forward for more innovative approaches to building applications supported by AI capabilities. Developers get easy access to essential AI tools and content, while publishers gain the visibility and feedback they need to thrive in a competitive marketplace.

With Docker’s trusted platform, development teams can build AI applications confidently, knowing they have access to the most relevant and reliable tools available.

The road ahead: What’s next?

Docker will launch the AI Catalog in preview on November 12, 2024, alongside a joint webinar with MongoDB. This initiative will further Docker’s role as a leader in AI application development, ensuring that developers and publishers alike can take full advantage of the opportunities presented by AI tools.

Stay tuned for more updates and prepare to dive into a world of possibilities with the Docker AI Catalog. Whether you’re an AI developer seeking to streamline your workflows or a publisher looking to grow your audience, Docker has the tools and support you need to succeed.

Ready to simplify your AI development process? Explore the AI Catalog and get access to trusted content that will accelerate your development journey. Start building smarter, faster, and more efficiently.

For publishers, now is the perfect time to join the AI Catalog and gain visibility for your content. Become a trusted source in the AI development space and connect with millions of developers looking for the right tools to power their next breakthrough.

Learn more

Using AI Tools to Convert a PDF into Images

4 November 2024 at 21:01

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

How do you turn a PDF into a set of images? A modern suggestion could be to get Claude or GPT to write you a CLI command for something like ImageMagick. Let’s follow that train of thought.

If we enter a prompt like “How do I turn a PDF into a bunch of images using ImageMagick,” we will indeed get something that looks like it might be a solution.

Imagemagick includes a command-line tool called convert which you can use to convert PDF files into images. Here's how you can do it:

1. Open your terminal

2. Change directory to where your PDF is located:

```
cd /path/to/your/pdf

```
3. Use the following command to convert the PDF into images (in this case JPEG):

```
convert -density 300 -quality 100 your_file.pdf output_name.jpg

``` 

The agent will then explain how convert works and finish with a suggestion to “please make sure that you have installed ImageMagick before running these commands.”

Instead of worrying about whether ImageMagick is installed, however, let’s use Docker as our tool provider and prompt the LLM to go ahead and run the command. If you’re following along in this series, you’ll have seen that we are using Markdown files to mix together tools and prompts. Here’s the first prompt we tried:

---
tools:
  - name: imagemagick
---
# prompt user

Use Imagemagick to convert the family.pdf file into a bunch of jpg images.

After executing this prompt, the LLM generated a tool call, which we executed in the Docker runtime, and it successfully converted family.pdf into nine .jpg files (my family.pdf file had nine pages). 

Figure 1 shows the flow from our VSCode Extension.

Animated VSCode workflow showing the process of converting PDFs to images.
Figure 1: Workflow from VSCode Extension.

We have given enough context to the LLM that it is able to plan a call to this ImageMagick binary. And, because this tool is available on Docker Hub, we don’t have to “make sure that ImageMagick is installed.” This would be the equivalent command if you were to use docker run directly:

# family.pdf must be located in your $PWD

docker run --rm -v $PWD:/project --workdir /project vonwig/imagemagick:latest convert -density 300 -quality 300 family.pdf family.jpg

The tool ecosystem

How did this work? The process relied on two things:

  • Tool distribution and discovery (pulling tools into Docker Hub for distribution to our Docker Desktop runtime).
  • Automatic generation of Agent Tool interfaces.

When we first started this project, we expected that we’d begin with a small set of tools because the interface for each tool would take time to design. We thought we were going to need to bootstrap an ecosystem of tools that had been prepared to be used in these agent workflows. 

However, we learned that we can use a much more generic approach. Most tools already come with documentation, such as command-line help, examples, and man pages. Instead of treating each tool as something special, we are using an architecture where an agent responds to failures by reading documentation and trying again (Figure 2).

Illustration of circular process showing "Run tool" leading to "Capture errors" leading to "Read docs" in a continuous loop.
Figure 2: Agent process.

We see a process of experimenting with tools that is not unlike what we, as developers, do on the command line. Try a command line, read a doc, adjust the command line, and try again.

The value of this kind of looping has changed our expectations. Step one is simply pulling the tool into Docker Hub and seeing whether the agent can use it with nothing more than its out-of-the-box documentation. We are also pulling open source software (OSS) tools directly from nixpkgs, which gives us access to tens of thousands of different tools to experiment with.

Docker keeps our runtimes isolated from the host operating system, while the nixpkgs ecosystem and maintainers provide a rich source of OSS tools.
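
To make Figure 2 concrete, here is a minimal sketch of that loop, assuming Docker is available locally. The propose_args callable stands in for the point where the real system asks the LLM to plan the next tool call, and the helper names and the --help probe are illustrative rather than our actual implementation:

```
import subprocess

def run_tool(image: str, args: list[str], workdir: str = ".") -> subprocess.CompletedProcess:
    """Run a containerized tool against the current project directory."""
    cmd = ["docker", "run", "--rm", "-v", f"{workdir}:/project",
           "--workdir", "/project", image, *args]
    return subprocess.run(cmd, capture_output=True, text=True)

def read_docs(image: str) -> str:
    """Ask the tool itself how to use it (here, via --help)."""
    return run_tool(image, ["--help"]).stdout

def run_with_retries(image: str, propose_args, task: str, max_attempts: int = 3) -> str:
    """The loop from Figure 2: run the tool, capture errors, read docs, try again."""
    feedback = ""
    for _ in range(max_attempts):
        args = propose_args(task, feedback)  # the LLM plans the next attempt
        result = run_tool(image, args)
        if result.returncode == 0:
            return result.stdout
        # Feed the failure plus the tool's own documentation into the next attempt.
        feedback = f"command failed:\n{result.stderr}\n\nusage:\n{read_docs(image)}"
    raise RuntimeError(f"gave up after {max_attempts} attempts:\n{feedback}")
```

In practice, the planning step is a model call, but the shape of the loop (run, capture the failure, read the docs, try again) stays the same.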

As expected, agents still run into packaging issues that force us to re-plan how tools are packaged. For example, the prompt we showed above might have generated the correct tool call on the first try, but the ImageMagick container failed on the first run with this terrible-looking error message:

function call failed call exited with non-zero code (1): Error: sh: 1: gs: not found  

Fortunately, feeding that error back into the LLM resulted in the suggestion that convert needs another tool, called Ghostscript, to run successfully. Our agent was not able to fix this automatically today. However, we adjusted the image build slightly, and the latest version of vonwig/imagemagick no longer has this issue. This is an example of something we only need to learn once.

The LLM figured out convert on its own. But its agency came from the addition of a tool.

Read the Docker Labs GenAI series to see more of what we’ve been working on.

Learn more

Using Docker AI Tools for Devs to Provide Context for Better Code Fixes

21 October 2024 at 20:00

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

At Docker Labs, we’ve been exploring how LLMs can connect different parts of the developer workflow, bridging gaps between tools and processes. A key insight is that LLMs excel at fixing code issues when they have the right context. To provide this context, we’ve developed a process that maps out the codebase using linting violations and the structure of top-level code blocks. 

By combining these elements, we teach the LLM to construct a comprehensive view of the code, enabling it to fix issues more effectively. By leveraging containerization, integrating these tools becomes much simpler.

Previously, my linting process felt a bit disjointed. I’d introduce an error, run Pylint, and receive a message that was sometimes cryptic, forcing me to consult Pylint’s manual to understand the issue. When OpenAI released ChatGPT, the process improved slightly. I could run Pylint, and if I didn’t grasp an error message, I’d copy the code and the violation into GPT to get a better explanation. Sometimes, I’d ask it to fix the code and then manually paste the solution back into my editor.

However, this approach still required several manual steps: copying code, switching between applications, and integrating fixes. How might we improve this process?

Docker’s AI Tools for Devs prompt runner is an architecture that allows us to integrate tools like Pylint directly into the LLM’s workflow through containerization. By containerizing Pylint and creating prompts that the LLM can use to interact with it, we’ve developed a system where the LLM can access the necessary tools and context to help fix code issues effectively.

Understanding the cognitive architecture

For the LLM to assist effectively, it needs a structured way of accessing and processing information. In our setup, the LLM uses the Docker prompt runner to interact with containerized tools and the codebase. The project context is extracted using tools such as Pylint and Tree-sitter that run against the project. This context is then stored and managed, allowing the LLM to access it when needed.

By having access to the codebase, linting tools, and the context of previous prompts, the LLM can understand where problems are, what they are, and have the right code fragments to fix them. This setup replaces the manual process of finding issues and feeding them to the LLM with something automatic and more engaging.

Streamlining the workflow

Now, within my workflow, I can ask the assistant about code quality and violations directly. The assistant, powered by an LLM, has immediate access to a containerized Pylint tool and a database of my code through the Docker prompt runner. This integration allows the LLM to use tools to assist me directly during development, making the programming experience more efficient.

This approach helps us rethink how we interact with our tools. By enabling a conversational interface with tools that map code to issues, we’re exploring possibilities for a more intuitive development experience. Instead of manually finding problems and feeding them to an AI, we can convert our relationship with tools themselves to be conversational partners that can automatically detect issues, understand the context, and provide solutions.

Walking through the prompts

Our project is structured around a series of prompts that guide the LLM through the tasks it needs to perform. These prompts are stored in a Git repository, where they can be versioned, tracked, and shared; they form the backbone of the project, allowing the LLM to interact with tools and the codebase effectively. Each prompt corresponds to a specific task in the workflow, and Docker containers ensure a consistent environment for running tools and scripts.

Workflow steps

An immediate and existential challenge we encountered was that this class of problem has a lot of opportunities to overwhelm the context of the LLM. Want to read a source code file? It has to be small enough to read. Need to work on more than one file? Your realistic limit is three to four files at once. To solve this, we can instruct the LLM to automate its own workflow with tools, where each step runs in a Docker container.

Again, each step in this workflow runs in a Docker container, which ensures a consistent and isolated environment for running tools and scripts. The first four steps prepare the agent to be able to extract the right context for fixing violations. Once the agent has the necessary context, the LLM can effectively fix the code issues in step 5.

1. Generate violations report using Pylint:

Run Pylint to produce a violation report.

2. Create a SQLite database:

Set up the database schema to store violation data and code snippets (one possible schema is sketched after these steps).

3. Generate and run INSERT statements:

  • Decouple violations from the range they represent.
  • Use a script to convert every violation and range from the report into SQL insert statements.
  • Run the statements against the database to populate it with the necessary data.

4. Index code in the database:

  • Generate an abstract syntax tree (AST) of the project with Tree-sitter (Figure 1).
Screenshot of syntax tree, showing files, with detailed look at Example .py.parsed.
Figure 1: Generating an abstract syntax tree.
  • Find all second-level nodes (Figure 2). In Python’s grammar, second-level nodes are statements inside of a module.
Expanded look at Example .py.parsed with highlighted statements.
Figure 2: Extracting content for the database.
  • Index these top-level ranges into the database.
  • Populate a new table to store the source code at these top-level ranges (see the indexing sketch after these steps).

5. Fix violations based on context:

Once the agent has gathered and indexed the necessary context, use prompts to instruct the LLM to query the database and fix the code issues (Figure 3).

Illustration of instructions, for example, to "fix the violation "some violation" which occurs in file.py on line 1" with information on the function it occurs in.
Figure 3: Instructions for fixing violations.

Each step from 1 to 4 builds the foundation for step 5, where the LLM, with the proper context, can effectively fix violations. The structured preparation ensures that the LLM has all the information it needs to address code issues with precision.
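
To make steps 1 through 3 more concrete, here is a rough, hypothetical sketch of loading a Pylint JSON report into SQLite. The database file, table names, and columns are illustrative only; the real project defines its own schema:

```
import json
import sqlite3
import subprocess

# Step 1: run Pylint and capture the violation report as JSON
# (pylint supports --output-format=json).
report = subprocess.run(
    ["pylint", "--output-format=json", "cloned_repo/naming_conventions/block_listed_name.py"],
    capture_output=True, text=True,
).stdout
violations = json.loads(report or "[]")

# Step 2: one possible schema for violations and top-level code ranges.
db = sqlite3.connect("context.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS violations (
    path TEXT, symbol TEXT, message TEXT, message_id TEXT,
    line INTEGER, col INTEGER, end_line INTEGER, end_col INTEGER
);
CREATE TABLE IF NOT EXISTS code_ranges (
    path TEXT, start_line INTEGER, end_line INTEGER, source TEXT
);
""")

# Step 3: turn every violation and the range it covers into an INSERT statement.
db.executemany(
    "INSERT INTO violations VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [
        (v["path"], v["symbol"], v["message"], v["message-id"],
         v["line"], v["column"], v.get("endLine"), v.get("endColumn"))
        for v in violations
    ],
)
db.commit()
```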
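
Step 4 can be sketched in the same spirit, continuing with the db connection and the illustrative code_ranges table from the previous snippet. This uses the tree_sitter and tree_sitter_python packages, whose exact API varies between releases, so treat it as an outline rather than the project’s actual indexer:

```
from pathlib import Path

import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))

def index_top_level_ranges(db, path: str) -> None:
    """Step 4: parse a file and store each second-level node (a statement
    inside the module) as a top-level range along with its source text."""
    source = Path(path).read_bytes()
    tree = parser.parse(source)
    for node in tree.root_node.children:   # second-level nodes
        start = node.start_point[0] + 1    # tree-sitter rows are 0-based
        end = node.end_point[0] + 1
        db.execute("INSERT INTO code_ranges VALUES (?, ?, ?, ?)",
                   (path, start, end, node.text.decode("utf8")))
    db.commit()

index_top_level_ranges(db, "cloned_repo/naming_conventions/block_listed_name.py")

# Step 5 then asks the LLM to generate queries like this one, which fetches the
# whole enclosing range for a violation on line 60 instead of a single line.
row = db.execute(
    "SELECT source FROM code_ranges WHERE path = ? AND start_line <= ? AND end_line >= ?",
    ("cloned_repo/naming_conventions/block_listed_name.py", 60, 60),
).fetchone()
```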

Refining the context for LLM fixes

To understand how our system improves code fixes, let’s consider a specific violation flagged by Pylint. Say we receive a message that there’s a violation on line 60 of our code file block_listed_name.py:

{
  "type": "convention",
  "module": "block_listed_name",
  "obj": "do_front",
  "line": 60,
  "column": 4,
  "endLine": 60,
  "endColumn": 7,
  "path": "cloned_repo/naming_conventions/block_listed_name.py",
  "symbol": "disallowed-name",
  "message": "Disallowed name \"foo\"",
  "message-id": "C0104"
}

From this Pylint violation, we know that the variable foo is a disallowed name. However, if we tried to ask the LLM to fix this issue based solely on this snippet of information, the response wouldn’t be as effective. Why? The LLM lacks the surrounding context — the full picture of the function in which this violation occurs.

This is where indexing the codebase becomes essential

Because we’ve mapped out the codebase, we can now ask the LLM to query the index and retrieve the surrounding code that includes the do_front function. The LLM can even generate the SQL query for us, thanks to its knowledge of the database schema. Once we’ve retrieved the full function definition, the LLM can work with a more complete view of the problem:

def do_front(front_filename, back_filename):
    """
    Front strategy: loop over front image,
    detect blue pixels there,
    substitute in pixels from back.
    Return changed front image.
    """
    foo = SimpleImage(front_filename)
    back = SimpleImage(back_filename)
    for y in range(foo.height):
        for x in range(foo.width):
            pixel = foo.get_pixel(x, y)
            # Detect blue pixels in front and replace with back pixels
            if pixel[2] > 2 * max(pixel[0], pixel[1]):
                back_pixel = back.get_pixel(x, y)
                foo.set_pixel(x, y, back_pixel)
    return foo

Now that the LLM can see the whole function, it’s able to propose a more meaningful fix. Here’s what it suggests after querying the indexed codebase and running the fix:

def do_front(front_filename, back_filename):
    """
    Front strategy: loop over front image,
    detect blue pixels there,
    substitute in pixels from back.
    Return changed front image.
    """
    front_image = SimpleImage(front_filename)
    back_image = SimpleImage(back_filename)
    for y in range(front_image.height):
        for x in range(front_image.width):
            pixel = front_image.get_pixel(x, y)
            # Detect blue pixels in front and replace with back pixels
            if pixel[2] > 2 * max(pixel[0], pixel[1]):
                back_pixel = back_image.get_pixel(x, y)
                front_image.set_pixel(x, y, back_pixel)
    return front_image

Here, the variable foo has been replaced with the more descriptive front_image, making the code more readable and understandable. The key step was providing the LLM with the correct level of detail — the top-level range — instead of just a single line or violation message. With the right context, the LLM’s ability to fix code becomes much more effective, which ultimately streamlines the development process.

Remember, all of this information is retrieved and indexed by the LLM itself through the prompts we’ve set up. Through this series of prompts, we’ve reached a point where the assistant has a comprehensive understanding of the codebase. 

At this stage, not only can I ask for a fix, but I can even ask questions like “what’s the violation at line 60 in naming_conventions/block_listed_name.py?” and the assistant responds with:

On line 60 of naming_conventions/block_listed_name.py, there's a violation: Disallowed name 'foo'. The variable name 'foo' is discouraged because it doesn't convey meaningful information about its purpose.

Although Pylint has been our focus here, this approach points to a new conversational way to interact with many tools that map code to issues. By integrating LLMs with containerized tools through architectures like the Docker prompt runner, we can enhance various aspects of the development workflow.

We’ve learned that combining tool integration, cognitive preparation of the LLM, and a seamless workflow can significantly improve the development experience, allowing an LLM to use tools to help directly while we develop.

To follow along with this effort, check out the GitHub repository for this project.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Using an AI Assistant to Read Tool Documentation

23 September 2024 at 20:41

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

Using new tools on the command line can be frustrating. Even if we are confident that we’ve found the right tool, we might not know how to use it.

Telling an agent to RT(F)M

A typical workflow might look something like the following.

  • Install tool.
  • Read the documentation.
  • Run the command.
  • Repeat.

Can we improve this flow using LLMs?

Install tool

Docker provides us with isolated environments to run tools. Instead of requiring that commands be installed, we have created minimal Docker images for each tool so that using the tool does not impact the host system. Leave no trace, so to speak.

Read the documentation

Man pages are one of the ways that authors of tools ship content about how to use that tool. This content also comes with standard retrieval mechanisms (the man tool). A tool might also support a command-line option like --help. Let’s start with the idealistic notion that we should be able to retrieve usage information from the tool itself.

In this experiment, we’ve created two entry points for each tool. The first entry point is the obvious one. It is a set of arguments passed directly to a command-line program. The OpenAI-compatible description that we generate for this entry point is shown below. We are using the same interface for every tool.

{"name": "run_my_tool",
   "description": "Run the my_tool command.",
   "parameters":
   {"type": "object",
    "properties":
    {"args":
     {"type": "string",
      "description": "The arguments to pass to my_tool"}}},
   "container": {"image": "namespace/my_tool:latest"}}

The second entry point gives the agent the ability to read the man page and, hopefully, improve its ability to run the first one. It is simpler because it does only one thing: it asks a tool how to use it.

{"name": "my_tool_manual",
   "description": "Read the man page for my_tool",
   "container": {"image": "namespace/my_tool:latest", "command": ["man"]}}

Run the command

Let’s start with a simple example. We want to use a tool called qrencode to generate a QR code for a link. We have used our image generation pipeline to package this tool into a minimal image for qrencode. We will now pass this prompt to a few different LLMs; we are using LLMs that have been trained for tool calling (e.g., GPT 4, Llama 3.1, and Mistral). Here’s the prompt that we are testing:

Generate a QR code for the content https://github.com/docker/labs-ai-tools-for-devs/blob/main/prompts/qrencode/README.md. Save the generated image to qrcode.png.
If the command fails, read the man page and try again.

Note the optimism in this prompt. Because it’s hard to predict what different LLMs have already seen in their training sets, and many command-line tools use common names for arguments, it’s interesting to see what an LLM will infer before adding the man page to the context.

The output of the prompt is shown below. Grab your phone and check it out.

Black and white QR code generated by AI assistant.
Figure 1: Content QR code generated by AI assistant.

Repeat

When an LLM generates a description of how to run something, it will usually format that output in such a way that it will be easy for a user to cut and paste the response into a terminal:

qrencode -o qrcode.png 'my content'

However, if the LLM is generating tool calls, we’ll see output that is instead formatted to be easier to run:

[{"function": {"arguments": "{
  \"args\": \"-o qrcode.png 'my content'\"
}"
               "name": "qrencode"}
  "id": "call_Vdw2gDFMAwaEUMgxLYBTg8MB"}]

We respond to this by spinning up a Docker container.

Running the tool as part of the conversation loop is useful even when the command fails. In Unix, there are standard ways to communicate failures. For example, we have exit codes, and stderr streams. This is how tools create feedback loops and correct our behavior while we’re iterating at the terminal. This same mechanism can be used in a conversation loop involving an LLM.
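
As a rough sketch of how that response might look (the TOOL_IMAGES registry and helper function are hypothetical, and the real prompt runner handles this for us), the tool call above can be dispatched to a container and its outcome, including the exit code and stderr, handed back as a tool message:

```
import json
import shlex
import subprocess

# Hypothetical registry mapping tool names to the minimal images built for them,
# following the namespace/my_tool:latest pattern shown above.
TOOL_IMAGES = {"qrencode": "namespace/qrencode:latest"}

def handle_tool_call(call: dict, workdir: str = ".") -> dict:
    """Run one tool call in its container and report the outcome, including the
    exit code and stderr, back into the conversation as a tool message."""
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])["args"]
    result = subprocess.run(
        ["docker", "run", "--rm", "-v", f"{workdir}:/project", "--workdir", "/project",
         TOOL_IMAGES[name], *shlex.split(args)],
        capture_output=True, text=True,
    )
    content = result.stdout if result.returncode == 0 else (
        f"exit code {result.returncode}\n{result.stderr}")
    # Appending this message lets the next model turn see the failure and adjust.
    return {"role": "tool", "tool_call_id": call["id"], "content": content}
```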

To illustrate, here is another simple example. We’ll try running a tool that writes ASCII art with the following prompt.

Write ascii art with the message "Docker" using the font "helvetica".  Read the man page for figlet if you detect any errors and then try again.

In our test, this did fail. However, it also described the apparent issue on the stderr output stream.

Error: error: could not load font helvetica

By including this message in the conversation loop, the assistant can suggest different courses of action. Different LLMs produced different results here. For example, Llama 3.1 gives instructions for how to install the missing font. On the other hand, GPT 4 re-ran the command, but only after having made the “executive” decision to try a different font.

I'm sorry, it seems the font Helvetica is not available. Let's try it with the default font.

Here is the output of the figlet command with the default font (term): 

 ____             _             
|  _ \  ___   ___| | _____ _ __ 
| | | |/ _ \ / __| |/ / _ \ '__|
| |_| | (_) | (__|   <  __/ |   
|____/ \___/ \___|_|\_\___|_|   

We are very early in understanding how to take advantage of this apparent capacity to try different approaches. But this is another reason why quarantining these tools in Docker containers is useful. It limits their blast radius while we encourage experimentation.

Results

We started by creating a pipeline to produce minimal Docker images for each tool. The set of tools was selected based on whether they have outputs useful for developer-facing workflows. We continue to add new tools as we think of new use cases. The initial set is listed below.

gh pylint commitlint scalafix gitlint yamllint checkmake gqlint sqlint golint golangci-lint hadolint markdownlint-cli2 cargo-toml-lint ruff dockle clj-kondo selene tflint rslint yapf puppet-lint oxlint kube-linter csslint cpplint ansible-lint actionlint black checkov jfmt datefmt rustfmt cbfmt yamlfmt whatstyle rufo fnlfmt shfmt zprint jet typos docker-ls nerdctl diffoci dive kompose git-test kubectl fastly infracost sops curl fzf ffmpeg babl unzip jq graphviz pstree figlet toilet tldr qrencode clippy go-tools ripgrep awscli2 azure-cli luaformatter nixpkgs-lint hclfmt fop dnstracer undocker dockfmt fixup_yarn_lock github-runner swiftformat swiftlint nix-linter go-critic regal textlint formatjson5 commitmsgfmt

There was a set of initial problems with context extraction.

Missing manual pages

Only about 60% of the tools we selected have man pages. However, even in those cases, there are usually other ways to get help content. The following steps show the final procedure we used:

  • Try to run the man page.
  • Try to run the tool with the argument --help.
  • Try to run the tool with the argument -h.
  • Try to run the tool with --broken args and then read stderr.

Using this procedure, every tool in the list above eventually produced documentation.
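
Sketched in Python, that fallback procedure might look like the following; the helper names and the deliberately broken flag are placeholders, and each probe runs inside the tool’s own minimal image:

```
import subprocess

def run_in_container(image: str, command: list[str]) -> subprocess.CompletedProcess:
    """Run one command inside a tool's minimal image."""
    return subprocess.run(["docker", "run", "--rm", image, *command],
                          capture_output=True, text=True)

def get_tool_docs(image: str, tool: str) -> str:
    """Fallback chain: man page, then --help, then -h, then a deliberately
    broken argument so the tool prints usage information on stderr."""
    for command in (["man", tool], [tool, "--help"], [tool, "-h"]):
        result = run_in_container(image, command)
        if result.returncode == 0 and result.stdout.strip():
            return result.stdout
    # Last resort: provoke a usage message and read stderr.
    return run_in_container(image, [tool, "--not-a-real-flag"]).stderr
```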

Long manual pages

Limited context lengths impacted some of the longer manual pages, so it was still necessary to employ standard RAG techniques to summarize verbose man pages. Our tactic was to focus on descriptions of command-line arguments and sections that had sample usage. These had the largest impact on the quality of the agent’s output. The structure of Unix man pages helped with the chunking, because we were able to rely on standard sections to chunk the content.
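
Because those standard sections make natural chunk boundaries, the chunking can key off the all-caps headings of a rendered man page. This is a rough sketch, and the set of sections worth keeping varies by tool:

```
import re

# Rendered man pages mark sections with all-caps headings at the start of a line.
SECTION_RE = re.compile(r"^([A-Z][A-Z ()-]+)$", re.MULTILINE)
KEEP = {"SYNOPSIS", "OPTIONS", "EXAMPLES"}

def chunk_man_page(text: str) -> dict[str, str]:
    """Split rendered man-page text into sections keyed by their headings."""
    parts = SECTION_RE.split(text)
    # re.split with a capture group yields [preamble, heading, body, heading, body, ...]
    return {heading.strip(): body.strip()
            for heading, body in zip(parts[1::2], parts[2::2])}

def summarize_for_context(man_text: str) -> str:
    """Keep only the sections that most improve the agent's tool calls."""
    sections = chunk_man_page(man_text)
    return "\n\n".join(f"{name}\n{body}" for name, body in sections.items() if name in KEEP)
```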

Subcommands

For a small set of tools, it was necessary to traverse a tree of help menus. However, these were all relatively popular tools, and the LLMs we deployed already knew about this command structure. It’s easy to check this out for yourself. Ask an LLM, for example: “What are the subcommands of Git?” or “What are the subcommands of Docker?” Maybe only popular tools get big enough that they start to be broken up into subcommands.
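
Where the model (or the top-level help text) can name the subcommands, one level of that help tree can be gathered with a small sketch like the following; the tool and subcommand names are only examples:

```
import subprocess

def collect_subcommand_help(tool: str, subcommands: list[str]) -> dict[str, str]:
    """Walk one level of a tool's help tree by asking each subcommand for --help."""
    docs = {tool: subprocess.run([tool, "--help"], capture_output=True, text=True).stdout}
    for sub in subcommands:
        result = subprocess.run([tool, sub, "--help"], capture_output=True, text=True)
        docs[f"{tool} {sub}"] = result.stdout or result.stderr
    return docs

# For example: collect_subcommand_help("docker", ["run", "build", "image"])
```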

Summary

We should consider the active role that agents can play when determining how to use a tool. The Unix model has given us standards such as man pages, stderr streams, and exit codes, and we can take advantage of these conventions when asking an assistant to learn a tool. Beyond distribution, Docker also provides us with process isolation, which is useful when creating environments for safe exploration.

Whether or not an AI can successfully generate tool calls may also become a metric for whether or not a tool has been well documented.

To follow along with this effort, check out the GitHub repository for this project.

Learn more
