Today — 22 November 2024

ASUSTOR Flashstor Gen2 NAS features AMD Ryzen Embedded V3C14, 10GbE networking, up to 12x NVMe SSD sockets

22 November 2024 at 10:25
AMD Ryzen V3C14 NAS

ASUSTOR Flashstor 6 Gen2 and Flashstor 12 Pro Gen2 are NAS systems based on the AMD Ryzen Embedded V3C14 quad-core processor, with up to two 10GbE RJ45 ports and room for up to 6 or 12 M.2 NVMe SSDs respectively. The Flashstor Gen2 models are updates to the ASUSTOR Flashstor NAS family launched last year with similar specifications, including 10GbE and up to 12 M.2 SSDs, but based on a relatively low-end Intel Celeron N5105 quad-core Jasper Lake processor. The new Gen2 NAS family features a more powerful AMD Ryzen Embedded V3C14 SoC, support for up to 64GB of RAM with ECC, and USB4 ports. The downside is that it lacks video output, so it can’t be used for 4K/8K video consumption like its predecessor.

Flashstor Gen2 NAS specifications:

  • SoC – AMD Ryzen Embedded V3C14 quad-core/8-thread processor @ 2.3/3.8GHz; TDP: 15W
  • System Memory – Flashstor 6 Gen2 (FS6806X) – 8 GB DDR5-4800; Flashstor 12 Pro [...]

The post ASUSTOR Flashstor Gen2 NAS features AMD Ryzen Embedded V3C14, 10GbE networking, up to 12x NVMe SSD sockets appeared first on CNX Software - Embedded Systems News.

Important Announcement: Free Shipping Policy Adjustment and Service Enhancements

By: Rachel
22 November 2024 at 09:48

Thank you for your continued support! To further enhance your shopping experience, we are rolling out a series of new features designed to provide more efficient and localized services. Additionally, we will be updating our current free shipping policy. Please take note of the following important updates:


Recent Logistics Enhancements

Between June and October 2024, we implemented several key logistics upgrades to enhance service quality and lay the groundwork for upcoming policy adjustments:

1. Expanded Shipping Options

  • US Warehouse: Added UPS 2-Day, FedEx, and UPS Ground for faster shipping choices.
  • CN Warehouse: Introduced an air-transport direct-line small-package service, reducing delivery times from 20-35 days to just 7-10 days.

2. Optimized Small Parcel Shipping and Cost Control

  • Adjusted packaging specifications for CN warehouse shipments, significantly lowering costs while improving shipping efficiency.

3. Accelerated Overall Delivery Times

  • Streamlined export customs clearance from Shenzhen and synchronized handoffs with European and American couriers, reducing delivery times to just 3.5 days.

Enhanced Local Services for a Better Shopping Experience

To meet the diverse needs of our global users, we’ve implemented several improvements in local purchasing, logistics, and tax services:

1. Local Warehouse Pre-Order

  • Launch Date: Already Live
  • Highlights: Pre-order popular products from our US Warehouse and DE Warehouse. If immediate stock is needed, you can switch to the CN Warehouse for purchase.

2. Enhanced VAT Services for EU Customers

  • Launch Date: Already Live
  • Highlights: New VAT ID verification feature allows EU customers to shop tax-free with a valid VAT ID.

3. US Warehouse Sales Tax Implementation

  • Launch Date: January 1, 2025
  • Highlights: Automatic calculation of sales tax to comply with US local tax regulations.

Free Shipping Policy Adjustment

Starting December 31, 2024, our current free shipping policy (CN warehouse orders over $150, US & DE warehouse orders over $100) will no longer apply.

We understand that this change may cause some inconvenience in the short term. However, our aim is to offer more flexible and efficient shipping options, ensuring your shopping experience is more personalized and seamless.


Listening to Your Suggestions for Continuous Improvement

We understand that excellent logistics service stems from listening to every customer’s needs. During the optimization process, we received valuable suggestions, such as:

  • Adding a local warehouse in Australia to provide faster delivery for customers in that region.
  • Improving packaging designs to enhance protection during transit.
  • Supporting flexible delivery schedules, allowing customers to choose delivery times that work best for them.

We welcome your continued input! Starting today, submit your feedback via our Feedback Form, and receive coupon rewards for all adopted suggestions.


Important Reminder: Free Shipping Policy End Date

  • Current free shipping policy will officially end on December 31, 2024.
  • Plan your purchases in advance to enjoy the remaining free shipping benefits!

We are also working on future logistics enhancements and may introduce region-specific free shipping or special holiday promotions, so stay tuned!


Thank You for Your Support

Your trust and support inspire Seeed Studio to keep innovating. We remain focused on improving localized services, listening to your needs, and delivering a more convenient and efficient shopping experience.

If you have any questions, please don’t hesitate to contact our customer support team. Together, let’s move towards a smarter, more efficient future!

The post Important Announcement: Free Shipping Policy Adjustment and Service Enhancements appeared first on Latest Open Tech From Seeed.

Nice File Performance Optimizations Coming With Linux 6.13

22 November 2024 at 09:02
In addition to the pull requests managed by Microsoft engineer Christian Brauner for VFS untorn (atomic) writes with XFS and EXT4, tmpfs case-insensitive file/folder support, new Rust file abstractions, and the renewed multi-grain timestamps work, another interesting Linux 6.13 pull submitted by Brauner revolves around VFS file enhancements...

Friday Five — November 22, 2024

22 November 2024 at 07:00
Red Hat Enterprise Linux AI Brings Greater Generative AI Choice to Microsoft Azure: RHEL AI expands the ability of organizations to streamline AI model development and deployment on Microsoft Azure to fast-track AI innovation in the cloud. Learn more

Technically Speaking | How open source can help with AI transparency: Explore the challenges of transparency in AI and how open source development processes can help create a more open and accessible future for AI. Learn more

ZDNet - Red Hat's new OpenShift delivers AI, edge and security enhancements: Red Hat introduces new capabilities for Red Hat O

NVIDIA JetPack 6.1 Boosts Performance and Security through Camera Stack Optimizations and Introduction of Firmware TPM

22 November 2024 at 05:01
Connected icons show the workflow.

NVIDIA JetPack has continuously evolved to offer cutting-edge software tailored to the growing needs of edge AI and robotic developers. With each release, JetPack has enhanced its performance, introduced new features, and optimized existing tools to deliver increased value to its users. This means that your existing Jetson Orin-based products experience performance optimizations by upgrading to…


Mesa 24.3 Released With Many Open-Source Vulkan Driver Improvements

22 November 2024 at 00:20
Mesa 24.3 has managed to make it out today, one week ahead of the previous release plans due to the lack of any major blocker bugs appearing. Mesa 24.3 has a lot of new feature work on the contained open-source Vulkan drivers as well as evolutionary improvements to their OpenGL drivers and other user-space 3D driver code...

Using Python with virtual environments | The MagPi #148

22 November 2024 at 00:17

Raspberry Pi OS comes with Python pre-installed, and you need to use virtual environments to install additional packages. The latest issue of The MagPi, out today, features this handy tutorial, penned by our documentation lead Nate Contino, to get you started.

Raspberry Pi OS comes with Python 3 pre-installed. Interfering with the system Python installation can cause problems for your operating system. When you install third-party Python libraries, always use the correct package-management tools.

On Linux, you can install Python dependencies in two ways:

  • use apt to install pre-configured system packages
  • use pip to install libraries using Python’s dependency manager in a virtual environment

It is possible to create virtual environments inside Thonny as well as from the command line

Install Python packages using apt

Packages installed via apt are packaged specifically for Raspberry Pi OS. These packages usually come pre-compiled, so they install faster. Because apt manages dependencies for all packages, installing with this method includes all of the sub-dependencies needed to run the package. And apt ensures that you don’t break other packages if you uninstall.

For instance, to install the Python 3 library that supports the Raspberry Pi Build HAT, run the following command:

$ sudo apt install python3-build-hat

To find Python packages distributed with apt, use apt search. In most cases, Python packages use the prefix python- or python3-: for instance, you can find the numpy package under the name python3-numpy.

Install Python libraries using pip

In older versions of Raspberry Pi OS, you could install libraries directly into the system version of Python using pip. Since Raspberry Pi OS Bookworm, this is no longer possible.

Attempting to install packages with pip causes an error in Raspberry Pi OS Bookworm

Instead, install libraries into a virtual environment (venv). To install a library at the system level for all users, install it with apt.

Attempting to install a Python package system-wide outputs an error similar to the following:

$ pip install buildhat
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    For more information visit http://rptl.io/venv

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Python users have long dealt with conflicts between OS package managers like apt and Python-specific package management tools like pip. These conflicts include both Python-level API incompatibilities and conflicts over file ownership.

Starting in Raspberry Pi OS Bookworm, packages installed via pip must be installed into a Python virtual environment (venv). A virtual environment is a container where you can safely install third-party modules so they won’t interfere with your system Python.
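As an aside (not part of the original tutorial), the environments you create with `python -m venv` can also be created from Python itself via the standard-library venv module. A minimal sketch:

```python
import os
import tempfile
import venv

# Create a virtual environment with the standard-library venv module,
# equivalent to running `python -m venv <dir>` on the command line.
tmp = tempfile.mkdtemp()
env_dir = os.path.join(tmp, "env")
venv.create(env_dir, with_pip=False)  # with_pip=False skips bootstrapping pip

# Every venv is marked by a pyvenv.cfg file next to its own bin/ directory.
has_marker = os.path.exists(os.path.join(env_dir, "pyvenv.cfg"))
print("created venv:", has_marker)
```

The `pyvenv.cfg` marker file is what the interpreter checks at startup to decide it is running inside a virtual environment.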

Use pip with virtual environments

To use a virtual environment, create a container to store the environment. There are several ways you can do this depending on how you want to work with Python:

per-project environments

Create a virtual environment in a project folder to install packages local to that project

Many users create separate virtual environments for each Python project. Locate the virtual environment in the root folder of each project, typically with a shared name like env. Run the following command from the root folder of each project to create a virtual environment configuration folder:

$ python -m venv env

Before you work on a project, run the following command from the root of the project to start using the virtual environment:

$ source env/bin/activate

You should then see a prompt similar to the following:

(env) $

When you finish working on a project, run the following command from any directory to leave the virtual environment:

$ deactivate

per-user environments

Instead of creating a virtual environment for each of your Python projects, you can create a single virtual environment for your user account. Activate that virtual environment before running any of your Python code. This approach can be more convenient for workflows that share many libraries across projects.

When creating a virtual environment for multiple projects across an entire user account, consider locating the virtual environment configuration files in your home directory. Store your configuration in a folder whose name begins with a period to hide the folder by default, preventing it from cluttering your home folder.

Add a virtual environment to your home directory to use it in multiple projects and share the packages

Use the following command to create a virtual environment in a hidden folder in the current user’s home directory:

$ python -m venv ~/.env

Run the following command from any directory to start using the virtual environment:

$ source ~/.env/bin/activate

You should then see a prompt similar to the following:

(.env) $

To leave the virtual environment, run the following command from any directory:

$ deactivate

Create a virtual environment

Run the following command to create a virtual environment configuration folder, replacing <env-name> with the name you would like to use for the virtual environment (e.g. env):

$ python -m venv <env-name>

Enter a virtual environment

Then, execute the bin/activate script in the virtual environment configuration folder to enter the virtual environment:

$ source <env-name>/bin/activate

You should then see a prompt similar to the following:

(<env-name>) $

The (<env-name>) command prompt prefix indicates that the current terminal session is in a virtual environment named <env-name>.

To check that you’re in a virtual environment, use pip list to view the list of installed packages:

(<env-name>) $ pip list
Package    Version
---------- -------
pip        23.0.1
setuptools 66.1.1

The list should be much shorter than the list of packages installed in your system Python. You can now safely install packages with pip. Any packages you install with pip while in a virtual environment only install to that virtual environment. In a virtual environment, the python or python3 commands automatically use the virtual environment’s version of Python and installed packages instead of the system Python.
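Not part of the MagPi tutorial itself, but a quick way to confirm from inside Python that a virtual environment is active: in a venv, `sys.prefix` points at the environment, while `sys.base_prefix` still points at the underlying system installation.

```python
import sys

# In a virtual environment sys.prefix differs from sys.base_prefix;
# in the plain system interpreter the two are equal.
in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
```

This check is handy in scripts that should refuse to run (or warn) when accidentally launched with the system interpreter.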

Top Tip
Pass the --system-site-packages flag before the folder name to preload all of the currently installed packages in your system Python installation into the virtual environment.
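The tip above maps directly onto the venv module's EnvBuilder API: as a sketch (again using only the standard library), `system_site_packages=True` is the programmatic equivalent of the `--system-site-packages` flag, and the choice is recorded in the environment's pyvenv.cfg file.

```python
import os
import tempfile
import venv

# system_site_packages=True mirrors `python -m venv --system-site-packages <dir>`
env_dir = os.path.join(tempfile.mkdtemp(), "env")
venv.EnvBuilder(system_site_packages=True, with_pip=False).create(env_dir)

# The setting is persisted in pyvenv.cfg, which the interpreter reads at startup.
with open(os.path.join(env_dir, "pyvenv.cfg")) as f:
    cfg = f.read()
print("include-system-site-packages = true" in cfg.lower())
```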

Exit a virtual environment

To leave a virtual environment, run the following command:

(<env-name>) $ deactivate

Use the Thonny editor

We recommend Thonny for editing Python code on the Raspberry Pi.

By default, Thonny uses the system Python. However, you can switch to using a Python virtual environment by clicking on the interpreter menu in the bottom right of the Thonny window. Select a configured environment or configure a new virtual environment with Configure interpreter.

The MagPi #148 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Using Python with virtual environments | The MagPi #148 appeared first on Raspberry Pi.

Meet the CapibaraZero, a multifunctional security and hacking tool based on the Nano ESP32

22 November 2024 at 00:13

In recent years, tools such as the FlipperZero have become quite popular amongst hobbyists and security professionals alike for their small size and wide array of hacking tools. Inspired by the functionality of the FlipperZero, Project Hub user ‘andreockx’ created a similar multi-radio tool named the CapibaraZero, which has the same core abilities and even a little more.

The project uses an Arduino Nano ESP32 as its processor and as a way to provide Wi-Fi, Bluetooth Low-Energy, and human interface features. The chipset can scan for nearby Wi-Fi networks, present fake captive portals, prevent other devices from receiving IP addresses through DHCP starvation, and even carry out ARP poisoning attacks. Andre’s inclusion of a LoRa radio module further differentiates his creation by letting it transmit information in the sub-GHz spectrum over long distances. And lastly, the PN532 RFID module can read encrypted MiFare NFC tags and crack them through brute force.

The Nano ESP32, wireless radios, and a LiPo battery with its charging module were all attached to a custom PCB mainboard, while five additional buttons were connected via a secondary daughterboard, before the entire assembly was placed into a 3D-printed case.

For more details about the CapibaraZero, you can read Andre’s write-up here on the Project Hub.

The post Meet the CapibaraZero, a multifunctional security and hacking tool based on the Nano ESP32 appeared first on Arduino Blog.

SoundSlide capacitive touch USB-C adapter aims to ease volume control on laptops

22 November 2024 at 00:01
USB-C capacitive touch volume control

SoundSlide is an open-source hardware USB-C adapter that adds a capacitive touch interface to your laptop or PC to control the volume without having to reach for the volume keys on the keyboard, which may require Alt or Fn presses. SoundSlide is meant to be more intuitive than pressing keys and works without drivers on macOS, Windows, and Linux. At just 20.9 x 6.9 x 3.5 mm in size excluding the USB Type-C port, you can leave it connected to your laptop when you move around or put the laptop in your backpack. The SoundSlide relies on the touch interface of the Microchip SAM D11 Arm Cortex-M0+ microcontroller, and the company behind the project – Drake Labs – has made the firmware, schematics (PDF/WebP), and a command-line interface written in Go available on GitHub. You can check out how it works on a laptop in the [...]

The post SoundSlide capacitive touch USB-C adapter aims to ease volume control on laptops appeared first on CNX Software - Embedded Systems News.

Yesterday — 21 November 2024

Unlocking New Possibilities in Cloud Deployment with Arm at KubeCon NA 2024

21 November 2024 at 23:00

As developers and platform engineers seek greater performance, efficiency, and scalability for their workloads, Arm-based cloud services provide a powerful and trusted solution. At KubeCon NA 2024, we had the pleasure of meeting many of these developers face-to-face to showcase Arm solutions as they migrate to Arm.   

Today, all major hyperscalers, including Amazon Web Services (AWS), Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure (OCI), offer Arm-based servers optimized for modern cloud-native applications. This shift offers a significant opportunity for organizations to improve price-performance ratios, deliver a lower total cost of ownership (TCO), and meet sustainability goals, while gaining access to a robust ecosystem of tools and support.  

At KubeCon NA, it was amazing to hear those in the Arm software ecosystem share their migration stories and the new possibilities they’ve unlocked.

Arm from cloud to edge at KubeCon

Building on Arm unlocks a wide range of options from cloud to edge. It enables developers to run their applications seamlessly in the cloud, while tapping into the entire Arm software and embedded ecosystem and respective workflows. 

Arm-based servers are now integrated across leading cloud providers, making them a preferred choice for many organizations looking to enhance their infrastructure. At KubeCon NA 2024, attendees learned about the latest custom Arm compute offerings available from major cloud service providers including: 

  • AWS Graviton series for enhanced performance and energy efficiency; 
  • Microsoft Azure Arm-based VMs for scalable, cost-effective solutions; 
  • Google Cloud’s Tau T2A instances for price-performance optimization; and 
  • OCI Ampere A1 Compute for flexible and powerful cloud-native services.

Developer kit based on the Ampere Altra SoC

Ampere showcased their Arm-based hardware in multiple form factors across different partner booths at the show to demonstrate how the Arm compute platform is enabling server workloads both in the cloud and on premises.

System76 ‘s Thelio Astra, an Arm64 developer desktop, featuring Ampere Altra processors, was also prominently displayed in booths across the KubeCon NA show floor. The workstation is streamlining developer workflows for Linux development and deployment across various markets, including automotive and IoT.

System76’s Thelio Astra

During the show, the Thelio Astra showcased its IoT capabilities by aggregating and processing audio sensor data from Arduino devices to assess booth traffic. This demonstrated cloud-connected IoT workloads in action. 

Arm Cloud-to-edge workloads with Arm-based compute from Arduino endpoints to Ampere servers running lightweight Fermyon Kubernetes and WASM

Migrating to Arm has never been easier

Migrating workloads to Arm-based servers is more straightforward than ever. Today, 95% of graduated CNCF (Cloud Native Computing Foundation) projects are optimized for Arm, ensuring seamless, efficient, and high-performance execution.  

Companies of all sizes visited the Arm booth at KubeCon NA to tell us about their migration journey and learn how to take advantage of the latest developer technologies. They included leading financial institutions, global telecommunications providers and large retail brands.  

For developers ready to add multi-architecture support to their deployments, we demonstrated a new tool – kubearchinspect – that can be deployed on a Kubernetes cluster and scan container images to check for Arm compatibility. Check out our GitHub repo to get started and learn how to validate Arm support for your container images.

Hundreds of independent software vendors (ISVs) are enabling their applications and services on Arm, with developers easily monitoring application performance and managing their workloads via the Arm Software Dashboard.  

For developers, the integration of GitHub Actions, GitHub Runners, and the soon-to-be-available Arm extension for GitHub Copilot means a seamless cloud-native CI/CD workflow is now fully supported on Arm. Graduated projects can scale using cost-effective Arm runners, while incubating projects benefit from lower pricing and improved support from open-source Arm runners.

Extensive Arm ecosystem and Kubernetes support 

As Kubernetes continues to grow, with 5.6 million developers worldwide, expanding the contributor base is essential to sustaining the cloud-native community and supporting its adoption in technology stacks. Whether developers are using AWS EKS, Azure AKS, or OCI’s Kubernetes service, Arm is integrated to provide native support. This enables the smooth deployment and management of containerized applications. 

Scaling AI workloads and optimizing complex inference pipelines can be challenging across different architectures. Developers can deploy their AI models across distributed infrastructure, seamlessly integrating with the latest AI frameworks to enhance processing efficiency.  

Through a demonstration at the Arm booth, Pranay Bakre, a Principal Solutions Engineer at Arm, showcased AI over Kubernetes. This brought together the Kubernetes, Prometheus, and Grafana open-source projects into a power-efficient, real-time, scalable sentiment analysis application. More information about how to enable real-time sentiment analysis on Arm Neoverse-based Kubernetes clusters can be found in this Arm Community blog.

Pranay Bakre explains the real-time sentiment analysis demo at KubeCon NA 2024

Pranay also showcased the ability to run AKS on the very latest Arm Neoverse-powered Microsoft Azure Cobalt 100 processors. To jumpstart running Kubernetes via AKS on Microsoft Azure Cobalt 100, check out this learning path and the corresponding GitHub repo.

Additionally, at KubeCon NA 2024, we launched a pilot expansion of our “Works on Arm” program into the CNCF community. This offers comprehensive resources to help scale and optimize cloud-native projects on the Arm architecture. Developers can click here to take a short survey and request to be included in this new initiative.

Switch to Arm for smarter deployment and scalable performance 

As demonstrated at KubeCon 2024, Arm is transforming cloud-native deployment and accelerating the developer migration to Arm. 

In fact, now is the perfect time to harness Arm-based cloud services for better performance, lower costs, and scalable flexibility. Developers can start building or migrating today to deploy smarter, optimized cloud-native applications on Arm, for Arm. 

Developers are welcome to join us at KubeCon Europe in April 2025 to learn more about our latest advancements in platform engineering and cloud-native technologies.

The post Unlocking New Possibilities in Cloud Deployment with Arm at KubeCon NA 2024 appeared first on Arm Newsroom.

Pine64 Unveils PineCam with RISC-V SG2000 SoC and 2MP Camera

21 November 2024 at 22:41
The Pine64 November update introduces the PineCam, a successor to the PineCube IP camera. With a redesigned structure and enhanced features, the PineCam is aimed at applications like monitoring, video streaming, and hardware experimentation. The device is built on the SG2000 System-on-Chip from the Oz64 single board computer covered in October. This SoC combines two […]

WordPress 6.7.1 Maintenance Release

21 November 2024 at 21:56

WordPress 6.7.1 is now available!

This minor release features 16 bug fixes throughout Core and the Block Editor.

WordPress 6.7.1 is a fast-follow release with a strict focus on bugs introduced in WordPress 6.7. The next major release will be version 6.8, planned for April 2025.

If you have sites that support automatic background updates, the update process will begin automatically.

You can download WordPress 6.7.1 from WordPress.org, or visit your WordPress Dashboard, click “Updates”, and then click “Update Now”.

For more information on this release, please visit the HelpHub site. You can find a summary of the maintenance updates in this release in the Release Candidate announcement.

Thank you to these WordPress contributors

This release was led by Jonathan Desrosiers and Carlos Bravo.

WordPress 6.7.1 would not have been possible without the contributions of the following people. Their asynchronous coordination to deliver maintenance fixes into a stable release is a testament to the power and capability of the WordPress community.

abcsun, Adam Silverstein, Ahsan Khan, Aki Hamano, Alexander Bigga, Andrew Ozz, Ankit Kumar Shah, Antoine, bluantinoo, Carlos Bravo, Carolina Nymark, charleslf, Christoph Daum, David Smith, dhewercorus, Dhruvang21, Dilip Bheda, dooperweb, Eshaan Dabasiya, Felix Arntz, finntown, Firoz Sabaliya, George Mamadashvili, glynnquelch, Greg Ziółkowski, Himanshu Pathak, jagirbahesh, Jarda Snajdr, Jb Audras, Jeffrey Paul, Joe Dolson, Joe McGill, John Blackbourn, Jonathan Desrosiers, Jon Surrell, Julie Moynat, Julio Potier, laurelfulford, Lee Collings, Lena Morita, luisherranz, Matias Benedetto, Mayank Tripathi, Michal Czaplinski, Miguel Fonseca, miroku, Mukesh Panchal, Narendra Sishodiya, Nik Tsekouras, Oliver Campion, Pascal Birchler, Peter Wilson, ramonopoly, Ravi Gadhiya, Rishi Mehta, room34, Roy Tanck, Ryo, sailpete, Sainath Poojary, Sarthak Nagoshe, Sergey Biryukov, SirLouen, S P Pramodh, Stephen Bernhardt, stimul, Sukhendu Sekhar Guria, TigriWeb, Tim W, tobifjellner (Tor-Bjorn “Tobi” Fjellner), Vania, Yogesh Bhutkar, YoWangdu, Zargarov, and zeelthakkar.

How to contribute

To get involved in WordPress core development, head over to Trac, pick a ticket, and join the conversation in the #core and #6-8-release-leads channels. Need help? Check out the Core Contributor Handbook.

Thanks to @marybaum, @aaroncampbell, @jeffpaul, @audrasjb, @cbravobernal, @ankit-k-gupta for proofreading.

What Are the Latest Docker Desktop Enterprise-Grade Performance Optimizations?

21 November 2024 at 21:34


At Docker, we’re continuously enhancing Docker Desktop to meet the evolving needs of enterprise users. Since Docker Desktop 4.23, where we reduced startup time by 75%, we’ve made significant investments in both performance and stability. These improvements are designed to deliver a faster, more reliable experience for developers across industries. (Read more about our previous performance milestones.)

In this post, we walk through the latest performance enhancements.


Latest performance enhancements

Boost performance with Docker VMM on Apple Silicon Mac

Apple Silicon Mac users, we’re excited to introduce Docker Virtual Machine Manager (Docker VMM) — a powerful new virtualization option designed to enhance performance for Docker Desktop on M1 and M2 Macs. Currently in beta, Docker VMM gives developers a faster, more efficient alternative to the existing Apple Virtualization Framework for many workflows (Figure 1). Docker VMM is available starting in the Docker Desktop 4.35 release.

Screenshot of Docker Desktop showing Virtual Machine Options including Docker VMM (beta), Apple Virtualization Framework, and QEMU (legacy).
Figure 1: Docker virtual machine options.

Why try Docker VMM?

If you’re running native ARM-based images on Docker Desktop, Docker VMM offers a performance boost that could make your development experience smoother and more efficient. With Docker VMM, you can:

  • Experience faster operations: Docker VMM shows improved speeds on essential commands like git status and others, especially when caches are built up. In our benchmarks, Docker VMM eliminates certain slowdowns that can occur with the Apple Virtualization framework.
  • Enjoy flexibility: Not sure if Docker VMM is the right fit? No problem! Docker VMM is still in beta, so you can switch back to the Apple Virtualization framework at any time and try Docker VMM again in future releases as we continue optimizing it.

What about emulated Intel images?

If you’re using Rosetta to emulate Intel images, Docker VMM may not be the ideal choice for now, as it currently doesn’t support Rosetta. For workflows requiring Intel emulation, the Apple Virtualization framework remains the best option, as Docker VMM is optimized for native Arm binaries.

Key benchmarks: Real-world speed gains

Our testing reveals significant improvements when using Docker VMM for common commands, including git status:

  • Initial git status: Docker VMM outperforms, with the first run significantly faster compared to the Apple Virtualization framework (Figure 2).
  • Subsequent git status: With Docker VMM, subsequent runs are also speedier due to more efficient caching (Figure 3).

With Docker VMM, you can say goodbye to frustrating delays and get a faster, more responsive experience right out of the gate.

Graph comparison of git status times for cold caches between the Apple Virtualization Framework (~27 seconds) and Docker VMM (slightly under 10 seconds).
Figure 2: Initial git status times.
Graph comparison of git status times for warm caches between the Apple Virtualization Framework (~3 seconds) and Docker VMM (less than 1 second).
Figure 3: Subsequent git status times.

Say goodbye to QEMU

For users who may have relied on QEMU, note that we’re transitioning it to legacy support. Docker VMM and Apple Virtualization Framework now provide superior performance options, optimized for the latest Apple hardware.

Docker Desktop for Windows on Arm

For specific workloads, particularly those involving parallel computing or Arm-optimized tasks, Arm64 devices can offer significant performance benefits. With Docker Desktop now supporting Windows on Arm, developers can take advantage of these performance boosts while maintaining the familiar Docker Desktop experience, ensuring smooth, efficient operations on this architecture.
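If you are unsure which OS/architecture your Docker engine is actually running on, the CLI can tell you directly. A quick check, assuming only a working Docker installation:

```shell
# Report the engine's platform, e.g. linux/arm64 under Docker Desktop
# on Windows on Arm or on Apple silicon.
if docker info >/dev/null 2>&1; then
  platform=$(docker version --format '{{.Server.Os}}/{{.Server.Arch}}')
else
  platform="(docker not available)"   # fallback so the sketch runs anywhere
fi
echo "engine platform: $platform"
```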

Synchronized file shares

Unlike traditional file-sharing mechanisms that can suffer from performance degradation with large projects or frequent file changes, the synchronized file shares feature offers a more stable and performant alternative. It uses efficient synchronization processes to ensure that changes made to files on the host are rapidly reflected in the container, and vice versa, without the bottlenecks or slowdowns experienced with older methods.

This feature is a major performance upgrade for developers who work with shared files between the host and container. It reduces the performance issues related to intensive file system operations and enables smoother, more responsive development workflows. Whether you’re dealing with frequent file changes or working on large, complex projects, synchronized file sharing improves efficiency and ensures that your containers and host remain in sync without delays or excessive resource usage.

Key highlights of synchronized file sharing include:

  • Selective syncing: Developers can choose specific directories to sync, avoiding unnecessary overhead from syncing unneeded files or directories.
  • Faster file changes: It significantly reduces the time it takes for changes made in the host environment to be recognized and applied within containers.
  • Improved performance with large projects: This feature is especially beneficial for large projects with many files, as it minimizes the file-sharing latency that often accompanies such setups.
  • Cross-platform support: Synchronized file sharing is supported on both macOS and Windows, making it versatile across platforms and providing consistent performance.

The synchronized file shares feature is available in Docker Desktop 4.27 and newer releases.
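From the container's point of view nothing changes: a synchronized file share is created in Docker Desktop (under Settings > Resources > File sharing), and any ordinary bind mount whose host path falls under a shared directory picks up the faster sync path automatically. A minimal illustration, using a temporary directory as a stand-in for a project folder you have shared:

```shell
# A plain bind mount; if the host path is covered by a synchronized
# file share, the same command transparently uses the faster sync path.
src=$(mktemp -d)
echo "hello from the host" > "$src/hello.txt"
if docker info >/dev/null 2>&1; then
  msg=$(docker run --rm -v "$src:/app" -w /app alpine cat hello.txt)
else
  msg=$(cat "$src/hello.txt")   # fallback when Docker is unavailable
fi
echo "$msg"
rm -rf "$src"
```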

GA for Docker Desktop on Red Hat Enterprise Linux (RHEL)

Red Hat Enterprise Linux (RHEL) is known for its high-performance capabilities and efficient resource utilization, which is essential for developers working with resource-intensive applications. Docker Desktop on RHEL enables enterprises to fully leverage these optimizations, providing a smoother, faster experience from development through to production. Moreover, RHEL’s robust security framework ensures that Docker containers run within a highly secure, certified operating system, maintaining strict security policies, patch management, and compliance standards — vital for industries like finance, healthcare, and government.

Continuous performance improvements in every Docker Desktop release

At Docker, we are committed to delivering continuous performance improvements with every release. Recent updates to Docker Desktop have introduced the following optimizations across file sharing and network performance:

  • Advanced VirtioFS optimizations: The performance journey continued in Docker Desktop 4.33 with further fine-tuning of VirtioFS. We increased the directory cache timeout, optimized host change notifications, and removed extra FUSE operations related to security.capability attributes. Additionally, we introduced an API to clear caches after container termination, enhancing overall file-sharing efficiency and container lifecycle management.
  • Faster read and write operations on bind mounts: In Docker Desktop 4.32, we further enhanced VirtioFS performance by optimizing read and write operations on bind mounts. These changes improved I/O throughput, especially when dealing with large files or high-frequency file operations, making Docker Desktop more responsive and efficient for developers.
  • Enhanced caching for faster performance: Continuing with performance gains, Docker Desktop 4.31 brought significant improvements to VirtioFS file sharing by extending attribute caching timeouts and improving invalidation processes. This reduced the overhead of constant file revalidation, speeding up containerized applications that rely on shared files.
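You can get a rough feel for bind-mount throughput on your own machine by writing the same amount of data through a bind mount (VirtioFS on macOS) and to the container's local filesystem. This is a coarse sketch, not Docker's benchmark methodology, and absolute numbers vary by machine and Docker Desktop version:

```shell
# Compare a 64 MB write through a bind mount vs. container-local storage.
# dd prints its own throughput figures to stderr.
host=$(mktemp -d)
if docker info >/dev/null 2>&1; then
  echo "bind mount:"
  docker run --rm -v "$host:/mnt" alpine dd if=/dev/zero of=/mnt/t bs=1M count=64
  echo "container-local:"
  docker run --rm alpine dd if=/dev/zero of=/tmp/t bs=1M count=64
else
  echo "docker not available; skipping"
fi
rm -rf "$host"
```

The closer the two figures are, the less overhead the file-sharing layer is adding on your setup.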

Why these updates matter for you

Each update to Docker Desktop is focused on improving speed and reliability, ensuring it scales effortlessly with your infrastructure. Whether you’re using RHEL, Apple Silicon, or Windows Arm, these performance optimizations help you work faster, reduce downtime, and boost productivity. Stay current with the latest updates to keep your development environment running at peak efficiency.

Share your feedback and help us improve

We’re always looking for ways to enhance Docker Desktop and make it the best tool for your development needs. If you have feedback on performance, ideas for improvement, or issues you’d like to discuss, we’d love to hear from you. Feel free to reach out and schedule time to chat directly with a Docker Desktop Product Manager via Calendly.

Learn more
