Seeed Studio is excited to announce a strategic partnership with Atianza Solutions. This collaboration aims to provide cutting-edge Edge AI solutions to industries across Latin America, advancing innovation in the region.
About Seeed Studio
Seeed Studio, a leading AI hardware partner, has been a pioneering Open Hardware company since 2008, empowering over half a million direct users to create real-world digital solutions. Through relentless effort and earned trust, Seeed’s ever-growing product lines now form around emerging AI scenarios such as AI Sensing, AI Robotics, Sensor Networks, and Maker. Seeed provides industrial-ready modules and devices, and opens up the capability of prototype development and production. Innovators from different vertical domains co-create with Seeed to make their creations widely available for diversified markets by embracing open source, community building, and integrated vertical industry solutions together with Seeed.
About Atianza
Atianza is a multidisciplinary team based in the heart of Santiago de Chile with more than 10 years of experience in the industry. With a presence in three countries – Chile, Colombia and Spain – Atianza bridges borders to deliver agile and personalized solutions.
Atianza specializes in:
Architecture and Engineering
Design and Usability
Management and Control
At Atianza, innovation, experience and agility come together to create an alliance towards a more connected and efficient future. It is not just about developing software, but about co-creating solutions that transform and enhance business vision.
The Partnership
Seeed and Atianza join forces to deliver comprehensive Edge AI solutions across Latin America. This partnership combines Seeed’s state-of-the-art hardware and Atianza’s tailored analytics expertise to address diverse market needs.
Key Initiatives Include:
Distribution of Seeed Studio’s Edge AI devices and development kits, including:
NVIDIA Jetson, the leading platform for robotics and embedded edge AI applications, highlighted by the Seeed Studio reComputer Industrial J3010, a fanless Edge AI device with Jetson Orin Nano 4GB
Industrial Raspberry Pi devices, highlighted by the reTerminal DM, an all-in-one Panel PC, HMI, and IIoT gateway powered by Raspberry Pi
Customized video analytics and data analytics solutions tailored for industries in Chile, Colombia, and Spain.
Together, Seeed and Atianza aim to create a more intelligent and efficient future for businesses across the region, leveraging their combined strengths in hardware, software, and integration expertise.
Get in touch at support@atianza.com today for a tailored AI solution designed to accelerate your business growth!
The OpenFlexure Microscope is a DIY, open-source, 3D-printed microscope built around the Raspberry Pi 4, a Raspberry Pi Camera Module v2, and a choice of optics of various qualities up to lab-grade optics. It can be motorized using low-cost geared stepper motors and can achieve a resolution of up to around 100 nanometers. I found out about the OpenFlexure Microscope in one of the sessions at the upcoming FOSDEM 2025 event, whose description partially reads: The OpenFlexure Microscope is an open-source laboratory-grade digital robotic microscope. As a robotic microscope, it is able to automatically scan microscope slides, creating enormous multi-gigapixel digital representations of samples. The microscope is already undergoing evaluation for malaria and cancer diagnosis in Tanzania, Rwanda, and the Philippines. As an open project, our key goal is to support local manufacturing of microscopes in low-resource settings. [..] high-quality consistent documentation has enabled thousands of microscopes to be built [...]
IBASE Technology’s INA1607 is a fanless uCPE (universal Customer Premises Equipment) and SD-WAN (Software-defined WAN) appliance powered by an Intel Atom x7405C Amston Lake processor coupled with up to 16GB ECC or non-ECC DDR5 memory. The embedded computer offers up to 64GB eMMC flash, features a 2.5-inch SATA bay, provides four 2.5GbE RJ45 ports and two dual-function GbE ports (RJ45/SFP), and supports wireless expansion for WiFi 6 or 4G LTE/5G modules through mini PCIe and M.2 slots and up to six antennas.
IBASE INA1607 specifications:
SoC – Intel Atom x7405C quad-core Amston Lake processor @ up to 2.2GHz with 6MB cache; TDP: 12W
System Memory – Up to 16GB DDR5 4800MHz ECC/Non-ECC via SODIMM socket
Storage – 16GB, 32GB, or 64GB eMMC flash; M.2 2280 B-Key 3042/3080 socket for SATA III or PCIe SSD; 2.5-inch SATA drive bay
Networking – 4x 2.5GbE RJ45 ports via Intel i226-V controllers; 2x GbE RJ45 [...]
OpenZFS 2.3 is out as stable this evening as the latest major feature release to this open-source ZFS file-system implementation used on Linux and FreeBSD systems. OpenZFS 2.3 is heavy on new features...
The Fedora Engineering and Steering Committee (FESCo) has granted approval of the change proposal for shipping Fedora Linux WSL images to enhance the user experience for those wanting to run this Linux distribution within the confines of Microsoft's Windows 11 WSL2 environment...
Last month with the launch of Intel Battlemage with the Arc B580 graphics card, there was fairly nice open-source GPU compute performance but with some outliers... Today it's a pleasure to report that with the newest open-source GPU compute stack as of this past week, there are some nice Xe2 / Battlemage improvements for enhancing the performance of some OpenCL workloads and also correcting the performance of some workloads that were in poor standing on launch day.
A vehicle’s wheel diameter has a dramatic effect on several aspects of performance. The most obvious is gearing, with larger wheels increasing the ultimate gear ratio — though transmission and transfer case gearing can counteract that. But wheel size also affects mobility over terrain, which is why Gourav Moger and Huseyin Atakan Varol’s prototype mobile robot, called Improbability Roller, has the ability to dynamically alter its wheel diameter.
If all else were equal (including final gear ratio), smaller wheels would be better, because they result in less unsprung mass. But that would only be true in a hypothetical world on perfectly flat surfaces. As the terrain becomes more irregular, larger wheels become more practical. Stairs are an extreme example and only a vehicle with very large wheels can climb stairs.
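To make the gearing trade-off concrete, here is a minimal sketch (not from the original project; the numbers are purely illustrative assumptions) of how ground speed and tractive force scale with the effective rolling diameter when the axle output stays fixed:

```python
import math

def wheel_effects(motor_rpm: float, axle_torque_nm: float, wheel_diameter_m: float):
    """Illustrative only: larger wheels travel faster per revolution but push with less force."""
    circumference = math.pi * wheel_diameter_m
    ground_speed_m_s = motor_rpm / 60.0 * circumference           # speed grows with diameter
    tractive_force_n = axle_torque_nm / (wheel_diameter_m / 2.0)  # force shrinks with diameter
    return ground_speed_m_s, tractive_force_n

# Hypothetical numbers: same axle output, two effective rolling diameters
for d in (0.10, 0.20):  # 10 cm vs 20 cm
    v, f = wheel_effects(motor_rpm=60, axle_torque_nm=1.0, wheel_diameter_m=d)
    print(f"diameter {d:.2f} m -> speed {v:.2f} m/s, tractive force {f:.1f} N")
```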
Most vehicles sacrifice either efficiency or capability through wheel size, but this robot doesn’t have to. Each of its wheels is a unique collapsing mechanism that can expand or shrink as necessary to alter the effective rolling diameter. Pulley rope actuators on each wheel, driven by Dynamixel geared motors controlled by an Arduino Mega 2560 board through a Dynamixel shield, perform that change. A single drive motor spins the wheels through a rigid gear set mounted on the axles, and a third omni wheel provides stability.
This unique arrangement has additional benefits beyond terrain accommodation. The robot can, for instance, shrink its wheels in order to fit through tight spaces. It can also increase the size of one wheel, relative to the other, to turn without a dedicated steering rack or differential drive system.
Oracle today announced the Oracle Linux Enhanced Diagnostics (OLED) as their newest project that aims to enhance the debuggability of the Linux kernel...
Morse Micro MM8108 is a new WiFi HaLow (802.11ah) SoC with a throughput of up to 43.33 Mbps and improved range and power efficiency compared to its predecessor, the Morse Micro MM6108, introduced in 2022 with support for up to 32.3 Mbps. The new chip is also smaller at just 5x5mm in a BGA package instead of 6x6mm in a QFN48 package for the MM6108/MM6104, adds a USB 2.0 host interface besides SDIO 2.0 and SPI, as well as a MIPI RFFE (Radio Frequency Front-End) interface for integration and interoperability with multi-radio systems.
Morse Micro MM8108 specifications:
32-bit RISC-V Host Applications Processor (HAP)
Single-chip IEEE 802.11ah Wi-Fi HaLow transceiver for low-power, long-reach IoT applications
Worldwide Sub-1 GHz frequency bands (850MHz to 950MHz)
On-chip 26 dBm power amplifier with support for external FEM (Front End Module) option
1/2/4/8 MHz channel bandwidth for up to 43.3 Mbps data rate using 256-QAM modulation at [...]
The "48.alpha" releases of GNOME Shell and Mutter were tagged on Sunday for this week's release of the GNOME 48 Alpha in leading up to the GNOME 48.0 stable release in mid-March...
Once again, CES 2025 in Las Vegas proved to be a premier event for showcasing the latest and greatest in technology innovation. This year’s event featured a host of groundbreaking products and announcements that will help to transform a broad range of industries.
Whether it was the significant advancements in autonomous driving, leading-edge technologies for consumer tech markets, or new partnerships, Arm’s presence at CES 2025 highlighted our dedication to driving technology innovation in the age of AI.
Arm’s landmark partnership with the Aston Martin Aramco Formula One Team
The big Arm announcement at CES 2025 was the new landmark multiyear partnership with the Aston Martin Aramco Formula One® Team, with Arm named as the team’s ‘Official AI Compute Platform Partner.’ In Formula One, Arm’s compute platform will drive advancements in AI and computing, helping Aston Martin Aramco push the boundaries of performance on and off the track.
Moreover, through this unique partnership, Arm and Aston Martin Aramco aim to:
Accelerate equity and inclusivity for the future of STEM and motorsport as described in this blog;
Encourage opportunities for women in STEM and motorsport, with Jessica Hawkins, Head of F1 Academy at Aston Martin Aramco, representing Arm as an Official Ambassador; and
Empower the next generation of engineers, racers, and innovators.
As part of a CES 2025 panel live stream, Ami Badani, Arm’s Chief Marketing Officer, and Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, sat down with Jessica and Charlie Blackwall, Head of Electronics at Aston Martin Aramco Formula One Team. They spoke about the new partnership and its commitment to driving technology innovation alongside greater equity and inclusivity in tech and motorsport.
Pioneering innovations with NVIDIA
On the Monday before the start of CES 2025, NVIDIA showcased its latest AI-based technology innovations in Jensen Huang’s keynote. Arm’s technology is playing a pivotal role in NVIDIA’s solutions for the next generation of consumer and commercial vehicles. Arm CPU cores are also central to NVIDIA’s new personal AI supercomputer that will deliver accessible high-performance AI compute to developers.
New AI capabilities for next-generation vehicles
During the NVIDIA keynote, Jensen Huang announced that the NVIDIA DRIVE AGX Thor, a centralized compute system that delivers advanced AI capabilities for a range of automotive applications, will be available for production vehicles later this year. This is the first solution to use Arm Neoverse V3AE, our first ever Neoverse CPU enhanced for automotive applications, with many leading automakers already making plans to adopt NVIDIA DRIVE AGX Thor for the next generation of software-defined vehicles (SDVs). These include Jaguar Land Rover, Mercedes-Benz and Volvo Cars.
For more details on this collaboration, you can read Dipti Vachani’s news blog here.
High-performance AI at developers’ desks
NVIDIA also introduced Project DIGITS, a new personal AI supercomputer that makes it possible for every AI developer to have a high-performance AI system on their desk. This will help to democratize access to high-performance AI computing, enabling a new wave of innovation and research.
Project DIGITS is powered by the NVIDIA GB10 Grace Blackwell Superchip, which features 20 of Arm’s leading-edge CPU cores. Working with NVIDIA and our leading software ecosystem, we cannot wait to see how this new device brings the next generation of highly innovative AI applications to market.
For more insights into Project DIGITS and the GB10 features, you can read a blog from Parag Beeraka, Senior Director, Consumer Computing in the Client Line of Business, here.
The future of automotive is built on Arm
As the big screen at the entrance to the West Hall in the Las Vegas Convention Center said, "the future of automotive is built on Arm": 94 percent of global automakers use Arm-based technology in their newest vehicle models.
On the CES 2025 showfloor, this was apparent with a range of new vehicles featuring Arm-based technology, including Honda’s new family concept car called the SUV Zero. This caught the attention of Ami Badani and Will Abbey, Arm’s Chief Commercial Officer, during their CES 2025 show walkthrough, as shown in the video below.
Honda was represented on the Arm-sponsored session “Revolutionizing the Future of Driving – Unleashing the Power of AI“, which also featured representatives from the BMW Group, Nuro and Rivian. During the session, Dipti Vachani outlined how AI is helping to revolutionize the automotive industry across three key trends:
Electrification;
Autonomy; and
The driver experience.
Hearing from the leading automotive companies represented on the panel, it was clear that scalable, consistent, power-efficient compute platforms are needed to deliver the next generation of AI-enabled SDVs.
Dipti Vachani also participated in a “Six Five On The Road at CES 2025” discussion about how Arm aims to shape innovation in the automotive industry. This covered a range of topics, from the biggest technology trends for 2025 to Arm’s role across the automotive ecosystem.
Meanwhile, Arm technology was shown to be accelerating software across leading automotive applications throughout CES 2025. Mapbox, a leading platform for powering location experiences, demoed its new virtual platform, the Virtual Head Unit (VHU), which it developed in partnership with Arm and Corellium.
This creates virtual prototypes of the Arm-based in-vehicle hardware before seamlessly integrating these with Mapbox’s navigation stack. Automotive OEMs can use the new VHU to build maps the way they want, and then test and render them their way more quickly before deployment.
Elsewhere, AWS Automotive showed how Arm optimizations supported the development of its prototype chatbot-based application for the next-generation of SDVs.
New, innovative consumer tech solutions
As with every CES, the event in Las Vegas highlighted a broad range of the latest consumer technology innovations. On the showfloor, it was difficult to escape the broad range of new TV products, including the latest AI TVs – many of which are powered by Arm technology. This also included brand-new smart displays that provide a range of information for the smart home or even images and video, like the fireplace in the video below.
However, one notable highlight for the TV market away from the showfloor was Eclipsa Audio. Developed by Google, Samsung, Arm, and the Alliance for Open Media, this new open-source technology delivers a three-dimensional (3D) audio experience, revolutionizing the way people experience sound. Through leveraging the Immersive Audio Model and Formats (IAMF), Eclipsa Audio produces immersive soundscapes, spreading audio vertically and horizontally to closely mimic natural settings.
Arm played a crucial role in the technology through optimizing the Opus codec and IAMF library to enable better performance on Arm CPUs. These enhancements ensure that Eclipsa Audio can deliver unparalleled performance across a variety of consumer devices, from high-end cinema systems to entry-level mobile devices and TVs.
For more information on Eclipsa Audio, you can read the blog here.
Two new Arm-based XR products garnered significant attention at CES 2025. ThinkAR showcased its AiLens product series, which are lightweight AR smart glasses that offer intuitive experiences enhanced by powerful edge AI capabilities. The devices are powered by Ambiq’s ultra-efficient Apollo4 SoC, which features Arm technology.
Working in conjunction with SoftBank, the AiLens will cover a variety of applications and use cases, including healthcare, workplace productivity and training, retail, navigation and travel, education and skill development, and entertainment.
Moreover, XREAL displayed its new XREAL One Series AR smart glasses. Built on Arm Cortex-A CPU technology, these AR wearables offer impressive display capabilities in a very lightweight form factor, with users able to generate 3D objects by speaking to the devices.
Elsewhere at CES 2025, MediaTek highlighted the capabilities of its Arm-based Kompanio 838 for gaming and education on Chromebook devices. For gaming, all Android games can be played on Chromebook devices built on MediaTek’s Kompanio 838 processor, providing a smooth and responsive experience for players. Meanwhile, its AI capabilities enhance the camera for high-quality image capture and “text-to-image” translation, supporting education use cases for students.
Leading OEM ASUS also demonstrated its Chromebook CZ12, which is designed as a “rugged, student-centric study mate.” The device, which is powered by the Arm-based MediaTek 520 processor, aims to provide enriched educational experiences through a robust design that is easy for students to use.
Bringing advanced AI capabilities to edge and endpoint devices
Alif Semiconductor made waves at CES 2025 by announcing the integration of Arm’s Ethos-U85 NPU into its second generation of Ensemble microcontrollers (MCUs). These new Ensemble MCUs and fusion processors are designed to support generative AI workloads, enabling advanced AI capabilities at the edge and endpoint devices. This is particularly valuable for edge AI applications focused on vision, voice, text, and sensor fusion, providing instant, accurate, and satisfying user experiences without relying on the cloud.
Arm’s standardized Ethos NPU IP was chosen for its superior performance and efficiency, as well as its broad ecosystem support.
Sustainable AI for the future
On the last day of CES 2025, Ami Badani hosted a fascinating panel discussion with representatives from Meta and NVIDIA on “the key to powering a sustainable AI revolution.” All agreed that the next frontier of AI compute will require unprecedented compute power, with Arm, Meta and NVIDIA committed to power-efficient AI technologies and software from cloud to edge. The panel also discussed how the future of AI will see different models for different levels of performance and use cases, with AI resources being delivered more efficiently as part of this sustainable AI future.
Arm technology across every corner of CES 2025
With Arm technology touching 100 percent of the connected global population, AI innovations from our global partner ecosystem were on display across every corner of CES 2025. Alongside some incredibly exciting announcements, Arm’s presence at CES 2025 set the scene for the year ahead, with the Arm compute platform at the heart of all AI experiences.
Image mode for Red Hat Enterprise Linux (RHEL) and RHEL for edge provide very similar benefits and operational workflows, and also address similar use cases. Image mode is becoming the preferred deployment method in RHEL 10, so in this article we'll go over what this means for users of RHEL for edge. If you’re following news about the upcoming RHEL 10 release (currently in beta) or about Red Hat Enterprise Linux AI (RHEL AI), you’ve heard about image mode for RHEL. And, if you’re a user of RHEL for edge, which is also a part of Red Hat Device Edge, you'll have noticed that both start from
Philip Rebohle working for Valve has just released DXVK 2.5.3 as the newest update to this Direct3D 9 / 10 / 11 implementation over the Vulkan API that is used for enjoying older Windows games on Linux...
Last year an AMD engineer proposed the notion of "Attack Vector Controls" for the Linux kernel to re-think how the CPU security mitigation handling is done and make it easier for system administrators/users to toggle the mitigations they are concerned about or not...
This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.
In previous articles, we focused on how AI-based tools can help developers streamline tasks and offered ideas for enabling agentic workflows, like reviewing branches and understanding code changes.
In this article, we’ll explore our experiments around the idea of creating a Docker AI Agent — something that could both help new users learn about our tools and products and help power users get things done faster.
During our explorations around this Docker Agent and AI-based tools, we noticed that the main pain points we encountered were often the same:
LLMs need good context to provide good answers (garbage in -> garbage out).
Using AI tools often requires context switching (moving to another app, to a different website, etc.).
We’d like agents to be able to suggest and perform actions on behalf of the users.
Direct product integrations with AI are often more satisfying to use than chat interfaces.
At first, we tried to see what’s possible using off-the-shelf services like ChatGPT or Claude.
Using test prompts such as “optimize the following Dockerfile, following all best practices” and providing the model with a sub-par but common Dockerfile, we could sometimes get decent answers. Often, though, the resulting Dockerfile had subtle bugs, hallucinations, or simply wasn’t optimized or didn’t use many of the best practices we would’ve hoped for. Thus, this approach was not reliable enough.
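For readers who want to reproduce this kind of quick test, here is a minimal sketch of what it could look like with the OpenAI Python SDK. The model name, prompt wording, and sample Dockerfile are illustrative assumptions, not Docker’s actual code:

```python
# A rough sketch of the off-the-shelf test described above (assumptions: model
# name, prompt, and Dockerfile are placeholders; OPENAI_API_KEY is set).
from openai import OpenAI

SUBPAR_DOCKERFILE = """\
FROM node:latest
COPY . .
RUN npm install
CMD node index.js
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this kind of probe
    messages=[
        {"role": "system", "content": "You are an expert at writing Dockerfiles."},
        {"role": "user", "content": "Optimize the following Dockerfile, following all best practices:\n\n" + SUBPAR_DOCKERFILE},
    ],
)
print(response.choices[0].message.content)
```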
Data ended up being the main issue. Training data for LLMs is always outdated by some amount of time, and the bad Dockerfiles you can find online vastly outnumber the up-to-date Dockerfiles that follow all the best practices.
After doing proof-of-concept tests using a RAG approach, including some documents with lots of useful advice for creating good Dockerfiles, we realized that the AI Agent idea was definitely possible. However, setting up all the things required for a good RAG would’ve taken too much bandwidth from our small team.
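As a rough illustration of what such a RAG proof of concept can look like (this is a toy sketch under assumptions, not the Docker or kapa.ai pipeline; the document snippets and model names are placeholders), the idea is simply to embed a small set of best-practice documents, retrieve the closest ones for a question, and prepend them to the prompt:

```python
# Toy RAG sketch: embed reference docs, retrieve by cosine similarity, and
# stuff the best matches into the prompt as context. All content is hypothetical.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [  # placeholder snippets standing in for curated Dockerfile guidance
    "Prefer small, pinned base images such as specific slim or alpine tags.",
    "Use multi-stage builds to keep build tools out of the final image.",
    "Combine related RUN steps and clean package caches to reduce layer size.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(DOCS)

def retrieve(question: str, k: int = 2):
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

question = "How should I structure a Dockerfile for a Node.js app?"
context = "\n".join(retrieve(question))
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```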
Because of this, we opted to use kapa.ai for that specific part of our agent. Docker already uses them to provide the AI docs assistant on Docker docs, so most of our high-quality documentation is already available for us to reference as part of our LLM usage through their service. Using kapa.ai allowed us to experiment more, get high-quality results faster, and try different ideas around the AI agent concept.
Enter Gordon
Out of this experimentation came a new product that you can try: Gordon. With Gordon, we’d like to tackle these pain points. By integrating Gordon into Docker Desktop and the Docker CLI (Figure 1), we can:
Access much more context that can be used by the LLMs to best understand the user’s questions and provide better answers or even perform actions on the user’s behalf.
Be where the users are. If you launch a container via Docker Desktop and it fails, you can quickly debug with Gordon. If you’re in the terminal hacking away, Docker AI will be there, too.
Avoid being a purely chat-based agent by providing Gordon-based features directly as part of Docker Desktop UI elements. If Gordon detects certain scenarios, like a container that failed to start, a button will appear in the UI to directly get suggestions, or run actions, etc. (Figure 2).
What Gordon can do
We want to start with Gordon by optimizing for Docker-related tasks — not general-purpose questions — but we are not excluding expanding the scope to more development-related tasks as work on the agent continues.
Work on Gordon is at an early stage and its capabilities are constantly evolving, but it’s already really good at some things (Figure 3). Here are things to definitely try out:
Ask general Docker-related questions. Gordon knows Docker well and has access to all of our documentation.
Get help debugging container build or runtime errors.
Get help optimizing Docker-related files and configurations.
Ask it how to run specific containers (e.g., “How can I run MongoDB?”).
How Gordon works
The Gordon backend lives on Docker servers, while the client is a CLI that lives on the user’s machine and is bundled with Docker Desktop. Docker Desktop uses the CLI to access the local machine’s files, asking the user for the directory each time it needs that context to answer a question. When using the CLI directly, it has access to the working directory it’s executed in. For example, if you are in a directory with a Dockerfile and you run “Docker AI, rate my Dockerfile”, it will find the one that’s present in that directory.
Currently, Gordon does not have write access to any files, so it will not edit any of your files. We’re hard at work on future features that will allow the agent to do the work for you, instead of only suggesting solutions.
Figure 4 shows a rough overview of how we are thinking about things behind the scenes.
The first step of this pipeline, “Understand the user’s input and figure out which action to perform”, is done using “tool calling” (also known as “function calling”) with the OpenAI API.
Although this is a popular approach, we noticed that the documentation online isn’t very good, and general best practices aren’t well defined yet. This led us to experiment a lot with the feature and try to figure out what works for us and what doesn’t.
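For reference, here is a minimal sketch of what tool calling with the OpenAI API looks like. The tool name, description, and schema below are hypothetical stand-ins, not Gordon’s actual tool set:

```python
# Minimal tool-calling sketch (assumptions: the "rate_dockerfile" tool and its
# schema are made up for illustration; OPENAI_API_KEY is set in the environment).
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "rate_dockerfile",
        # In-depth descriptions with examples tend to help the model pick the right tool.
        "description": (
            "Analyze a Dockerfile and report best-practice violations. "
            "Use this when the user asks to rate, review, or optimize a Dockerfile. "
            "Example: 'rate my Dockerfile'."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "dockerfile": {"type": "string", "description": "Full Dockerfile contents"},
            },
            "required": ["dockerfile"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Rate my Dockerfile:\nFROM node:latest\nCOPY . ."}],
    tools=tools,
)

# If the model decides a tool should run, it returns the tool name and JSON arguments
# instead of a plain answer; the application then executes the tool itself.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```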
Things we noticed:
Tool descriptions are important, and we should prefer more in-depth descriptions with examples.
Testing around tool-detection code is also important. Adding new tools to a request could confuse the LLM and cause it to no longer trigger the expected tool.
The LLM model used influences how the whole tool calling functionality should be implemented, as different models might prefer descriptions written in a certain way, behave better/worse under certain scenarios (e.g. when using lots of tools), etc.
Try Gordon for yourself
Gordon is available as an opt-in Beta feature starting with Docker Desktop version 4.37. To participate in the closed beta, all you need to do is fill out the form on the site.
Initially, Gordon will be available for use both in Docker Desktop and the Docker CLI, but our idea is to surface parts of this tech in various other parts of our products as well.
For more on what we’re doing at Docker, subscribe to our newsletter.
With the release of Red Hat OpenStack Services on OpenShift, there is a major change in the design and architecture that impacts how OpenStack is deployed and managed. The OpenStack control plane has moved from traditional standalone containers on Red Hat Enterprise Linux (RHEL) to an advanced pod-based, Kubernetes-managed architecture.
Introducing Red Hat OpenStack Services on OpenShift
In this new form factor, the OpenStack control services such as keystone, nova, glance and neutron that were once deployed as standalone containers on top of bare metal or virtual machines (VMs) are now deployed
It feels like 2024 was the year of artificial intelligence (AI), quickly going from being an interesting experiment to seemingly the only thing anyone was talking about. It can be hard to keep up with all the news and advancements being made, but hopefully this will help. In these 11 short videos, Red Hatters cover a variety of topics, from open source AI, to the new InstructLab project, through identifying which large language model (LLM) is right for your organization, and more. Grab a cup of coffee and catch up on some of what Red Hat has been up to in the w
A change to the Linux 6.13 kernel contributed by a Microsoft engineer ended up changing Linux x86_64 code without proper authorization, in turn causing troubles for users, and is now set to be disabled ahead of the Linux 6.13 stable release expected next Sunday...
Alibaba engineers have recently been working through some AMD Linux kernel graphics driver bugs uncovered during suspend-and-resume testing with AMD graphics cards...
Queued up into the networking subsystem's "net-next" branch last week ahead of the Linux 6.14 kernel cycle is AF_XDP zero-copy support for the common Intel Gigabit Ethernet "IGB" driver. With this the AF_XDP performance improvements can be quite dramatic in leveraging this zero-copy path...