
NVIDIA JetPack 6.1 Boosts Performance and Security through Camera Stack Optimizations and Introduction of Firmware TPM

22 November 2024 at 05:01

NVIDIA JetPack has continuously evolved to offer cutting-edge software tailored to the growing needs of edge AI and robotic developers. With each release, JetPack has enhanced its performance, introduced new features, and optimized existing tools to deliver increased value to its users. This means that your existing Jetson Orin-based products experience performance optimizations by upgrading to…

Source

MYiR Tech MYC-LR3576 Rockchip RK3576 LGA SoM offers 6 TOPS NPU and 8K video support for AIoT applications

21 November 2024 at 16:41
MYC LR3576 SOM as Controller Board

MYiR Tech MYC-LR3576 is a system-on-module (SoM) based on the Rockchip RK3576 octa-core Cortex-A72/A53 SoC with a 6 TOPS NPU and 8K video support, suitable for AIoT applications, and it powers the MYD-LR3576 development board. The SoM supports up to 8GB of LPDDR4X RAM and 64GB of eMMC storage, along with a 32Kbit EEPROM. Connectivity options include dual Gigabit Ethernet ports, USB 3.2, and more. For multimedia, it supports HDMI, DisplayPort, eDP, MIPI-DSI, and MIPI CSI interfaces, and up to 8K video decoding/4K video encoding. The MYC-LR3576 also offers several audio I/O options and multiple GPIO and I2C interfaces suitable for embedded systems. The MYD-LR3576 development board gives access to dual Gigabit Ethernet ports, Wi-Fi and Bluetooth support, HDMI, Mini DisplayPort, USB 3.0 ports, and GPIO headers. It also supports MIPI camera modules and a 10.1-inch LCD module and provides full access to the SoM’s features. Previously we have written about various development [...]

The post MYiR Tech MYC-LR3576 Rockchip RK3576 LGA SoM offers 6 TOPS NPU and 8K video support for AIoT applications appeared first on CNX Software - Embedded Systems News.

Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs

19 November 2024 at 23:00

Meta recently released its Llama 3.2 series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only small language model (SLM) variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and…

Source

Bringing real-time edge AI applications to developers

19 November 2024 at 19:25

In this guest post, Ramona Rayner from our partner Sony shows you how to quickly explore different models and AI capabilities, and how you can easily build applications on top of the Raspberry Pi AI Camera.

The recently launched Raspberry Pi AI Camera is an extremely capable piece of hardware, enabling you to build powerful AI applications on your Raspberry Pi. By offloading the AI inference to the IMX500 accelerator chip, more computational resources are available to handle application logic right on the edge! We are very curious to see what you will be creating and we are keen to give you more tools to do so. This post will cover how to quickly explore different models and AI capabilities, and how to easily build applications on top of the Raspberry Pi AI Camera.

If you didn’t have the chance to go through the Getting Started guide, make sure to check that out first to verify that your AI Camera is set up correctly.

Explore pre-trained models

A great way to start exploring the possibilities of the Raspberry Pi AI Camera is to try out some of the pre-trained models that are available in the IMX500 Model Zoo. To simplify the exploration process, consider using a GUI Tool, designed to quickly upload different models and see the real-time inference results on the AI Camera.

To start the GUI Tool, make sure you have Node.js installed (you can verify this by running node --version in the terminal), then build and run the tool with the following commands in the root of the repository:

make build
./dist/run.sh

The GUI Tool will be accessible on http://127.0.0.1:3001. To see a model in action:

  • Add a custom model by clicking the ADD button located at the top right corner of the interface.
  • Provide the necessary details to add a custom network and upload the network.rpk file, and the (optional) labels.txt file.
  • Select the model and navigate to Camera Preview to see the model in action!

Here are just a few of the models available in the IMX500 Model Zoo:

Network Name          | Network Type | Post Processor                        | Color Format | Preserve Aspect Ratio | Network File | Labels File
mobilenet_v2          | packaged     | Classification                        | RGB          | True                  | network.rpk  | imagenet_labels.txt
efficientdet_lite0_pp | packaged     | Object Detection (EfficientDet Lite0) | RGB          | True                  | network.rpk  | coco_labels.txt
deeplabv3plus         | packaged     | Segmentation                          | RGB          | False                 | network.rpk  | (none)
posenet               | packaged     | Pose Estimation                       | RGB          | False                 | network.rpk  | (none)

Exploring the different models gives you insight into the camera’s capabilities and enables you to identify the model that best suits your requirements. When you think you’ve found it, it’s time to build an application.

Building applications

Plenty of CPU is available to run applications on the Raspberry Pi while model inference is taking place on the IMX500. To demonstrate this we’ll run a Workout Monitoring sample application.

The goal is to count real-time exercise repetitions by detecting and tracking people performing common exercises like pull-ups, push-ups, ab workouts and squats. The app will count repetitions for each person in the frame, making sure multiple people can work out simultaneously and compete while getting automated rep counting.

To run the example, clone the sample apps repository and make sure to download the HigherHRNet model from the Raspberry Pi IMX500 Model Zoo.

Make sure you have OpenCV with Qt available:

sudo apt install python3-opencv

And from the root of the repository run:

python3 -m venv venv --system-site-packages
source venv/bin/activate
cd examples/workout-monitor/
pip install -e .

Switching between exercises is straightforward; simply provide the appropriate --exercise argument as one of pullup, pushup, abworkout or squat.

workout-monitor --model /path/to/imx500_network_higherhrnet_coco.rpk --exercise pullup

Note that this application is running:

  • Model post-processing to interpret the model’s output tensor into bounding boxes and skeleton keypoints
  • A tracker module (ByteTrack) to give the detected people a unique ID so that you can count individual people’s exercise reps
  • A matcher module to increase the accuracy of the tracker results, by matching people over frames so as not to lose their IDs
  • CV2 visualisation to visualise the results of the detections and see the results of the application

And all of this in real time, on the edge, while the IMX500 is taking care of the AI inference!
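To make that division of labour concrete, here is a minimal, hypothetical sketch of the kind of loop such an application runs. The camera, detector, tracker, and rep_counter objects are placeholders standing in for the sample app’s IMX500 post-processing, ByteTrack tracker, matcher, and counting logic (they are not the sample’s actual classes); only the OpenCV drawing calls are real API.

import cv2

def monitoring_loop(camera, detector, tracker, rep_counter):
    # Placeholder objects: camera yields frames, detector wraps the IMX500
    # output tensor, tracker assigns stable IDs, rep_counter counts reps per ID.
    while True:
        frame = camera.capture_frame()              # BGR NumPy image
        detections = detector.detect(frame)         # boxes + skeleton keypoints
        people = tracker.update(detections)         # ByteTrack-style ID assignment
        for person in people:
            x, y, w, h = person.box
            reps = rep_counter.update(person.id, person.keypoints)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"ID {person.id}: {reps} reps", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("Workout monitor", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break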

Now both you and the AI Camera are testing out each other’s limits. How many pull-ups can you do?

We hope by this point you’re curious to explore further; you can discover more sample applications on GitHub.

The post Bringing real-time edge AI applications to developers appeared first on Raspberry Pi.

Orange Pi 4A low-cost octa-core SBC is powered by Allwinner T527 Cortex-A55 AI SoC with a 2TOPS NPU

16 November 2024 at 10:25
Orange Pi 4A

Orange Pi 4A is a new low-cost credit card-sized single board computer (SBC) powered by an Allwinner T527 octa-core Cortex-A55 processor with a 2 TOPS NPU and offered with either 2GB or 4GB RAM. The board also comes with multiple storage options: a 128 or 256Mbit SPI NOR flash for the bootloader, an eMMC socket for up to 128GB modules, an M.2 socket for NVMe SSDs, and a microSD card slot. It’s also equipped with four USB 2.0 ports, a gigabit Ethernet port, three display interfaces (HDMI, MIPI DSI, eDP), two camera interfaces, and a 40-pin “Raspberry Pi” header. The Orange Pi 4A is somewhat equivalent to an octa-core Raspberry Pi 3/4 with some extra features. Orange Pi 4A specifications: SoC – Allwinner T527 CPU Octa-core Arm Cortex-A55 @ up to 1.8GHz (four cores) and up to 1.42 GHz (four cores) XuanTie E906 RISC-V core @ 200MHz GPU – Arm Mali-G57 [...]

The post Orange Pi 4A low-cost octa-core SBC is powered by Allwinner T527 Cortex-A55 AI SoC with a 2TOPS NPU appeared first on CNX Software - Embedded Systems News.

Upcoming Webinar – 8 Business Solutions Built with LoRaWAN and Low-Code IoT Platform

13 November 2024 at 20:01

Hey community, we’re excited to share that we’re speaking at a joint webinar, “8 Business Solutions Built with LoRaWAN and Low-Code IoT Platform,” hosted by Blynk and The Things Industries. Join us on Thursday, November 21st, at 10:00 AM EST / 4:00 PM CET for this insightful webinar, where you can explore how combining LoRaWAN-enabled hardware with Blynk’s low-code IoT platform can help you quickly launch and scale IoT solutions that are practical, powerful, and easy to manage.

Blynk TTI Webinar poster

Why You Should Attend

Building an IoT solution doesn’t have to be complex. This webinar offers a step-by-step look into using off-the-shelf LoRaWAN hardware including our SenseCAP LoRaWAN devices (read on to explore), paired with Blynk’s intuitive platform, to create impactful IoT solutions without heavy coding or lengthy development time. Whether you’re an IoT beginner or looking to expand your current deployments, this session has everything you need to set your IoT solutions up for long-term success.

What You’ll Learn

In just one hour, you’ll gain insights from industry experts on how to:

    • Quickly deploy and manage IoT solutions using best-in-class LoRaWAN hardware.
    • Seamlessly provision and manage devices with Blynk’s low-code platform for faster setup and efficient data handling.
    • Visualize data through no-code web and mobile dashboards to easily monitor and control your devices.
    • Explore 8 game-changing IoT solutions designed to solve specific business challenges, offering ready-to-deploy options that can scale as your business grows.

This session is designed to empower you to take your IoT deployments from prototyping to enterprise scale effortlessly.

Discover 8 Game-Changing IoT Solutions for Business

The hosts, Pavlo and Felix, together with 8 industry leaders from Thermokon Sensortechnik GmbH, Pepperl+Fuchs Group, Miromico, Seeed Studio, MOKO SMART, Milesight IoT, ATIM Radiocommunication, and Blues, will introduce eight proven IoT applications across various industries. Each solution is designed to solve a specific business challenge, offering ready-to-deploy options that can scale as your business grows. Here’s a sneak peek at what you’ll explore:

    • Smart heating for hotels 
    • Warehouse & production site solutions 
    • Building Management 
    • People Counting 
    • Personnel safety on construction site 
    • Water leak detection 
    • Smart metering 
    • Refrigerator fleet monitoring

During the webinar, our BD manager Violet Su will present our latest LoRaWAN solution with built-in Vision AI capabilities for efficient meter reading, person detection, people counting, object detection, and more. 

The solution is powered by the SenseCAP A1102 Vision AI Sensor and the SenseCAP S21000 LoRaWAN DTU:

    • Built-in AI Camera: Local AI models for high-accuracy detection
    • Battery-Saving and Long Range LoRaWAN Connectivity: Perfect for various settings with range up to 10km
    • Industrial RS485 Support: Connect to your legacy LoRaWAN DTUs via RS485
    • Actionable Data: Integrated with Blynk’s low-code platform, making data easy to analyze and act on

Register Now

Don’t miss this opportunity to learn from the experts and bring your IoT ideas to life with LoRaWAN and low-code technology!

Mark your calendar for November 21st and take the next step in your IoT journey. We look forward to seeing you there!

The post Upcoming Webinar – 8 Business Solutions Built with LoRaWAN and Low-Code IoT Platform appeared first on Latest Open Tech From Seeed.

Graperain G3562 – A Rockchip RK3562 system-on-module and development board

13 November 2024 at 10:24
Rockchip RK3562 development board

Graperain G3562 is a Rockchip RK3562 quad-core Cortex-A53 system-on-module (SoM) with up to 8GB LPDDR4 and up to 128GB eMMC flash, suitable for edge AI, IoT, automation, and consumer electronics applications. The company also provides the G3562 development board for the SoM with an M.2 socket for NVMe SSD, dual Ethernet, WiFi 5 and Bluetooth 5.0, and optional 4G LTE/3G cellular connectivity, plus a MIPI DSI/LVDS display connector, two MIPI CSI camera connectors, three USB 2.0 ports, audio interfaces, and expansion through a 30-pin GPIO header and UART connector. Graperain G3562 SoM Graperain G3562 specifications: SoC – Rockchip RK3562 CPU – Quad-core Arm Cortex-A53 @ 2.0 GHz GPU – Mali-G52-2EE with support for OpenGL ES 1.1/2.0/3.2, OpenCL 2.0, Vulkan 1.0/1.1 AI accelerator – 1 TOPS (INT8) NPU VPU Encoder – H.264 1920×1080 @ 60fps Decoder – H.265/VP9 4096×2304 @ 30fps; H.264 1920×1080 @ 60fps RAM – 2GB LPDDR4 by default [...]

The post Graperain G3562 – A Rockchip RK3562 system-on-module and development board appeared first on CNX Software - Embedded Systems News.

Accelerating AI Development with the Docker AI Catalog

12 November 2024 at 21:38

Developers are increasingly expected to integrate AI capabilities into their applications, but they also face many challenges: the steep learning curve, coupled with an overwhelming array of tools and frameworks, makes the process tedious. Docker aims to bridge this gap with the Docker AI Catalog, a curated experience designed to simplify AI development and empower both developers and publishers.


Why Docker for AI?

Docker and container technology have been key tools for developers at the forefront of AI applications for the past few years. Now, Docker is doubling down on that effort with our AI Catalog. Developers using Docker’s suite of products are often responsible for building, deploying, and managing complex applications — and, now, they must also navigate generative AI (GenAI) technologies, such as large language models (LLMs), vector databases, and GPU support.

For developers, the AI Catalog simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. This approach removes the hassle of evaluating numerous tools and configurations, allowing developers to focus on building innovative AI applications.

Key benefits for development teams

The Docker AI Catalog is tailored to help users overcome common hurdles in the evolving AI application development landscape, such as:

  • Decision overload: The GenAI ecosystem is crowded with new tools and frameworks. The Docker AI Catalog simplifies the decision-making process by offering a curated list of trusted content and container images, so developers don’t have to wade through endless options.
  • Steep learning curve: With the rise of new technologies like LLMs and retrieval-augmented generation (RAG), the learning curve can be overwhelming. Docker provides an all-in-one resource to help developers quickly get up to speed.
  • Complex configurations preventing production readiness: Running AI applications often requires specialized hardware configurations, especially with GPUs. Docker’s AI stacks make this process more accessible, ensuring that developers can harness the full power of these resources without extensive setup.

The result? Shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Empowering publishers

For Docker verified publishers, the AI Catalog provides a platform to differentiate themselves in a crowded market. Independent software vendors (ISVs) and open source contributors can promote their content, gain insights into adoption, and improve visibility to a growing community of AI developers.

Key features for publishers include:

  • Increased discoverability: Publishers can highlight their AI content within a trusted ecosystem used by millions of developers worldwide.
  • Metrics and insights: Verified publishers gain valuable insights into the performance of their content, helping them optimize strategies and drive engagement.

Unified experience for AI application development

The AI Catalog is more than just a repository of AI tools. It’s a unified ecosystem designed to foster collaboration between developers and publishers, creating a path forward for more innovative approaches to building applications supported by AI capabilities. Developers get easy access to essential AI tools and content, while publishers gain the visibility and feedback they need to thrive in a competitive marketplace.

With Docker’s trusted platform, development teams can build AI applications confidently, knowing they have access to the most relevant and reliable tools available.

The road ahead: What’s next?

Docker will launch the AI Catalog in preview on November 12, 2024, alongside a joint webinar with MongoDB. This initiative will further Docker’s role as a leader in AI application development, ensuring that developers and publishers alike can take full advantage of the opportunities presented by AI tools.

Stay tuned for more updates and prepare to dive into a world of possibilities with the Docker AI Catalog. Whether you’re an AI developer seeking to streamline your workflows or a publisher looking to grow your audience, Docker has the tools and support you need to succeed.

Ready to simplify your AI development process? Explore the AI Catalog and get access to trusted content that will accelerate your development journey. Start building smarter, faster, and more efficiently.

For publishers, now is the perfect time to join the AI Catalog and gain visibility for your content. Become a trusted source in the AI development space and connect with millions of developers looking for the right tools to power their next breakthrough.

Learn more

Raspberry Pi AI Kit projects

By: Phil King
11 November 2024 at 21:24

This #MagPiMonday, we’re hoping to inspire you to add artificial intelligence to your Raspberry Pi designs with this feature by Phil King, from the latest issue of The MagPi.

With their powerful AI accelerator modules, Raspberry Pi’s Camera Module and AI Kit open up exciting possibilities in computer vision and machine learning. The versatility of the Raspberry Pi platform, combined with these AI capabilities, enables a whole range of innovative smart projects. From creative experiments to practical applications like smart pill dispensers, makers are harnessing the kit’s potential to push the boundaries of AI. In this feature, we explore some standout projects, and hope they inspire you to embark on your own.

Peeper Pam boss detector

By VEEB Projects

AI computer vision can identify objects within a live camera view. In this project, VEEB’s Martin Spendiff and Vanessa Bradley have used it to detect humans in the frame, so you can tell if your boss is approaching behind you as you sit at your desk!

The project comprises two parts. A Raspberry Pi 5 equipped with a Camera Module and AI Kit handles the image recognition and also acts as a web server. This uses web sockets to send messages wirelessly to the ‘detector’ part — a Raspberry Pi Pico W and a voltmeter whose needle moves to indicate the level of AI certainty for the ID.

Having got their hands on an AI Kit — “a nice intro into computer vision” — it took the pair just three days to create Peeper Pam. “The most challenging bit was that we’d not used sockets — more efficient than the Pico constantly asking Raspberry Pi ‘do you see anything?’,” says Martin. “Raspberry Pi does all the heavy lifting, while Pico just listens for an ‘I’ve seen something’ signal.”

While he notes that you could get Raspberry Pi 5 to serve both functions, the two-part setup means you can place the camera in a different position to monitor a spot you can’t see. Also, by adapting the code from the project’s GitHub repo, you can put it to lots of other uses by getting the AI to detect other objects. “Pigeons in the window box is one that we want to do,” Martin says.

Monster AI Pi PC

By Jeff Geerling

Never one to do things by halves, Jeff Geerling went overboard with Raspberry Pi AI Kit and built a Monster AI Pi PC with a total of eight neural processors. In fact, with 55 TOPS (trillions of operations per second), it’s faster than the latest AMD, Qualcomm, and Apple Silicon processors!

The NPU chips — including the AI Kit’s Hailo-8L — are connected to a large 12× PCIe slot card with a PEX 8619 switch capable of handling 16 PCI Express Gen 2 lanes. The card is then mounted on a Raspberry Pi 5 via a Pineboards uPCIty Lite HAT, which has an additional 12V PSU to supply the extra wattage needed for all those processors.

With a bit of jiggery-pokery with the firmware and drivers on Raspberry Pi, Jeff managed to get it working.

Car detection & tracking system

By Naveen

As a proof of concept, Japanese maker Naveen aimed to implement an automated system for identifying and monitoring cars at toll plazas to get an accurate tally of the vehicles entering and exiting.

With the extra processing power provided by a Raspberry Pi AI Kit, the project uses Edge Impulse computer vision to detect and count cars in the view from a Camera Module Wide. “We opted for a wide lens because it can capture a larger area,” he says, “allowing the camera to monitor multiple lanes simultaneously.” He also needed to train and test a YOLOv5 machine learning model. All the details can be found on the project page via the link above, which could prove useful for learning how to train custom ML models for your own AI project.

Safety helmet detection system

By Shakhizat Nurgaliyev

Wearing a safety helmet on a building site is essential and could save your life. This computer vision project uses Raspberry Pi AI Kit with the advanced YOLOv8 machine learning model to quickly and accurately identify objects within the camera view, running at an impressive inference speed of 30fps.

The project page has a guide showing how to make use of Raspberry Pi AI Kit to achieve efficient AI inferencing for safety helmet detection. This includes details of the software installation and model training process, for which the maker has provided a link to a dataset of 5000 images with bounding box annotations for three classes: helmet, person, and head.

Accelerating MediaPipe models

By Mario Bergeron

Google’s MediaPipe is an open-source framework developed for building machine learning pipelines, especially useful for working with videos and images.

Having used MediaPipe on other platforms, Mario Bergeron decided to experiment with it on a Raspberry Pi AI Kit. On the project page (linked above) he details the process, including using his Python demo application with options to detect hands/palms, faces, or poses.

Mario’s test results show how much better the AI Kit’s Hailo-8L AI accelerator module performs compared to running reference TensorFlow Lite models on Raspberry Pi 5 alone: up to 5.8 times faster. With three models running for hand and landmarks detection, the frame rate is 26–28fps with one hand detected, and 22–25fps for two.

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Raspberry Pi AI Kit projects appeared first on Raspberry Pi.

Hacktober Recap, Workshop and Seminar for Open Source Community

5 November 2024 at 16:50

This October was packed with my duties as a Seeed Studio Ranger. From OSHWA to Pycon APAC 2024, every event left a strong impression on me. Here is a recap of my activities in October 2024.

1. October 3rd 2024, OSHWA 24 Hours Membership Drive Show & Tell

This live online session was initiated by OSHWA and ran for 24 hours straight. My session was scheduled for 6 PM local time. After rushing to get home before my slot, I managed to be on time and shared my open-source projects on Hackster. After sharing my open-source projects, I introduced the Seeed Studio Co-Create Gadget campaign to the audience.

2. October 17th 2024, Transforming IoT with Edge Intelligence Seminar

This seminar was organized by a student organization on my campus and had two speakers. The other speaker was a former student of mine who now works as a cloud engineer. It was the perfect opportunity to introduce Seeed Studio to my campus while also giving the students insight into the latest technology in AI, machine learning, IoT, and edge computing.

In the seminar, I mainly focused on the next step for IoT, namely bringing machine learning into IoT solutions, and on how the reComputer could be a small, compact platform to implement it.

3. October 25th 2024, Pycon APAC 2024

This was my highlight of the month, because the Python Indonesia community hosted this year’s annual Python conference for the Asia-Pacific region. In total, there were 80 speakers from across Asia-Pacific and hundreds of participants.

At Pycon I managed to schedule a workshop session on 25th October 2024 titled “Building IoT and Camera Project with XIAO ESP32S3 and Micropython”.

In this workshop, I shared various things that the XIAO ESP32S3 can do with MicroPython, such as basic input and output, camera streaming, displaying JSON weather data on the OLED screen, and interacting with the Grove Vision AI V2 sensor.

4. November 2nd 2024, Internet of Things Seminar at Universitas Banten Jaya

A friend from another campus invited me to this seminar. The campus is located in Serang, a neighboring city about a two-hour drive from where I live.

The material I shared in this seminar was the same as at my previous seminar on my own campus. The organizers told me that there isn’t much exposure to IoT on their campus, so this seminar could be a big step toward advancing that knowledge there.

The post Hacktober Recap, Workshop and Seminar for Open Source Community appeared first on Latest Open Tech From Seeed.

Using AI Tools to Convert a PDF into Images

4 November 2024 at 21:01

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

How do you turn a PDF into a set of images? A modern suggestion could be to get Claude or GPT to write you a CLI command for something like ImageMagick. Let’s follow that train of thought.


If we enter a prompt like “How do I turn a PDF into a bunch of images using ImageMagick,” we will indeed get something that looks like it might be a solution.

Imagemagick includes a command-line tool called convert which you can use to convert PDF files into images. Here's how you can do it:

1. Open your terminal

2. Change directory to where your PDF is located:

```
cd /path/to/your/pdf

```
3. Use the following command to convert the PDF into images (in this case JPEG):

```
convert -density 300 -quality 100 your_file.pdf output_name.jpg

``` 

The agent will then explain how convert works and finish with a suggestion to “please make sure that you have installed ImageMagick before running these commands.”

Instead of worrying about whether ImageMagick is installed, however, let’s use Docker as our tool provider and prompt it to go ahead and run the command. If you’re following along in this series, you’ll have seen that we are using Markdown files to mix together tools and prompts.  Here’s the first prompt we tried:

---
tools:
  - name: imagemagick
---
# prompt user

Use Imagemagick to convert the family.pdf file into a bunch of jpg images.

After executing this prompt, the LLM generated a tool call, which we executed in the Docker runtime, and it successfully converted family.pdf into nine .jpg files (my family.pdf file had nine pages). 

Figure 1 shows the flow from our VSCode Extension.

Figure 1: Workflow from the VSCode Extension, converting a PDF into images.

We have given enough context to the LLM that it is able to plan a call to this ImageMagick binary. And, because this tool is available on Docker Hub, we don’t have to “make sure that ImageMagick is installed.” This would be the equivalent command if you were to use docker run directly:

# family.pdf must be located in your $PWD

docker run --rm -v $PWD:/project --workdir /project vonwig/imagemagick:latest convert -density 300 -quality 300 family.pdf family.jpg

The tool ecosystem

How did this work? The process relied on two things:

  • Tool distribution and discovery (pulling tools into Docker Hub for distribution to our Docker Desktop runtime).
  • Automatic generation of Agent Tool interfaces.

When we first started this project, we expected that we’d begin with a small set of tools because the interface for each tool would take time to design. We thought we were going to need to bootstrap an ecosystem of tools that had been prepared to be used in these agent workflows. 

However, we learned that we can use a much more generic approach. Most tools already come with documentation, such as command-line help, examples, and man pages. Instead of treating each tool as something special, we are using an architecture where an agent responds to failures by reading documentation and trying again (Figure 2).

Figure 2: The agent process of running a tool, capturing errors, and reading docs in a loop.

We see a process of experimenting with tools that is not unlike what we, as developers, do on the command line. Try a command line, read a doc, adjust the command line, and try again.
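In rough Python terms, that loop might look like the sketch below. The run_in_container, read_tool_docs, and ask_llm callables are hypothetical placeholders for the container execution and LLM round-trip, not the actual prompt-runner code:

def run_tool_with_retries(initial_call, run_in_container, read_tool_docs, ask_llm,
                          max_attempts=3):
    # Try the tool call; on failure, hand the error plus the tool's own
    # documentation back to the LLM and let it propose an adjusted call.
    call = initial_call
    for _ in range(max_attempts):
        result = run_in_container(call)      # e.g. docker run of the tool image
        if result.exit_code == 0:
            return result
        docs = read_tool_docs(call)          # --help output, man page, examples
        call = ask_llm(error=result.stderr, documentation=docs, previous_call=call)
    raise RuntimeError("tool call still failing after retries")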

The value of this kind of looping has changed our expectations. Step one is simply pulling the tool into Docker Hub and seeing whether the agent can use it with nothing more than its out-of-the-box documentation. We are also pulling open source software (OSS)  tools directly from nixpkgs, which gives us access to tens of thousands of different tools to experiment with. 

Docker keeps our runtimes isolated from the host operating system, while the nixpkgs ecosystem and maintainers provide a rich source of OSS tools.

As expected, packaging agents still run into issues that force us to re-plan how tools are packaged. For example, the prompt we showed above might have generated the correct tool call on the first try, but the ImageMagick container failed on the first run with this terrible-looking error message:

function call failed call exited with non-zero code (1): Error: sh: 1: gs: not found  

Fortunately, feeding that error back into the LLM resulted in the suggestion that convert needs another tool, called Ghostscript, to run successfully. Our agent was not able to fix this automatically today. However, we adjusted the image build slightly, and the latest version of the vonwig/imagemagick image no longer has this issue. This is an example of something we only need to learn once.

The LLM figured out convert on its own. But its agency came from the addition of a tool.

Read the Docker Labs GenAI series to see more of what we’ve been working on.

Learn more

This Week in Beagle #2

28 October 2024 at 12:00

Hello everyone. This week mostly involved a lot of chasing stuff around (sometimes in vain), so while there was not much headline work, this post might end up a bit longer than usual. Let’s get started without delay.

BeagleConnect Freedom Adventures

I started the week by trying to add IEEE802154 subg socket support in MicroPython for BeagleConnect Freedom. However, I quickly learned that it would not be simple. For some reason, BeagleConnect Freedom would completely lock up after starting.

Initially, I thought it might be a MicroPython issue, so I tried tracking down where the freeze happened. This, however, led to a dead end since, for some reason, the program could not read from the UART console. While doing this, I also remembered a similar issue I encountered while working on the BeagleConnect Freedom RC car demo. At the time, I fixed it by just removing unused config items like ADC and PWM from the config, but forgot about it after the OOSC conference.

After some experimenting with PWM, ADC, and the IEEE802154 subg radio, I figured out that the problem is reproducible in other Zephyr samples like echo_client. For some reason, if both PWM pins (MB1 PWM and MB2 PWM) are enabled alongside the subg radio, everything freezes. If one or both of the PWMs are disabled, everything works fine. This seems to be an issue with timers, but it needs further investigation.

I have created a Zephyr issue and a TI E2E question about it.

Code Composer Studio Theia Adventures

With the MicroPython issue and a bricked BeagleConnect Freedom, I thought it was a good time to set up and learn TI’s Code Composer Studio.

I use Fedora Sway Atomic as my daily driver, and thus mostly rely on Flatpaks or Podman containers. However, running Code Composer Studio inside a Podman container (created using toolbox) was not a great experience for me. It would randomly stutter (maybe a hardware acceleration problem?) and freeze. Additionally, while udev can make handling device permissions almost painless, it can occasionally cause hiccups with flashing. In fact, one of the primary reasons I switched to Neovim was that my Emacs GUI kept having weird performance problems inside the container.

So, I finally went ahead and installed CCS Theia on my base system. The install procedure is a bit weird since there is no rpm or deb package. Instead, there is an installer that installs everything in the $HOME/ti folder. It also creates an uninstaller, which seems to work. All in all, while I would prefer a Flatpak or AppImage, it wasn’t too bad.

I hit a snag quite early on when I was unable to flash the CC1352P1 on my LaunchPad. I tried various things and opened a TI E2E question about it. However, the solution turned out to be quite weird. I had not been saving my workspace since, well, nothing was working anyway, and CCS Theia would reopen the unsaved workspace. But once I saved the workspace, because I was tired of the dialog on exit, everything magically worked. I’m not really sure why.

Once I could flash the LaunchPad, I tried using its XDS110 debugger with my BeagleConnect Freedom. I was able to flash a simple blinky on it and even set up breakpoints.

Now I need to figure out how to use OpenOCD, and add instructions for it to the Beagle docs and Zephyr docs.

KUnit Adventures

I have been working on kernel patches that require writing some unit tests, so I was trying to get KUnit to work. However, kunit run kept failing for some reason, even with the default config, and the output was not very clear either. After following some debugging instructions, I found out that I could not execute the user-mode kernel from inside the Podman container. I have created an issue in Fedora toolbox regarding this.

MicroPython

I have added MicroPython support for BeaglePlay cc1352p7 in my draft PR. It supports IEEE802154 subg sockets and also helped me ensure that MicroPython networking should work fine on BeagleConnect Freedom as well once the timer issue is resolved.

Since BeaglePlay cc1352p7 Zephyr support was merged after the 3.7.0 release, the MicroPython support will continue to live in the draft PR until MicroPython supports a newer Zephyr version.

Zephyr

Zephyr support for BeagleBoard boards continues to improve. We will continue to work to make Beagle one of the best-supported platforms for Zephyr development.

BeagleBone AI-64

Thanks to the work by Andrew Davis, Zephyr support for the R5 cores in the BeagleBone AI-64 was merged this week. Here is the Zephyr page for the BeagleBone AI-64. This adds one more board to the growing list of BeagleBoard boards that support Zephyr.

BeagleY-AI

A PR for Zephyr support was opened by Andrew Davis after BBAI-64 support was merged. Anyone interested should feel free to try it out. Hopefully, it can get merged upstream soon.

BeagleBoard Imager Rust Updates

While working on BeagleY-AI, I found a bug in the sha256 handling of the Rust-based imager while translating the old bb-imager config. So, I have created release 0.0.2 for the imager. I probably should implement changelogs before the next release.

Ending Thoughts

That was it for this week. Hopefully this helps bring transparency regarding where the development efforts are concentrated and how the community can help. Look forward to the next update.

Helpful links

The post This Week in Beagle #2 appeared first on BeagleBoard.

Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS

24 October 2024 at 13:59

Following the successful launch of the Raspberry Pi AI Kit and AI Camera, we are excited to introduce the newest addition to our AI product line: the Raspberry Pi AI HAT+.

The AI HAT+ features the same best-in-class Hailo AI accelerator technology as our AI Kit, but now with a choice of two performance options: the 13 TOPS (tera-operations per second) model, priced at $70 and featuring the same Hailo-8L accelerator as the AI Kit, and the more powerful 26 TOPS model at $110, equipped with the Hailo-8 accelerator.

Image: a Raspberry Pi fitted with the 26 TOPS AI HAT+ and its Hailo accelerator.

Designed to conform to our HAT+ specification, the AI HAT+ automatically switches to PCIe Gen 3.0 mode to maximise the full 26 TOPS of compute power available in the Hailo-8 accelerator.

Unlike the AI Kit, which utilises an M.2 connector, the Hailo accelerator chip is directly integrated onto the main PCB. This change not only simplifies setup but also offers improved thermal dissipation, allowing the AI HAT+ to handle demanding AI workloads more efficiently.

What can you do with the 26 TOPS model over the 13 TOPS model? The same, but more… You can run more sophisticated neural networks in real time, achieving better inference performance. The 26 TOPS model also allows you to run multiple networks simultaneously at high frame rates. For instance, you can perform object detection, pose estimation, and subject segmentation simultaneously on a live camera feed using the 26 TOPS AI HAT+:

Both versions of the AI HAT+ are fully backward compatible with the AI Kit. Our existing Hailo accelerator integration in the camera software stack works in exactly the same way with the AI HAT+. Any neural network model compiled for the Hailo-8L will run smoothly on the Hailo-8; while models specifically built for the Hailo-8 may not work on the Hailo-8L, alternative versions with lower performance are generally available, ensuring flexibility across different use cases.

After an exciting few months of AI product releases, we now offer an extensive range of options for running inferencing workloads on Raspberry Pi. Many such workloads – particularly those that are sparse, quantised, or intermittent – run natively on Raspberry Pi platforms; for more demanding workloads, we aim to be the best possible embedded host for accelerator hardware such as our AI Camera and today’s new Raspberry Pi AI HAT+. We are eager to discover what you make with it.

The post Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS appeared first on Raspberry Pi.

Open Source AI is here

28 October 2024 at 21:26

Today, the Open Source Initiative has released its first official definition of Open Source AI. This is an important milestone. Let me explain why.

Why Open Source matters

In a world where speech depends on software, free speech depends on free software.

The key tenet of open source is that it puts the user in control. Software is ever growing in complexity and importance in our society. Software is how we do our work, how we communicate, how we pay, how we access information. When software is a black box, subject to subtle (or not so subtle) manipulation, users are at risk.

Risks of AI

AI brings these risks to an entirely new level: not only because it makes decisions that are often entirely opaque, but also because its easy, human-like interface often lulls users into trusting it far more than it deserves.

The big AI firms have done their level best to ensure that public attention around AI risks was aimed at existential, contrived notions akin to Skynet, the AI from the Terminator movies, and scenarios where AI takes over the world. In reality, the risks associated with AI are far more mundane: surveillance, bias, explosive energy usage, and job losses are the concerns we should focus on.

Need for control

And, just like with software, what matters is control. Who controls the AI, who makes decisions about what it can and can’t do, what goes in and what does not. With control, we can address the real risks of AI. Without control, we can simply hope that the billion dollar companies do the right thing.

They haven’t in the past. So we need Open Source AI: AI that gives users the ability to study and modify the AI models that govern their lives.

Nextcloud Ethical AI Rating 🟢🟡🟠🔴

Nextcloud gave this a first shot in March 2023, when we launched our Ethical AI Rating. A simple traffic light would show with green/yellow/orange/red if a given AI model was freely available, if its data was publicly available and if the code needed to run and train it was open source. This way we help users make an informed decision without restricting their choice of models and features.

Nextcloud Ethical AI

Users of AI solutions deserve transparency and control, which is why we introduced our Ethical AI rating in early 2023. Now, we see big tech firms trying to hijack the term open source AI. We fully endorse the creation of a clear definition of open source AI by the community to protect users and the market.

Frank Karlitschek
CEO and founder of Nextcloud

The wider open source community has picked up the gauntlet as well, and after extensive consultation with the community, today the OSI has announced an official definition of Open Source AI. This will help users, from private individuals to governments, research institutes, hospitals, and businesses, make decisions about which systems they can trust.

So, this is a first step in a journey, and we are glad to be a part of it. Nextcloud has formally endorsed the definition, even if we think there is room for improvement. We will use it as a basis for our Ethical AI Rating. Our rating is a bit more granular and also more critical in some areas – for example, when it comes to data, we believe it should always be fully available – and thus, for now, we will keep using our own rating, as it fits the use cases of our users better.

We look forward to your input, both on the OSI definition – on the road to a 2.0 – and on our AI rating.

The post Open Source AI is here appeared first on Nextcloud.

HiFive Premier P550 mini-DTX motherboard features ESWIN EIC7700X RISC-V AI SoC, up to 32GB DDR5, a PCIe x16 slot

21 October 2024 at 22:00
SiFive HiFive Premier P550 RISC-V mini-DTX motherboard

SiFive HiFive Premier P550 is a mini-DTX (203 x 170mm) motherboard powered by a 1.4 GHz ESWIN EIC7700X quad-core RISC-V SiFive P550 SoC with up to 19.95 TOPS of AI performance, equipped with up to 32GB LPDDR5 memory and 128GB of eMMC flash, all soldered on a system-on-module. The motherboard itself features a SATA III connector for data storage, includes an HDMI 2.0 port for 4K video output, a PCIe Gen3 x16 slot (working at x4), two gigabit Ethernet ports, an M.2 Key-E socket to add a WiFi/Bluetooth card, up to five USB interfaces, and more. HiFive Premier P550 specifications: SoC – ESWIN EIC7700X CPU 4x SiFive Performance P550 RV64GC RISC-V cores @ 1.4GHz (up to 1.8GHz when overclocked) with Cortex-A75-class performance 32KB(I) + 32KB(D) L1 Cache 256KB L2 Cache 4MB shared L3 Cache Cache supports ECC (support SECDED) NPU (Not currently supported in software) – Up to 19.95 [...]

The post HiFive Premier P550 mini-DTX motherboard features ESWIN EIC7700X RISC-V AI SoC, up to 32GB DDR5, a PCIe x16 slot appeared first on CNX Software - Embedded Systems News.

Using Docker AI Tools for Devs to Provide Context for Better Code Fixes

21 October 2024 at 20:00

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

At Docker Labs, we’ve been exploring how LLMs can connect different parts of the developer workflow, bridging gaps between tools and processes. A key insight is that LLMs excel at fixing code issues when they have the right context. To provide this context, we’ve developed a process that maps out the codebase using linting violations and the structure of top-level code blocks. 

By combining these elements, we teach the LLM to construct a comprehensive view of the code, enabling it to fix issues more effectively. By leveraging containerization, integrating these tools becomes much simpler.

2400x1260 docker labs genai

Previously, my linting process felt a bit disjointed. I’d introduce an error, run Pylint, and receive a message that was sometimes cryptic, forcing me to consult Pylint’s manual to understand the issue. When OpenAI released ChatGPT, the process improved slightly. I could run Pylint, and if I didn’t grasp an error message, I’d copy the code and the violation into GPT to get a better explanation. Sometimes, I’d ask it to fix the code and then manually paste the solution back into my editor.

However, this approach still required several manual steps: copying code, switching between applications, and integrating fixes. How might we improve this process?

Docker’s AI Tools for Devs prompt runner is an architecture that allows us to integrate tools like Pylint directly into the LLM’s workflow through containerization. By containerizing Pylint and creating prompts that the LLM can use to interact with it, we’ve developed a system where the LLM can access the necessary tools and context to help fix code issues effectively.

Understanding the cognitive architecture

For the LLM to assist effectively, it needs a structured way of accessing and processing information. In our setup, the LLM uses the Docker prompt runner to interact with containerized tools and the codebase. The project context is extracted using tools such as Pylint and Tree-sitter that run against the project. This context is then stored and managed, allowing the LLM to access it when needed.

By having access to the codebase, linting tools, and the context of previous prompts, the LLM can understand where problems are, what they are, and have the right code fragments to fix them. This setup replaces the manual process of finding issues and feeding them to the LLM with something automatic and more engaging.

Streamlining the workflow

Now, within my workflow, I can ask the assistant about code quality and violations directly. The assistant, powered by an LLM, has immediate access to a containerized Pylint tool and a database of my code through the Docker prompt runner. This integration allows the LLM to use tools to assist me directly during development, making the programming experience more efficient.

This approach helps us rethink how we interact with our tools. By enabling a conversational interface with tools that map code to issues, we’re exploring possibilities for a more intuitive development experience. Instead of manually finding problems and feeding them to an AI, we can convert our relationship with tools themselves to be conversational partners that can automatically detect issues, understand the context, and provide solutions.

Walking through the prompts

Our project is structured around a series of prompts that guide the LLM through the tasks it needs to perform. These prompts are stored in a Git repository and can be versioned, tracked, and shared. They form the backbone of the project, allowing the LLM to interact with tools and the codebase effectively. We automate this entire process using Docker and a series of prompts stored in a Git repository. Each prompt corresponds to a specific task in the workflow, and Docker containers ensure a consistent environment for running tools and scripts.

Workflow steps

An immediate and existential challenge we encountered was that this class of problem has a lot of opportunities to overwhelm the context of the LLM. Want to read a source code file? It has to be small enough to read. Need to work on more than one file? Your realistic limit is three to four files at once. To solve this, we can instruct the LLM to automate its own workflow with tools, where each step runs in a Docker container.

Again, each step in this workflow runs in a Docker container, which ensures a consistent and isolated environment for running tools and scripts. The first four steps prepare the agent to be able to extract the right context for fixing violations. Once the agent has the necessary context, the LLM can effectively fix the code issues in step 5.

1. Generate violations report using Pylint:

Run Pylint to produce a violation report.

2. Create a SQLite database:

Set up the database schema to store violation data and code snippets.

3. Generate and run INSERT statements:

  • Decouple violations from the range they represent.
  • Use a script to convert every violation and range from the report into SQL insert statements.
  • Run the statements against the database to populate it with the necessary data.

4. Index code in the database:

  • Generate an abstract syntax tree (AST) of the project with Tree-sitter (Figure 1).
Figure 1: Generating an abstract syntax tree.
  • Find all second-level nodes (Figure 2). In Python’s grammar, second-level nodes are statements inside of a module.
Figure 2: Extracting content for the database.
  • Index these top-level ranges into the database.
  • Populate a new table to store the source code at these top-level ranges.

5. Fix violations based on context:

Once the agent has gathered and indexed the necessary context, use prompts to instruct the LLM to query the database and fix the code issues (Figure 3).

Figure 3: Instructions for fixing violations.

Each step from 1 to 4 builds the foundation for step 5, where the LLM, with the proper context, can effectively fix violations. The structured preparation ensures that the LLM has all the information it needs to address code issues with precision.
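As a rough illustration of the workflow above (not the project’s actual prompts or schema), the sketch below runs Pylint with JSON output, stores violations and top-level code ranges in SQLite, and fetches the snippet that encloses a given violation. For brevity it uses Python’s built-in ast module to find module-level statements where the article uses Tree-sitter, and the table and column names are invented for the example:

import ast
import json
import sqlite3
import subprocess

def build_index(db_path, files):
    conn = sqlite3.connect(db_path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS violations(
            path TEXT, line INTEGER, end_line INTEGER, symbol TEXT, message TEXT);
        CREATE TABLE IF NOT EXISTS code_ranges(
            path TEXT, start_line INTEGER, end_line INTEGER, source TEXT);
    """)
    # Step 1: run Pylint and capture the violation report as JSON.
    report = subprocess.run(
        ["pylint", "--output-format=json", *files],
        capture_output=True, text=True)
    # Steps 2-3: turn each violation into an INSERT statement.
    for v in json.loads(report.stdout or "[]"):
        conn.execute(
            "INSERT INTO violations VALUES (?, ?, ?, ?, ?)",
            (v["path"], v["line"], v.get("endLine") or v["line"],
             v["symbol"], v["message"]))
    # Step 4: index top-level ranges (the article uses Tree-sitter; ast is a stand-in).
    for path in files:
        source = open(path, encoding="utf-8").read()
        lines = source.splitlines()
        for node in ast.parse(source).body:          # module-level statements
            snippet = "\n".join(lines[node.lineno - 1:node.end_lineno])
            conn.execute(
                "INSERT INTO code_ranges VALUES (?, ?, ?, ?)",
                (path, node.lineno, node.end_lineno, snippet))
    conn.commit()
    return conn

def context_for_violation(conn, path, line):
    # Step 5: fetch the top-level range that encloses the violation, which is
    # the snippet handed to the LLM along with the violation message.
    row = conn.execute(
        "SELECT source FROM code_ranges "
        "WHERE path = ? AND start_line <= ? AND end_line >= ?",
        (path, line, line)).fetchone()
    return row[0] if row else None

With an index like this, a prompt can ask for the code surrounding line 60 of block_listed_name.py and get the whole do_front function back, which is exactly the kind of context the next section walks through.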

Refining the context for LLM fixes

To understand how our system improves code fixes, let’s consider a specific violation flagged by Pylint. Say we receive a message that there’s a violation on line 60 of our code file block_listed_name.py:

{
  "type": "convention",
  "module": "block_listed_name",
  "obj": "do_front",
  "line": 60,
  "column": 4,
  "endLine": 60,
  "endColumn": 7,
  "path": "cloned_repo/naming_conventions/block_listed_name.py",
  "symbol": "disallowed-name",
  "message": "Disallowed name \"foo\"",
  "message-id": "C0104"
}

From this Pylint violation, we know that the variable foo is a disallowed name. However, if we tried to ask the LLM to fix this issue based solely on this snippet of information, the response wouldn’t be as effective. Why? The LLM lacks the surrounding context — the full picture of the function in which this violation occurs.

This is where indexing the codebase becomes essential

Because we’ve mapped out the codebase, we can now ask the LLM to query the index and retrieve the surrounding code that includes the do_front function. The LLM can even generate the SQL query for us, thanks to its knowledge of the database schema. Once we’ve retrieved the full function definition, the LLM can work with a more complete view of the problem:

def do_front(front_filename, back_filename):
   """
   Front strategy: loop over front image,
   detect blue pixels there,
   substitute in pixels from back.
   Return changed front image.
   """
   foo = SimpleImage(front_filename)
   back = SimpleImage(back_filename)
   for y in range(foo.height):
       for x in range(foo.width):
           pixel = foo.get_pixel(x, y)
           # Detect blue pixels in front and replace with back pixels
           if pixel[2] > 2 * max(pixel[0], pixel[1]):
               back_pixel = back.get_pixel(x, y)
               foo.set_pixel(x, y, back_pixel)
   return foo

Now that the LLM can see the whole function, it’s able to propose a more meaningful fix. Here’s what it suggests after querying the indexed codebase and running the fix:

def do_front(front_filename, back_filename):
    """
    Front strategy: loop over front image,
    detect blue pixels there,
    substitute in pixels from back.
    Return changed front image.
    """
    front_image = SimpleImage(front_filename)
    back_image = SimpleImage(back_filename)
    for y in range(front_image.height):
        for x in range(front_image.width):
            pixel = front_image.get_pixel(x, y)
            # Detect blue pixels in front and replace with back pixels
            if pixel[2] > 2 * max(pixel[0], pixel[1]):
                back_pixel = back_image.get_pixel(x, y)
                front_image.set_pixel(x, y, back_pixel)
    return front_image

Here, the variable foo has been replaced with the more descriptive front_image, making the code more readable and understandable. The key step was providing the LLM with the correct level of detail — the top-level range — instead of just a single line or violation message. With the right context, the LLM’s ability to fix code becomes much more effective, which ultimately streamlines the development process.

Remember, all of this information is retrieved and indexed by the LLM itself through the prompts we’ve set up. Through this series of prompts, we’ve reached a point where the assistant has a comprehensive understanding of the codebase. 

At this stage, not only can I ask for a fix, but I can even ask questions like “what’s the violation at line 60 in naming_conventions/block_listed_name.py?” and the assistant responds with:

On line 60 of naming_conventions/block_listed_name.py, there's a violation: Disallowed name 'foo'. The variable name 'foo' is discouraged because it doesn't convey meaningful information about its purpose.

Although Pylint has been our focus here, this approach points to a new conversational way to interact with many tools that map code to issues. By integrating LLMs with containerized tools through architectures like the Docker prompt runner, we can enhance various aspects of the development workflow.

We’ve learned that combining tool integration, cognitive preparation of the LLM, and a seamless workflow can significantly improve the development experience, letting the assistant step in with concrete help while you develop.

To follow along with this effort, check out the GitHub repository for this project.

For more on what we’re doing at Docker, subscribe to our newsletter.


Discover #Virgil: history comes to life with Arduino

21 October 2024 at 19:36

We’re excited to introduce #Virgil, an innovative project that combines the power of Arduino technology with a passion for history, creating a groundbreaking interactive experience for museums.

Using Arduino’s versatile and scalable ecosystem, #Virgil operates completely offline, allowing visitors to interact with 3D avatars in a seamless and immersive way. The project brings the past to life, offering dialogue-driven encounters with key historical figures thanks to voice recognition and edge AI – with the option to choose among many different languages.

“#Virgil is meant to celebrate the past and, more importantly, open new avenues for education and inspiration. We want to prove how technology, when guided by ethical values, can amplify and perpetuate our cultural heritage in ways that used to be unimaginable,” comments Enrico Benevenuta, coordinator of the Territori Svelati project and AI expert.

Matteo Olivetti, great-grandson of Olivetti’s founder Camillo, drew inspiration from the iconic Divisumma to design a dedicated hardware setup, Olivox. 

Powered by the Portenta X8 and Max Carrier, the device connects via HDMI to any screen, engaging visitors in a rich, interactive experience without the need for smartphones or a stable internet connection. This approach allows the project to adapt easily to different exhibitions and contexts, while offering full control over the visitor experience.

Internationally renowned 3D artist Elvis Morelli was entrusted with creating the first avatar of the project – and it’s no coincidence that Camillo Olivetti was chosen. 

The story of Olivetti resonates deeply with Arduino’s own mission of pushing the boundaries of technology, and #Virgil represents a continuation of that legacy by bridging the gap between the past and future through cutting-edge tools.

To find out more about the project and perhaps have a chat with your favorite pioneer of technology and innovation, visit #Virgil’s booth at the upcoming Maker Faire Rome 2024, booth E.09. Don’t forget to stop by Arduino’s booth N.07 to find out more about our products, and let us know what you asked Camillo!

The post Discover #Virgil: history comes to life with Arduino appeared first on Arduino Blog.

WatchThis: A Wearable Point-and-Ask Interface Powered by Vision-Language Models and XIAO ESP32S3 Sense

21 October 2024 at 11:58

MIT Media Lab researchers Cathy Mengying Fang, Patrick Chwalek, Quincy Kuang, and Pattie Maes have developed WatchThis, a groundbreaking wearable device that enables natural language interactions with real-world objects through simple pointing gestures. Cathy conceived the idea for WatchThis during a one-day hackathon in Shenzhen, held as part of MIT Media Lab’s “Research at Scale” initiative. Organized by Cedric Honnet and hosted by Southern University of Science and Technology and Seeed Studio, the hackathon provided the perfect setting to prototype this innovative device using components from the Seeed Studio XIAO ESP32S3 suite. By integrating Vision-Language Models (VLMs) with a compact wrist-worn device, WatchThis allows users to ask questions about their surroundings in real time, making contextual queries as intuitive as pointing and asking.

Credit: Cathy Fang

Hardware

The WatchThis project utilizes the following hardware components:

Credit: Cathy Fang

How the Project Works

WatchThis is designed to seamlessly integrate natural, gesture-based interaction into daily life. The wearable device consists of a watch with a rotating, flip-up camera attached to the back of a display. When the user points at an object of interest, the camera captures the area, and the device processes contextual queries based on the user’s gesture.

The interaction begins when the user flips up the watch body to reveal the camera, which then captures the area the finger is pointing at. The watch’s display shows a live feed from the camera, allowing precise aiming. When the user touches the screen, the device captures the image and pauses the camera feed. The captured RGB image is then compressed into JPG format and converted to base64, after which an API request is made to query the image.

The device uses these API calls to interact with OpenAI’s GPT-4o model, which accepts both text and image inputs. This allows the user to ask questions such as “What is this?” or “Translate this,” and receive immediate responses. The text response is displayed on the screen, overlaid on the captured image. After the response is shown for 3 seconds, the screen returns to streaming the camera feed, ready for the next command.

The software driving WatchThis is written in Arduino-compatible C++ and runs directly on the device. It is optimized for quick and efficient performance, with an end-to-end response time of around 3 seconds. Instead of relying on voice recognition or text-to-speech—which can be error-prone and resource-intensive—the system uses direct text input for queries. Users can further personalize their interactions by modifying the default query prompt through an accompanying WebApp served on the device, allowing tailored actions such as identifying objects, translating text, or requesting instructions.
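
The firmware itself is Arduino-compatible C++, but the capture-to-answer flow described above can be illustrated with a short Python sketch. The API key, image path, and prompt below are placeholders, not taken from the WatchThis code:

import base64
import requests

API_KEY = "sk-..."  # placeholder; supply your own OpenAI API key

# Encode a captured JPEG as base64, as the device does before querying.
with open("capture.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is this?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    "max_tokens": 100,
}

# Send the text + image query and print the model's short answer,
# which the watch would instead overlay on its display.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])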

Credit: Cathy Fang

Applications

Imagine strolling through a city and pointing at a building to learn its history, or identifying an exotic plant in a botanical garden with a mere gesture.

The device goes beyond simple identification, offering practical applications like real-time translation of menu items, which is a game-changer for travelers and language learners alike.

The research team has discussed even more exciting potential applications:

    • A “Remember this” function could serve as a visual reminder system, potentially aiding those who need to take medication regularly.
    • For urban explorers, a “How do I get there” feature could provide intuitive, spatially-aware navigation by allowing users to point at distant landmarks.
    • A “Zoom in on that” capability could offer a closer look at far-off objects without disrupting the user’s activities.
    • Perhaps most intriguingly, a “Turn that off” function could allow users to control smart home devices with a combination of voice commands and gestures, seamlessly integrating with IoT ecosystems.

While some of these features are still in conceptual stages, they paint a picture of a future where our interactions with the world around us are more intuitive, informative, and effortless than ever before.

Credit: Cathy Fang

Build Your Own WatchThis

Interested in building your own WatchThis wearable? Explore the open-source hardware and software components on GitHub and start creating today! Check out their paper below for full details.

End Note

Hey community, we’re curating a monthly newsletter centering around the beloved Seeed Studio XIAO. If you want to stay up-to-date with:

🤖 Cool Projects from the Community to get inspiration and tutorials
📰 Product Updates: firmware update, new product spoiler
📖 Wiki Updates: new wikis + wiki contribution
📣 News: events, contests, and other community stuff

Please click the image below👇 to subscribe now!

The post WatchThis: A Wearable Point-and-Ask Interface Powered by Vision-Language Models and XIAO ESP32S3 Sense appeared first on Latest Open Tech From Seeed.

ASUS NUC 14 Pro AI with Intel Core Ultra Processor, Delivering up to 120 Platform TOPS

20 October 2024 at 21:36
The ASUS NUC 14 Pro AI is a powerful, compact mini PC featuring the Intel Core Ultra processor (Series 2), which integrates CPU, GPU, and NPU architectures to deliver up to 120 platform TOPS for AI processing. It is designed for consumer, commercial, and edge computing applications. According to the Asus product announcement earlier last […]