
Bringing real-time edge AI applications to developers

19 November 2024 at 19:25

In this guest post, Ramona Rayner from our partner Sony shows you how to quickly explore different models and AI capabilities, and how you can easily build applications on top of the Raspberry Pi AI Camera.

The recently launched Raspberry Pi AI Camera is an extremely capable piece of hardware, enabling you to build powerful AI applications on your Raspberry Pi. Because AI inference is offloaded to the IMX500 accelerator chip, more computational resources are available to handle application logic right on the edge! We are very curious to see what you will create, and we are keen to give you more tools to do so. This post will cover how to quickly explore different models and AI capabilities, and how to easily build applications on top of the Raspberry Pi AI Camera.

If you didn’t have the chance to go through the Getting Started guide, make sure to check that out first to verify that your AI Camera is set up correctly.

Explore pre-trained models

A great way to start exploring the possibilities of the Raspberry Pi AI Camera is to try out some of the pre-trained models available in the IMX500 Model Zoo. To simplify exploration, consider using the GUI Tool, designed to let you quickly upload different models and see real-time inference results from the AI Camera.

To start the GUI Tool, make sure you have Node.js installed (verify by running node --version in a terminal), then build and run the tool with the following commands from the root of the repository:

make build
./dist/run.sh

The GUI Tool will be accessible at http://127.0.0.1:3001. To see a model in action:

  • Add a custom model by clicking the ADD button located at the top right corner of the interface.
  • Provide the necessary details to add a custom network and upload the network.rpk file and the (optional) labels.txt file.
  • Select the model and navigate to Camera Preview to see the model in action!

Here are just a few of the models available in the IMX500 Model Zoo:

Network Name          | Network Type | Post Processor                        | Color Format | Preserve Aspect Ratio | Network File | Labels File
mobilenet_v2          | packaged     | Classification                        | RGB          | True                  | network.rpk  | imagenet_labels.txt
efficientdet_lite0_pp | packaged     | Object Detection (EfficientDet Lite0) | RGB          | True                  | network.rpk  | coco_labels.txt
deeplabv3plus         | packaged     | Segmentation                          | RGB          | False                 | network.rpk  | —
posenet               | packaged     | Pose Estimation                       | RGB          | False                 | network.rpk  | —
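If you would rather drive a model from code than from the GUI Tool, the same network.rpk files can be loaded from Python through Picamera2’s IMX500 device helper. The sketch below is a minimal illustration, not a complete application: the model path is a placeholder, post-processing depends on the model you load, and the exact helper API may differ between picamera2 releases.

# Minimal sketch: run a Model Zoo .rpk on the AI Camera via Picamera2.
# The model path is a placeholder; post-processing depends on the model.
from picamera2 import Picamera2
from picamera2.devices import IMX500

imx500 = IMX500("/path/to/network.rpk")     # upload the network to the IMX500
picam2 = Picamera2(imx500.camera_num)       # open the camera the IMX500 sits on
picam2.start(picam2.create_preview_configuration())

while True:
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata)  # output tensors, once inference ran
    if outputs is not None:
        print([o.shape for o in outputs])   # hand these to your post-processor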

Exploring the different models gives you insight into the camera’s capabilities and enables you to identify the model that best suits your requirements. When you think you’ve found it, it’s time to build an application.

Building applications

Plenty of CPU is available to run applications on the Raspberry Pi while model inference is taking place on the IMX500. To demonstrate this, we’ll run a Workout Monitoring sample application.

The goal is to count exercise repetitions in real time by detecting and tracking people performing common exercises like pull-ups, push-ups, ab workouts, and squats. The app counts repetitions for each person in the frame, so multiple people can work out simultaneously and compete, each with automated rep counting.

To run the example, clone the sample apps repository and make sure to download the HigherHRNet model from the Raspberry Pi IMX500 Model Zoo.

Make sure you have OpenCV with Qt available:

sudo apt install python3-opencv

And from the root of the repository run:

python3 -m venv venv --system-site-packages
source venv/bin/activate
cd examples/workout-monitor/
pip install -e .

Switching between exercises is straightforward; simply provide the appropriate --exercise argument as one of pullup, pushup, abworkout or squat.

workout-monitor --model /path/to/imx500_network_higherhrnet_coco.rpk --exercise pullup

Note that this application is running:

  • Model post-processing to interpret the model’s output tensor into bounding boxes and skeleton keypoints
  • A tracker module (ByteTrack) to give the detected people a unique ID so that you can count individual people’s exercise reps
  • A matcher module to increase the accuracy of the tracker results, by matching people over frames so as not to lose their IDs
  • CV2 visualisation to display the detections and the application’s output on screen

And all of this in real time, on the edge, while the IMX500 is taking care of the AI inference!
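To make the counting step concrete, here is a hypothetical, heavily simplified rep counter of the kind such an application might use: a hysteresis state machine over a joint angle computed from skeleton keypoints. This illustrates the technique; it is not the sample app’s actual code.

# Hypothetical sketch: count one rep per down->up cycle of a joint angle.
import numpy as np

def joint_angle(a, b, c):
    # Angle at keypoint b (degrees) formed by keypoints a-b-c, each (x, y).
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-6)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

class RepCounter:
    # Hysteresis thresholds stop jittery keypoints double-counting a rep.
    def __init__(self, up_angle=160.0, down_angle=60.0):
        self.up_angle, self.down_angle = up_angle, down_angle
        self.state, self.reps = "up", 0

    def update(self, angle):
        if self.state == "up" and angle < self.down_angle:
            self.state = "down"
        elif self.state == "down" and angle > self.up_angle:
            self.state = "up"
            self.reps += 1
        return self.reps

One RepCounter per tracked ID is all it takes to keep several people’s scores separate, which is exactly why the tracker and matcher modules matter.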

Now both you and the AI Camera are testing out each other’s limits. How many pull-ups can you do?

We hope by this point you’re curious to explore further; you can discover more sample applications on GitHub.


Raspberry Pi AI Kit projects

By: Phil King
11 November 2024 at 21:24

This #MagPiMonday, we’re hoping to inspire you to add artificial intelligence to your Raspberry Pi designs with this feature by Phil King, from the latest issue of The MagPi.

With their powerful AI accelerator modules, Raspberry Pi’s Camera Module and AI Kit open up exciting possibilities in computer vision and machine learning. The versatility of the Raspberry Pi platform, combined with these AI capabilities, makes a whole range of innovative smart projects possible. From creative experiments to practical applications like smart pill dispensers, makers are harnessing the kit’s potential to push the boundaries of AI. In this feature, we explore some standout projects, and hope they inspire you to embark on your own.

Peeper Pam boss detector

By VEEB Projects

AI computer vision can identify objects within a live camera view. In this project, VEEB’s Martin Spendiff and Vanessa Bradley have used it to detect humans in the frame, so you can tell if your boss is approaching behind you as you sit at your desk!

The project comprises two parts. A Raspberry Pi 5 equipped with a Camera Module and AI Kit handles the image recognition and also acts as a web server. This uses web sockets to send messages wirelessly to the ‘detector’ part — a Raspberry Pi Pico W and a voltmeter whose needle moves to indicate the level of AI certainty for the ID.

Having got their hands on an AI Kit — “a nice intro into computer vision” — it took the pair just three days to create Peeper Pam. “The most challenging bit was that we’d not used sockets — more efficient than the Pico constantly asking Raspberry Pi ‘do you see anything?’,” says Martin. “Raspberry Pi does all the heavy lifting, while Pico just listens for an ‘I’ve seen something’ signal.”
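Martin’s “I’ve seen something” signal maps onto a simple push pattern. Purely as an illustration (this is not VEEB’s code), the server side could broadcast each detection with the Python websockets package, sending the confidence value the voltmeter needle displays:

# Hypothetical sketch of the push pattern: the detector broadcasts,
# the Pico W dial just listens.
import asyncio
import websockets

CLIENTS = set()

async def handler(ws):
    CLIENTS.add(ws)                 # a dial has connected
    try:
        await ws.wait_closed()      # clients only listen
    finally:
        CLIENTS.discard(ws)

def notify(confidence):
    # Push the AI certainty so the needle can track it.
    websockets.broadcast(CLIENTS, f"person:{confidence:.2f}")

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        while True:
            await asyncio.sleep(1)  # stand-in for the detection loop
            notify(0.87)            # fire when a person is recognised

asyncio.run(main())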

While he notes that you could get Raspberry Pi 5 to serve both functions, the two-part setup means you can place the camera in a different position to monitor a spot you can’t see. Also, by adapting the code from the project’s GitHub repo, there are lots of other uses if you get the AI to detect other objects. “Pigeons in the window box is one that we want to do,” Martin says.

Monster AI Pi PC

By Jeff Geerling

Never one to do things by halves, Jeff Geerling went overboard with Raspberry Pi AI Kit and built a Monster AI Pi PC with a total of eight neural processors. In fact, with 55 TOPS (trillions of operations per second), it’s faster than the latest AMD, Qualcomm, and Apple Silicon processors!

The NPU chips — including the AI Kit’s Hailo-8L — are connected to a large twelve-slot PCIe card with a PEX 8619 switch capable of handling 16 PCI Express Gen 2 lanes. The card is then mounted on a Raspberry Pi 5 via a Pineboards uPCIty Lite HAT, which has an additional 12V PSU to supply the extra wattage needed for all those processors.

With a bit of jiggery-pokery with the firmware and drivers on Raspberry Pi, Jeff managed to get it working.

Car detection & tracking system

By Naveen

As a proof of concept, Japanese maker Naveen aimed to implement an automated system for identifying and monitoring cars at toll plazas to get an accurate tally of the vehicles entering and exiting.

With the extra processing power provided by a Raspberry Pi AI Kit, the project uses Edge Impulse computer vision to detect and count cars in the view from a Camera Module Wide. “We opted for a wide lens because it can capture a larger area,” he says, “allowing the camera to monitor multiple lanes simultaneously.” He also needed to train and test a YOLOv5 machine learning model. All the details can be found on the project page via the link above, which could prove useful for learning how to train custom ML models for your own AI project.
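Once a tracker assigns stable IDs to detections, the tallying itself can be tiny. Purely as an illustration (not Naveen’s actual code), a counter might register each tracked vehicle the first time its centre crosses a virtual line across the lanes:

# Hypothetical sketch: count each tracked vehicle once as its centre
# crosses line_y top-to-bottom (mirror the test for the other direction).
class LaneCounter:
    def __init__(self, line_y):
        self.line_y = line_y
        self.last_cy = {}     # track_id -> previous centre y
        self.counted = set()  # track_ids already tallied

    def update(self, tracks):
        # tracks: dict of track_id -> (cx, cy) bounding-box centres
        for tid, (cx, cy) in tracks.items():
            prev = self.last_cy.get(tid)
            if prev is not None and tid not in self.counted \
                    and prev < self.line_y <= cy:
                self.counted.add(tid)
            self.last_cy[tid] = cy
        return len(self.counted)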

Safety helmet detection system

By Shakhizat Nurgaliyev

Wearing a safety helmet on a building site is essential and could save your life. This computer vision project uses Raspberry Pi AI Kit with the advanced YOLOv8 machine learning model to quickly and accurately identify objects within the camera view, running at an impressive inference speed of 30fps.

The project page has a guide showing how to make use of Raspberry Pi AI Kit to achieve efficient AI inferencing for safety helmet detection. This includes details of the software installation and model training process, for which the maker has provided a link to a dataset of 5000 images with bounding box annotations for three classes: helmet, person, and head.
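Given those three classes, one illustrative way to flag violations (not necessarily this maker’s approach) is to report any detected head that no helmet bounding box overlaps:

# Hypothetical sketch: flag heads with no sufficiently overlapping helmet box.
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def unprotected_heads(heads, helmets, threshold=0.3):
    # Return the head boxes that no helmet box overlaps above the threshold.
    return [h for h in heads
            if all(iou(h, hm) < threshold for hm in helmets)]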

Accelerating MediaPipe models

By Mario Bergeron

Google’s MediaPipe is an open-source framework developed for building machine learning pipelines, especially useful for working with videos and images.

Having used MediaPipe on other platforms, Mario Bergeron decided to experiment with it on a Raspberry Pi AI Kit. On the project page (linked above) he details the process, including using his Python demo application with options to detect hands/palms, faces, or poses.

Mario’s test results show how much better the AI Kit’s Hailo-8L AI accelerator module performs compared to running reference TensorFlow Lite models on Raspberry Pi 5 alone: up to 5.8 times faster. With three models running for hand and landmark detection, the frame rate is 26–28fps with one hand detected, and 22–25fps for two.

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!


Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS

24 October 2024 at 13:59

Following the successful launch of the Raspberry Pi AI Kit and AI Camera, we are excited to introduce the newest addition to our AI product line: the Raspberry Pi AI HAT+.

The AI HAT+ features the same best-in-class Hailo AI accelerator technology as our AI Kit, but now with a choice of two performance options: the 13 TOPS (tera-operations per second) model, priced at $70 and featuring the same Hailo-8L accelerator as the AI Kit, and the more powerful 26 TOPS model at $110, equipped with the Hailo-8 accelerator.

[Image: the 26 TOPS Raspberry Pi AI HAT+, with its Hailo accelerator mounted directly on the board, attached to a Raspberry Pi via the GPIO header]

Designed to conform to our HAT+ specification, the AI HAT+ automatically switches to PCIe Gen 3.0 mode to unlock the full 26 TOPS of compute available in the Hailo-8 accelerator.

Unlike the AI Kit, which connects its accelerator via an M.2 connector, the AI HAT+ integrates the Hailo accelerator chip directly onto the main PCB. This change not only simplifies setup but also improves thermal dissipation, allowing the AI HAT+ to handle demanding AI workloads more efficiently.

What can you do with the 26 TOPS model over the 13 TOPS model? The same, but more… You can run more sophisticated neural networks in real time, achieving better inference performance. The 26 TOPS model also allows you to run multiple networks simultaneously at high frame rates. For instance, you can perform object detection, pose estimation, and subject segmentation simultaneously on a live camera feed using the 26 TOPS AI HAT+:

Both versions of the AI HAT+ are fully backward compatible with the AI Kit. Our existing Hailo accelerator integration in the camera software stack works in exactly the same way with the AI HAT+. Any neural network model compiled for the Hailo-8L will run smoothly on the Hailo-8; while models specifically built for the Hailo-8 may not work on the Hailo-8L, alternative versions with lower performance are generally available, ensuring flexibility across different use cases.

After an exciting few months of AI product releases, we now offer an extensive range of options for running inferencing workloads on Raspberry Pi. Many such workloads – particularly those that are sparse, quantised, or intermittent – run natively on Raspberry Pi platforms; for more demanding workloads, we aim to be the best possible embedded host for accelerator hardware such as our AI Camera and today’s new Raspberry Pi AI HAT+. We are eager to discover what you make with it.

