Building a Raspberry Pi Pico 2-powered drone from scratch

The summer, and Louis Wood’s internship with our Maker in Residence, was creeping to a close without his final build making it off the ground. But as if by magic, on his very last day, Louis got his handmade drone flying.

3D-printed CAD design

The journey of building a custom drone began with designing in CAD software. My initial design was fully 3D-printed with an enclosed structure and cantilevered arms to support point forces. The honeycomb lid provided cooling, and the enclosure allowed for embedded XT-60 and MR-30 connections, creating a clean and integrated look. Inside, I ensured all electrical components were rigidly mounted to avoid unwanted movement that could destabilise the flight.

Testing quickly revealed that 3D-printed frames were brittle, often breaking during crashes. Moreover, the limitations of my printer’s build area meant that motor placement was cramped. To overcome these issues, I CNC-routed a new frame from 4 mm carbon fibre, increasing the wheelbase for better stability. Using Carveco software, I generated toolpaths and cut the frame on a WorkBee CNC in our Maker Lab. After two hours, I had a sturdy, assembled frame ready for electronics.

Not one, not two, but three Raspberry Pis

For the drone’s brain, I used a Raspberry Pi Pico 2 connected to an MPU6050 gyroscope for real-time orientation data and an IBUS protocol receiver for streamlined control inputs. Initially, I faced issues with signal processing because of the latency of reading five separate PWM signals from the receiver. Switching to IBUS increased the control loop frequency tenfold, which greatly improved flight response. The Pico handled PID (Proportional-Integral-Derivative) calculations for stability, and a 4-in-1 ESC managed the motor signals. The drone also carries a Raspberry Pi Zero with a Camera Module 2 and an analogue VTX for real-time FPV (first-person view) flying.
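One practical reason the switch helped: IBUS delivers every control channel in a single 32-byte serial frame at 115,200 baud, so the flight loop reads one UART packet instead of timing five separate PWM pulses. The sketch below is a minimal MicroPython IBUS frame parser for the Pico; the UART number, RX pin, and function name are illustrative assumptions, not details taken from Louis’s build.

```python
# Minimal IBUS frame parser for a Raspberry Pi Pico (MicroPython sketch).
# Assumes the receiver's IBUS output is wired to UART0 RX on GP1; the pin
# choice and helper name are illustrative, not taken from the actual build.
from machine import UART, Pin

uart = UART(0, baudrate=115200, rx=Pin(1))

def read_ibus_channels():
    """Return a list of 14 channel values (roughly 1000-2000), or None."""
    # Every IBUS frame is 32 bytes and starts with the header 0x20 0x40.
    while True:
        b = uart.read(1)
        if b is None:
            return None                      # nothing waiting on the UART
        if b == b'\x20' and uart.read(1) == b'\x40':
            break
    body = uart.read(30)                     # 14 x 2-byte channels + checksum
    if body is None or len(body) != 30:
        return None
    # Checksum is 0xFFFF minus the sum of the first 30 bytes of the frame.
    expected = 0xFFFF - (0x20 + 0x40 + sum(body[:28]))
    received = body[28] | (body[29] << 8)
    if expected != received:
        return None
    return [body[i] | (body[i + 1] << 8) for i in range(0, 28, 2)]
```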

All coming together in the Maker Lab at Pi Towers

Programming was based on Tim Hanewich’s Scout flight controller code, implementing a ‘rate’ mode controller that uses PID feedback to track the angular velocities commanded by the pilot. Fine-tuning the PID gains was essential; improper settings could lead to instability and dangerous oscillations. I followed a careful tuning process, starting with low values for each parameter and slowly increasing them.
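For readers unfamiliar with rate mode, the idea is that each axis runs its own PID loop: the gyro’s measured angular velocity is compared with the rate the pilot is requesting, and the correction is mixed into the four motor outputs. The sketch below is a generic illustration of that structure, not Tim Hanewich’s Scout code; the gains and the quad-X motor mix are placeholders of the kind you would start low and tune up.

```python
# Generic rate-mode PID sketch (illustrative, not the Scout flight controller).
# Each axis drives its measured angular velocity towards the commanded rate;
# the gains and quad-X mixing signs below are placeholders for tuning.

class RatePID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rate, measured_rate, dt):
        error = target_rate - measured_rate
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis, deliberately started with low gains.
roll_pid = RatePID(kp=0.5, ki=0.05, kd=0.01)
pitch_pid = RatePID(kp=0.5, ki=0.05, kd=0.01)
yaw_pid = RatePID(kp=1.0, ki=0.05, kd=0.0)

def mix_motors(throttle, roll_out, pitch_out, yaw_out):
    """Combine throttle and per-axis PID outputs into four motor commands."""
    return (
        throttle + roll_out + pitch_out - yaw_out,   # front left
        throttle - roll_out + pitch_out + yaw_out,   # front right
        throttle + roll_out - pitch_out + yaw_out,   # rear left
        throttle - roll_out - pitch_out - yaw_out,   # rear right
    )
```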

To make the process safer, I constructed a testing rig to isolate each axis and simulate flight conditions. This allowed me to achieve a rough tune before moving on to actual flight tests, ultimately ensuring the drone’s safe and stable performance.


Raspberry Pi AI Camera on sale now at $70

People have been using Raspberry Pi products to build artificial intelligence projects for almost as long as we’ve been making them. As we’ve released progressively more powerful devices, the range of applications that we can support natively has increased; but in any generation there will always be some workloads that require an external accelerator, like the Raspberry Pi AI Kit, which we launched in June.

The AI Kit is an awesomely powerful piece of hardware, capable of performing thirteen trillion operations per second. But it is only compatible with Raspberry Pi 5, and requires a separate camera module to capture visual data. We are therefore very excited to announce a new addition to our camera product line: the Raspberry Pi AI Camera.

Image: the Raspberry Pi AI Camera with a Raspberry Pi Camera Cable (Standard – Mini, 200 mm).

The AI Camera is built around a Sony IMX500 image sensor with an integrated AI accelerator. It can run a wide variety of popular neural network models, with low power consumption and low latency, leaving the processor in your Raspberry Pi free to perform other tasks.

Key features of the Raspberry Pi AI Camera include:

  • 12 MP Sony IMX500 Intelligent Vision Sensor
  • Sensor modes: 4056×3040 at 10fps, 2028×1520 at 30fps
  • 1.55 µm × 1.55 µm cell size
  • 78-degree field of view with manually adjustable focus
  • Integrated RP2040 for neural network and firmware management

The AI Camera can be connected to all Raspberry Pi models, including Raspberry Pi Zero, using our regular camera ribbon cables.

Image: the AI Camera connected to a Raspberry Pi on a desk.

With Sony’s suite of AI tools, existing neural network models built with frameworks such as TensorFlow or PyTorch can be converted to run efficiently on the AI Camera. Alternatively, new models can be designed to take advantage of the AI accelerator’s specific capabilities.

Under the hood

To make use of the integrated AI accelerator, we must first upload a model. On older Raspberry Pi devices this process uses the I2C protocol, while on Raspberry Pi 5 we are able to use a much faster custom two-wire protocol. The camera end of the link is managed by an on-board RP2040 microcontroller; an attached 16MB flash device caches recently used models, allowing us to skip the upload step in many cases.

Image: close-up of the Raspberry Pi AI Camera module and its ribbon cable.

Once the sensor has started streaming, the IMX500 operates as a standard Bayer image sensor, much like the one on Raspberry Pi Camera Module 3. An integrated Image Signal Processor (ISP) performs basic image processing steps on the sensor frame (principally Bayer-to-RGB conversion and cropping/rescaling), and feeds the processed frame directly into the AI accelerator. Once the neural network model has processed the frame, its output is transferred to the host Raspberry Pi together with the Bayer frame over the CSI-2 camera bus.

Image: object detection output on a desk scene, with detected objects (laptop, potted plant, vase, mouse, mug) labelled with confidence scores.

Integration with Raspberry Pi libcamera

A key benefit of the AI Camera is its seamless integration with our Raspberry Pi camera software stack. Under the hood, libcamera processes the Bayer frame using our own ISP, just as it would for any sensor.

We also parse the neural network results to generate an output tensor, and synchronise it with the processed Bayer frame. Both of these are returned to the application during libcamera’s request completion step.

Image: close-up of a Raspberry Pi with the AI Camera attached.

The Raspberry Pi camera frameworks — Picamera2 and rpicam-apps, and indeed any libcamera-based application — can retrieve the output tensor, correctly synchronised with the sensor frame. Here’s an example of an object detection neural network model (MobileNet SSD) running under rpicam-apps and performing inference on a 1080p video at 30fps.

This demo uses the post-processing framework in rpicam-apps to generate object bounding boxes from the output tensor and draw them on the image. This stage takes no more than 300 lines of code to implement, and an equivalent application built using Python and Picamera2 requires far fewer.
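To give a flavour of the Python side (without presenting this as the official demo), here is a minimal sketch of the pattern: load a model from the model zoo onto the sensor, then read the network’s output out of each frame’s metadata. The IMX500 helper class, the model path, and the metadata handling are assumptions based on the Picamera2 IMX500 examples; check the Getting Started Guide for the exact names and paths shipped with your Raspberry Pi OS release.

```python
# Minimal sketch of on-camera inference with Picamera2 (not the official demo).
# The IMX500 helper, the model path and the metadata handling are assumptions
# based on the Picamera2 IMX500 examples; check the Getting Started Guide for
# the exact names shipped with your Raspberry Pi OS release.
import numpy as np
from picamera2 import Picamera2
from picamera2.devices import IMX500

# The model is uploaded to the sensor when the camera starts; this path is a
# placeholder for one of the object detection models in the model zoo.
imx500 = IMX500("/usr/share/imx500-models/imx500_network_ssd_mobilenetv2.rpk")
picam2 = Picamera2(imx500.camera_num)
picam2.start(picam2.create_preview_configuration())

while True:
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata)   # output tensor(s) for this frame
    if outputs is None:
        continue                             # inference can lag the first frames
    # For an SSD-style detector the outputs hold boxes, scores and class ids;
    # the exact layout depends on the model loaded above.
    print([np.asarray(o).shape for o in outputs])
```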

Another example below shows a pose estimation neural network model (PoseNet) performing inference on a 1080p video at 30fps.

Although these examples were recorded using a Raspberry Pi 4, they run with the same inferencing performance on a Raspberry Pi Zero!

Together with Sony, we have released a number of popular visual neural network models optimised for the AI Camera in our model zoo, along with visualisation example scripts using Picamera2.

Which product should I buy?

Should you buy a Raspberry Pi AI Kit, or a Raspberry Pi AI Camera? The AI Kit has higher theoretical performance than the AI Camera, and can support a broader range of models, but is only compatible with Raspberry Pi 5. The AI Camera is more compact, has a lower total cost if you don’t already own a camera, and is compatible with all models of Raspberry Pi.

Ultimately, both products provide great acceleration performance for common models, and both have been optimised to work smoothly with our camera software stack.

Getting started and going further

Check out our Getting Started Guide. There you’ll find instructions on installing the AI Camera hardware, setting up the software environment, and running the examples and neural networks in our model zoo.

Image: the Raspberry Pi AI Camera with its camera cable.

Sony’s AITRIOS Developer site has more technical resources on the IMX500 sensor, in particular the IMX500 Converter and IMX500 Package documentation, which will be useful for users who want to run custom-trained networks on the AI Camera.

We’ve been inspired by the incredible AI projects you’ve built over the years with Raspberry Pi, and your hard work and inventiveness encourages us to invest in the tools that will help you go further. The arrival of first the AI Kit, and now the AI Camera, opens up a whole new world of opportunities for high-resolution, high-frame rate, high-quality visual AI: we don’t know what you’re going to build with them, but we’re sure it will be awesome.

