
The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects

We are enormously proud to reveal The Official Raspberry Pi Camera Module Guide (2nd edition), which is out now. David Plowman, a Raspberry Pi engineer specialising in camera software, algorithms, and image-processing hardware, authored this official guide.

The Official Raspberry Pi Camera Guide 2nd Edition cover

This detailed book walks you through all the different types of Camera Module hardware, including Raspberry Pi Camera Module 3, the High Quality Camera, the Global Shutter Camera, and older models; discover how to attach them to your Raspberry Pi and integrate vision technology with your projects. This edition also covers new code libraries, including the latest Picamera2 Python library and the rpicam command-line applications, as well as integration with the new Raspberry Pi AI Kit.

Camera Guide - Getting Started page preview

Save time with our starter guide

Our starter guide has clear diagrams explaining how to connect various Camera Modules to the new Raspberry Pi boards. It also explains how to fit custom lenses to HQ and GS Camera Modules using C-CS adaptors. Everything is outlined in step-by-step tutorials with diagrams and photographs, making it quick and easy to get your camera up and running.

Camera Guide - connecting Raspberry Pi pages

Test your camera properly

You’ll discover how to connect your camera to a Raspberry Pi and test it using the new rpicam command-line applications, which replace the older libcamera-apps. The guide also covers the new Picamera2 Python library, for integrating Camera Module technology with your software.

Camera Guide - Raw images and Camera Tuning pages

Get more from your images

Discover detailed information about how the Camera Module works, and how to get the most from your images. You’ll learn how to use RAW formats and tuning files; HDR modes and preview windows; custom resolutions, encoders, and file formats; target exposure, autofocus, shutter speed, and gain, enabling you to get the very best out of your imaging hardware.

Camera Guide - Get started with Raspberry Pi AI kit pages

Build smarter projects with AI Kit integration

A new chapter covers the integration of the AI Kit with Raspberry Pi Camera Modules to create smart imaging applications. This adds neural processing to your projects, enabling fast inference of objects captured by the camera.

Camera Guide - Time-lapse capture pages

Boost your skills with pre-built projects

The Official Raspberry Pi Camera Module Guide is packed with projects. Take selfies and stop-motion videos, experiment with high-speed and time-lapse photography, set up a security camera and smart door, build a bird box and wildlife camera trap, take your camera underwater, and much more! All of the code is tested and updated for the latest Raspberry Pi OS, and is available on GitHub for inspection.

Click here to pick up your copy of The Official Raspberry Pi Camera Module Guide (2nd edition).

The post The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects appeared first on Raspberry Pi.

Using Python with virtual environments | The MagPi #148

Raspberry Pi OS comes with Python pre-installed, and you need to use virtual environments to install third-party packages with pip. The latest issue of The MagPi, out today, features this handy tutorial, penned by our documentation lead Nate Contino, to get you started.

Raspberry Pi OS comes with Python 3 pre-installed. Interfering with the system Python installation can cause problems for your operating system. When you install third-party Python libraries, always use the correct package-management tools.

On Linux, you can install Python dependencies in two ways:

  • use apt to install pre-configured system packages
  • use pip to install libraries using Python’s dependency manager in a virtual environment

It is possible to create virtual environments inside Thonny as well as from the command line

Install Python packages using apt

Packages installed via apt are packaged specifically for Raspberry Pi OS. These packages usually come pre-compiled, so they install faster. Because apt manages dependencies for all packages, installing with this method includes all of the sub-dependencies needed to run the package. And apt ensures that you don’t break other packages if you uninstall.

For instance, to install the Python 3 library that supports the Raspberry Pi Build HAT, run the following command:

$ sudo apt install python3-build-hat

To find Python packages distributed with apt, use apt search. In most cases, Python packages use the prefix python- or python3-: for instance, you can find the numpy package under the name python3-numpy.
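Once a library is installed with apt, it is importable from the system Python straight away. As a quick illustrative check (the package names here are just examples), you can test whether a library is available without importing it:

```python
# Check whether a Python library is importable, without importing it.
# find_spec() returns None when the module cannot be found.
import importlib.util

for name in ("numpy", "buildhat"):
    spec = importlib.util.find_spec(name)
    print(name, "available" if spec else "not installed")
```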

Install Python libraries using pip

In older versions of Raspberry Pi OS, you could install libraries directly into the system version of Python using pip. Since Raspberry Pi OS Bookworm, users cannot install libraries directly into the system version of Python.

Attempting to install packages with pip causes an error in Raspberry Pi OS Bookworm

Instead, install libraries into a virtual environment (venv). To install a library at the system level for all users, install it with apt.

Attempting to install a Python package system-wide outputs an error similar to the following:

$ pip install buildhat
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    For more information visit http://rptl.io/venv

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Python users have long dealt with conflicts between OS package managers like apt and Python-specific package management tools like pip. These conflicts include both Python-level API incompatibilities and conflicts over file ownership.

Starting in Raspberry Pi OS Bookworm, packages installed via pip must be installed into a Python virtual environment (venv). A virtual environment is a container where you can safely install third-party modules so they won’t interfere with your system Python.
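The error shown above is how pip reports PEP 668’s “externally managed environment” protection. As an illustration (assuming a PEP 668-compliant distribution such as Raspberry Pi OS Bookworm), you can check for the marker file that triggers it yourself:

```python
# Detect whether this Python installation is marked "externally managed"
# (PEP 668). pip refuses system-wide installs when this marker file exists
# in the standard library directory.
import pathlib
import sysconfig

def externally_managed() -> bool:
    stdlib = pathlib.Path(sysconfig.get_path("stdlib"))
    return (stdlib / "EXTERNALLY-MANAGED").exists()

print(externally_managed())
```

On Raspberry Pi OS Bookworm’s system Python this prints `True`; inside a freshly created venv it prints `False`, which is why pip works there.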

Use pip with virtual environments

To use a virtual environment, create a container to store the environment. There are several ways you can do this depending on how you want to work with Python:

per-project environments

Create a virtual environment in a project folder to install packages local to that project

Many users create separate virtual environments for each Python project. Locate the virtual environment in the root folder of each project, typically with a shared name like env. Run the following command from the root folder of each project to create a virtual environment configuration folder:

$ python -m venv env

Before you work on a project, run the following command from the root of the project to start using the virtual environment:

$ source env/bin/activate

You should then see a prompt similar to the following:

(env) $

When you finish working on a project, run the following command from any directory to leave the virtual environment:

$ deactivate
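The same per-project setup can also be scripted from Python itself using the standard-library venv module; this is a minimal sketch equivalent to running python -m venv env in a project root:

```python
# Create a per-project virtual environment programmatically,
# equivalent to running `python -m venv env` in the project root.
import venv

builder = venv.EnvBuilder(with_pip=True)  # with_pip bootstraps pip via ensurepip
builder.create("env")                     # creates ./env, including bin/activate
```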

per-user environments

Instead of creating a virtual environment for each of your Python projects, you can create a single virtual environment for your user account. Activate that virtual environment before running any of your Python code. This approach can be more convenient for workflows that share many libraries across projects.

When creating a virtual environment for multiple projects across an entire user account, consider locating the virtual environment configuration files in your home directory. Store your configuration in a folder whose name begins with a period to hide the folder by default, preventing it from cluttering your home folder.

Add a virtual environment to your home directory to use it in multiple projects and share the packages

Use the following command to create a virtual environment in a hidden folder in the current user’s home directory:

$ python -m venv ~/.env

Run the following command from any directory to start using the virtual environment:

$ source ~/.env/bin/activate

You should then see a prompt similar to the following:

(.env) $

To leave the virtual environment, run the following command from any directory:

$ deactivate

Create a virtual environment

Run the following command to create a virtual environment configuration folder, replacing <env-name> with the name you would like to use for the virtual environment (e.g. env):

$ python -m venv <env-name>

Enter a virtual environment

Then, execute the bin/activate script in the virtual environment configuration folder to enter the virtual environment:

$ source <env-name>/bin/activate

You should then see a prompt similar to the following:

(<env-name>) $

The (<env-name>) command prompt prefix indicates that the current terminal session is in a virtual environment named <env-name>.

To check that you’re in a virtual environment, use pip list to view the list of installed packages:

(<env-name>) $ pip list
Package    Version
---------- -------
pip        23.0.1
setuptools 66.1.1

The list should be much shorter than the list of packages installed in your system Python. You can now safely install packages with pip. Any packages you install with pip while in a virtual environment only install to that virtual environment. In a virtual environment, the python or python3 commands automatically use the virtual environment’s version of Python and installed packages instead of the system Python.
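If you ever need to check from inside a script rather than from the prompt, a small helper (illustrative, not part of the tutorial) can tell whether the interpreter is running in a virtual environment:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the underlying system Python.
    # Outside a venv, the two are equal.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```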

Top Tip
Pass the --system-site-packages flag before the folder name to preload all of the packages currently installed in your system Python into the virtual environment.

Exit a virtual environment

To leave a virtual environment, run the following command:

(<env-name>) $ deactivate

Use the Thonny editor

We recommend Thonny for editing Python code on the Raspberry Pi.

By default, Thonny uses the system Python. However, you can switch to using a Python virtual environment by clicking on the interpreter menu in the bottom right of the Thonny window. Select a configured environment or configure a new virtual environment with Configure interpreter.

The MagPi #148 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Using Python with virtual environments | The MagPi #148 appeared first on Raspberry Pi.

Bringing real-time edge AI applications to developers

In this guest post, Ramona Rayner from our partner Sony shows you how to quickly explore different models and AI capabilities, and how you can easily build applications on top of the Raspberry Pi AI Camera.

The recently launched Raspberry Pi AI Camera is an extremely capable piece of hardware, enabling you to build powerful AI applications on your Raspberry Pi. By offloading the AI inference to the IMX500 accelerator chip, more computational resources are available to handle application logic right on the edge! We are very curious to see what you will be creating and we are keen to give you more tools to do so. This post will cover how to quickly explore different models and AI capabilities, and how to easily build applications on top of the Raspberry Pi AI Camera.

If you didn’t have the chance to go through the Getting Started guide, make sure to check that out first to verify that your AI Camera is set up correctly.

Explore pre-trained models

A great way to start exploring the possibilities of the Raspberry Pi AI Camera is to try out some of the pre-trained models that are available in the IMX500 Model Zoo. To simplify the exploration process, consider using a GUI Tool, designed to quickly upload different models and see the real-time inference results on the AI Camera.

To start the GUI Tool, make sure you have Node.js installed (verify by running node --version in the terminal), then build and run the tool with the following commands from the root of the repository:

make build
./dist/run.sh

The GUI Tool will be accessible on http://127.0.0.1:3001. To see a model in action:

  • Add a custom model by clicking the ADD button located at the top right corner of the interface.
  • Provide the necessary details to add a custom network and upload the network.rpk file, and the (optional) labels.txt file.
  • Select the model and navigate to Camera Preview to see the model in action!

Here are just a few of the models available in the IMX500 Model Zoo:

| Network Name | Network Type | Post Processor | Color Format | Preserve Aspect Ratio | Network File | Labels File |
| --- | --- | --- | --- | --- | --- | --- |
| mobilenet_v2 | packaged | Classification | RGB | True | network.rpk | imagenet_labels.txt |
| efficientdet_lite0_pp | packaged | Object Detection (EfficientDet Lite0) | RGB | True | network.rpk | coco_labels.txt |
| deeplabv3plus | packaged | Segmentation | RGB | False | network.rpk | |
| posenet | packaged | Pose Estimation | RGB | False | network.rpk | |

Exploring the different models gives you insight into the camera’s capabilities and enables you to identify the model that best suits your requirements. When you think you’ve found it, it’s time to build an application.

Building applications

Plenty of CPU is available to run applications on the Raspberry Pi while model inference is taking place on the IMX500. To demonstrate this we’ll run a Workout Monitoring sample application.

The goal is to count real-time exercise repetitions by detecting and tracking people performing common exercises like pull-ups, push-ups, ab workouts and squats. The app will count repetitions for each person in the frame, making sure multiple people can work out simultaneously and compete while getting automated rep counting.

To run the example, clone the sample apps repository and make sure to download the HigherHRNet model from the Raspberry Pi IMX500 Model Zoo.

Make sure you have OpenCV with Qt available:

sudo apt install python3-opencv

And from the root of the repository run:

python3 -m venv venv --system-site-packages
source venv/bin/activate
cd examples/workout-monitor/
pip install -e .

Switching between exercises is straightforward; simply provide the appropriate --exercise argument as one of pullup, pushup, abworkout or squat.

workout-monitor --model /path/to/imx500_network_higherhrnet_coco.rpk --exercise pullup

Note that this application is running:

  • Model post-processing to interpret the model’s output tensor into bounding boxes and skeleton keypoints
  • A tracker module (ByteTrack) to give the detected people a unique ID so that you can count individual people’s exercise reps
  • A matcher module to increase the accuracy of the tracker results, by matching people over frames so as not to lose their IDs
  • CV2 visualisation to visualise the results of the detections and see the results of the application

And all of this in real time, on the edge, while the IMX500 is taking care of the AI inference!
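The rep-counting step itself boils down to watching a tracked keypoint’s vertical position cross two thresholds. This hypothetical sketch (not the sample app’s actual code) shows the idea in plain Python:

```python
def count_reps(trace, up_thresh, down_thresh):
    """Count repetitions from a keypoint's vertical trace (image y grows downward).

    A rep is counted each time the keypoint rises above `up_thresh`
    and then drops back below `down_thresh`; the two-threshold
    hysteresis prevents noisy detections from double counting.
    """
    reps, state = 0, "down"
    for y in trace:
        if state == "down" and y <= up_thresh:
            state = "up"            # e.g. chin above the bar
        elif state == "up" and y >= down_thresh:
            state = "down"          # back to the hanging position
            reps += 1
    return reps

# Two full pull-ups in a noisy trace of y-coordinates
print(count_reps([100, 80, 50, 95, 100, 55, 100], up_thresh=60, down_thresh=90))  # → 2
```

In the real application the trace would come from the HigherHRNet keypoints for each tracked person ID, so every individual in frame gets their own counter.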

Now both you and the AI Camera are testing out each other’s limits. How many pull-ups can you do?

We hope by this point you’re curious to explore further; you can discover more sample applications on GitHub.

The post Bringing real-time edge AI applications to developers appeared first on Raspberry Pi.

Raspberry Pi Connect: new native panel plugin and connectivity testing

By: Paul

The latest release of Raspberry Pi OS includes an all-new, native panel plugin for Raspberry Pi Connect, our secure remote access solution that allows you to connect to your Raspberry Pi desktop and command line directly from your web browser.

Since the launch of our public beta with screen sharing back in May, and the addition of remote shell access and support for older Raspberry Pi devices in June, we’ve been working on improving support and performance on as many Raspberry Pi devices as possible — from Raspberry Pi Zero to Raspberry Pi 5 — both when using Raspberry Pi OS with desktop and our Lite version.

By default, Raspberry Pi Connect will be installed but disabled, only becoming active for your current user if you choose ‘Turn On Raspberry Pi Connect’ from the menu bar, or by running rpi-connect on from the terminal.

If this is your first time trying the service, using the menu bar will open your browser to sign up for a free Raspberry Pi Connect account; alternatively, you can run rpi-connect signin from the terminal to print a unique URL that you can open on any device you like. Once signed up and signed in, you can then connect to your device either via screen sharing (if you’re using Raspberry Pi desktop) or via remote shell from your web browser on any computer.

You can now stop and disable the service for your current user by choosing ‘Turn Off Raspberry Pi Connect’ or running rpi-connect off from the terminal.

With the latest release of 2.1.0 (available via software update), we now include a new rpi-connect doctor command that runs a series of connectivity tests to check the service can establish connections properly. We make every effort to ensure you can connect to your device without having to make any networking changes or open ports in your firewall — but if you’re having issues, run the command like so:

$ rpi-connect doctor
✓ Communication with Raspberry Pi Connect API
✓ Authentication with Raspberry Pi Connect API
✓ Peer-to-peer connection candidate via STUN
✓ Peer-to-peer connection candidate via TURN

Full documentation for Raspberry Pi Connect can be found on our website, or via man rpi-connect in the terminal when installed on your device.

Updates on updates

We’ve heard from lots of users about the features they’d most like to see next, and we’ve tried to prioritise the things that will bring the largest improvements in functionality to the largest number of users. Keep an eye on this blog to see our next updates.

The post Raspberry Pi Connect: new native panel plugin and connectivity testing appeared first on Raspberry Pi.

Raspberry Pi AI Kit projects

By: Phil King

This #MagPiMonday, we’re hoping to inspire you to add artificial intelligence to your Raspberry Pi designs with this feature by Phil King, from the latest issue of The MagPi.

With their powerful AI accelerator modules, Raspberry Pi’s Camera Module and AI Kit open up exciting possibilities in computer vision and machine learning. The versatility of the Raspberry Pi platform, combined with these AI capabilities, makes a world of innovative smart projects possible. From creative experiments to practical applications like smart pill dispensers, makers are harnessing the kit’s potential to push the boundaries of AI. In this feature, we explore some standout projects, and hope they inspire you to embark on your own.

Peeper Pam boss detector

By VEEB Projects

AI computer vision can identify objects within a live camera view. In this project, VEEB’s Martin Spendiff and Vanessa Bradley have used it to detect humans in the frame, so you can tell if your boss is approaching behind you as you sit at your desk!

The project comprises two parts. A Raspberry Pi 5 equipped with a Camera Module and AI Kit handles the image recognition and also acts as a web server. This uses web sockets to send messages wirelessly to the ‘detector’ part — a Raspberry Pi Pico W and a voltmeter whose needle moves to indicate the level of AI certainty for the ID.

Having got their hands on an AI Kit — “a nice intro into computer vision” — it took the pair just three days to create Peeper Pam. “The most challenging bit was that we’d not used sockets — more efficient than the Pico constantly asking Raspberry Pi ‘do you see anything?’,” says Martin. “Raspberry Pi does all the heavy lifting, while Pico just listens for an ‘I’ve seen something’ signal.”

While he notes that you could get Raspberry Pi 5 to serve both functions, the two-part setup means you can place the camera in a different position to monitor a spot you can’t see. Also, by adapting the code from the project’s GitHub repo, there are lots of other uses if you get the AI to detect other objects. “Pigeons in the window box is one that we want to do,” Martin says.

Monster AI Pi PC

By Jeff Geerling

Never one to do things by halves, Jeff Geerling went overboard with Raspberry Pi AI Kit and built a Monster AI Pi PC with a total of eight neural processors. In fact, with 55 TOPS (trillions of operations per second), it’s faster than the latest AMD, Qualcomm, and Apple Silicon processors!

The NPU chips — including the AI Kit’s Hailo-8L — are connected to a large 12× PCIe slot card with a PEX 8619 switch capable of handling 16 PCI Express Gen 2 lanes. The card is then mounted on a Raspberry Pi 5 via a Pineboards uPCIty Lite HAT, which has an additional 12V PSU to supply the extra wattage needed for all those processors.

With a bit of jiggery-pokery with the firmware and drivers on Raspberry Pi, Jeff managed to get it working.

Car detection & tracking system

By Naveen

As a proof of concept, Japanese maker Naveen aimed to implement an automated system for identifying and monitoring cars at toll plazas to get an accurate tally of the vehicles entering and exiting.

With the extra processing power provided by the Raspberry Pi AI Kit, the project uses Edge Impulse computer vision to detect and count cars in the view from a Camera Module Wide. “We opted for a wide lens because it can capture a larger area,” he says, “allowing the camera to monitor multiple lanes simultaneously.” He also needed to train and test a YOLOv5 machine learning model. All the details can be found on the project page via the link above, which could prove useful for learning how to train custom ML models for your own AI project.

Safety helmet detection system

By Shakhizat Nurgaliyev

Wearing a safety helmet on a building site is essential and could save your life. This computer vision project uses Raspberry Pi AI Kit with the advanced YOLOv8 machine learning model to quickly and accurately identify objects within the camera view, running at an impressive inference speed of 30fps.

The project page has a guide showing how to make use of Raspberry Pi AI Kit to achieve efficient AI inferencing for safety helmet detection. This includes details of the software installation and model training process, for which the maker has provided a link to a dataset of 5000 images with bounding box annotations for three classes: helmet, person, and head.

Accelerating MediaPipe models

By Mario Bergeron

Google’s MediaPipe is an open-source framework developed for building machine learning pipelines, especially useful for working with videos and images.

Having used MediaPipe on other platforms, Mario Bergeron decided to experiment with it on a Raspberry Pi AI Kit. On the project page (linked above) he details the process, including using his Python demo application with options to detect hands/palms, faces, or poses.

Mario’s test results show how much better the AI Kit’s Hailo-8L AI accelerator module performs compared to running reference TensorFlow Lite models on Raspberry Pi 5 alone: up to 5.8 times faster. With three models running for hand and landmarks detection, the frame rate is 26–28fps with one hand detected, and 22–25fps for two.

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Raspberry Pi AI Kit projects appeared first on Raspberry Pi.

Raspberry Pi USB 3 Hub on sale now at $12

Most Raspberry Pi single-board computers, with the exception of the Raspberry Pi Zero and A+ form factors, incorporate an on-board USB hub to fan out a single USB connection from the core silicon, and provide multiple downstream USB Type-A ports. But no matter how many ports we provide, sometimes you just need more peripherals than we have ports. And with that in mind, today we’re launching the official Raspberry Pi USB 3 Hub, a high-quality four-way USB 3.0 hub for use with your Raspberry Pi or other, lesser, computer.

Key features include:

  • A single upstream USB 3.0 Type-A connector on an 8 cm captive cable
  • Four downstream USB 3.0 Type-A ports
  • Aggregate data transfer speeds up to 5 Gbps
  • USB-C socket for optional external 3A power supply (sold separately)

Race you to the bottom

Why design our own hub? Well, we’d become frustrated with the quality and price of the hubs available online. Either you pay a lot of money for a nicely designed and reliable product, which works well with a broad range of hosts and peripherals; or you cheap out and get something much less compatible, or unreliable, or ugly, or all three. Sometimes you spend a lot of money and still get a lousy product.

It felt like we were trapped in a race to the bottom, where bad quality drives out good, and marketplaces like Amazon end up dominated by the cheapest thing that can just about answer to the name “hub”.

So, we worked with our partners at Infineon to source a great piece of hub silicon, CYUSB3304, set Dominic to work on the electronics and John to work on the industrial design, and applied our manufacturing and distribution capabilities to make it available at the lowest possible price. The resulting product works perfectly with all models of Raspberry Pi computer, and it bears our logo because we’re proud of it: we believe it’s the best USB 3.0 hub on the market today.

Grab one and have a play: we think you’ll like it.

The post Raspberry Pi USB 3 Hub on sale now at $12 appeared first on Raspberry Pi.

Meet Kari Lawler: Classic computer and retro gaming enthusiast

Meet Kari Lawler, a YouTuber with a passion for collecting and fixing classic computers, as well as retro gaming. This interview first appeared in issue 147 of The MagPi magazine.

Kari Lawler has a passion for retro tech — and despite being 21, her idea of retro fits with just about everyone’s definition, as she collects and restores old Commodore 64s, Amiga A500s, and Atari 2600s. Stuff from before even Features Editor Rob was born, and he’s rapidly approaching 40. Kari has been involved in the tech scene for ten years though, doing much more than make videos on ’80s computers.

“I got my break into tech at around 11 years old, when I hacked together my very own virtual assistant and gained some publicity,” Kari says. “This inspired me to learn more, especially everything I could about artificial intelligence. Through this, I created my very own youth programme called Youth4AI, in which I engaged with and taught thousands of young people about AI. As well as my youth programme, I was lucky enough to work on many AI projects and branch out into government advisory work as well. Culminating, at 18 years old, in being entered into the West Midlands Women in Tech Hall of Fame, with a Lifetime Achievement Award of all things.”

What’s your history with making?

“Being brought up in a family of makers, I suppose it was inevitable I got the bug as well. From an early age, I fondly remember being surrounded by arts and crafts, and attending many sessions. From sewing to pottery and basic electronics to soldering, I enjoyed everything I did. Which resulted in me creating many projects, from a working flux capacitor (well, it lit up) for school homework, to utilising what I learned to make fun projects to share with others when I volunteered at my local Raspberry Pi Jam. Additionally, at around the age of 12 I was introduced to the wonderful world of 3D printing, and I’ve utilised that knowledge in many of the projects I’ve shared online. Starting with the well-received ’24 makes for Christmas’ I did over on X [formerly Twitter] in 2017, aged 14, which featured everything from coding Minecraft to robot sprouts. And I’ve been sharing what I make over on my socials ever since.”

Fun fact: The code listings in The MagPi are inspired by magazines from the 1980s, which also printed code listings. Although you can download all of ours as well

How did you get into retro gaming?

“Both my uncle and dad had a computer store in the ’90s, the PS1/N64 era, and while they have never displayed any of it, what was left of the shop was packed up and put into storage. And, me being me, I was quite interested in learning more about what was in those boxes. Additionally, I grew up with a working BBC Micro in the house, so have fond memories playing various games on it, especially Hangman — I think I was really into spelling bees at that point. So, with that and the abundance of being surrounded by old tech, I really got into learning about the history of computing and gaming. Which led me to getting the collecting bug, and to start adding to the collection myself so I could experience more and more tech from the past.”

One of Kari’s more recent projects was fixing a PSOne, the smaller release of the original PlayStation but with a screen attached

What’s your favourite video that you’ve made?

“Now that’s a hard one to answer. But if I go back to one of my first videos, Coding games like it’s the ’80s, it’s one that resonates with how I got my first interest in programming. My dad introduced me to Usborne computer books from the 1980s, just after I started learning Python, and said ‘try and convert some of these’. I accepted that challenge, and that’s what got me fascinated with ’80s programming books, hence the video I made. With the Usborne books specifically, there is artwork and a back story for each game. And while technically not great games, I just love how they explain the code and challenge the reader to improve. For which, I’m sure some of my viewers will be pleased to hear, I have in the works more videos exploring programming books/magazine type-in listings from the ’80s.”

Recreating classic NES Tetrinomoes with a 3D printer to make cool geometric magnets

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Meet Kari Lawler: Classic computer and retro gaming enthusiast appeared first on Raspberry Pi.

Raspberry Pi Touch Display 2 on sale now at $60

Way back in 2015, we launched the Raspberry Pi Touch Display, a 7″ 800×480-pixel LCD panel supporting multi-point capacitive touch. It remains one of our most popular accessories, finding a home in countless maker projects and embedded products. Today, we’re excited to announce Raspberry Pi Touch Display 2, at the same low price of $60, offering both a higher 720×1280-pixel resolution and a slimmer form factor.

A Raspberry Pi Touch Display 2 lying on a flat surface, displaying the Raspberry Pi OS Desktop

Key features of Raspberry Pi Touch Display 2 include:

  • 7″ diagonal display
  • 88mm × 155mm active area
  • 720 (RGB) × 1280 pixels
  • True multi-touch capacitive panel, supporting five-finger touch
  • Fully supported by Raspberry Pi OS
  • Powered from the host Raspberry Pi

Simple setup

Touch Display 2 is powered from your Raspberry Pi, and is compatible with all Raspberry Pi computers from Raspberry Pi 1B+ onwards, except for the Raspberry Pi Zero series, which lacks the necessary DSI port. It attaches securely to your Raspberry Pi with four screws, and ships with power and data cables compatible with both standard and mini FPC connector formats. Unlike its predecessor, Touch Display 2 integrates the display driver PCB into the display enclosure itself, delivering a much slimmer form factor.

A Raspberry Pi Touch Display 2 upright on a blue cutting board on a bench, facing away from the viewer so the reverse of the display with the host Raspberry Pi 5, ribbon data cable, and power cables are visible. A soldering iron and a benchtop multimeter with crocodile leads are also on the bench.

Like its predecessor, Touch Display 2 is fully supported by Raspberry Pi OS, which provides drivers to support five-finger touch and an on-screen keyboard. This gives you full functionality without the need for a keyboard or mouse. While it is a native portrait-format 720×1280-pixel panel, Raspberry Pi OS supports screen rotation for users who would prefer to use it in landscape orientation.

Consistent with our commitment to long product availability lifetimes, the original Touch Display will remain in production for the foreseeable future, though it is no longer recommended for new designs. Touch Display 2 will remain in production until 2030 at the earliest, allowing our embedded and industrial customers to build it into their products and installations with confidence.

We’ve never gone nine years between refreshes of a significant accessory before. But we took the time to get this one just right, and are looking forward to seeing how you use Touch Display 2 in your projects and products over the next nine years and beyond.

The post Raspberry Pi Touch Display 2 on sale now at $60 appeared first on Raspberry Pi.

Raspberry Pi product series explained

As our product line expands, it can get confusing trying to keep track of all the different Raspberry Pi boards out there. Here is a high-level breakdown of Raspberry Pi models, including our flagship series, Zero series, Compute Module series, and Pico microcontrollers.

Raspberry Pi makes computers in several different series:

  • The flagship series, often referred to by the shorthand ‘Raspberry Pi’, offers high-performance hardware, a full Linux operating system, and a variety of common ports in a form factor roughly the size of a credit card.
  • The Zero series offers a full Linux operating system and essential ports at an affordable price point in a minimal form factor with low power consumption.
  • The Compute Module series, often referred to by the shorthand ‘CM’, offers high-performance hardware and a full Linux operating system in a minimal form factor suitable for industrial and embedded applications. Compute Module models feature hardware equivalent to the corresponding flagship models but with fewer ports and no on-board GPIO pins. Instead, users should connect Compute Modules to a separate baseboard that provides the ports and pins required for a given application.

Additionally, Raspberry Pi makes the Pico series of tiny, versatile microcontroller boards. Pico models do not run Linux or allow for removable storage, but instead allow programming by flashing a binary onto on-board flash storage.

Flagship series

Model B indicates the presence of an Ethernet port. Model A indicates a lower-cost model in a smaller form factor with no Ethernet port, reduced RAM, and fewer USB ports to limit board height.

| Model | SoC | Memory | GPIO | Connectivity |
| --- | --- | --- | --- | --- |
| Raspberry Pi Model B | BCM2835 | 256MB, 512MB | 26-pin GPIO header | HDMI, 2 × USB 2.0, CSI camera port, DSI display port, 3.5mm audio jack, RCA composite video, Ethernet (100Mb/s), SD card slot, micro USB power |
| Raspberry Pi Model A | BCM2835 | 256MB | 26-pin GPIO header | HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm audio jack, RCA composite video, SD card slot, micro USB power |
| Raspberry Pi Model B+ | BCM2835 | 512MB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), microSD card slot, micro USB power |
| Raspberry Pi Model A+ | BCM2835 | 256MB, 512MB | 40-pin GPIO header | HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, microSD card slot, micro USB power |
| Raspberry Pi 2 Model B | BCM2836 (switched to BCM2837 in version 1.2) | 1GB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), microSD card slot, micro USB power |
| Raspberry Pi 3 Model B | BCM2837 | 1GB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.1, Bluetooth Low Energy (BLE), microSD card slot, micro USB power |
| Raspberry Pi 3 Model B+ | BCM2837b0 | 1GB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, PoE-capable Ethernet (300Mb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (100Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE), microSD card slot, micro USB power |
| Raspberry Pi 3 Model A+ | BCM2837b0 | 512MB | 40-pin GPIO header | HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, 2.4/5GHz dual-band 802.11ac Wi-Fi (100Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE), microSD card slot, micro USB power |
| Raspberry Pi 4 Model B | BCM2711 | 1GB, 2GB, 4GB, 8GB | 40-pin GPIO header | 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, CSI camera port, DSI display port, 3.5mm AV jack, PoE-capable Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (120Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 3A (15W)) |
| Raspberry Pi 400 | BCM2711 | 4GB | 40-pin GPIO header | 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (120Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 3A (15W)) |
| Raspberry Pi 5 | BCM2712 (2GB version uses BCM2712D0) | 2GB, 4GB, 8GB | 40-pin GPIO header | 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, 2 × CSI camera/DSI display ports, single-lane PCIe FFC connector, UART connector, RTC battery connector, four-pin JST-SH PWM fan connector, PoE+-capable Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi 5 (300Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 5A (25W) or 5V, 3A (15W) with a 600mA peripheral limit) |

For more information about the ports on the Raspberry Pi flagship series, see the Schematics and mechanical drawings.

Zero series

Models with the H suffix have header pins pre-soldered to the GPIO header. Models that lack the H suffix do not come with header pins attached to the GPIO header; the user must solder pins manually or attach a third-party pin kit.

All Zero models have the following connectivity:

  • a microSD card slot
  • a CSI camera port (version 1.3 of the original Zero introduced this port)
  • a mini HDMI port
  • 2 × micro USB ports (one for input power, one for external devices)

| Model | SoC | Memory | GPIO | Wireless connectivity |
| --- | --- | --- | --- | --- |
| Raspberry Pi Zero | BCM2835 | 512MB | 40-pin GPIO header (unpopulated) | none |
| Raspberry Pi Zero W | BCM2835 | 512MB | 40-pin GPIO header (unpopulated) | 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.0, Bluetooth Low Energy (BLE) |
| Raspberry Pi Zero WH | BCM2835 | 512MB | 40-pin GPIO header | 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.0, Bluetooth Low Energy (BLE) |
| Raspberry Pi Zero 2 W | RP3A0 | 512MB | 40-pin GPIO header (unpopulated) | 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE) |

Compute Module series

| Model | SoC | Memory | Storage | Form factor | Wireless connectivity |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi Compute Module 1 | BCM2835 | 512MB | 4GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 3 | BCM2837 | 1GB | 0GB (Lite), 4GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 3+ | BCM2837b0 | 1GB | 0GB (Lite), 8GB, 16GB, 32GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 4S | BCM2711 | 1GB, 2GB, 4GB, 8GB | 0GB (Lite), 8GB, 16GB, 32GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 4 | BCM2711 | 1GB, 2GB, 4GB, 8GB | 0GB (Lite), 8GB, 16GB, 32GB | dual 100-pin high-density connectors | optional: 2.4/5GHz dual-band 802.11ac Wi-Fi 5 (300Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE) |

For more information about Raspberry Pi Compute Modules, see the Compute Module documentation.

Pico microcontrollers

Models with the H suffix have header pins pre-soldered to the GPIO header. Models that lack the H suffix do not come with header pins attached to the GPIO header; the user must solder pins manually or attach a third-party pin kit.

| Model | SoC | Memory | Storage | GPIO | Wireless connectivity |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi Pico | RP2040 | 264KB | 2MB | two 20-pin GPIO headers (unpopulated) | none |
| Raspberry Pi Pico H | RP2040 | 264KB | 2MB | two 20-pin GPIO headers | none |
| Raspberry Pi Pico W | RP2040 | 264KB | 2MB | two 20-pin GPIO headers (unpopulated) | 2.4GHz single-band 802.11n Wi-Fi (10Mb/s), Bluetooth 5.2, Bluetooth Low Energy (BLE) |
| Raspberry Pi Pico WH | RP2040 | 264KB | 2MB | two 20-pin GPIO headers | 2.4GHz single-band 802.11n Wi-Fi (10Mb/s), Bluetooth 5.2, Bluetooth Low Energy (BLE) |
| Raspberry Pi Pico 2 | RP2350 | 520KB | 4MB | two 20-pin GPIO headers (unpopulated) | none |

For more information about Raspberry Pi Pico models, see the Pico documentation.

If you’re interested in schematics, mechanical drawings, and information on thermal control, visit our documentation page.

The post Raspberry Pi product series explained appeared first on Raspberry Pi.

DEC Flip-Chip tester | The MagPi #147

A brand new issue of The MagPi is out in the wild, and one of our favourite projects we read about involved rebuilding an old PDP-9 computer with a Raspberry Pi-based device that tests hundreds of components.

Anders Sandahl loves collecting old computers: “I really like to restore them and get them going again.” For this project, he wanted to build a kind of component tester for old DEC (Digital Equipment Corporation) Flip-Chip boards before he embarked on the lengthy task of restoring his 1966 PDP-9 computer — a two-foot-tall machine with six- to seven-hundred Flip-Chip boards inside — back to working order. 

DEC’s 1966 PDP-9 computer was two feet tall
Image credit: Wikipedia

His Raspberry Pi-controlled DEC Flip-Chip tester checks the power output of these boards using relay modules and signal clips, giving accurate information about each one’s power draw and output. Once he’s confident each component is working properly, Anders can begin to assemble the historic DEC PDP-9 computer, which Wikipedia advises is one of only 445 ever produced.

Logical approach

“Flip-Chip boards from this era implement simple logical functions, comparable to one 7400-series logic circuit,” Anders explains. “The tester uses Raspberry Pi and an ADC (analogue-to-digital converter) to measure and control analogue signals sent to the Flip-Chip, and digital signals used to control the tester’s circuits. PDP-7, PDP-8 (both 8/S and Straight-8), PDP-9, and PDP-10 (with the original KA processor) all use this generation of Flip-Chips.” A testing device for one will work for all of them, which is pretty useful if you’re in the business of restoring old computers.

The Flip-Chip tester uses Raspberry Pi 3B+, 4, or 5 to check the signal and relay the strength of each Flip-Chip by running a current across it, so restorers don’t attach a dud component

Rhode Island Computer Museum (RICM) is where The MagPi publisher Brian Jepson and friend Mike Thompson both volunteer. Mike is part of a twelve-year project to rebuild RICM’s own DEC PDP-9 and, after working on a different Flip-Chip tester there, he got in touch with Anders about his Raspberry Pi-based version. He’s now busily helping write the user manual for the tester unit.

Warning!
Frazzled Flip-Chips

Very old computers that use Flip-Chips have components operating at differing voltages, so there’s a high chance of shorting them. You need a level shifter to convert and step down voltages for safe operation.

Mike explains: “Testing early transistor-only Flip-Chips is incredibly complicated because the voltages are all negative, and the Flip-Chips must be tested with varying input voltages and different loads on the outputs.” There are no integrated circuits, just discrete transistors. Getting such an old computer running again is “quite a task” because of the sheer number of broken components on each PCB, and Flip-Chip boards hold lots of transistors and diodes, “all of which are subject to failure after 55+ years”.

Anders previously used Raspberry Pi to recreate an old PDP-8 computer

Obstacles, of course

The Flip-Chip tester features 15 level-shifter boards. These step down the voltage so components with different power outputs and draws can operate alongside each other safely and without anything getting frazzled. Anders points out the disparity between the Flip-Chips’ 0 and -3V logic voltage levels and the +10 and -15V used as supply voltages. Huge efforts went into this level conversion to make it reliable and failsafe. Anders wrote the testing software himself, and built the hardware “from scratch” using parts from Mouser and custom-designed circuit boards. The project took around two years and cost around $500, of which the relays were a major part. 
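As a rough illustration of the scaling involved, mapping a raw ADC count onto the tester’s −15V to +10V supply span might look like the sketch below. This is a toy example, not Anders’s actual software; the 12-bit range and the linear mapping are assumptions for illustration only.

```python
def adc_to_volts(raw, raw_max=4095, v_min=-15.0, v_max=10.0):
    """Map a raw ADC count (0..raw_max) linearly onto the supply range.

    Illustrative only: the real mapping depends on the level-shifter
    boards sitting between the Flip-Chip and the ADC.
    """
    if not 0 <= raw <= raw_max:
        raise ValueError("raw count out of range")
    return v_min + (raw / raw_max) * (v_max - v_min)

print(adc_to_volts(0))     # -15.0 (negative supply rail)
print(adc_to_volts(4095))  # 10.0 (positive supply rail)
```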

This photo from the user manual shows just how huge the PDP-9 could get

Anders favours Raspberry Pi because “it offers a complete OS, file system, and networking in a neat and well-packaged way”, and says it is “a very good software platform that you really just have to do minor tweaks on to get right”. He’s run the tester on Raspberry Pi 3B, 4, and 5. He says it should run on Raspberry Pi Zero as well, “but having Ethernet and the extra CPU power makes life easier”.

Although this is a fairly niche project for committed computer restorers, Anders believes his Flip-Chip tester can be built by anyone who can solder fairly small SMD components. Documenting the project so others can build it was quite a task, so it was helpful when Mike got in touch and was able to assist with the write-up. As a fellow computer restorer, Mike says the tester means getting RICM’s PDP-9 working again “won’t be such an overwhelming task. With the tester we can test and repair each of the boards instead of trying to diagnose a very broken computer as a whole.”

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post DEC Flip-Chip tester | The MagPi #147 appeared first on Raspberry Pi.

A new release of Raspberry Pi OS

labwc – a new Wayland compositor

Today we are releasing a new version of Raspberry Pi OS. This version includes a significant change, albeit one that we hope most people won’t even notice. So we thought we’d better tell you about it to make sure you do…

First, a brief history lesson. Linux desktops, like their Unix predecessors, have for many years used the X Window system. This is the underlying technology which displays the desktop, handles windows, moves the mouse, and all that other stuff that you don’t really think about because it (usually) just works. X is prehistoric in computing terms, serving us well since the early 80s. But after 40 years, cracks are beginning to show in the design of X.

As a result, many Linux distributions are moving to a new windowing technology called Wayland. Wayland has many advantages over X, particularly performance. Under X, two separate applications help draw a window:

  • the display server creates windows on the screen and gives applications a place to draw their content
  • the window manager positions windows relative to each other and decorates windows with title bars and frames.

Wayland combines these two functions into a single application called the compositor. Applications running on a Wayland system only need to talk to one thing, instead of two, to display a window. As you might imagine, this is a much more efficient way to draw application windows.

Wayland also provides a security advantage. Under X, all applications communicate back and forth with the display server; consequently, any application can observe any other application. Wayland isolates applications at the compositor level, so applications cannot observe each other.

We first started thinking about Wayland at Raspberry Pi around ten years ago; at that time, it was nowhere near ready to use. Over the last few years, we have taken cautious steps towards Wayland. When we released Bullseye back in 2021, we switched to a new X window manager, mutter, which could also be used as a Wayland compositor. We included the option to switch it to Wayland mode to see how it worked.

With the release of Bookworm in 2023, we replaced mutter with a new dedicated Wayland compositor called wayfire and made Wayland the default mode of operation for Raspberry Pi 4 and 5, while continuing to run X on lower-powered models. We spent a lot of time optimising wayfire for Raspberry Pi hardware, but it still didn’t run well enough on older Pis, so we couldn’t switch to it everywhere.

All of this was a learning experience – we learned more about Wayland, how it interacted with our hardware, and what we needed to do to get the best out of it. As we continued to work with wayfire, we realised it was developing in a direction that would make it less compatible with our hardware. At this point, we knew it wasn’t the best choice to provide a good Wayland experience for Raspberry Pis. So we started looking at alternatives.

This search eventually led us to a compositor called labwc. Our initial experiments were encouraging: we were able to use it in Raspberry Pi OS after only a few hours of work. Closer investigation revealed labwc to be a much better fit for the Raspberry Pi graphics hardware than wayfire. We contacted the developers and found that their future direction very much aligned with our own.

labwc is built on top of a system called wlroots, a set of libraries which provide the basic functionality of a Wayland system. wlroots has been developed closely alongside the Wayland protocol. Using wlroots, anyone who wants to write a Wayland compositor doesn’t need to reinvent the wheel; we can take advantage of the experience of those who designed Wayland, since they know it best.

So we made the decision to switch. For most of this year, we have been working on porting labwc to the Raspberry Pi Desktop. This has very much been a collaborative process with the developers of both labwc and wlroots: both have helped us immensely with their support as we contribute features and optimisations needed for our desktop.

After much optimisation for our hardware, we have reached the point where labwc desktops run just as fast as X on older Raspberry Pi models. Today, we make the switch with our latest desktop image: Raspberry Pi Desktop now runs Wayland by default across all models.

When you update an existing installation of Bookworm, you will see a prompt asking to switch to labwc the next time you reboot:

We recommend that most people switch to labwc.

Existing Pi 4 or 5 Bookworm installations running wayfire shouldn’t change in any noticeable way, besides the loss of a couple of animations which we haven’t yet implemented in labwc. Because we will no longer support wayfire with updates on Raspberry Pi OS, it’s best to adopt labwc as soon as possible.

Older Pis that currently use X should also switch to labwc. To ensure backwards compatibility with older applications, labwc includes a library called Xwayland, which provides a virtual X implementation running on top of Wayland. labwc provides this virtual implementation automatically for any application that isn’t compatible with Wayland. With Xwayland, you can continue to use older applications that you rely on while benefiting from the latest security and performance updates.

As with any software update, we cannot possibly test all possible configurations and applications. If you switch to labwc and experience an issue, you can always switch back to X. To do this, open a terminal window and type:

sudo raspi-config 

This launches the command-line Raspberry Pi Configuration application. Use the arrow keys to select “6 Advanced Options” and hit ‘enter’ to open the menu. Select “A6 Wayland” and choose “W1 X11 Openbox window manager with X11 backend”. Hit ‘escape’ to exit the application; when you restart your device, your desktop should restart with X.

We don’t expect this to be necessary for many people, but the option is there, just in case! Of course, if you prefer to stick with wayfire or X for any reason, the upgrade prompt offers you the option to do so – this is not a compulsory upgrade, just one that we recommend.

Improved touch screen support

While labwc is the biggest change to the OS in this release, it’s not the only one. We have also significantly improved support for using the Desktop with a touch screen. Specifically, Raspberry Pi Desktop now automatically shows and hides the virtual keyboard, and supports right-click and double-click equivalents for touch displays.

This change comes as a result of integrating the Squeekboard virtual keyboard. When the system detects a touch display, the virtual keyboard automatically displays at the bottom of the screen whenever it is possible to enter text. The keyboard also automatically hides when no text entry is possible.

This auto show and hide should work with most applications, but it isn’t supported by everything. For applications which do not support it, you can instead use the keyboard icon at the right end of the taskbar to manually toggle the keyboard on and off.

If you don’t want to use the virtual keyboard with a touch screen, or you want to use it without a touch screen and click on it with the mouse, you can turn it on or off in the Display tab of Raspberry Pi Configuration. The new virtual keyboard only works with labwc; it’s not compatible with wayfire or X.

In addition to the virtual keyboard, we added long press detection on touch screens to generate the equivalent of a right-click with a mouse. You can use this to launch context-sensitive menus anywhere in the taskbar and the file manager.

We also added double-tap detection on touch screens to generate a double-click. While this previously worked on X, it didn’t work in wayfire. Double-tap to double-click is now supported in labwc.

Better Raspberry Pi Connect integration

We’ve had a lot of very positive feedback about Raspberry Pi Connect, our remote access software that allows you to control your Raspberry Pi from any computer anywhere in the world. This release integrates Connect into the Desktop.

By default, you will now see the Connect icon in the taskbar at all times. Previously, this indicated that Connect was running. Now, the icon indicates that Connect is installed and ready to use, but is not necessarily running. Hovering the mouse over the icon brings up a tooltip displaying the current status.

You can now enable or disable Connect directly from the menu which pops up when the icon is clicked. Previously, this was an option in Raspberry Pi Configuration, but that option has been removed. Now, all the options to control Connect live in the icon menu.

If you don’t plan to use Connect, you can uninstall it from Recommended Software, or you can remove the icon from the taskbar by right-clicking the taskbar and choosing “Add / Remove Plugins…”.

Other things

This release includes some other small changes worth mentioning:

  • We rewrote the panel application for the taskbar at the top of the screen. In the previous version, even if you removed a plugin from the panel, it remained in memory. Now, when you remove a plugin, the panel never loads it into memory at all. Rather than all the individual plugins being part of a single application, each plugin is now a separate library. The panel only loads the libraries for the plugins that you choose to display on your screen. This won’t make much difference to many people, but can save you a bit of RAM if you remove several plugins. This also makes it easier to develop new plugins, both for us and third parties.
  • We introduced a new Screen Configuration tool, raindrop. This works exactly the same as the old version, arandr, and even looks similar. Under the hood, we rewrote the old application in C to improve support for labwc and touch screens. Because the new tool is native, performance should be snappier! Going forward, we’ll only maintain the new native version.
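The load-plugins-on-demand idea can be sketched in Python with ctypes, using libm as a stand-in for a panel plugin library. The panel itself is written in C; this is just an illustration of the pattern, and the names here are invented for the example.

```python
import ctypes
import ctypes.util

def load_plugin(name):
    """Open a shared library only when its plugin is enabled."""
    path = ctypes.util.find_library(name)
    if path is None:
        raise RuntimeError(f"plugin {name!r} not found")
    return ctypes.CDLL(path)

# Pretend the user enabled only the maths 'plugin'; disabled plugins
# are never opened, so they cost no memory at all.
enabled = ["m"]
plugins = {name: load_plugin(name) for name in enabled}

cos = plugins["m"].cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]
print(cos(0.0))  # 1.0
```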

How to get it

The new release is available today in apt, Raspberry Pi Imager, or as a download from the software page on raspberrypi.com.

Black screen on boot issue (resolved)

We did have some issues on the initial release yesterday, whereby some people found that the switch to labwc caused the desktop to fail to start. Fortunately, the issue has now been fixed. It is safe to update according to the process below, so we have reinstated the update prompt described above.

If you experience problems updating and see a black screen instead of a desktop, there’s a simple fix. At the black screen, press Ctrl + Alt + F2. Authenticate at the prompt and run the following command:

sudo apt install labwc

Finally, reboot with sudo reboot. This should restore a working desktop. We apologise to anyone who was affected by this.

To update an existing Raspberry Pi OS Bookworm install to this release, run the following commands:

sudo apt update
sudo apt full-upgrade

When you next reboot, you will see the prompt described above which offers the switch to labwc.

To switch to the new Screen Configuration tool, run the following commands:

sudo apt purge arandr
sudo apt install raindrop

The new on-screen keyboard can either be installed from Recommended Software – it’s called Squeekboard – or from the command line with:

sudo apt install squeekboard wfplug-squeek

We hope you like the new desktop experience. Or perhaps more accurately, we hope you won’t notice much difference! As always, your comments are very welcome below.

The post A new release of Raspberry Pi OS appeared first on Raspberry Pi.

Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS

Following the successful launch of the Raspberry Pi AI Kit and AI Camera, we are excited to introduce the newest addition to our AI product line: the Raspberry Pi AI HAT+.

The AI HAT+ features the same best-in-class Hailo AI accelerator technology as our AI Kit, but now with a choice of two performance options: the 13 TOPS (tera-operations per second) model, priced at $70 and featuring the same Hailo-8L accelerator as the AI Kit, and the more powerful 26 TOPS model at $110, equipped with the Hailo-8 accelerator.

The 26 TOPS Raspberry Pi AI HAT+, with its Hailo accelerator on board, mounted on a Raspberry Pi

Designed to conform to our HAT+ specification, the AI HAT+ automatically switches to PCIe Gen 3.0 mode to maximise the full 26 TOPS of compute power available in the Hailo-8 accelerator.
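The HAT+ specification lets the board negotiate this automatically. For reference, the documented way to request Gen 3.0 link speed manually on a Raspberry Pi 5 (useful with other PCIe devices) is a one-line config.txt setting:

```ini
# /boot/firmware/config.txt
dtparam=pciex1_gen=3
```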

Unlike the AI Kit, which connects its accelerator via an M.2 connector, the AI HAT+ integrates the Hailo accelerator chip directly onto its main PCB. This change not only simplifies setup but also improves thermal dissipation, allowing the AI HAT+ to handle demanding AI workloads more efficiently.

What can you do with the 26 TOPS model over the 13 TOPS model? The same, but more… You can run more sophisticated neural networks in real time, achieving better inference performance. The 26 TOPS model also allows you to run multiple networks simultaneously at high frame rates. For instance, you can perform object detection, pose estimation, and subject segmentation simultaneously on a live camera feed using the 26 TOPS AI HAT+:

Both versions of the AI HAT+ are fully backward compatible with the AI Kit. Our existing Hailo accelerator integration in the camera software stack works in exactly the same way with the AI HAT+. Any neural network model compiled for the Hailo-8L will run smoothly on the Hailo-8; while models specifically built for the Hailo-8 may not work on the Hailo-8L, alternative versions with lower performance are generally available, ensuring flexibility across different use cases.

After an exciting few months of AI product releases, we now offer an extensive range of options for running inferencing workloads on Raspberry Pi. Many such workloads – particularly those that are sparse, quantised, or intermittent – run natively on Raspberry Pi platforms; for more demanding workloads, we aim to be the best possible embedded host for accelerator hardware such as our AI Camera and today’s new Raspberry Pi AI HAT+. We are eager to discover what you make with it.

The post Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS appeared first on Raspberry Pi.

Raspberry Pi SSDs and SSD Kits on sale now

To help you get the best out of your Raspberry Pi 5, today we’re launching a range of Raspberry Pi-branded NVMe SSDs. They are available both on their own and bundled with our M.2 HAT+ as ready-to-use SSD Kits.

The Raspberry Pi M.2 HAT+ with an NVMe SSD fitted, mounted on a Raspberry Pi

When we launched Raspberry Pi 5, almost exactly a year ago, I thought the thing people would get most excited about was the three-fold increase in performance over 2019’s Raspberry Pi 4. But very quickly it became clear that it was the other new features – the power button (!), and the PCI Express port – that had captured people’s imagination.

We’ve seen everything from Ethernet adapters, to AI accelerators, to regular PC graphics cards attached to the PCI Express port. We offer our own low-cost M.2 HAT+, which converts from our FPC standard to the standard M.2 M-key format, and there are a wide variety of third-party adapters which do basically the same thing. We’ve also released an AI Kit, which bundles the M.2 HAT+ with an AI inference accelerator from our friends at Hailo.

512GB variant
256GB variant

But the most popular use case for the PCI Express port on Raspberry Pi 5 is to attach an NVMe solid-state drive (SSD). SSDs are fast; faster even than our branded A2-class SD cards. If no-compromises performance is your goal, you’ll want to run Raspberry Pi OS from an SSD, and Raspberry Pi SSDs are the perfect choice.

A complete Raspberry Pi desktop setup with the M.2 HAT+ and an attached SSD, keyboard, and mouse

The entry-level 256GB drive is priced at $30 on its own, or $40 as a kit; its 512GB big brother is priced at $45 on its own, or $55 as a kit. Both densities offer minimum 4KB random read and write performance of 40k IOPS and 70k IOPS respectively. The 256GB SSD and SSD Kit are available to buy today, while the 512GB variants are available to pre-order now for shipping by the end of November.

So, there you have it: a cost-effective way to squeeze even more performance out of your Raspberry Pi 5. Enjoy!

The post Raspberry Pi SSDs and SSD Kits on sale now appeared first on Raspberry Pi.

How to get started with your Raspberry Pi AI Camera

If you’ve got your hands on the Raspberry Pi AI Camera that we launched a few weeks ago, you might be looking for a bit of help to get up and running with it – it’s a bit different from our other camera products. We’ve raided our documentation to bring you this Getting started guide. If you work through the steps here you’ll have your camera performing object detection and pose estimation, even if all this is new to you. Then you can dive into the rest of our AI Camera documentation to take things further.

A Raspberry Pi connected to the AI Camera module via its ribbon cable, with power and HDMI cables attached

Here we describe how to run the pre-packaged MobileNet SSD (object detection) and PoseNet (pose estimation) neural network models on the Raspberry Pi AI Camera.

Prerequisites

We’re assuming that you’re using the AI Camera attached to either a Raspberry Pi 4 or a Raspberry Pi 5. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.

First, make sure that your Raspberry Pi runs the latest software. Run the following command to update:

sudo apt update && sudo apt full-upgrade

The AI Camera has an integrated RP2040 chip that handles neural network model upload to the camera, and we’ve released a new RP2040 firmware that greatly improves upload speed. AI Cameras shipping from now onwards already have this update, and if you have an earlier unit, you can update it yourself by following the firmware update instructions in this forum post. This should take no more than one or two minutes, but please note before you start that it’s vital nothing disrupts the process. If it does – for example, if the camera becomes disconnected, or if your Raspberry Pi loses power – the camera will become unusable and you’ll need to return it to your reseller for a replacement. Cameras with the earlier firmware are entirely functional, and their performance is identical in every respect except for model upload speed.

Install the IMX500 firmware

In addition to updating the RP2040 firmware if required, the AI Camera must download runtime firmware onto the IMX500 sensor during startup. To install these firmware files onto your Raspberry Pi, run the following command:

sudo apt install imx500-all

This command:

  • installs the /lib/firmware/imx500_loader.fpk and /lib/firmware/imx500_firmware.fpk firmware files required to operate the IMX500 sensor
  • places a number of neural network model firmware files in /usr/share/imx500-models/
  • installs the IMX500 post-processing software stages in rpicam-apps
  • installs the Sony network model packaging tools

NOTE: The IMX500 kernel device driver loads all the firmware files when the camera starts, and this may take several minutes if the neural network model firmware has not been previously cached. The demos we’re using here display a progress bar on the console to indicate firmware loading progress.

Reboot

Now that you’ve installed the prerequisites, restart your Raspberry Pi:

sudo reboot
The Raspberry Pi AI Camera Module, with ribbon cable attached

Run example applications

Once all the system packages are updated and firmware files installed, we can start running some example applications. As mentioned earlier, the Raspberry Pi AI Camera integrates fully with libcamera, rpicam-apps, and Picamera2. This blog post concentrates on rpicam-apps, but you’ll find more in our AI Camera documentation.

rpicam-apps

The rpicam-apps camera applications include IMX500 object detection and pose estimation stages that can be run in the post-processing pipeline. For more information about the post-processing pipeline, see the post-processing documentation.

The examples on this page use post-processing JSON files located in /usr/share/rpi-camera-assets/.

Object detection

The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. imx500_mobilenet_ssd.json contains the configuration parameters for the IMX500 object detection post-processing stage using the MobileNet SSD neural network.

imx500_mobilenet_ssd.json declares a post-processing pipeline that contains two stages:

  1. imx500_object_detection, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
  2. object_detect_draw_cv, which draws bounding boxes and labels on the image

The MobileNet SSD tensor requires no significant post-processing on your Raspberry Pi to generate the final output of bounding boxes. All object detection runs directly on the AI Camera.

The following command runs rpicam-hello with object detection post-processing:

rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30

After running the command, you should see a viewfinder that overlays bounding boxes on objects recognised by the neural network:

To record video with object detection overlays, use rpicam-vid instead:

rpicam-vid -t 10s -o output.264 --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --width 1920 --height 1080 --framerate 30

You can configure the imx500_object_detection stage in many ways.

For example, max_detections defines the maximum number of objects that the pipeline will detect at any given time. threshold defines the minimum confidence value required for the pipeline to consider any input as an object.

The raw inference output data of this network can be quite noisy, so this stage also performs some temporal filtering and applies hysteresis. To disable this filtering, remove the temporal_filter config block.
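To make these parameters concrete, here is a trimmed-down sketch of what such a stage configuration might look like. Only the stage names and the max_detections, threshold, and temporal_filter keys come from the text above; the values and the nested keys are illustrative assumptions, not the exact contents of imx500_mobilenet_ssd.json:

```json
{
    "imx500_object_detection": {
        "max_detections": 5,
        "threshold": 0.55,
        "temporal_filter": {
            "tolerance": 0.1,
            "factor": 0.2
        }
    },
    "object_detect_draw_cv": {
        "line_thickness": 2
    }
}
```

Deleting the `temporal_filter` block from the real file disables the filtering and hysteresis described above.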

Pose estimation

The PoseNet neural network performs pose estimation, labelling key points on the body associated with joints and limbs. imx500_posenet.json contains the configuration parameters for the IMX500 pose estimation post-processing stage using the PoseNet neural network.

imx500_posenet.json declares a post-processing pipeline that contains two stages:

  1. imx500_posenet, which fetches the raw output tensor from the PoseNet neural network
  2. plot_pose_cv, which draws line overlays on the image

The AI Camera performs basic detection, but the output tensor requires additional post-processing on your host Raspberry Pi to produce final output.

The following command runs rpicam-hello with pose estimation post-processing:

rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30

You can configure the imx500_posenet stage in many ways.

For example, max_detections defines the maximum number of bodies that the pipeline will detect at any given time. threshold defines the minimum confidence value required for the pipeline to consider input as a body.

Picamera2

For examples of image classification, object detection, object segmentation, and pose estimation using Picamera2, see the picamera2 GitHub repository.

Most of the examples use OpenCV for some additional processing. To install the dependencies required to run OpenCV, run the following command:

sudo apt install python3-opencv python3-munkres

Now download the picamera2 repository to your Raspberry Pi to run the examples. You’ll find example files in the root directory, with additional information in the README.md file.

Run the following script from the repository to run YOLOv8 object detection:

python imx500_object_detection_demo.py --model /usr/share/imx500-models/imx500_network_yolov8n_pp.rpk --ignore-dash-labels -r

To try pose estimation in Picamera2, run the following script from the repository:

python imx500_pose_estimation_higherhrnet_demo.py

To explore further, including how things work under the hood and how to convert existing models to run on the Raspberry Pi AI Camera, see our documentation.

The post How to get started with your Raspberry Pi AI Camera appeared first on Raspberry Pi.

Get started with Raspberry Pi Pico-series and VS Code

In the latest issue of The MagPi, Raspberry Pi Documentation Lead Nate Contino shows you how to attach a Raspberry Pi Pico-series device and start development with the new VS Code extension.

The following tutorial assumes that you are using a Pico-series device; some details may differ if you use a different Raspberry Pi microcontroller-based board. Pico-series devices are built around microcontrollers designed by Raspberry Pi itself. Development on the boards is fully supported with both a C/C++ SDK, and an official MicroPython port. This article talks about how to get started with the SDK, and walks you through how to build, install, and work with the SDK toolchain.

VS Code running on a Raspberry Pi computer. This IDE (integrated development environment) has an extension for Pico-series devices

To install Visual Studio Code (known as VS Code for short) on Raspberry Pi OS or Linux, run the following commands: 

$ sudo apt update
$ sudo apt install code

On macOS and Windows, you can install VS Code from magpi.cc/vscode. On macOS, you can also install VS Code with brew using the following command: 

$ brew install --cask visual-studio-code

The Raspberry Pi Pico VS Code extension helps you create, develop, run, and debug projects in Visual Studio Code. It includes a project generator with many templating options, automatic toolchain management, one-click project compilation, and offline documentation of the Pico SDK. The VS Code extension supports all Raspberry Pi Pico-series devices.

Creating a project in VS Code

Install dependencies

On Raspberry Pi OS and Windows, no additional dependencies are needed.

Most Linux distributions come preconfigured with all of the dependencies needed to run the extension. However, some distributions may require additional dependencies.

The extension requires the following: 

  • Python 3.9 or later 
  • Git 
  • Tar 
  • A native C and C++ compiler (the extension supports GCC) 

You can install these with: 

$ sudo apt install python3 git tar build-essential

On macOS 

To install all requirements for the extension on macOS, run the following command: 

$ xcode-select --install

This installs the following dependencies:

  • Git
  • Tar 
  • A native C and C++ compiler (the extension supports GCC and Clang)

Install the extension 

You can find the extension in the VS Code Extensions Marketplace. Search for the Raspberry Pi Pico extension, published by Raspberry Pi. Click the Install button to add it to VS Code.

You can find the store entry at magpi.cc/vscodeext. You can find the extension source code and release downloads at magpi.cc/picovscodegit. When installation completes, check the Activity sidebar (by default, on the left side of VS Code). If installation was successful, a new sidebar section appears with a Raspberry Pi Pico icon, labelled “Raspberry Pi Pico Project”.

Create code to blink the LED on a Pico 2 board

Load and debug a project 

The VS Code extension can create projects based on the examples provided by Pico Examples. As an example, we’ll walk you through how to create a project that blinks the LED on your Pico-series device: 

  1. In the VS Code left sidebar, select the Raspberry Pi Pico icon, labelled Raspberry Pi Pico Project. 
  2. Select New Project from Examples. 
  3. In the Name field, select the blink example. 
  4. Choose the board type that matches your device. 
  5. Specify a folder where the extension can generate files. VS Code will create the new project in a sub-folder of the selected folder. 
  6. Click Create to create the project. The extension will now download the SDK and the toolchain, install them locally, and generate the new project. The first project may take five to ten minutes to install the toolchain. VS Code will ask you whether you trust the authors because we’ve automatically generated the .vscode directory for you. Select yes.

The CMake Tools extension may display some notifications at this point. Ignore and close them. 

Pico’s Micro USB connector makes sending code easy

On the left Explorer sidebar in VS Code, you should now see a list of files. Open blink.c to view the blink example source code in the main window. The Raspberry Pi Pico extension adds some capabilities to the status bar at the bottom right of the screen:

  • Compile. Compiles the sources and builds the target UF2 file. You can copy this binary onto your device to program it. 
  • Run. Finds a connected device, flashes the code into it, and runs that code.

The extension sidebar also contains some quick access functions. Click on the Pico icon in the side menu and you’ll see Compile Project. Hit Compile Project and a terminal tab will open at the bottom of the screen displaying the compilation progress.

Compile and run blink 

To run the blink example: 

  1. Hold down the BOOTSEL button on your Pico-series device while plugging it into your development device using a Micro USB cable to force it into USB Mass Storage Mode. 
  2. Press the Run button in the status bar or the Run Project button in the sidebar. You should see the terminal tab at the bottom of the window open. It will display information concerning the upload of the code. Once the code uploads, the device will reboot, and you should see the following output:
The device was rebooted to start the application.

Your blink code is now running. If you look at your device, the LED should blink twice every second. 

The new Raspberry Pi Pico 2 has upgraded capabilities over the original model

Make a code change and re-run 

To check that everything is working correctly, click on the blink.c file in VS Code. Navigate to the definition of LED_DELAY_MS at the top of the code: 

#ifndef LED_DELAY_MS
#define LED_DELAY_MS 250
#endif

Change the 250 (in ms, a quarter of a second) to 100 (a tenth of a second):

#ifndef LED_DELAY_MS
#define LED_DELAY_MS 100
#endif

  1. Disconnect your device, then reconnect while holding the BOOTSEL button just as you did before. 
  2. Press the Run button in the status bar or the Run Project button in the sidebar. You should see the terminal tab at the bottom of the window open. It will display information concerning the upload of the code. Once the code uploads, the device will reboot, and you should see the following output: 
The device was rebooted to start the application.

Your blink code is now running. If you look at your device, the LED should flash faster, five times every second.

Top tip

Read the online guide

This tutorial also appears in the Raspberry Pi datasheet Getting started with Pico, which additionally covers using Raspberry Pi’s Debug Probe.

The post Get started with Raspberry Pi Pico-series and VS Code appeared first on Raspberry Pi.

Building a Raspberry Pi Pico 2-powered drone from scratch

Summer, and with it Louis Wood’s internship with our Maker in Residence, was creeping to a close without his final build making it off the ground. But as if by magic, on his very last day, Louis got his handmade drone flying.

3D-printed CAD design

The journey of building a custom drone began with designing in CAD software. My initial design was fully 3D-printed with an enclosed structure and cantilevered arms to support point forces. The honeycomb lid provided cooling, and the enclosure allowed for embedded XT-60 and MR-30 connections, creating a clean and integrated look. Inside, I ensured all electrical components were rigidly mounted to avoid unwanted movement that could destabilise the flight.

Testing quickly revealed that 3D-printed frames were brittle, often breaking during crashes. Moreover, the limitations of my printer’s build area meant that motor placement was cramped. To overcome these issues, I CNC-routed a new frame from 4 mm carbon fibre, increasing the wheelbase for better stability. Using Carveco software, I generated toolpaths and cut the frame on a WorkBee CNC in our Maker Lab. After two hours, I had a sturdy, assembled frame ready for electronics.

Not one, not two, but three Raspberry Pis

For the drone’s brain, I used a Raspberry Pi Pico 2 connected to an MPU6050 gyroscope for real-time orientation data and an IBUS protocol receiver for streamlined control inputs. Initially, I faced issues with signal processing due to the delay of handling five separate PWM signals. Switching to IBUS sped up the loop frequency by tenfold, which greatly improved flight response. The Pico handled PID (Proportional-Integral-Derivative) calculations for stability, and a 4-in-1 ESC managed the motor signals. The drone also carries a Raspberry Pi Zero with a Camera Module 2 and an analogue VTX for real-time FPV (first-person view) flying.

All coming together in the Maker Lab at Pi Towers

Programming was based on Tim Hanewich’s Scout flight controller code, implementing a ‘rate’ mode controller that uses PID values to maintain desired angular velocities. Fine-tuning the PID gains was essential; improper settings could lead to instability and dangerous oscillations. I followed a careful tuning process, starting with low values for each parameter and slowly increasing them.

To make the process safer, I constructed a testing rig to isolate each axis and simulate flight conditions. This allowed me to achieve a rough tune before moving on to actual flight tests, ultimately ensuring the drone’s safe and stable performance.
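The ‘rate’ mode controller described above can be sketched in a few lines of Python. This is a simplified illustration of a single-axis PID loop driving a measured angular rate toward a setpoint; it is not Tim Hanewich’s actual Scout code, and the class name, gains, and loop period are hypothetical:

```python
class RatePID:
    """Minimal PID loop: drives a measured angular rate toward a setpoint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured            # angular-rate error (deg/s)
        self.integral += error * self.dt       # accumulated error for the I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One axis: request 90 deg/s of roll while the gyro reads 60 deg/s.
pid = RatePID(kp=0.5, ki=0.0, kd=0.0, dt=0.01)
print(pid.update(setpoint=90.0, measured=60.0))  # pure-P response: 0.5 * 30 = 15.0
```

Starting with only a small `kp` (as in the tuning process described above) and raising the gains gradually keeps the correction gentle while you look for the onset of oscillation.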

The post Building a Raspberry Pi Pico 2-powered drone from scratch appeared first on Raspberry Pi.

Track Asian hornets with VespAI | #MagPiMonday

AI models are adept at distinguishing one winged creature from another. This #MagPiMonday, Rosie Hattersley goes beyond the buzz.

Once attracted to liquid in a Petri dish, VespAI identifies any Asian hornets and automatically alerts researchers who trace them back to their nest

Fun fact that might get you a point in the local pub quiz: Vespa, Piaggio’s iconic scooter, is Italian for wasp, which its buzzing engine sounds a bit like. Less fun fact: nature’s counterpart to the speedy two-wheeler has an aggressive variant that has been seen in increasing numbers across western Europe, and which poses a direct threat to bees, one of its key food sources. Bees are great for biodiversity; Asian hornets (the largest type of eusocial wasp) are not. But only particular hornet species pose such a threat: most citizen reports of Asian hornets turn out to be native species, and a key issue is ensuring that existing hornet species are not destroyed on this mistaken assumption. To combat misinformation and alarm at the so-called ‘killer’ hornet (itself a subset of wasp), academics at the University of Exeter have developed VespAI, a detector that provides positive identification and shows where new colonies of the invasive hornet Vespa velutina nigrithorax have begun to spread. The system works by drawing the insects to a pad impregnated with foodstuffs that smell tasty to wasps.

Dr Thomas O’Shea-Wheller, Juliet Osborne, and Peter Kennedy

Considerate response

VespAI provides a nonharmful alternative to traditional trapping surveys and can also be used for monitoring hornet behaviour and mapping distributions of both the Asian hornet (Vespa velutina) and European hornet (Vespa crabro), which is protected in some countries. “Live hornets can be caught and tracked back to the nest, which is the only effective way to destroy them,” explains the team’s research paper.

VespAI crosschecks a potential hornet against its 33,000-strong image database
Non-Asian hornets are discounted, meaning non-invasive native species are not destroyed in a bid to eradicate the destructive newcomers

Creepy feeling

VespAI features a camera positioned above a bait station that detects insects as they land to feed and gets to work establishing whether the curious visitor is, in fact, an Asian hornet. The Exeter team developed the AI algorithm in Python, using YOLO image detection models. These identify whether Asian hornets are present and, if so, send an alert to users. Raspberry Pi proved a great choice because of its compact size, ability to run the hornet recognition algorithm, real-time clock, and support for peripherals such as an external battery. The prototype bait station design was made with items that the team had at hand in their lab, including a squirrel baffle for the weather shield, Petri dishes and sponges to hold hornet attractant, and a beehive stand for the monitor to rest on.

The VespAI system is inactive unless an insect of the correct size is detected on the bait station

Design challenges included optimising the hornet detection algorithm for use on Raspberry Pi. “An AI algorithm may work well during training or when validated in the lab. However, field deployment is essential to expose it to potentially unforeseen scenarios that may return errors”, they note. The project also involved developing a monitor with an integrated camera, processor, and peripherals while minimising power consumption. To this end, the VespAI team is currently optimising their software to run on Raspberry Pi Zero, having watched footage of the AntVideoRecord device monitoring leafcutter ant (Acromyrmex lundi) foraging trails and been impressed by its ability to run for extended periods remotely due to its low power consumption.
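The detect-then-alert idea at the heart of the system can be illustrated in a few lines of Python. This is a hypothetical sketch of the logic only, not VespAI’s actual code; the function name, labels, and threshold are invented:

```python
def hornets_detected(detections, target="vespa_velutina", threshold=0.8):
    """Return detections of the target species above a confidence threshold.

    `detections` is a list of (label, confidence) pairs, the kind of output
    a YOLO-style model might emit for each insect on the bait station.
    """
    return [(label, conf) for label, conf in detections
            if label == target and conf >= threshold]

frame = [("vespa_crabro", 0.91),     # European hornet: protected, ignored
         ("vespa_velutina", 0.87),   # Asian hornet: above threshold, alert
         ("vespa_velutina", 0.42)]   # too uncertain to act on

hits = hornets_detected(frame)
if hits:
    print(f"Alert: {len(hits)} Asian hornet(s) detected")
```

Thresholding like this is why native species visiting the bait station are discounted rather than flagged for destruction.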

As this interactive map shows, Asian hornets have quickly made inroads across Western Europe.

Asian hornets have rapidly spread from southern Europe and are now increasing in numbers in the UK

The Raspberry Pi-enabled setup is “intended to support national surveillance efforts, thus limiting hornet incursions into new regions,” explains Dr Thomas O’Shea-Wheller, a research fellow in the university’s Environment and Sustainability Institute. He and his colleagues have been working on the AI project since 2022, conducting additional fieldwork this summer with the National Bee Unit and the Government of Jersey (Channel Islands) mapping new locations and fine-tuning its accessibility to potential users ahead of a planned commercial version. 

Given Raspberry Pi’s extensive and enthusiastic user community, they hope sharing their code on GitHub will help expand the number of VespAI detection stations and improve surveillance and reporting of hornet species.

This article originally featured in issue 146 of The MagPi magazine.

The MagPi #146 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

The MagPi issue 146 (October 2024) cover, with a retro horror theme: play retro horror classics on Raspberry Pi 5, a LEGO card shuffler, top 10 spooky projects, recycling a fighter jet joystick, and audio upgrades

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Track Asian hornets with VespAI | #MagPiMonday appeared first on Raspberry Pi.

Book of Making 2025 on sale now: build superb projects from plant monitors to rockets

Learn how to recreate all of the best projects from HackSpace magazine with the Book of Making 2025, on sale now at £14.

book of making front cover

The Book of Making 2025 cover showcases projects for makers and hackers, including building a flat-pack rocket, making electronic music with a Raspberry Pi Pico, creating a connected plant monitor, and building smart home lighting.

We had so much fun making HackSpace magazine (and we hope you had fun reading it). It’s been a couple of months now since we incorporated HackSpace into a bigger, brighter, better version of The MagPi. While the standalone magazine may have gone from the shelves, it’s still on the immortal internet, where you can download every issue for free. And if that’s not enough to cater for your desire to make semi-useful things out of home electronics, microcontrollers, 3D printers and the like, there’s the Book of Making 2025 to scratch your itch, on sale today in all good bookshops and online from the Raspberry Pi Press store.

Book of Making 2025 distills the essence of HackSpace magazine down to our favourite maker projects. Whether you want to build a rocket or hot air balloon, learn 3D-printed mechanical engineering, or control the world around you with a Raspberry Pi Pico, there’s something for you here.

This book is full of projects perfect for an hour, afternoon, or weekend; be inspired by the amazing community projects you’ll find in its pages and make your own creations using step-by-step guides.

You’ll learn how to:

  • Work with microcontrollers and electronic circuits
  • Design for 2D and 3D fabrication methods and make them a reality
  • Create amazing things with everyday items
  • …and loads more!

Hackspaces and makerspaces have exploded in popularity the world over, as more and more people want to make things and learn in the process. Written by makers for makers, this book features a diverse range of projects to sink your teeth into. Grab some duct tape, fire up a microcontroller, ready a 3D printer, and hack the world around you!

The post Book of Making 2025 on sale now: build superb projects from plant monitors to rockets appeared first on Raspberry Pi.

Pilet: Mini Pi 5 modular computer

The new and improved MagPi magazine now houses one of my favourite sections of the late great HackSpace magazine: Top Projects. The feature showcases five or six spectacular builds using Raspberry Pi, and this was our favourite from the latest issue.

Do you want a portable mini modular computer based on Raspberry Pi 5? If so, you’re in luck. A small outfit (boasting one-and-a-half people) called Soulcircuit is working on one right now, called the Pilet (it was called Consolo, but is now called Pilet, which according to the maker “reflects the project’s aim to appeal to a wider global audience”). 

Two 8000mAh batteries give the device a claimed seven-hour lifespan, which if true will put a lot of computing power in your pocket for a productive day’s work. The basic unit houses a Raspberry Pi 5 and a touchscreen, running a full-fat version of the Linux operating system (it looks like Debian with a KDE desktop, which wouldn’t really have been practical with any model of Raspberry Pi until now). 

Soulcircuit claims that the Pilet is “built by open-source software for the open-source community,” and credits KiCad, FreeCAD, Blender, Linux, Raspberry Pi, and KDE. As we’ve seen so many times though, it’s not enough just to have the right software; a device this good takes expertise and imagination, and if it can come in at the expected price of under $200, we’re sure it’ll be popular with open-source geeks who want to get work done but also quite like leaving the house every now and then.

The MagPi #146 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

The cover of The MagPi issue 146 (October 2024) goes all in on a retro horror theme, headlined "Play retro horror classics on Raspberry Pi 5". Other highlights include a LEGO card shuffler, the top 10 spooky projects, recycling a fighter jet joystick, and turning it up to 11 with audio upgrades.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Pilet: Mini Pi 5 modular computer appeared first on Raspberry Pi.

Raspberry Pi SD Cards and the Raspberry Pi Bumper: your new favourite accessories

By: jdb

Today we’re happy to announce a couple of new accessories that we think will make a big difference to your experience with Raspberry Pi. With the latest release of Raspberry Pi OS, Raspberry Pi 5 can make use of the extra performance available from Class A2 SD cards; to help you take advantage of this, we are introducing our own range of high-quality, low-cost Raspberry Pi SD Cards. And we’re releasing the Raspberry Pi Bumper, a cute little silicone cover to protect the base and edges of your Raspberry Pi 5.

Pictured: a Raspberry Pi-branded 32GB microSD card partially inserted into the card slot of a cased Raspberry Pi.

Raspberry Pi SD Cards

As many of you will know first-hand, your choice of SD card makes a huge difference to your Raspberry Pi experience. Historically, we’ve worked with our Approved Reseller partners to test and endorse third-party SD cards. But as cards have become more sophisticated, and particularly with the advent of Class A2 cards, this process has become increasingly cumbersome.

To ensure you have the best possible experience at the lowest possible cost, we’ve worked with our partner Longsys to develop a range of branded Raspberry Pi SD Cards. These Class A2 cards offer exceptional random read and write throughput across the entire range of Raspberry Pi computers, and when used on Raspberry Pi 5 support command queueing for even higher performance.

From today, our Approved Resellers will only promote Raspberry Pi SD Cards alongside Raspberry Pi computers, and you can be assured of their quality.

Class A2 SD Cards: harder, better, faster, stronger

SD cards which support Application Performance Class A2, such as our new Raspberry Pi SD Cards, enable faster read and write operations, and Raspberry Pi 5 incorporates hardware features which allow it to make the most of this extra performance. To enable these features, you will need to use the latest release of Raspberry Pi OS, or update your Raspberry Pi OS install with the latest packages. Run the following command to update:

sudo apt update && sudo apt full-upgrade

How exactly do Class A2 cards achieve better performance? Read on!

What is CQHCI?

The SD Host Controller Interface (SDHCI) specification standardises the piece of hardware (the host controller) which controls communication with the SD card. On Raspberry Pi computers, the host controller lives inside the Broadcom application processor. The Command Queueing Host Controller Interface (CQHCI) extends SDHCI with an extra set of control registers, and a CQ engine which takes over from the legacy host controller when a suitable card is detected.

Cards must be explicitly put into command queueing (CQ) mode, after which a new set of SD commands becomes available and many of the existing SD commands become invalid. The new commands decouple the request to read or write a card sector from the response of the card. Each read or write operation is tagged, with up to 32 tags in use across both reads and writes. The card can choose the order in which it returns responses to the commands, and may optionally buffer write data rather than committing it immediately to flash.

By allowing it to effectively “see into the future”, command queueing lets the flash controller hide more of the latency associated with accessing disparate NAND flash pages. This results — at least in theory — in better throughput for random I/O workloads of the sort generated by Raspberry Pi OS.
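
The tagging mechanism can be illustrated with a toy model (plain Python, not kernel code; the function names and data here are invented for illustration): the host issues tagged requests, and the card completes them in whatever order suits its flash controller, with the host matching each response back to its request by tag.

```python
# Illustrative sketch of tagged command queueing. The 32-tag limit matches
# the SD v6.00 behaviour described above; everything else is made up.
import random

QUEUE_DEPTH = 32  # maximum tags, shared across reads and writes

def issue_requests(sectors):
    """Queue one tagged request per sector, up to QUEUE_DEPTH in flight."""
    return {tag: sector for tag, sector in enumerate(sectors[:QUEUE_DEPTH])}

def card_completes(in_flight):
    """The card picks the completion order; the host matches results by tag."""
    tags = list(in_flight)
    random.shuffle(tags)  # completions may arrive in any order
    return [(tag, in_flight[tag]) for tag in tags]

requests = issue_requests([100, 7, 3000, 42])
for tag, sector in card_completes(requests):
    print(f"tag {tag}: sector {sector} done")
```

Because the host never has to wait for one sector before requesting the next, the flash controller can schedule NAND accesses to hide latency, which is the behaviour described above.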

CQ support first landed in eMMC devices with JEDEC specification JESD84-B51, in 2015. The SD specification equivalent landed some time later with SD v6.00, in 2017. However, at the time of the Raspberry Pi 5 launch in 2023, Linux only supported CQHCI on eMMC devices — so we were leaving performance on the table.

In early 2024 I set about implementing the missing CQ support for SD cards.

How do you use CQHCI?

Carefully parsing the SD specification led me to develop a dependency chain of optional card features that all need to be supported if CQ mode is to be used. These are, in order:

  • The card must support Extension Register access, a generic method of accessing optional features via 512-byte pages, each with a type identifying which feature extension the page refers to
  • The card must support the Performance Enhancement extension registers
  • In the Performance Enhancement extension, the card must support Write Caching
  • As a consequence of Write Caching support, the card must also support the Power extension registers and at a minimum support Power-Off notifications
  • The card must declare the queue depth required to meet Class A2 performance — from 2 to 32 tags
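
That dependency chain can be sketched as a simple all-or-nothing check (a hedged illustration only: the field names are invented, and the real kernel probes these features via extension register reads rather than a dictionary):

```python
# Illustrative sketch of the CQ feature dependency chain described above.
# Field names are invented for this example, not the kernel's structures.
def card_supports_cq(card):
    """Walk the chain of optional features; all must hold for CQ mode."""
    checks = [
        card.get("ext_register_access"),     # Extension Register access
        card.get("perf_enhancement_ext"),    # Performance Enhancement registers
        card.get("write_caching"),           # Write Caching in that extension
        card.get("power_off_notification"),  # Power extension + Power-Off notify
        2 <= card.get("cq_queue_depth", 0) <= 32,  # declared CQ queue depth
    ]
    return all(checks)

a2_card = {
    "ext_register_access": True,
    "perf_enhancement_ext": True,
    "write_caching": True,
    "power_off_notification": True,
    "cq_queue_depth": 16,
}
print(card_supports_cq(a2_card))  # → True
```

If any single link in the chain is missing, the card falls back to legacy (non-CQ) operation.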

As Linux already supported CQ with eMMC cards, all I had to do was find out where the SD implementation differed — and there were a few such cases.

During normal operation the host operating system sometimes needs to issue “meta-ops” that don’t directly transfer data but do related things, such as recalibrating the host-to-card data path delays, requesting card status as a proxy for card removal, and doing flash maintenance operations such as signalling block discard.

For eMMC devices, most meta-ops are performed by issuing command CMD6 with a 32-bit argument. CQHCI supports injecting these while in CQ mode by designating the “top” tag in the controller for performing DCMDs (direct commands). However, with SD cards, the set of commands performing meta-ops generally requires us to halt the CQ engine, and issue a non-CQ command using the regular SD host controller registers.

Once these differences were ironed out, I had a workable Linux driver, which was pushed to rpi-update. I created a testing thread in the forums for the adventurous, and set about evaluating my extensive collection of retail cards.

How well do SD cards implement CQ mode?

In a very hit-and-miss fashion.

SanDisk cards, in particular the Extreme and Extreme Pro product lines, were my first choice — and they performed well. However, other manufacturers’ offerings suffered from one or more of a number of common deficiencies that precluded CQ mode operation, or caused them to flake out in use:

  • Not declaring Power-Off notification support despite implementing the extension
  • Hanging on receipt of a cache flush request after CQ mode had been activated then deactivated
  • Cards not correctly implementing the “CQ enable” extension register bit — if I wrote a 1, I would still read back 0 forever

There was even one type of card that claimed Class A2 support but ignored any request to read the extension registers to probe for any of these features!

The Raspberry Pi kernel filters out cards that fail these tests, either during feature probing or with an explicit quirk that matches the card identifier. If you find an A2-branded card that misbehaves on a Raspberry Pi 5, then please report it in the above-mentioned forum thread.
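
Conceptually, that filtering looks something like the following (a Python sketch with made-up quirk entries; the real kernel implements this as a quirk table in C, matched against the card's identification registers):

```python
# Illustrative sketch of CQ filtering: disable CQ if feature probing failed
# or the card matches a known-bad identifier. Entries here are hypothetical.
BROKEN_CQ_CARDS = {
    ("VendorX", "ModelY"),  # hypothetical: hangs on cache flush after CQ off
}

def cq_allowed(vendor, model, probe_ok):
    """Fall back to legacy mode on probe failure or a quirk-table match."""
    if not probe_ok:
        return False
    return (vendor, model) not in BROKEN_CQ_CARDS

print(cq_allowed("VendorX", "ModelY", probe_ok=True))   # → False
print(cq_allowed("GoodCo", "FastCard", probe_ok=True))  # → True
```

Either way the card still works; it simply runs without command queueing.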

Write caching + surprise removal = badness

One potential pitfall of enabling CQ mode is that it provides cards with new opportunities to corrupt your filesystem if power is removed unexpectedly. In CQ mode, hosts should honour the requirement to maintain the card’s power supply, and only remove it after a Power-Off notification is sent; this provides an opportunity for the flash controller to commit all outstanding writes to flash. For battery-powered hosts with concealed SD slots such as a phone, that is an easy contract to fulfil — requesting device shutdown or uncovering the slot can trigger a Power-Off notification. Raspberry Pi, with its exposed SD slot and pluggable PSU, has a harder time providing this guarantee.

With multiple writes in flight, or multiple posted notifications of pending writes, we can no longer guarantee the order in which writes get committed to flash. If power is removed unexpectedly, an arbitrary collection of recent writes may not have been committed, rather than strictly the n most recent writes; this greatly complicates the task of making the filesystem resilient to corruption. The Raspberry Pi kernel sidesteps this problem by limiting the maximum number of posted writes in CQ mode to one. While in theory this may result in lower sequential write throughput, the cards I’ve tested see at most a 2–3% reduction in performance.
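
A deliberately simplified model of that trade-off (illustrative only, using a pessimistic bound, not a description of any real card's behaviour): with a queue depth of one, the durable writes after a power cut are always a strict prefix, whereas with a deeper queue any of the in-flight tail may be missing.

```python
# Toy model of write durability after surprise power loss, as a pessimistic
# bound: with depth 1 only the single most recent write can be lost; with
# depth N, any of the last N posted writes may be missing.
def durable_prefix(writes, max_in_flight):
    """Return the prefix of writes guaranteed committed after power loss."""
    if max_in_flight >= len(writes):
        return []  # everything could still be in flight
    return writes[:len(writes) - max_in_flight]

print(durable_prefix(["w1", "w2", "w3", "w4"], max_in_flight=1))  # → ['w1', 'w2', 'w3']
print(durable_prefix(["w1", "w2", "w3", "w4"], max_in_flight=3))  # → ['w1']
```

Keeping the guaranteed-durable set a clean prefix is what makes journalling filesystems' recovery assumptions hold.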

Introducing Longsys

Once it became clear that Class A2 SD cards offer a significant performance uplift when operating in CQ mode on Raspberry Pi 5, we started discussions with several card OEMs, with the goal of qualifying a cost-effective offering that would work well across every generation of Raspberry Pi computer.

We settled on Longsys as our vendor after working with their engineering team to align their cards’ declared feature sets with our requirements; to prove that the cards were robust by automatically performing over 100,000 surprise power cycles under I/O heavy load; and to tune the cards to get the best out of Raspberry Pi 5.

While best performance on Raspberry Pi 5 was our primary goal, the non-CQ performance of these cards is still stonkingly fast, and you will generally see a significant uplift in performance on older Raspberry Pi computers.

Raspberry Pi Bumper for Raspberry Pi 5

Today’s other accessory launch brings you the Raspberry Pi Bumper: the simple casing solution you never knew you needed, and already a firm favourite here at Pi Towers. It’s a snap-on silicone base that unfussily protects the base and edges of your Raspberry Pi 5, and the surface you’re putting it down on, and also makes it easier to use the power button. It’s compatible with the Raspberry Pi Active Cooler, and will set you back a meagre $3.

And there you are. Two unglamorous, yet excellent, accessories that we wonder how we managed without. We hope you like them.

The post Raspberry Pi SD Cards and the Raspberry Pi Bumper: your new favourite accessories appeared first on Raspberry Pi.

❌