Valve’s Steam Link on Raspberry Pi

Earlier this year we released Raspberry Pi Connect, which lets you access your Raspberry Pi from anywhere, either through a remote shell interface or by screen sharing. But perhaps, occasionally, you might need to screen share some other computer; what if you want to screen share your big PC, with its gaming graphics capabilities, around your house? Is it possible to use it to play your games from anywhere? Happily, thanks to Valve’s hugely popular Steam Link product, the answer is yes. With Steam Link, our kids can — OK, we can — play PC games on any computer in the house, without having to lug the PC around. And now, you can run Steam Link on your Raspberry Pi 5!

steam link running on Jeff Geerling's set up
Thanks for the image, Jeff Geerling!

Steam Link is actually tackling some quite difficult challenges to enable us to play graphics-heavy games remotely. Firstly, screen sharing is not normally optimised for sending high quality images, since you have to work quite hard to keep both the bitrate and the latency down; you also don’t normally transmit audio as well as video, and you need to do a bit of magic to talk to game controllers. So the smart folks at Valve have successfully solved quite a few hard problems to bring this into being.

Even better, Sam Lantinga from Valve — who is also the developer of SDL, the Simple DirectMedia Layer multimedia programming library — has been working for a little while on getting Steam Link to run on Raspberry Pi 5. The previous method used to run Steam Link on Raspberry Pi OS no longer worked very well after we moved away from the closed-source Broadcom multimedia libraries, and with Wayland, a different approach was needed. Sam has been working with the Raspberry Pi software team to use our hardware in the most efficient way possible.

Valve’s announcement of Steam Link v1.3.13 shows that Sam has been able to get Steam Link working at some amazing rates on Raspberry Pi 5, including 4Kp60 and even 1080p240 (obviously you’ll need a suitable monitor for that!).

Jeff running Steam Link on Raspberry Pi 5

To install Steam Link yourself, grab yourself an up-to-date Raspberry Pi OS image and type:

sudo apt update
sudo apt upgrade
sudo apt install steamlink
steamlink

Enjoy!

The post Valve’s Steam Link on Raspberry Pi appeared first on Raspberry Pi.

Deploying Ultralytics YOLO models on Raspberry Pi devices

In this guest post, Ultralytics, creators of the popular YOLO (You Only Look Once) family of convolutional neural networks, share their insights on deploying and running their powerful AI models on Raspberry Pi devices, offering solutions for a wide range of real-world problems.

Computer vision is redefining industries by enabling machines to process and understand visual data like images and videos. To truly grasp the impact of vision AI, consider this: Ultralytics YOLO models, such as Ultralytics YOLOv8 and the newly launched Ultralytics YOLO11, which support computer vision tasks like object detection and image classification, have been used over 100 billion times. There are 500 to 600 million uses every day and thousands of uses every second across applications like robotics, agriculture, and more.

YOLO can be used in the agriculture sector

To take this a step further, Ultralytics has partnered with Raspberry Pi to bring vision AI to one of the most accessible and versatile computing platforms. This collaboration makes it possible to deploy YOLO models directly on Raspberry Pi, enabling real-time computer vision applications in a compact, cost-effective, and easy-to-use way.

By supporting such integrations, Ultralytics aims to enhance model compatibility across diverse deployment environments. For instance, the Sony IMX500, the intelligent vision sensor with on-sensor AI processing capabilities included in the Raspberry Pi AI Camera, works with Raspberry Pi to run YOLO models, enabling advanced edge AI applications.

In this article, we’ll explore how YOLO models can be deployed on Raspberry Pi devices, look at real-world use cases, and highlight the benefits of this exciting collaboration for vision AI projects. Let’s get started!

Enabling edge AI solutions with Raspberry Pi and Ultralytics YOLO

Raspberry Pi is an affordable and widely used device, making it a great choice for deploying vision AI models like YOLO. Running Ultralytics YOLO models on Raspberry Pi enables real-time computer vision capabilities, such as object detection, directly on the device, eliminating the need for cloud resources. Local processing reduces latency and improves privacy, making it ideal for applications where speed and data security are essential.

Ultralytics offers optimized models, like YOLO11, that can run efficiently on relatively resource-constrained devices, with the Nano and Small model variants providing the best performance on lower-power hardware. Leveraging these optimized models on Raspberry Pi devices is easy with the Ultralytics Python API or CLI, ensuring smooth deployment and operation. In addition to this, Ultralytics also supports automated testing for Raspberry Pi devices on GitHub Actions to regularly check for bugs and ensure the models are ready for deployment.
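
For a sense of what this looks like in practice, here is a minimal sketch of running inference with the Ultralytics Python API on a Raspberry Pi. It assumes the ultralytics package is installed in a virtual environment; the yolo11n.pt checkpoint and the test image name are just placeholders:

from ultralytics import YOLO

# Load a lightweight pre-trained YOLO11 model (downloaded automatically on first use)
model = YOLO("yolo11n.pt")

# Run object detection on a test image; the source could equally be a camera stream
results = model("test.jpg")

# Print the detected class names and bounding boxes for the first result
for box in results[0].boxes:
    print(model.names[int(box.cls)], box.xyxy.tolist())

The equivalent CLI invocation would be yolo predict model=yolo11n.pt source=test.jpg.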

Another interesting feature of the Ultralytics YOLO models is that they can be exported in various formats (as shown in the image below), including NCNN, a lightweight neural network inference framework. Designed for devices with relatively constrained computing power, such as Raspberry Pi’s ARM64 architecture, NCNN ensures faster inference times by optimizing model weights and activations through techniques like quantization.
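
As a rough sketch of the export workflow (assuming the ultralytics package is installed; the model and directory names follow Ultralytics’ usual conventions but are placeholders here):

from ultralytics import YOLO

# Export a pre-trained YOLO11 model to NCNN format for faster CPU inference on Raspberry Pi
model = YOLO("yolo11n.pt")
model.export(format="ncnn")  # writes a 'yolo11n_ncnn_model' directory alongside the weights

# Load the exported NCNN model and run inference exactly as before
ncnn_model = YOLO("yolo11n_ncnn_model")
results = ncnn_model("test.jpg")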

Benchmarking Ultralytics YOLO11 inferencing On Raspberry Pi

Raspberry Pi, Sony IMX500, and YOLO for real-time AI applications

The Raspberry Pi AI Camera is a perfect example of how this integration helps support compatibility across a range of deployment environments. Its IMX500 intelligent vision sensor comes with on-sensor AI processing, allowing it to analyze visual data directly and output metadata rather than raw images. While the IMX500 is powerful on its own, it needs to be paired with a device like Raspberry Pi to run YOLO models effectively. In this setup, a Raspberry Pi acts as the host device, facilitating communication with the AI Camera and enabling real-time AI applications powered by YOLO.

Raspberry Pi AI Camera incorporates the Sony IMX500

Real-world examples of YOLO applications on Raspberry Pi

Raspberry Pi, combined with the Ultralytics YOLO models, unlocks countless possibilities for real-world applications. This collaboration bridges the gap between experimental AI setups and production-ready solutions, offering an affordable, scalable, and practical tool for a wide range of industries. 

Here are a few impactful use cases:

  • Robotics: YOLO can enable robots to navigate environments, recognize objects, and perform tasks with precision, making them more autonomous and efficient
  • Drones: With YOLO running on Raspberry Pi, drones can detect obstacles, track objects, and perform surveillance in real-time, enhancing their capabilities in industries like delivery and security
  • Quality control in manufacturing: YOLO can help identify defects in production lines, ensuring higher quality standards with automated inspection
  • Smart farming: By using YOLO to monitor crop health and detect pests, farmers can make data-driven decisions, improving yields and reducing resource waste

Benefits of running Ultralytics YOLO models on Raspberry Pi for edge AI

There are many advantages to deploying YOLO models on Raspberry Pi, making it a practical and affordable option for edge AI applications. For instance, performance can be boosted by using hardware accelerators like Google Coral Edge TPU, enabling faster and more efficient real-time processing.

Coral Edge TPU connected to a Raspberry Pi

Here are some of the other key benefits:

  • Scalability: The setup can be extended to multiple devices, making it a great choice for larger projects such as factory automation or smart city systems
  • Flexibility: YOLO’s compatibility ensures that developers can create solutions that work seamlessly across a variety of hardware setups, offering versatility for different applications
  • Community and support: With extensive resources, tutorials, and an active community, Ultralytics provides the support needed for smooth deployment and troubleshooting of YOLO models on Raspberry Pi

To the edge and beyond with Ultralytics YOLO and Raspberry Pi

YOLO and Raspberry Pi are making edge AI applications more accessible, impactful, and transformative than ever before. By combining the advanced capabilities of Ultralytics YOLO models with the cost-effectiveness and flexibility of Raspberry Pi, this partnership allows developers, researchers, and hobbyists to bring innovative ideas to life.

With support for devices like the Raspberry Pi AI Camera and scalable hardware options, this collaboration unlocks opportunities across industries, from robotics and agriculture to manufacturing and beyond.

Explore the possibilities of AI with Ultralytics: visit the Ultralytics GitHub repository to discover how vision AI is driving change in sectors like healthcare and self-driving cars, and join the Ultralytics community to be part of the future of vision AI.

The post Deploying Ultralytics YOLO models on Raspberry Pi devices appeared first on Raspberry Pi.

Compute Module 5 on sale now from $45

Today we’re happy to announce the much-anticipated launch of Raspberry Pi Compute Module 5, the modular version of our flagship Raspberry Pi 5 single-board computer, priced from just $45.

An unexpected journey

We founded the Raspberry Pi Foundation back in 2008 with a mission to give today’s young people access to the sort of approachable, programmable, affordable computing experience that I benefitted from back in the 1980s. The Raspberry Pi computer was, in our minds, a spiritual successor to the BBC Micro, itself the product of the BBC’s Computer Literacy Project.

But just as the initially education-focused BBC Micro quickly found a place in the wider commercial computing marketplace, so Raspberry Pi became a platform around which countless companies, from startups to multi-billion-dollar corporations, chose to innovate. Today, between seventy and eighty percent of Raspberry Pi units go into industrial and embedded applications.

While many of our commercial customers continue to use the “classic” single-board Raspberry Pi form factor, there are those whose needs aren’t met by that form factor, or by the default set of peripherals that we choose to include on the SBC product. So, in 2014 we released the first Raspberry Pi Compute Module, providing just the core functionality of Raspberry Pi 1 – processor, memory, non-volatile storage and power regulation – in an easy-to-integrate SODIMM module.

Compute Modules make it easier than ever for embedded customers to build custom products which benefit from our enormous investments in the Raspberry Pi hardware and software platform. Every subsequent generation of Raspberry Pi, except for Raspberry Pi 2, has spawned a Compute Module derivative. And today, we’re happy to announce the launch of Compute Module 5, the modular version of our flagship Raspberry Pi 5 SBC.

Meet Compute Module 5

Compute Module 5 gives you everything you love about Raspberry Pi 5, but in a smaller package:

  • A 2.4GHz quad-core 64-bit Arm Cortex-A76 CPU
  • A VideoCore VII GPU, supporting OpenGL ES 3.1 and Vulkan 1.3
  • Dual 4Kp60 HDMI® display output
  • A 4Kp60 HEVC decoder
  • Optional dual-band 802.11ac Wi-Fi® and Bluetooth 5.0
  • 2 × USB 3.0 interfaces, supporting simultaneous 5Gbps operation
  • Gigabit Ethernet, with IEEE 1588 support
  • 2 × 4-lane MIPI camera/display transceivers
  • A PCIe 2.0 x1 interface for fast peripherals
  • 30 GPIOs, supporting 1.8V or 3.3V operation
  • A rich selection of peripherals (UART, SPI, I2C, I2S, SDIO, and PWM)

It is available with 2GB, 4GB, or 8GB of LPDDR4X-4267 SDRAM, and with 16GB, 32GB, or 64GB of MLC eMMC non-volatile memory. 16GB SDRAM variants are expected to follow in 2025.

Compute Module 5 is mechanically compatible with its predecessor, Compute Module 4, exposing all signals through a pair of high-density perpendicular connectors, which attach to corresponding parts on the customer’s carrier board. Additional stability is provided by four M2.5 mounting holes arranged at the corners of the board.

There are a small number of changes to the pin-out and electrical behaviour of the module, mostly associated with the removal of the two two-lane MIPI interfaces, and the addition of two USB 3.0 interfaces. A detailed summary of these changes can be found in the Compute Module 5 datasheet.

Accessories accessorise

But Compute Module 5 is only part of the story. Alongside it, we’re offering a range of new accessories to help you get the most out of our new modular platform.

IO Board

Every generation of Compute Module has been accompanied by an IO board, and Compute Module 5 is no exception.

The Raspberry Pi Compute Module 5 IO Board breaks out every interface from a Compute Module 5. It serves both as a development platform and as a reference baseboard (with design files in KiCad format), reducing the time to market for your Compute Module 5-based designs.

The IO Board features:

  • A standard 40-pin GPIO connector
  • 2 × full-size HDMI 2.0 connectors
  • 2 × 4-lane MIPI DSI/CSI-2 FPC connectors (22-pin, 0.5mm pitch cable)
  • 2 × USB 3.0 connectors
  • A Gigabit Ethernet jack with PoE+ support (requires a separate Raspberry Pi PoE+ HAT+)
  • An M.2 M-key PCIe socket (for 2230, 2242, 2260 and 2280 modules)
  • A microSD card socket (for use with Lite modules)
  • An RTC battery socket
  • A 4-pin fan connector

Power is provided by a USB-C power supply (sold separately).

IO Case

As in previous generations, we expect some users to deploy the IO Board and Compute Module combination as a finished product in its own right: effectively an alternative Raspberry Pi form factor with all the connectors on one side. To support this, we are offering a metal case which turns the IO Board into a complete encapsulated industrial-grade computer. The Raspberry Pi IO Case for Raspberry Pi Compute Module 5 includes an integrated fan, which can be connected to the 4-pin fan connector on the IO Board to improve thermal performance.

Cooler

While Compute Module 5 is our most efficient modular product yet in terms of energy consumed per instruction executed, like all electronic products it gets warm under load. The Raspberry Pi Cooler for Raspberry Pi Compute Module 5 is a finned aluminium heatsink, designed to fit on a Compute Module 5, and including thermal pads to optimise heat transfer from the CPU, memory, wireless module and eMMC.

Antenna Kit

Wireless-enabled variants of Compute Module 5 provide both an onboard PCB antenna, and a UFL connector for an external antenna. Use of the Raspberry Pi Antenna Kit (identical to that already offered for use with Compute Module 4) with Compute Module 5 is covered by our FCC modular compliance.

Development Kit

The Raspberry Pi Development Kit for Raspberry Pi Compute Module 5 comprises a Compute Module 5, an IO Board, and all the other accessories you need to start building your own design:

  • CM5104032 (Compute Module 5, with wireless, 4GB RAM, 32GB eMMC storage)
  • IO Case for Compute Module 5
  • Compute Module 5 IO Board
  • Cooler for Compute Module 5
  • Raspberry Pi 27W USB-C PD Power Supply (local variant as applicable)
  • Antenna Kit
  • 2 × Raspberry Pi standard HDMI to HDMI Cable
  • Raspberry Pi USB-A to USB-C Cable

Early adopters

Today’s launch is accompanied by announcements of Compute Module 5-based products from our friends at KUNBUS and TBS, who have built successful products on previous Raspberry Pi Compute Modules and whom we have supported to integrate our new module into their latest designs. Other customers are preparing to announce their own Compute Module 5-powered solutions over the next weeks and months. The world is full of innovative engineering companies of every scale, and we’re excited to discover the uses to which they’ll put our powerful new module. Try Compute Module 5 for yourself and let us know what you build with it.

The post Compute Module 5 on sale now from $45 appeared first on Raspberry Pi.

Powering industrial innovation: Compute Module 5 meets Revolution Pi

By: Dave Lee

Revolution Pi has been designing and manufacturing successful products with Raspberry Pi Compute Modules for years. In this guest post, they talk about why they continue to choose Raspberry Pi technology, and discuss their experience designing with our brand-new Compute Module 5.

Revolution Pi has been building flexible industrial devices with Raspberry Pi Compute Modules since the very beginning. As a long-time partner, we have witnessed their impressive evolution from the first to the fifth generation over the past ten years.

Technical advancements that matter

Raspberry Pi Compute Module 5’s enhancements directly address industrial requirements: it provides quad-core CPU performance up to 2.4GHz, a built-in USB 3.2 controller, and an improved PCIe controller. Raspberry Pi’s continuous integration of more interfaces directly on the Compute Module advances its capabilities while freeing up valuable space on our carrier board. These well-integrated interfaces within the Raspberry Pi ecosystem enable more flexible hardware designs. This allowed us to equip the RevPi Connect 5 with up to four multi-Gigabit Ethernet ports, letting industrial users connect multiple industrial fieldbuses and other networks with low latency.

The RevPi Connect 5 consists of two PCBs with a big bolted-on heat sink

Collaborative development process

Working with Raspberry Pi on this has been exceptional. They understand what industrial developers need. We received early samples to test with, which was critical. It allowed us to iterate and optimise our design solutions, especially when developing a custom heat sink. Managing the heat generated by the powerful new Compute Module in a DIN rail enclosure was an important part of the design process. Having real hardware to test with made all the difference.

Systematic thermal management

Maintaining Compute Module 5’s operating temperature below 85°C under heavy load required a methodical development process. We started with thermal simulation analysis to identify hotspots at full operating capacity. This analysis formed the basis for our practical prototyping. Through iterative testing under extreme conditions, we optimised the heatsink design before conducting extensive testing with the final housing inside our climatic chamber. The entire process culminated in establishing precise manufacturing standards with rigorous quality control.

Analysis of simulated airflow in the heatsink

Seamless software integration

On the software side, working with Raspberry Pi’s platform enables smooth integration. When we hit technical challenges, their engineering team was right there to support us. Their unified kernel approach across all products allowed us to focus on integrating new features like the CAN FD interfaces instead of wrestling with compatibility issues. This standardisation benefits Revolution Pi users as well — they can use our industrialised Raspberry Pi OS-based image consistently across all Revolution Pi devices.

A typical Revolution Pi system configuration, consisting of a RevPi Connect 5 and several expansion modules

A proven partnership

From the first Compute Module to now, Raspberry Pi has shown growing commitment to industrial computing. Compute Module 5, purpose-built for products like Revolution Pi, demonstrates what’s possible when combining Raspberry Pi’s innovation with our industrial-grade engineering. We’re excited to continue pushing the boundaries of industrial automation and IIoT applications together.

The post Powering industrial innovation: Compute Module 5 meets Revolution Pi appeared first on Raspberry Pi.

Raspberry Pi Christmas shopping guide

It’s the most wonderful time of the year… to give someone on your gift list something (or all things) Raspberry Pi. The past year has seen many exciting new releases, so we understand if you’re sat scratching your head at what to buy your favourite Raspberry Pi fanatic. But look no further! For the sake of your peace, and in a show of our goodwill, we elves have gone and done all the work for you. Good tidings we bring.

The Raspberry Pi AI Camera Module, pictured with a 200mm standard-to-mini camera cable

Our newest stuff

If it’s a Raspberry Pi superfan you’ve got on your list, you might want to plump for one of our latest hardware releases to really impress them. After all, what do you get someone who has everything? The newest, shiniest thing they haven’t managed to get their hands on yet.

Raspberry Pi Pico 2 W

Launched just a couple of days ago, Raspberry Pi Pico 2 W is the wireless variant of Pico 2, giving you even more flexibility in your connected projects. It’s on sale now for just $7.

Raspberry Pi Touch Display 2

We also upgraded our touch display this year. Raspberry Pi Touch Display 2 is a seven-inch 720×1280px touchscreen display for Raspberry Pi. It’s ideal for interactive projects such as tablets, entertainment systems, and information dashboards, and it’s available for $60.

Raspberry Pi AI HAT+

For the more confident Raspberry Pi user, you might want something to tempt them to broaden their skills into the field of AI. The Raspberry Pi AI HAT+ features a built-in neural network accelerator, turning your Raspberry Pi 5 into a high-performance, accessible, and power-efficient AI machine. The Raspberry Pi AI HAT+ allows you to build a wide range of AI-powered applications for process control, home automation, research, and more. It’s on sale now from $70.

The Raspberry Pi AI HAT+, with its 26 TOPS Hailo accelerator, attached to a Raspberry Pi 5 via the GPIO header

Raspberry Pi AI Camera

For more easy-to-deploy vision AI applications and neural network models, we’d recommend our new Raspberry Pi AI Camera, which takes advantage of Sony’s IMX500 Intelligent Vision Sensor. It’s available now for $70, and it works with any model of Raspberry Pi — including the super low-cost Zero family.

The Raspberry Pi AI Camera Module, pictured with a 200mm standard-to-mini camera cable
A Raspberry Pi project setup with the AI Camera connected via its orange ribbon cable

Stocking stuffers

If you’re looking for some smaller-but-still-mighty bits to fit in a stocking, we have some great affordable options too. Below is a list of some of the very latest, including a recent fan favourite, the…

Raspberry Pi Bumper

Protect and secure your Raspberry Pi 5 with the Raspberry Pi Bumper, a snap-on silicone cover that protects the bottom and edges of the board. This is a lovely, affordable, and super useful gift for any Raspberry Pi user, and it costs just $3.

Raspberry Pi SD Cards

2024 saw the release of our first-party Raspberry Pi SD Cards. Rigorously tested to ensure optimal performance on Raspberry Pi computers, these Class A2 microSD cards help ensure you get the smoothest user experience from your device. They are available in three different capacities to fit your needs.

  • 32GB
  • 64GB
  • 128GB

Raspberry Pi SSD Kit

With a Raspberry Pi M.2 HAT+ and a Raspberry Pi NVMe SSD bundled together, the Raspberry Pi SSD Kit lets you unlock outstanding performance for I/O intensive applications on your Raspberry Pi 5 — including super-fast startup when booting from SSD. The Kit is available now, in 256GB or 512GB capacities, from $40.

You can also grab the SSDs on their own, starting from $30.

Raspberry Pi USB 3 Hub

Our Raspberry Pi USB 3 Hub is the solution to your need for more peripherals than you have ports: it provides extra connectivity for your devices by turning one USB-A port into four, and is compatible with all Raspberry Pi devices. We think it’s the best you can buy. You can get one now for just $12.

Mugs, stickers, and badges

If you’re looking for something super fun and easy, check out our Raspberry Pi-branded merchandise, available to buy online from your local Approved Reseller. If you’re in Cambridge, UK, a trip to the Raspberry Pi Store would put stickers, mugs, water bottles, t-shirts, and more in your hands right away. (More on that below.)

Books, books, and more books

A personal favourite of mine this Christmas, and certainly your dearest retro gamer’s, is Code the Classics Volume II (£24.99), which shows you how to create your own video games inspired by some of the seminal games of the 1980s.

The Official Raspberry Pi Camera Guide 2nd Edition cover

If you were thinking of getting your favourite tinkering photographer a Raspberry Pi Camera, it might also be a good idea to pick up a copy of The Official Raspberry Pi Camera Guide (£14.99) — we released an updated second edition just last week.

That’s not the only new title to hit the Raspberry Pi Press store this year. If it’s our newest releases you’re interested in, you have titles such as the Book of Making 2025 and The Official Raspberry Pi Handbook 2025 (both originally priced at £14) to choose from. A special 30% discount will be applied at checkout if you choose either of these books.

If you’d like to purchase a gift that keeps on giving all year round, you can subscribe to receive a brand new edition of the official Raspberry Pi magazine, The MagPi, on your doorstep each month. You’ll also get a free Raspberry Pi Pico W if you sign up to a six- or twelve-month subscription.

The Raspberry Pi Store

If you’d like to get out into the twinkling streets of Cambridge at Christmas time, the Raspberry Pi Store in the Grand Arcade (we’re upstairs!) has stock of everything above and much, much more. We’ve also picked some excellently knowledgeable staff who can help you choose something if you’re not sure what you’re looking for.

The Raspberry Pi Store in the Grand Arcade, Cambridge

The post Raspberry Pi Christmas shopping guide appeared first on Raspberry Pi.

Raspberry Pi Pico 2 W on sale now at $7

Update: In advance of official MicroPython support for Pico 2 W, you can download our unofficial MicroPython build here; you’ll find the README here.

Today our epic autumn of product launches continues with Raspberry Pi Pico 2 W, the wireless-enabled variant of this summer’s Pico 2. Built around our brand new RP2350 microcontroller, featuring the tried and tested wireless modem from the original Pico W, and priced at just $7, it’s the perfect centrepiece for your connected Internet of Things projects.

raspberry pi pico 2 w hero

RP2350: the connoisseur’s microcontroller, redux

When we launched our debut microcontroller, RP2040, way back in 2021, we couldn’t have imagined the incredible range of products that would be built around it, or the uses that the community would put them to. Combining a symmetric pair of fast integer cores; a large, banked, on-chip memory; rich support for high-level languages; and our patented programmable I/O (PIO) subsystem, it quickly became the go-to device for enthusiasts and professional engineers seeking high-performance, deterministic interfacing at a low price point.

close up raspberry pi pico 2 w

RP2350 builds on this legacy, offering faster cores, more memory, floating point support, on-chip OTP, optimised power consumption, and a rich security model built around Arm’s TrustZone for Cortex-M. It debuted in August on Pico 2, on the DEF CON 32 badge (designed by our friends at Entropic Engineering, with firmware and a gonzo sidewalk badge presentation by the redoubtable Dmitry Grinberg), and on a wide variety of development boards and other products from our early-access partners.

Wireless things

Many of the projects and products that people build on top of our platforms — whether that’s our Linux-capable Raspberry Pi computers, our microcontroller boards, or our silicon products — answer to the general description “Internet of Things”. They combine local compute, storage, and interfacing to the real world with connectivity back to the cloud.

Raspberry Pi Pico 2 W brings all the power of RP2350 to these IoT projects. The on-board CYW43439 modem from our friends at Infineon provides 2.4GHz 802.11n wireless LAN and Bluetooth 5.2 connectivity, and is supported by C and MicroPython libraries. Enthusiasts benefit from the breadboard-friendly Pico form factor, while our upcoming RM2 radio module (already in use on Pimoroni’s Pico Plus 2 W) provides a route to scale for professional products which have been prototyped on the platform.
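
To give a flavour of what a connected project looks like, here is a minimal MicroPython sketch that joins a 2.4GHz network using the standard network module; the SSID and password are placeholders you would replace with your own:

import network
import time

# Bring up the station (client) interface on the on-board wireless modem
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("YOUR_SSID", "YOUR_PASSWORD")

# Wait for an IP address before doing any IoT work
while not wlan.isconnected():
    time.sleep(0.5)

print("Connected, IP address:", wlan.ifconfig()[0])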

lifestyle raspberry pi pico 2 w

More to come

We’re very pleased with how Pico 2 W has turned out. And, where the Pico 1 series ended with Pico W, we have a few more ideas in mind for the Pico 2 series. Keep an eye out for more news in early 2025.

The post Raspberry Pi Pico 2 W on sale now at $7 appeared first on Raspberry Pi.

The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects

We are enormously proud to reveal The Official Raspberry Pi Camera Module Guide (2nd edition), which is out now. David Plowman, a Raspberry Pi engineer specialising in camera software, algorithms, and image-processing hardware, authored this official guide.

The Official Raspberry Pi Camera Guide 2nd Edition cover

This detailed book walks you through all the different types of Camera Module hardware, including Raspberry Pi Camera Module 3, the High Quality Camera, the Global Shutter Camera, and older models, and shows you how to attach them to Raspberry Pi and integrate vision technology with your projects. This edition also covers new code libraries, including the latest Picamera2 Python library and rpicam command-line applications, as well as integration with the new Raspberry Pi AI Kit.

Camera Guide - Getting Started page preview

Save time with our starter guide

Our starter guide has clear diagrams explaining how to connect various Camera Modules to the new Raspberry Pi boards. It also explains how to fit custom lenses to HQ and GS Camera Modules using C-CS adaptors. Everything is outlined in step-by-step tutorials with diagrams and photographs, making it quick and easy to get your camera up and running.

Camera Guide - connecting Raspberry Pi pages

Test your camera properly

You’ll discover how to connect your camera to a Raspberry Pi and test it using the new rpicam command-line applications — these replace the older libcamera applications. The guide also covers the new Picamera2 Python library, for integrating Camera Module technology with your software.
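
As a taste of the sort of snippet the guide builds on, here is a minimal sketch that captures a still image with the Picamera2 library (the output file name is arbitrary):

from picamera2 import Picamera2

# Open the first attached camera, start it, and save a single JPEG capture
picam2 = Picamera2()
picam2.start_and_capture_file("test.jpg")

The rpicam-still command-line application performs an equivalent capture from the terminal.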

Camera Guide - Raw images and Camera Tuning pages

Get more from your images

Discover detailed information about how Camera Module works, and how to get the most from your images. You’ll learn how to use RAW formats and tuning files; HDR modes and preview windows; custom resolutions, encoders, and file formats; target exposure and autofocus; and shutter speed and gain, enabling you to get the very best out of your imaging hardware.

Camera Guide - Get started with Raspberry Pi AI kit pages

Build smarter projects with AI Kit integration

A new chapter covers the integration of the AI Kit with Raspberry Pi Camera Modules to create smart imaging applications. This adds neural processing to your projects, enabling fast inference of objects captured by the camera.

Camera Guide - Time-lapse capture pages

Boost your skills with pre-built projects

The Official Raspberry Pi Camera Module Guide is packed with projects. Take selfies and stop-motion videos, experiment with high-speed and time-lapse photography, set up a security camera and smart door, build a bird box and wildlife camera trap, take your camera underwater, and much more! All of the code is tested and updated for the latest Raspberry Pi OS, and is available on GitHub for inspection.

Click here to pick up your copy of The Official Raspberry Pi Camera Module Guide (2nd edition).

The post The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects appeared first on Raspberry Pi.

Using Python with virtual environments | The MagPi #148

Raspberry Pi OS comes with Python pre-installed, and you need to use virtual environments to install third-party packages with pip. The latest issue of The MagPi, out today, features this handy tutorial, penned by our documentation lead Nate Contino, to get you started.

Raspberry Pi OS comes with Python 3 pre-installed. Interfering with the system Python installation can cause problems for your operating system. When you install third-party Python libraries, always use the correct package-management tools.

On Linux, you can install Python dependencies in two ways:

  • use apt to install pre-configured system packages
  • use pip to install libraries using Python’s dependency manager in a virtual environment

It is possible to create virtual environments inside Thonny as well as from the command line

Install Python packages using apt

Packages installed via apt are packaged specifically for Raspberry Pi OS. These packages usually come pre-compiled, so they install faster. Because apt manages dependencies for all packages, installing with this method includes all of the sub-dependencies needed to run the package. And apt ensures that you don’t break other packages if you uninstall.

For instance, to install the Python 3 library that supports the Raspberry Pi Build HAT, run the following command:

$ sudo apt install python3-build-hat

To find Python packages distributed with apt, use apt search. In most cases, Python packages use the prefix python- or python3-: for instance, you can find the numpy package under the name python3-numpy.

Install Python libraries using pip

In older versions of Raspberry Pi OS, you could install libraries directly into the system version of Python using pip. Since Raspberry Pi OS Bookworm, users cannot install libraries directly into the system version of Python.

Attempting to install packages with pip causes an error in Raspberry Pi OS Bookworm

Instead, install libraries into a virtual environment (venv). To install a library at the system level for all users, install it with apt.

Attempting to install a Python package system-wide outputs an error similar to the following:

$ pip install buildhat
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    For more information visit http://rptl.io/venv

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Python users have long dealt with conflicts between OS package managers like apt and Python-specific package management tools like pip. These conflicts include both Python-level API incompatibilities and conflicts over file ownership.

Starting in Raspberry Pi OS Bookworm, packages installed via pip must be installed into a Python virtual environment (venv). A virtual environment is a container where you can safely install third-party modules so they won’t interfere with your system Python.

Use pip with virtual environments

To use a virtual environment, create a container to store the environment. There are several ways you can do this depending on how you want to work with Python:

per-project environments

Create a virtual environment in a project folder to install packages local to that project

Many users create separate virtual environments for each Python project. Locate the virtual environment in the root folder of each project, typically with a shared name like env. Run the following command from the root folder of each project to create a virtual environment configuration folder:

$ python -m venv env

Before you work on a project, run the following command from the root of the project to start using the virtual environment:

$ source env/bin/activate

You should then see a prompt similar to the following:

(env) $

When you finish working on a project, run the following command from any directory to leave the virtual environment:

$ deactivate

per-user environments

Instead of creating a virtual environment for each of your Python projects, you can create a single virtual environment for your user account. Activate that virtual environment before running any of your Python code. This approach can be more convenient for workflows that share many libraries across projects.

When creating a virtual environment for multiple projects across an entire user account, consider locating the virtual environment configuration files in your home directory. Store your configuration in a folder whose name begins with a period to hide the folder by default, preventing it from cluttering your home folder.

Add a virtual environment to your home directory to use it in multiple projects and share the packages

Use the following command to create a virtual environment in a hidden folder in the current user’s home directory:

$ python -m venv ~/.env

Run the following command from any directory to start using the virtual environment:

$ source ~/.env/bin/activate

You should then see a prompt similar to the following:

(.env) $

To leave the virtual environment, run the following command from any directory:

$ deactivate

Create a virtual environment

Run the following command to create a virtual environment configuration folder, replacing <env-name> with the name you would like to use for the virtual environment (e.g. env):

$ python -m venv <env-name>

Enter a virtual environment

Then, execute the bin/activate script in the virtual environment configuration folder to enter the virtual environment:

$ source <env-name>/bin/activate

You should then see a prompt similar to the following:

(<env-name>) $

The (<env-name>) command prompt prefix indicates that the current terminal session is in a virtual environment named <env-name>.

To check that you’re in a virtual environment, use pip list to view the list of installed packages:

(<env-name>) $ pip list
Package    Version
---------- -------
pip        23.0.1
setuptools 66.1.1

The list should be much shorter than the list of packages installed in your system Python. You can now safely install packages with pip. Any packages you install with pip while in a virtual environment only install to that virtual environment. In a virtual environment, the python or python3 commands automatically use the virtual environment’s version of Python and installed packages instead of the system Python.
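
If you ever want a script to confirm for itself that it is running inside a virtual environment, one quick check (a minimal sketch using only the standard library) is to compare the interpreter’s prefix with its base prefix:

import sys

# In a virtual environment, sys.prefix points at the venv folder,
# while sys.base_prefix still points at the system Python installation.
if sys.prefix != sys.base_prefix:
    print("Running inside a virtual environment:", sys.prefix)
else:
    print("Running with the system Python")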

Top Tip
Pass the --system-site-packages flag before the folder name to preload all of the currently installed packages in your system Python installation into the virtual environment.

Exit a virtual environment

To leave a virtual environment, run the following command:

(<env-name>) $ deactivate

Use the Thonny editor

We recommend Thonny for editing Python code on the Raspberry Pi.

By default, Thonny uses the system Python. However, you can switch to using a Python virtual environment by clicking on the interpreter menu in the bottom right of the Thonny window. Select a configured environment or configure a new virtual environment with Configure interpreter.

The MagPi #148 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Using Python with virtual environments | The MagPi #148 appeared first on Raspberry Pi.

Bringing real-time edge AI applications to developers

In this guest post, Ramona Rayner from our partner Sony shows you how to quickly explore different models and AI capabilities, and how you can easily build applications on top of the Raspberry Pi AI Camera.

The recently launched Raspberry Pi AI Camera is an extremely capable piece of hardware, enabling you to build powerful AI applications on your Raspberry Pi. By offloading the AI inference to the IMX500 accelerator chip, more computational resources are available to handle application logic right on the edge! We are very curious to see what you will be creating and we are keen to give you more tools to do so. This post will cover how to quickly explore different models and AI capabilities, and how to easily build applications on top of the Raspberry Pi AI Camera.

If you didn’t have the chance to go through the Getting Started guide, make sure to check that out first to verify that your AI Camera is set up correctly.

Explore pre-trained models

A great way to start exploring the possibilities of the Raspberry Pi AI Camera is to try out some of the pre-trained models that are available in the IMX500 Model Zoo. To simplify the exploration process, consider using a GUI Tool, designed to quickly upload different models and see the real-time inference results on the AI Camera.

To start the GUI Tool, make sure you have Node.js installed (verify by running node --version in the terminal), then build and run the tool with the following commands in the root of the repository:

make build
./dist/run.sh

The GUI Tool will be accessible on http://127.0.0.1:3001. To see a model in action:

  • Add a custom model by clicking the ADD button located at the top right corner of the interface.
  • Provide the necessary details to add a custom network and upload the network.rpk file, and the (optional) labels.txt file.
  • Select the model and navigate to Camera Preview to see the model in action!

Here are just a few of the models available in the IMX500 Model Zoo:

Network Name            Network Type  Post Processor                          Color Format  Preserve Aspect Ratio  Network File  Labels File
mobilenet_v2            packaged      Classification                          RGB           True                   network.rpk   imagenet_labels.txt
efficientdet_lite0_pp   packaged      Object Detection (EfficientDet Lite0)   RGB           True                   network.rpk   coco_labels.txt
deeplabv3plus           packaged      Segmentation                            RGB           False                  network.rpk   —
posenet                 packaged      Pose Estimation                         RGB           False                  network.rpk   —

Exploring the different models gives you insight into the camera’s capabilities and enables you to identify the model that best suits your requirements. When you think you’ve found it, it’s time to build an application.

Building applications

Plenty of CPU is available to run applications on the Raspberry Pi while model inference is taking place on the IMX500. To demonstrate this we’ll run a Workout Monitoring sample application.

The goal is to count real-time exercise repetitions by detecting and tracking people performing common exercises like pull-ups, push-ups, ab workouts and squats. The app will count repetitions for each person in the frame, making sure multiple people can work out simultaneously and compete while getting automated rep counting.

To run the example, clone the sample apps repository and make sure to download the HigherHRNet model from the Raspberry Pi IMX500 Model Zoo.

Make sure you have OpenCV with Qt available:

sudo apt install python3-opencv

And from the root of the repository run:

python3 -m venv venv --system-site-packages
source venv/bin/activate
cd examples/workout-monitor/
pip install -e .

Switching between exercises is straightforward; simply provide the appropriate --exercise argument as one of pullup, pushup, abworkout or squat.

workout-monitor --model /path/to/imx500_network_higherhrnet_coco.rpk --exercise pullup

Note that this application is running:

  • Model post-processing to interpret the model’s output tensor into bounding boxes and skeleton keypoints
  • A tracker module (ByteTrack) to give the detected people a unique ID so that you can count individual people’s exercise reps
  • A matcher module to increase the accuracy of the tracker results, by matching people over frames so as not to lose their IDs
  • CV2 visualisation to visualise the results of the detections and see the results of the application

And all of this in real time, on the edge, while the IMX500 is taking care of the AI inference!
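
To make the rep-counting idea more concrete, here is an illustrative Python sketch of the kind of logic an application like this can apply to the keypoints returned by the pose model. This is not the sample app’s actual code; the joint angle thresholds and the hysteresis approach are assumptions chosen for illustration:

import math

def joint_angle(a, b, c):
    # Angle in degrees at point b, formed by points a-b-c (each an (x, y) pair)
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

class RepCounter:
    # Counts one repetition each time the tracked angle drops below 'down'
    # and then rises back above 'up'; the hysteresis avoids double counting.
    def __init__(self, down=70.0, up=150.0):
        self.down, self.up = down, up
        self.lowered = False
        self.reps = 0

    def update(self, angle):
        if angle < self.down:
            self.lowered = True
        elif angle > self.up and self.lowered:
            self.lowered = False
            self.reps += 1
        return self.reps

# Example: feed in the elbow angle (shoulder-elbow-wrist) of one tracked person per frame
counter = RepCounter()
for shoulder, elbow, wrist in [((0, 0), (1, 0), (2, 0)), ((0, 0), (1, 0), (0.5, 0.5)), ((0, 0), (1, 0), (2, 0))]:
    print(counter.update(joint_angle(shoulder, elbow, wrist)))

Keeping one RepCounter per tracker ID is what lets several people in the frame be counted independently.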

Now both you and the AI Camera are testing out each other’s limits. How many pull-ups can you do?

We hope by this point you’re curious to explore further; you can discover more sample applications on GitHub.

The post Bringing real-time edge AI applications to developers appeared first on Raspberry Pi.

Raspberry Pi Connect: new native panel plugin and connectivity testing

By: Paul

The latest release of Raspberry Pi OS includes an all-new, native panel plugin for Raspberry Pi Connect, our secure remote access solution that allows you to connect to your Raspberry Pi desktop and command line directly from your web browser.

Since the launch of our public beta with screen sharing back in May, and the addition of remote shell access and support for older Raspberry Pi devices in June, we’ve been working on improving support and performance on as many Raspberry Pi devices as possible — from Raspberry Pi Zero to Raspberry Pi 5 — both when using Raspberry Pi OS with desktop and our Lite version.

By default, Raspberry Pi Connect will be installed but disabled, only becoming active for your current user if you choose ‘Turn On Raspberry Pi Connect’ from the menu bar, or by running rpi-connect on from the terminal.

If this is your first time trying the service, using the menu bar will open your browser to sign up for a free Raspberry Pi Connect account; alternatively, you can run rpi-connect signin from the terminal to print a unique URL that you can open on any device you like. Once signed up and signed in, you can then connect to your device either via screen sharing (if you’re using Raspberry Pi desktop) or via remote shell from your web browser on any computer.

You can now stop and disable the service for your current user by choosing ‘Turn Off Raspberry Pi Connect’ or running rpi-connect off from the terminal.

With the latest release, 2.1.0 (available via software update), we now include a new rpi-connect doctor command that runs a series of connectivity tests to check that the service can establish connections properly. We make every effort to ensure you can connect to your device without having to make any networking changes or open ports in your firewall — but if you’re having issues, run the command like so:

$ rpi-connect doctor
✓ Communication with Raspberry Pi Connect API
✓ Authentication with Raspberry Pi Connect API
✓ Peer-to-peer connection candidate via STUN
✓ Peer-to-peer connection candidate via TURN

Full documentation for Raspberry Pi Connect can be found on our website, or via man rpi-connect in the terminal when installed on your device.

Updates on updates

We’ve heard from lots of users about the features they’d most like to see next, and we’ve tried to prioritise the things that will bring the largest improvements in functionality to the largest number of users. Keep an eye on this blog to see our next updates.

The post Raspberry Pi Connect: new native panel plugin and connectivity testing appeared first on Raspberry Pi.

Raspberry Pi AI Kit projects

By: Phil King

This #MagPiMonday, we’re hoping to inspire you to add artificial intelligence to your Raspberry Pi designs with this feature by Phil King, from the latest issue of The MagPi.

Together, Raspberry Pi’s Camera Module and the AI Kit’s powerful accelerator module open up exciting possibilities in computer vision and machine learning. The versatility of the Raspberry Pi platform, combined with these AI capabilities, makes a whole world of innovative smart projects possible. From creative experiments to practical applications like smart pill dispensers, makers are harnessing the kit’s potential to push the boundaries of AI. In this feature, we explore some standout projects, and hope they inspire you to embark on your own.

Peeper Pam boss detector

By VEEB Projects

AI computer vision can identify objects within a live camera view. In this project, VEEB’s Martin Spendiff and Vanessa Bradley have used it to detect humans in the frame, so you can tell if your boss is approaching behind you as you sit at your desk!

The project comprises two parts. A Raspberry Pi 5 equipped with a Camera Module and AI Kit handles the image recognition and also acts as a web server. This uses web sockets to send messages wirelessly to the ‘detector’ part — a Raspberry Pi Pico W and a voltmeter whose needle moves to indicate the level of AI certainty for the ID.

Having got their hands on an AI Kit — “a nice intro into computer vision” — it took the pair just three days to create Peeper Pam. “The most challenging bit was that we’d not used sockets — more efficient than the Pico constantly asking Raspberry Pi ‘do you see anything?’,” says Martin. “Raspberry Pi does all the heavy lifting, while Pico just listens for an ‘I’ve seen something’ signal.”

While he notes that you could get Raspberry Pi 5 to serve both functions, the two-part setup means you can place the camera in a different position to monitor a spot you can’t see. Also, by adapting the code from the project’s GitHub repo, there are lots of other uses if you get the AI to detect other objects. “Pigeons in the window box is one that we want to do,” Martin says.

Monster AI Pi PC

By Jeff Geerling

Never one to do things by halves, Jeff Geerling went overboard with Raspberry Pi AI Kit and built a Monster AI Pi PC with a total of eight neural processors. In fact, with 55 TOPS (trillions of operations per second), it’s faster than the latest AMD, Qualcomm, and Apple Silicon processors!

The NPU chips — including the AI Kit’s Hailo-8L — are connected to a large 12× PCIe slot card with a PEX 8619 switch capable of handling 16 PCI Express Gen 2 lanes. The card is then mounted on a Raspberry Pi 5 via a Pineboards uPCIty Lite HAT, which has an additional 12V PSU to supply the extra wattage needed for all those processors.

With a bit of jiggery-pokery with the firmware and drivers on Raspberry Pi, Jeff managed to get it working.

Car detection & tracking system

By Naveen

As a proof of concept, Japanese maker Naveen aimed to implement an automated system for identifying and monitoring cars at toll plazas to get an accurate tally of the vehicles entering and exiting.

With the extra processing power provided by a Raspberry Pi AI Kit, the project uses Edge Impulse computer vision to detect and count cars in the view from a Camera Module Wide. “We opted for a wide lens because it can capture a larger area,” he says, “allowing the camera to monitor multiple lanes simultaneously.” He also needed to train and test a YOLOv5 machine learning model. All the details can be found on the project page via the link above, which could prove useful for learning how to train custom ML models for your own AI project.

Safety helmet detection system

By Shakhizat Nurgaliyev

Wearing a safety helmet on a building site is essential and could save your life. This computer vision project uses Raspberry Pi AI Kit with the advanced YOLOv8 machine learning model to quickly and accurately identify objects within the camera view, running at an impressive inference speed of 30fps.

The project page has a guide showing how to make use of Raspberry Pi AI Kit to achieve efficient AI inferencing for safety helmet detection. This includes details of the software installation and model training process, for which the maker has provided a link to a dataset of 5000 images with bounding box annotations for three classes: helmet, person, and head.

Accelerating MediaPipe models

By Mario Bergeron

Google’s MediaPipe is an open-source framework developed for building machine learning pipelines, especially useful for working with videos and images.

Having used MediaPipe on other platforms, Mario Bergeron decided to experiment with it on a Raspberry Pi AI Kit. On the project page (linked above) he details the process, including using his Python demo application with options to detect hands/palms, faces, or poses.

Mario’s test results show how much better the AI Kit’s Hailo-8L AI accelerator module performs compared to running reference TensorFlow Lite models on Raspberry Pi 5 alone: up to 5.8 times faster. With three models running for hand and landmarks detection, the frame rate is 26–28fps with one hand detected, and 22–25fps for two.

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Raspberry Pi AI Kit projects appeared first on Raspberry Pi.

Raspberry Pi USB 3 Hub on sale now at $12

Most Raspberry Pi single-board computers, with the exception of the Raspberry Pi Zero and A+ form factors, incorporate an on-board USB hub to fan out a single USB connection from the core silicon, and provide multiple downstream USB Type-A ports. But no matter how many ports we provide, sometimes you just need more peripherals than we have ports. And with that in mind, today we’re launching the official Raspberry Pi USB 3 Hub, a high-quality four-way USB 3.0 hub for use with your Raspberry Pi or other, lesser, computer.

Key features include:

  • A single upstream USB 3.0 Type-A connector on an 8 cm captive cable
  • Four downstream USB 3.0 Type-A ports
  • Aggregate data transfer speeds up to 5 Gbps
  • USB-C socket for optional external 3A power supply (sold separately)

Race you to the bottom

Why design our own hub? Well, we’d become frustrated with the quality and price of the hubs available online. Either you pay a lot of money for a nicely designed and reliable product, which works well with a broad range of hosts and peripherals; or you cheap out and get something much less compatible, or unreliable, or ugly, or all three. Sometimes you spend a lot of money and still get a lousy product.

It felt like we were trapped in a race to the bottom, where bad quality drives out good, and marketplaces like Amazon end up dominated by the cheapest thing that can just about answer to the name “hub”.

So, we worked with our partners at Infineon to source a great piece of hub silicon, CYUSB3304, set Dominic to work on the electronics and John to work on the industrial design, and applied our manufacturing and distribution capabilities to make it available at the lowest possible price. The resulting product works perfectly with all models of Raspberry Pi computer, and it bears our logo because we’re proud of it: we believe it’s the best USB 3.0 hub on the market today.
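If you want to check that the hub has enumerated correctly once it’s plugged into your Raspberry Pi, the standard lsusb tool will show it. This is just an informal sanity check, not an official test procedure, and the exact output will vary with your kernel and attached peripherals:

lsusb -t              # tree view; ports negotiated at 5000M are running at USB 3.0 speed
lsusb | grep -i hub   # list the hubs the kernel has detected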

Grab one and have a play: we think you’ll like it.

The post Raspberry Pi USB 3 Hub on sale now at $12 appeared first on Raspberry Pi.

Meet Kari Lawler: Classic computer and retro gaming enthusiast

Meet Kari Lawler, a YouTuber with a passion for collecting and fixing classic computers, as well as retro gaming. This interview first appeared in issue 147 of The MagPi magazine.

Kari Lawler has a passion for retro tech — and despite being 21, her idea of retro fits with just about everyone’s definition, as she collects and restores old Commodore 64s, Amiga A500s, and Atari 2600s. Stuff from before even Features Editor Rob was born, and he’s rapidly approaching 40. Kari has been involved in the tech scene for ten years though, doing much more than make videos on ’80s computers.

“I got my break into tech at around 11 years old, when I hacked together my very own virtual assistant and gained some publicity,” Kari says. “This inspired me to learn more, especially everything I could about artificial intelligence. Through this, I created my very own youth programme called Youth4AI, in which I engaged with and taught thousands of young people about AI. As well as my youth programme, I was lucky enough to work on many AI projects and branch out into government advisory work as well. Culminating, at 18 years old, in being entered into the West Midlands Women in Tech Hall of Fame, with a Lifetime Achievement Award of all things.”

What’s your history with making?

“Being brought up in a family of makers, I suppose it was inevitable I got the bug as well. From an early age, I fondly remember being surrounded by arts and crafts, and attending many sessions. From sewing to pottery and basic electronics to soldering, I enjoyed everything I did. Which resulted in me creating many projects, from a working flux capacitor (well, it lit up) for school homework, to utilising what I learned to make fun projects to share with others when I volunteered at my local Raspberry Pi Jam. Additionally, at around the age of 12 I was introduced to the wonderful world of 3D printing, and I’ve utilised that knowledge in many of the projects I’ve shared online. Starting with the well-received ’24 makes for Christmas’ I did over on X [formerly Twitter] in 2017, aged 14, which featured everything from coding Minecraft to robot sprouts. And I’ve been sharing what I make over on my socials ever since.”

Fun fact: The code listings in The MagPi are inspired by magazines from the 1980s, which also printed code listings. Although you can download all of ours as well

How did you get into retro gaming?

“Both my uncle and dad had a computer store in the ’90s, the PS1/N64 era, and while they have never displayed any of it, what was left of the shop was packed up and put into storage. And, me being me, I was quite interested in learning more about what was in those boxes. Additionally, I grew up with a working BBC Micro in the house, so have fond memories playing various games on it, especially Hangman — I think I was really into spelling bees at that point. So, with that and the abundance of being surrounded by old tech, I really got into learning about the history of computing and gaming. Which led me to getting the collecting bug, and to start adding to the collection myself so I could experience more and more tech from the past.”

One of Kari’s more recent projects was fixing a PSOne, the smaller release of the original PlayStation but with a screen attached

What’s your favourite video that you’ve made?

“Now that’s a hard one to answer. But if I go back to one of my first videos, Coding games like it’s the ’80s, it’s one that resonates with how I got my first interest in programming. My dad introduced me to Usborne computer books from the 1980s, just after I started learning Python, and said ‘try and convert some of these’. I accepted that challenge, and that’s what got me fascinated with ’80s programming books, hence the video I made. With the Usborne books specifically, there is artwork and a back story for each game. And while technically not great games, I just love how they explain the code and challenge the reader to improve. For which, I’m sure some of my viewers will be pleased to hear, I have in the works more videos exploring programming books/magazine type-in listings from the ’80s.”

Recreating classic NES Tetrominoes with a 3D printer to make cool geometric magnets

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Meet Kari Lawler: Classic computer and retro gaming enthusiast appeared first on Raspberry Pi.

Raspberry Pi Touch Display 2 on sale now at $60

Way back in 2015, we launched the Raspberry Pi Touch Display, a 7″ 800×480-pixel LCD panel supporting multi-point capacitive touch. It remains one of our most popular accessories, finding a home in countless maker projects and embedded products. Today, we’re excited to announce Raspberry Pi Touch Display 2, at the same low price of $60, offering both a higher 720×1280-pixel resolution and a slimmer form factor.

A Raspberry Pi Touch Display 2 lying on a flat surface, displaying the Raspberry Pi OS Desktop

Key features of Raspberry Pi Touch Display 2 include:

  • 7″ diagonal display
  • 88mm × 155mm active area
  • 720 (RGB) × 1280 pixels
  • True multi-touch capacitive panel, supporting five-finger touch
  • Fully supported by Raspberry Pi OS
  • Powered from the host Raspberry Pi

Simple setup

Touch Display 2 is powered from your Raspberry Pi, and is compatible with all Raspberry Pi computers from Raspberry Pi 1B+ onwards, except for the Raspberry Pi Zero series, which lacks the necessary DSI port. It attaches securely to your Raspberry Pi with four screws, and ships with power and data cables compatible with both standard and mini FPC connector formats. Unlike its predecessor, Touch Display 2 integrates the display driver PCB into the display enclosure itself, delivering a much slimmer form factor.

A Raspberry Pi Touch Display 2 upright on a blue cutting board on a bench, facing away from the viewer so the reverse of the display with the host Raspberry Pi 5, ribbon data cable, and power cables are visible. A soldering iron and a benchtop multimeter with crocodile leads are also on the bench.

Like its predecessor, Touch Display 2 is fully supported by Raspberry Pi OS, which provides drivers to support five-finger touch and an on-screen keyboard. This gives you full functionality without the need for a keyboard or mouse. While it is a native portrait-format 720×1280-pixel panel, Raspberry Pi OS supports screen rotation for users who would prefer to use it in landscape orientation.

Consistent with our commitment to long product availability lifetimes, the original Touch Display will remain in production for the foreseeable future, though it is no longer recommended for new designs. Touch Display 2 will remain in production until 2030 at the earliest, allowing our embedded and industrial customers to build it into their products and installations with confidence.

We’ve never gone nine years between refreshes of a significant accessory before. But we took the time to get this one just right, and are looking forward to seeing how you use Touch Display 2 in your projects and products over the next nine years and beyond.

The post Raspberry Pi Touch Display 2 on sale now at $60 appeared first on Raspberry Pi.

Raspberry Pi product series explained

As our product line expands, it can get confusing trying to keep track of all the different Raspberry Pi boards out there. Here is a high-level breakdown of Raspberry Pi models, including our flagship series, Zero series, Compute Module series, and Pico microcontrollers.

Raspberry Pi makes computers in several different series:

  • The flagship series, often referred to by the shorthand ‘Raspberry Pi’, offers high-performance hardware, a full Linux operating system, and a variety of common ports in a form factor roughly the size of a credit card.
  • The Zero series offers a full Linux operating system and essential ports at an affordable price point in a minimal form factor with low power consumption.
  • The Compute Module series, often referred to by the shorthand ‘CM’, offers high-performance hardware and a full Linux operating system in a minimal form factor suitable for industrial and embedded applications. Compute Module models feature hardware equivalent to the corresponding flagship models but with fewer ports and no on-board GPIO pins. Instead, users should connect Compute Modules to a separate baseboard that provides the ports and pins required for a given application.

Additionally, Raspberry Pi makes the Pico series of tiny, versatile microcontroller boards. Pico models do not run Linux or allow for removable storage, but instead allow programming by flashing a binary onto on-board flash storage.

Flagship series

Model B indicates the presence of an Ethernet port. Model A indicates a lower-cost model in a smaller form factor with no Ethernet port, reduced RAM, and fewer USB ports to limit board height.

| Model | SoC | Memory | GPIO | Connectivity |
| --- | --- | --- | --- | --- |
| Raspberry Pi Model B | BCM2835 | 256MB, 512MB | 26-pin GPIO header | HDMI, 2 × USB 2.0, CSI camera port, DSI display port, 3.5mm audio jack, RCA composite video, Ethernet (100Mb/s), SD card slot, micro USB power |
| Raspberry Pi Model A | BCM2835 | 256MB | 26-pin GPIO header | HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm audio jack, RCA composite video, SD card slot, micro USB power |
| Raspberry Pi Model B+ | BCM2835 | 512MB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), microSD card slot, micro USB power |
| Raspberry Pi Model A+ | BCM2835 | 256MB, 512MB | 40-pin GPIO header | HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, microSD card slot, micro USB power |
| Raspberry Pi 2 Model B | BCM2836 (in version 1.2, switched to BCM2837) | 1GB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), microSD card slot, micro USB power |
| Raspberry Pi 3 Model B | BCM2837 | 1GB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.1, Bluetooth Low Energy (BLE), microSD card slot, micro USB power |
| Raspberry Pi 3 Model B+ | BCM2837b0 | 1GB | 40-pin GPIO header | HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, PoE-capable Ethernet (300Mb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (100Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE), microSD card slot, micro USB power |
| Raspberry Pi 3 Model A+ | BCM2837b0 | 512MB | 40-pin GPIO header | HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, 2.4/5GHz dual-band 802.11ac Wi-Fi (100Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE), microSD card slot, micro USB power |
| Raspberry Pi 4 Model B | BCM2711 | 1GB, 2GB, 4GB, 8GB | 40-pin GPIO header | 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, CSI camera port, DSI display port, 3.5mm AV jack, PoE-capable Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (120Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 3A (15W)) |
| Raspberry Pi 400 | BCM2711 | 4GB | 40-pin GPIO header | 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (120Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 3A (15W)) |
| Raspberry Pi 5 | BCM2712 (2GB version uses BCM2712D0) | 2GB, 4GB, 8GB | 40-pin GPIO header | 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, 2 × CSI camera/DSI display ports, single-lane PCIe FFC connector, UART connector, RTC battery connector, four-pin JST-SH PWM fan connector, PoE+-capable Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi 5 (300Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 5A (25W) or 5V, 3A (15W) with a 600mA peripheral limit) |

For more information about the ports on the Raspberry Pi flagship series, see the Schematics and mechanical drawings.

Zero series

Models with the H suffix have header pins pre-soldered to the GPIO header. Models that lack the H suffix do not come with header pins attached to the GPIO header; the user must solder pins manually or attach a third-party pin kit.

All Zero models have the following connectivity:

  • a microSD card slot
  • a CSI camera port (version 1.3 of the original Zero introduced this port)
  • a mini HDMI port
  • 2 × micro USB ports (one for input power, one for external devices)

| Model | SoC | Memory | GPIO | Wireless Connectivity |
| --- | --- | --- | --- | --- |
| Raspberry Pi Zero | BCM2835 | 512MB | 40-pin GPIO header (unpopulated) | none |
| Raspberry Pi Zero W | BCM2835 | 512MB | 40-pin GPIO header (unpopulated) | 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.0, Bluetooth Low Energy (BLE) |
| Raspberry Pi Zero WH | BCM2835 | 512MB | 40-pin GPIO header | 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.0, Bluetooth Low Energy (BLE) |
| Raspberry Pi Zero 2 W | RP3A0 | 512MB | 40-pin GPIO header (unpopulated) | 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE) |

Compute Module series

| Model | SoC | Memory | Storage | Form factor | Wireless Connectivity |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi Compute Module 1 | BCM2835 | 512MB | 4GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 3 | BCM2837 | 1GB | 0GB (Lite), 4GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 3+ | BCM2837b0 | 1GB | 0GB (Lite), 8GB, 16GB, 32GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 4S | BCM2711 | 1GB, 2GB, 4GB, 8GB | 0GB (Lite), 8GB, 16GB, 32GB | DDR2 SO-DIMM | none |
| Raspberry Pi Compute Module 4 | BCM2711 | 1GB, 2GB, 4GB, 8GB | 0GB (Lite), 8GB, 16GB, 32GB | dual 100-pin high-density connectors | optional: 2.4/5GHz dual-band 802.11ac Wi-Fi 5 (300Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE) |

For more information about Raspberry Pi Compute Modules, see the Compute Module documentation.

Pico microcontrollers

Models with the H suffix have header pins pre-soldered to the GPIO header. Models that lack the H suffix do not come with header pins attached to the GPIO header; the user must solder pins manually or attach a third-party pin kit.

| Model | SoC | Memory | Storage | GPIO | Wireless Connectivity |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi Pico | RP2040 | 264KB | 2MB | two 20-pin GPIO headers (unpopulated) | none |
| Raspberry Pi Pico H | RP2040 | 264KB | 2MB | two 20-pin GPIO headers | none |
| Raspberry Pi Pico W | RP2040 | 264KB | 2MB | two 20-pin GPIO headers (unpopulated) | 2.4GHz single-band 802.11n Wi-Fi (10Mb/s), Bluetooth 5.2, Bluetooth Low Energy (BLE) |
| Raspberry Pi Pico WH | RP2040 | 264KB | 2MB | two 20-pin GPIO headers | 2.4GHz single-band 802.11n Wi-Fi (10Mb/s), Bluetooth 5.2, Bluetooth Low Energy (BLE) |
| Raspberry Pi Pico 2 | RP2350 | 520KB | 4MB | two 20-pin GPIO headers (unpopulated) | none |

For more information about Raspberry Pi Pico models, see the Pico documentation.

If you’re interested in schematics, mechanical drawings, and information on thermal control, visit our documentation page.

The post Raspberry Pi product series explained appeared first on Raspberry Pi.

DEC Flip-Chip tester | The MagPi #147

A brand new issue of The MagPi is out in the wild, and one of our favourite projects we read about involved rebuilding an old PDP-9 computer with a Raspberry Pi-based device that tests hundreds of components.

Anders Sandahl loves collecting old computers: “I really like to restore them and get them going again.” For this project, he wanted to build a kind of component tester for old DEC (Digital Equipment Corporation) Flip-Chip boards before he embarked on the lengthy task of restoring his 1966 PDP-9 computer — a two-foot-tall machine with six- to seven-hundred Flip-Chip boards inside — back to working order. 

DEC’s 1966 PDP-9 computer was two feet tall
Image credit: Wikipedia

His Raspberry Pi-controlled DEC Flip-Chip tester checks the power output of these boards using relay modules and signal clips, giving accurate information about each one’s power draw and output. Once he’s confident each component is working properly, Anders can begin to assemble the historic DEC PDP-9 computer, which Wikipedia advises is one of only 445 ever produced.

Logical approach

“Flip-Chip boards from this era implement simple logical functions, comparable to one 7400-series logic circuit,” Anders explains. “The tester uses Raspberry Pi and an ADC (analogue-to-digital converter) to measure and control analogue signals sent to the Flip-Chip, and digital signals used to control the tester’s circuits. PDP-7, PDP-8 (both 8/S and Straight-8), PDP-9, and PDP-10 (with the original KA processor) all use this generation of Flip-Chips. A testing device for one will work for all of them, which is pretty useful if you’re in the business of restoring old computers.”

The Flip-Chip tester uses a Raspberry Pi 3B+, 4, or 5, together with relays, to check the signal strength of each Flip-Chip by running a current across it, so restorers don’t fit a dud component

Rhode Island Computer Museum (RICM) is where The MagPi publisher Brian Jepson and friend Mike Thompson both volunteer. Mike is part of a twelve-year project to rebuild RICM’s own DEC PDP-9 and, after working on a different Flip-Chip tester there, he got in touch with Anders about his Raspberry Pi-based version. He’s now busily helping write the user manual for the tester unit.

Warning! Frazzled Flip-Chips

Very old computers that use Flip-Chips have components operating at differing voltages, so there’s a high chance of shorting them. You need a level shifter to convert and step down voltages for safe operation.

Mike explains: “Testing early transistor-only Flip-Chips is incredibly complicated because the voltages are all negative, and the Flip-Chips must be tested with varying input voltages and different loads on the outputs.” There are no integrated circuits, just discrete transistors. Getting such an old computer running again is “quite a task” because of the sheer number of broken components on each PCB, and Flip-Chip boards hold lots of transistors and diodes, “all of which are subject to failure after 55+ years”.

Anders previously used Raspberry Pi to recreate an old PDP-8 computer

Obstacles, of course

The Flip-Chip tester features 15 level-shifter boards. These step down the voltage so components with different power outputs and draws can operate alongside each other safely and without anything getting frazzled. Anders points out the disparity between the Flip-Chips’ 0 and -3V logic voltage levels and the +10 and -15V used as supply voltages. Huge efforts went into this level conversion to make it reliable and failsafe. Anders wrote the testing software himself, and built the hardware “from scratch” using parts from Mouser and custom-designed circuit boards. The project took around two years and cost around $500, of which the relays were a major part. 

This photo from the user manual shows just how huge the PDP-9 could get

Anders favours Raspberry Pi because “it offers a complete OS, file system, and networking in a neat and well-packaged way”, and says it is “a very good software platform that you really just have to do minor tweaks on to get right”. He’s run the tester on Raspberry Pi 3B, 4, and 5. He says it should run on Raspberry Pi Zero as well, “but having Ethernet and the extra CPU power makes life easier”.

Although this is a fairly niche project for committed computer restorers, Anders believes his Flip-Chip tester can be built by anyone who can solder fairly small SMD components. Documenting the project so others can build it was quite a task, so it was quite helpful when Mike got in touch and was able to assist with the write-up. As a fellow computer restorer, Mike says the tester means getting RICM’s PDP-9 working again “won’t be such an overwhelming task. With the tester we can test and repair each of the boards instead of trying to diagnose a very broken computer as a whole.” 

The MagPi #147 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post DEC Flip-Chip tester | The MagPi #147 appeared first on Raspberry Pi.

A new release of Raspberry Pi OS

labwc – a new Wayland compositor

Today we are releasing a new version of Raspberry Pi OS. This version includes a significant change, albeit one that we hope most people won’t even notice. So we thought we’d better tell you about it to make sure you do…

First, a brief history lesson. Linux desktops, like their Unix predecessors, have for many years used the X Window system. This is the underlying technology which displays the desktop, handles windows, moves the mouse, and all that other stuff that you don’t really think about because it (usually) just works. X is prehistoric in computing terms, serving us well since the early 80s. But after 40 years, cracks are beginning to show in the design of X.

As a result, many Linux distributions are moving to a new windowing technology called Wayland. Wayland has many advantages over X, particularly performance. Under X, two separate applications help draw a window:

  • the display server creates windows on the screen and gives applications a place to draw their content
  • the window manager positions windows relative to each other and decorates windows with title bars and frames.

Wayland combines these two functions into a single application called the compositor. Applications running on a Wayland system only need to talk to one thing, instead of two, to display a window. As you might imagine, this is a much more efficient way to draw application windows.

Wayland also provides a security advantage. Under X, all applications communicate back and forth with the display server; consequently, any application can observe any other application. Wayland isolates applications at the compositor level, so applications cannot observe each other.

We first started thinking about Wayland at Raspberry Pi around ten years ago; at that time, it was nowhere near ready to use. Over the last few years, we have taken cautious steps towards Wayland. When we released Bullseye back in 2021, we switched to a new X window manager, mutter, which could also be used as a Wayland compositor. We included the option to switch it to Wayland mode to see how it worked.

With the release of Bookworm in 2023, we replaced mutter with a new dedicated Wayland compositor called wayfire and made Wayland the default mode of operation for Raspberry Pi 4 and 5, while continuing to run X on lower-powered models. We spent a lot of time optimising wayfire for Raspberry Pi hardware, but it still didn’t run well enough on older Pis, so we couldn’t switch to it everywhere.

All of this was a learning experience – we learned more about Wayland, how it interacted with our hardware, and what we needed to do to get the best out of it. As we continued to work with wayfire, we realised it was developing in a direction that would make it less compatible with our hardware. At this point, we knew it wasn’t the best choice to provide a good Wayland experience for Raspberry Pis. So we started looking at alternatives.

This search eventually led us to a compositor called labwc. Our initial experiments were encouraging: we were able to use it in Raspberry Pi OS after only a few hours of work. Closer investigation revealed labwc to be a much better fit for the Raspberry Pi graphics hardware than wayfire. We contacted the developers and found that their future direction very much aligned with our own.

labwc is built on top of a system called wlroots, a set of libraries which provide the basic functionality of a Wayland system. wlroots has been developed closely alongside the Wayland protocol. Using wlroots, anyone who wants to write a Wayland compositor doesn’t need to reinvent the wheel; we can take advantage of the experience of those who designed Wayland, since they know it best.

So we made the decision to switch. For most of this year, we have been working on porting labwc to the Raspberry Pi Desktop. This has very much been a collaborative process with the developers of both labwc and wlroots: both have helped us immensely with their support as we contribute features and optimisations needed for our desktop.

After much optimisation for our hardware, we have reached the point where labwc desktops run just as fast as X on older Raspberry Pi models. Today, we make the switch with our latest desktop image: Raspberry Pi Desktop now runs Wayland by default across all models.

When you update an existing installation of Bookworm, you will see a prompt asking to switch to labwc the next time you reboot:

We recommend that most people switch to labwc.

Existing Pi 4 or 5 Bookworm installations running wayfire shouldn’t change in any noticeable way, besides the loss of a couple of animations which we haven’t yet implemented in labwc. Because we will no longer support wayfire with updates on Raspberry Pi OS, it’s best to adopt labwc as soon as possible.

Older Pis that currently use X should also switch to labwc. To ensure backwards compatibility with older applications, labwc supports Xwayland, an X server implementation which runs on top of Wayland and provides a virtual X display. labwc uses this virtual implementation automatically for any application that isn’t compatible with Wayland. With Xwayland, you can continue to use older applications that you rely on while benefiting from the latest security and performance updates.

As with any software update, we cannot possibly test all possible configurations and applications. If you switch to labwc and experience an issue, you can always switch back to X. To do this, open a terminal window and type:

sudo raspi-config 

This launches the command-line Raspberry Pi Configuration application. Use the arrow keys to select “6 Advanced Options” and hit ‘enter’ to open the menu. Select “A6 Wayland” and choose “W1 X11 Openbox window manager with X11 backend”. Hit ‘escape’ to exit the application; when you restart your device, your desktop should restart with X.

We don’t expect this to be necessary for many people, but the option is there, just in case! Of course, if you prefer to stick with wayfire or X for any reason, the upgrade prompt offers you the option to do so – this is not a compulsory upgrade, just one that we recommend.

Improved touch screen support

While labwc is the biggest change to the OS in this release, it’s not the only one. We have also significantly improved support for using the Desktop with a touch screen. Specifically, Raspberry Pi Desktop now automatically shows and hides the virtual keyboard, and supports right-click and double-click equivalents for touch displays.

This change comes as a result of integrating the Squeekboard virtual keyboard. When the system detects a touch display, the virtual keyboard automatically displays at the bottom of the screen whenever it is possible to enter text. The keyboard also automatically hides when no text entry is possible.

This auto show and hide should work with most applications, but it isn’t supported by everything. For applications which do not support it, you can instead use the keyboard icon at the right end of the taskbar to manually toggle the keyboard on and off.

If you don’t want to use the virtual keyboard with a touch screen, or you want to use it without a touch screen and click on it with the mouse, you can turn it on or off in the Display tab of Raspberry Pi Configuration. The new virtual keyboard only works with labwc; it’s not compatible with wayfire or X.

In addition to the virtual keyboard, we added long press detection on touch screens to generate the equivalent of a right-click with a mouse. You can use this to launch context-sensitive menus anywhere in the taskbar and the file manager.

We also added double-tap detection on touch screens to generate a double-click. While this previously worked on X, it didn’t work in wayfire. Double-tap to double-click is now supported in labwc.

Better Raspberry Pi Connect integration

We’ve had a lot of very positive feedback about Raspberry Pi Connect, our remote access software that allows you to control your Raspberry Pi from any computer anywhere in the world. This release integrates Connect into the Desktop.

By default, you will now see the Connect icon in the taskbar at all times. Previously, this indicated that Connect was running. Now, the icon indicates that Connect is installed and ready to use, but is not necessarily running. Hovering the mouse over the icon brings up a tooltip displaying the current status.

You can now enable or disable Connect directly from the menu which pops up when the icon is clicked. Previously, this was an option in Raspberry Pi Configuration, but that option has been removed. Now, all the options to control Connect live in the icon menu.

If you don’t plan to use Connect, you can uninstall it from Recommended Software, or you can remove the icon from the taskbar by right-clicking the taskbar and choosing “Add / Remove Plugins…”.

Other things

This release includes some other small changes worth mentioning:

  • We rewrote the panel application for the taskbar at the top of the screen. In the previous version, even if you removed a plugin from the panel, it remained in memory. Now, when you remove a plugin, the panel never loads it into memory at all. Rather than all the individual plugins being part of a single application, each plugin is now a separate library. The panel only loads the libraries for the plugins that you choose to display on your screen. This won’t make much difference to many people, but can save you a bit of RAM if you remove several plugins. This also makes it easier to develop new plugins, both for us and third parties.
  • We introduced a new Screen Configuration tool, raindrop. This works exactly the same as the old version, arandr, and even looks similar. Under the hood, we rewrote the old application in C to improve support for labwc and touch screens. Because the new tool is native, performance should be snappier! Going forward, we’ll only maintain the new native version.

How to get it

The new release is available today in apt, Raspberry Pi Imager, or as a download from the software page on raspberrypi.com.

Black screen on boot issue (resolved)

We did have some issues on the initial release yesterday, whereby some people found that the switch to labwc caused the desktop to fail to start. Fortunately, the issue has now been fixed. It is safe to update according to the process below, so we have reinstated the update prompt described above.

If you experience problems updating and see a black screen instead of a desktop, there’s a simple fix. At the black screen, press Ctrl + Alt + F2. Authenticate at the prompt and run the following command:

sudo apt install labwc

Finally, reboot with sudo reboot. This should restore a working desktop. We apologise to anyone who was affected by this.

To update an existing Raspberry Pi OS Bookworm install to this release, run the following commands:

sudo apt update
sudo apt full-upgrade

When you next reboot, you will see the prompt described above which offers the switch to labwc.

To switch to the new Screen Configuration tool, run the following commands:

sudo apt purge arandr
sudo apt install raindrop

The new on-screen keyboard can either be installed from Recommended Software – it’s called Squeekboard – or from the command line with:

sudo apt install squeekboard wfplug-squeek

We hope you like the new desktop experience. Or perhaps more accurately, we hope you won’t notice much difference! As always, your comments are very welcome below.

The post A new release of Raspberry Pi OS appeared first on Raspberry Pi.

Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS

Following the successful launch of the Raspberry Pi AI Kit and AI Camera, we are excited to introduce the newest addition to our AI product line: the Raspberry Pi AI HAT+.

The AI HAT+ features the same best-in-class Hailo AI accelerator technology as our AI Kit, but now with a choice of two performance options: the 13 TOPS (tera-operations per second) model, priced at $70 and featuring the same Hailo-8L accelerator as the AI Kit, and the more powerful 26 TOPS model at $110, equipped with the Hailo-8 accelerator.

The 26 TOPS Raspberry Pi AI HAT+, with its Hailo accelerator at the centre of the board, mounted on a Raspberry Pi

Designed to conform to our HAT+ specification, the AI HAT+ automatically switches to PCIe Gen 3.0 mode to maximise the full 26 TOPS of compute power available in the Hailo-8 accelerator.

Unlike the AI Kit, which utilises an M.2 connector, the Hailo accelerator chip is directly integrated onto the main PCB. This change not only simplifies setup but also offers improved thermal dissipation, allowing the AI HAT+ to handle demanding AI workloads more efficiently.

What can you do with the 26 TOPS model over the 13 TOPS model? The same, but more… You can run more sophisticated neural networks in real time, achieving better inference performance. The 26 TOPS model also allows you to run multiple networks simultaneously at high frame rates. For instance, you can perform object detection, pose estimation, and subject segmentation simultaneously on a live camera feed using the 26 TOPS AI HAT+:

Both versions of the AI HAT+ are fully backward compatible with the AI Kit. Our existing Hailo accelerator integration in the camera software stack works in exactly the same way with the AI HAT+. Any neural network model compiled for the Hailo-8L will run smoothly on the Hailo-8; while models specifically built for the Hailo-8 may not work on the Hailo-8L, alternative versions with lower performance are generally available, ensuring flexibility across different use cases.

After an exciting few months of AI product releases, we now offer an extensive range of options for running inferencing workloads on Raspberry Pi. Many such workloads – particularly those that are sparse, quantised, or intermittent – run natively on Raspberry Pi platforms; for more demanding workloads, we aim to be the best possible embedded host for accelerator hardware such as our AI Camera and today’s new Raspberry Pi AI HAT+. We are eager to discover what you make with it.

The post Introducing the Raspberry Pi AI HAT+ with up to 26 TOPS appeared first on Raspberry Pi.

Raspberry Pi SSDs and SSD Kits on sale now

To help you get the best out of your Raspberry Pi 5, today we’re launching a range of Raspberry Pi-branded NVMe SSDs. They are available both on their own and bundled with our M.2 HAT+ as ready-to-use SSD Kits.

A Raspberry Pi with the M.2 HAT+ attached, carrying an NVMe SSD secured by its mounting screw

When we launched Raspberry Pi 5, almost exactly a year ago, I thought the thing people would get most excited about was the three-fold increase in performance over 2019’s Raspberry Pi 4. But very quickly it became clear that it was the other new features – the power button (!), and the PCI Express port – that had captured people’s imagination.

We’ve seen everything from Ethernet adapters, to AI accelerators, to regular PC graphics cards attached to the PCI Express port. We offer our own low-cost M.2 HAT+, which converts from our FPC standard to the standard M.2 M-key format, and there are a wide variety of third-party adapters which do basically the same thing. We’ve also released an AI Kit, which bundles the M.2 HAT+ with an AI inference accelerator from our friends at Hailo.

512GB variant
256GB variant

But the most popular use case for the PCI Express port on Raspberry Pi 5 is to attach an NVMe solid-state disk (SSD). SSDs are fast; faster even than our branded A2-class SD cards. If no-compromises performance is your goal, you’ll want to run Raspberry Pi OS from an SSD, and Raspberry Pi SSDs are the perfect choice.

A complete desktop setup: a Raspberry Pi fitted with the M.2 HAT+ and an NVMe SSD, alongside the official Raspberry Pi keyboard and mouse
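One related tweak worth knowing about: by default, Raspberry Pi 5 runs its external PCIe port at Gen 2 speeds, and you can ask the firmware to negotiate Gen 3 for extra NVMe throughput. The setting below is the commonly documented config.txt parameter for this, but treat it as an optional, at-your-own-risk experiment rather than something this post promises; add it to /boot/firmware/config.txt and reboot:

# /boot/firmware/config.txt
dtparam=pciex1_gen=3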

The entry-level 256GB drive is priced at $30 on its own, or $40 as a kit; its 512GB big brother is priced at $45 on its own, or $55 as a kit. Both densities offer minimum 4KB random read and write performance of 40k IOPS and 70k IOPS respectively. The 256GB SSD and SSD Kit are available to buy today, while the 512GB variants are available to pre-order now for shipping by the end of November.

So, there you have it: a cost-effective way to squeeze even more performance out of your Raspberry Pi 5. Enjoy!

The post Raspberry Pi SSDs and SSD Kits on sale now appeared first on Raspberry Pi.

How to get started with your Raspberry Pi AI Camera

If you’ve got your hands on the Raspberry Pi AI Camera that we launched a few weeks ago, you might be looking for a bit of help to get up and running with it – it’s a bit different from our other camera products. We’ve raided our documentation to bring you this Getting started guide. If you work through the steps here you’ll have your camera performing object detection and pose estimation, even if all this is new to you. Then you can dive into the rest of our AI Camera documentation to take things further.

A Raspberry Pi on a desk, connected to the AI Camera via its ribbon cable and ready to be set up

Here we describe how to run the pre-packaged MobileNet SSD (object detection) and PoseNet (pose estimation) neural network models on the Raspberry Pi AI Camera.

Prerequisites

We’re assuming that you’re using the AI Camera attached to either a Raspberry Pi 4 or a Raspberry Pi 5. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.

First, make sure that your Raspberry Pi runs the latest software. Run the following command to update:

sudo apt update && sudo apt full-upgrade

The AI Camera has an integrated RP2040 chip that handles neural network model upload to the camera, and we’ve released a new RP2040 firmware that greatly improves upload speed. AI Cameras shipping from now onwards already have this update, and if you have an earlier unit, you can update it yourself by following the firmware update instructions in this forum post. This should take no more than one or two minutes, but please note before you start that it’s vital nothing disrupts the process. If it does – for example, if the camera becomes disconnected, or if your Raspberry Pi loses power – the camera will become unusable and you’ll need to return it to your reseller for a replacement. Cameras with the earlier firmware are entirely functional, and their performance is identical in every respect except for model upload speed.

Install the IMX500 firmware

In addition to updating the RP2040 firmware if required, the AI camera must download runtime firmware onto the IMX500 sensor during startup. To install these firmware files onto your Raspberry Pi, run the following command:

sudo apt install imx500-all

This command:

  • installs the /lib/firmware/imx500_loader.fpk and /lib/firmware/imx500_firmware.fpk firmware files required to operate the IMX500 sensor
  • places a number of neural network model firmware files in /usr/share/imx500-models/
  • installs the IMX500 post-processing software stages in rpicam-apps
  • installs the Sony network model packaging tools

NOTE: The IMX500 kernel device driver loads all the firmware files when the camera starts, and this may take several minutes if the neural network model firmware has not been previously cached. The demos we’re using here display a progress bar on the console to indicate firmware loading progress.
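If you want to confirm that the package put everything in place before moving on, you can list the files at the paths mentioned above (purely a sanity check; the exact filenames may vary between releases):

ls /lib/firmware/imx500_loader.fpk /lib/firmware/imx500_firmware.fpk
ls /usr/share/imx500-models/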

Reboot

Now that you’ve installed the prerequisites, restart your Raspberry Pi:

sudo reboot

The Raspberry Pi AI Camera module, with its ribbon cable attached

Run example applications

Once all the system packages are updated and firmware files installed, we can start running some example applications. As mentioned earlier, the Raspberry Pi AI Camera integrates fully with libcamera, rpicam-apps, and Picamera2. This blog post concentrates on rpicam-apps, but you’ll find more in our AI Camera documentation.

rpicam-apps

The rpicam-apps camera applications include IMX500 object detection and pose estimation stages that can be run in the post-processing pipeline. For more information about the post-processing pipeline, see the post-processing documentation.

The examples on this page use post-processing JSON files located in /usr/share/rpi-camera-assets/.

Object detection

The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. imx500_mobilenet_ssd.json contains the configuration parameters for the IMX500 object detection post-processing stage using the MobileNet SSD neural network.

imx500_mobilenet_ssd.json declares a post-processing pipeline that contains two stages:

  1. imx500_object_detection, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
  2. object_detect_draw_cv, which draws bounding boxes and labels on the image

The MobileNet SSD tensor requires no significant post-processing on your Raspberry Pi to generate the final output of bounding boxes. All object detection runs directly on the AI Camera.

The following command runs rpicam-hello with object detection post-processing:

rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30

After running the command, you should see a viewfinder that overlays bounding boxes on objects recognised by the neural network:

To record video with object detection overlays, use rpicam-vid instead:

rpicam-vid -t 10s -o output.264 --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --width 1920 --height 1080 --framerate 30

You can configure the imx500_object_detection stage in many ways.

For example, max_detections defines the maximum number of objects that the pipeline will detect at any given time. threshold defines the minimum confidence value required for the pipeline to consider any input as an object.

The raw inference output data of this network can be quite noisy, so this stage also performs some temporal filtering and applies hysteresis. To disable this filtering, remove the temporal_filter config block.
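To make that concrete, here is a heavily trimmed, illustrative sketch of the shape of such a post-processing file. Only the two stage names and the max_detections, threshold, and temporal_filter parameters come from the description above; the values are placeholders, and the real imx500_mobilenet_ssd.json also specifies the network firmware file to load and further drawing options, so consult the shipped file rather than copying this sketch verbatim:

{
    "imx500_object_detection":
    {
        "max_detections": 5,
        "threshold": 0.6,
        "temporal_filter": { }
    },
    "object_detect_draw_cv": { }
}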

Pose estimation

The PoseNet neural network performs pose estimation, labelling key points on the body associated with joints and limbs. imx500_posenet.json contains the configuration parameters for the IMX500 pose estimation post-processing stage using the PoseNet neural network.

imx500_posenet.json declares a post-processing pipeline that contains two stages:

  1. imx500_posenet, which fetches the raw output tensor from the PoseNet neural network
  2. plot_pose_cv, which draws line overlays on the image

The AI Camera performs basic detection, but the output tensor requires additional post-processing on your host Raspberry Pi to produce final output.

The following command runs rpicam-hello with pose estimation post-processing:

rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30

You can configure the imx500_posenet stage in many ways.

For example, max_detections defines the maximum number of bodies that the pipeline will detect at any given time. threshold defines the minimum confidence value required for the pipeline to consider input as a body.
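As with object detection, the same post-processing file can be used with rpicam-vid if you would rather record the overlaid output than just preview it. For example, mirroring the earlier object detection recording command (the output filename here is arbitrary):

rpicam-vid -t 10s -o pose_output.264 --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --width 1920 --height 1080 --framerate 30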

Picamera2

For examples of image classification, object detection, object segmentation, and pose estimation using Picamera2, see the picamera2 GitHub repository.

Most of the examples use OpenCV for some additional processing. To install the dependencies required to run OpenCV, run the following command:

sudo apt install python3-opencv python3-munkres

Now download the picamera2 repository to your Raspberry Pi to run the examples. You’ll find example files in the root directory, with additional information in the README.md file.
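One straightforward way to download it is with git (install it first with sudo apt install git if you don’t already have it):

git clone https://github.com/raspberrypi/picamera2.git
cd picamera2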

Run the following script from the repository to run YOLOv8 object detection:

python imx500_object_detection_demo.py --model /usr/share/imx500-models/imx500_network_yolov8n_pp.rpk --ignore-dash-labels -r

To try pose estimation in Picamera2, run the following script from the repository:

python imx500_pose_estimation_higherhrnet_demo.py

To explore further, including how things work under the hood and how to convert existing models to run on the Raspberry Pi AI Camera, see our documentation.

The post How to get started with your Raspberry Pi AI Camera appeared first on Raspberry Pi.
