Most Raspberry Pi single-board computers, with the exception of the Raspberry Pi Zero and A+ form factors, incorporate an on-board USB hub to fan out a single USB connection from the core silicon, and provide multiple downstream USB Type-A ports. But no matter how many ports we provide, sometimes you just need more peripherals than we have ports. And with that in mind, today we’re launching the official Raspberry Pi USB 3 Hub, a high-quality four-way USB 3.0 hub for use with your Raspberry Pi or other, lesser, computer.
Key features include:
A single upstream USB 3.0 Type-A connector on an 8 cm captive cable
Four downstream USB 3.0 Type-A ports
Aggregate data transfer speeds up to 5 Gbps
USB-C socket for optional external 3A power supply (sold separately)
Race you to the bottom
Why design our own hub? Well, we’d become frustrated with the quality and price of the hubs available online. Either you pay a lot of money for a nicely designed and reliable product, which works well with a broad range of hosts and peripherals; or you cheap out and get something much less compatible, or unreliable, or ugly, or all three. Sometimes you spend a lot of money and still get a lousy product.
It felt like we were trapped in a race to the bottom, where bad quality drives out good, and marketplaces like Amazon end up dominated by the cheapest thing that can just about answer to the name “hub”.
So, we worked with our partners at Infineon to source a great piece of hub silicon, CYUSB3304, set Dominic to work on the electronics and John to work on the industrial design, and applied our manufacturing and distribution capabilities to make it available at the lowest possible price. The resulting product works perfectly with all models of Raspberry Pi computer, and it bears our logo because we’re proud of it: we believe it’s the best USB 3.0 hub on the market today.
Grab one and have a play: we think you’ll like it.
Meet Kari Lawler, a YouTuber with a passion for collecting and fixing classic computers, as well as retro gaming. This interview first appeared in issue 147 of The MagPi magazine.
Kari Lawler has a passion for retro tech — and though she’s only 21, her idea of retro fits with just about everyone’s definition, as she collects and restores old Commodore 64s, Amiga A500s, and Atari 2600s. Stuff from before even Features Editor Rob was born, and he’s rapidly approaching 40. Kari has been involved in the tech scene for ten years though, doing much more than make videos on ’80s computers.
“I got my break into tech at around 11 years old, when I hacked together my very own virtual assistant and gained some publicity,” Kari says. “This inspired me to learn more, especially everything I could about artificial intelligence. Through this, I created my very own youth programme called Youth4AI, in which I engaged with and taught thousands of young people about AI. As well as my youth programme, I was lucky enough to work on many AI projects and branch out into government advisory work as well. Culminating, at 18 years old, in being entered into the West Midlands Women in Tech Hall of Fame, with a Lifetime Achievement Award of all things.”
What’s your history with making?
“Being brought up in a family of makers, I suppose it was inevitable I got the bug as well. From an early age, I fondly remember being surrounded by arts and crafts, and attending many sessions. From sewing to pottery and basic electronics to soldering, I enjoyed everything I did. Which resulted in me creating many projects, from a working flux capacitor (well, it lit up) for school homework, to utilising what I learned to make fun projects to share with others when I volunteered at my local Raspberry Pi Jam. Additionally, at around the age of 12 I was introduced to the wonderful world of 3D printing, and I’ve utilised that knowledge in many of the projects I’ve shared online. Starting with the well-received ‘24 makes for Christmas’ I did over on X [formerly Twitter] in 2017, aged 14, which featured everything from coding Minecraft to robot sprouts. And I’ve been sharing what I make over on my socials ever since.”
How did you get into retro gaming?
“Both my uncle and dad had a computer store in the ’90s, the PS1/N64 era, and while they have never displayed any of it, what was left of the shop was packed up and put into storage. And, me being me, I was quite interested in learning more about what was in those boxes. Additionally, I grew up with a working BBC Micro in the house, so have fond memories playing various games on it, especially Hangman — I think I was really into spelling bees at that point. So, with that and the abundance of being surrounded by old tech, I really got into learning about the history of computing and gaming. Which led me to getting the collecting bug, and to start adding to the collection myself so I could experience more and more tech from the past.”
What’s your favourite video that you’ve made?
“Now that’s a hard one to answer. But if I go back to one of my first videos, Coding games like it’s the ’80s, it’s one that resonates with how I got my first interest in programming. My dad introduced me to Usborne computer books from the 1980s, just after I started learning Python, and said ‘try and convert some of these’. I accepted that challenge, and that’s what got me fascinated with ’80s programming books, hence the video I made. With the Usborne books specifically, there is artwork and a back story for each game. And while technically not great games, I just love how they explain the code and challenge the reader to improve. For which, I’m sure some of my viewers will be pleased to hear, I have in the works more videos exploring programming books/magazine type-in listings from the ’80s.”
The MagPi #147 out NOW!
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Way back in 2015, we launched the Raspberry Pi Touch Display, a 7″ 800×480-pixel LCD panel supporting multi-point capacitive touch. It remains one of our most popular accessories, finding a home in countless maker projects and embedded products. Today, we’re excited to announce Raspberry Pi Touch Display 2, at the same low price of $60, offering both a higher 720×1280-pixel resolution and a slimmer form factor.
Key features of Raspberry Pi Touch Display 2 include:
A 7″ diagonal touch panel with a higher 720×1280-pixel resolution
Five-finger capacitive multi-touch
Power supplied directly from your Raspberry Pi, with no separate supply required
A slimmer form factor, with the display driver PCB integrated into the display enclosure
The same $60 price as the original Touch Display
Touch Display 2 is powered from your Raspberry Pi, and is compatible with all Raspberry Pi computers from Raspberry Pi 1B+ onwards, except for the Raspberry Pi Zero series, which lacks the necessary DSI port. It attaches securely to your Raspberry Pi with four screws, and ships with power and data cables compatible with both standard and mini FPC connector formats. Unlike its predecessor, Touch Display 2 integrates the display driver PCB into the display enclosure itself, delivering a much slimmer form factor.
Like its predecessor, Touch Display 2 is fully supported by Raspberry Pi OS, which provides drivers to support five-finger touch and an on-screen keyboard. This gives you full functionality without the need for a keyboard or mouse. While it is a native portrait-format 720×1280-pixel panel, Raspberry Pi OS supports screen rotation for users who would prefer to use it in landscape orientation.
Consistent with our commitment to long product availability lifetimes, the original Touch Display will remain in production for the foreseeable future, though it is no longer recommended for new designs. Touch Display 2 will remain in production until 2030 at the earliest, allowing our embedded and industrial customers to build it into their products and installations with confidence.
We’ve never gone nine years between refreshes of a significant accessory before. But we took the time to get this one just right, and are looking forward to seeing how you use Touch Display 2 in your projects and products over the next nine years and beyond.
As our product line expands, it can get confusing trying to keep track of all the different Raspberry Pi boards out there. Here is a high-level breakdown of Raspberry Pi models, including our flagship series, Zero series, Compute Module series, and Pico microcontrollers.
Raspberry Pi makes computers in several different series:
The flagship series, often referred to by the shorthand ‘Raspberry Pi’, offers high-performance hardware, a full Linux operating system, and a variety of common ports in a form factor roughly the size of a credit card.
The Zero series offers a full Linux operating system and essential ports at an affordable price point in a minimal form factor with low power consumption.
The Compute Module series, often referred to by the shorthand ‘CM’, offers high-performance hardware and a full Linux operating system in a minimal form factor suitable for industrial and embedded applications. Compute Module models feature hardware equivalent to the corresponding flagship models but with fewer ports and no on-board GPIO pins. Instead, users should connect Compute Modules to a separate baseboard that provides the ports and pins required for a given application.
Additionally, Raspberry Pi makes the Pico series of tiny, versatile microcontroller boards. Pico models do not run Linux or allow for removable storage, but instead allow programming by flashing a binary onto on-board flash storage.
Flagship series
Model B indicates the presence of an Ethernet port. Model A indicates a lower-cost model in a smaller form factor with no Ethernet port, reduced RAM, and fewer USB ports to limit board height.
Raspberry Pi 3 Model B: HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, Ethernet (100Mb/s), 2.4GHz single-band 802.11n Wi-Fi (35Mb/s), Bluetooth 4.1, Bluetooth Low Energy (BLE), microSD card slot, micro USB power
Raspberry Pi 3 Model B+: HDMI, 4 × USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, PoE-capable Ethernet (300Mb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (100Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE), microSD card slot, micro USB power
Raspberry Pi 3 Model A+: HDMI, USB 2.0, CSI camera port, DSI display port, 3.5mm AV jack, 2.4/5GHz dual-band 802.11ac Wi-Fi (100Mb/s), Bluetooth 4.2, Bluetooth Low Energy (BLE), microSD card slot, micro USB power
Raspberry Pi 4 Model B: 2 × micro HDMI, 2 × USB 2.0, 2 × USB 3.0, CSI camera port, DSI display port, 3.5 mm AV jack, PoE-capable Gigabit Ethernet (1Gb/s), 2.4/5GHz dual-band 802.11ac Wi-Fi (120Mb/s), Bluetooth 5, Bluetooth Low Energy (BLE), microSD card slot, USB-C power (5V, 3A (15W))
All Zero models have the following connectivity:
a microSD card slot
a CSI camera port (version 1.3 of the original Zero introduced this port)
a mini HDMI port
2 × micro USB ports (one for input power, one for external devices)
Models with the H suffix have header pins pre-soldered to the GPIO header. Models that lack the H suffix do not come with header pins attached to the GPIO header; the user must solder pins manually or attach a third-party pin kit.
A brand new issue of The MagPi is out in the wild, and one of our favourite projects we read about involved rebuilding an old PDP-9 computer with a Raspberry Pi-based device that tests hundreds of components.
Anders Sandahl loves collecting old computers: “I really like to restore them and get them going again.” For this project, he wanted to build a kind of component tester for old DEC (Digital Equipment Corporation) Flip-Chip boards before he embarked on the lengthy task of restoring his 1966 PDP-9 computer — a two-foot-tall machine with six- to seven-hundred Flip-Chip boards inside — back to working order.
His Raspberry Pi-controlled DEC Flip-Chip tester checks the power output of these boards using relay modules and signal clips, giving accurate information about each one’s power draw and output. Once he’s confident each component is working properly, Anders can begin to assemble the historic DEC PDP-9 computer, which, according to Wikipedia, is one of only 445 ever produced.
Logical approach
“Flip-Chip boards from this era implement simple logical functions, comparable to one 7400-series logic circuit,” Anders explains. “The tester uses Raspberry Pi and an ADC (analogue-to-digital converter) to measure and control analogue signals sent to the Flip-Chip, and digital signals used to control the tester’s circuits. PDP-7, PDP-8 (both 8/S and Straight-8), PDP-9, and PDP-10 (with the original KA processor) all use this generation of Flip-Chips. A testing device for one will work for all of them, which is pretty useful if you’re in the business of restoring old computers.”
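To picture the measurement side of such a tester, here is a minimal sketch in Python that reads one analogue test point through an MCP3008-style SPI ADC using the spidev library. It is purely illustrative: Anders’s tester uses custom-designed boards, and the channel, reference voltage, and scaling below are assumptions rather than details of his design.

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)               # SPI bus 0, chip select 0
spi.max_speed_hz = 1_000_000

def read_channel(channel: int) -> float:
    # MCP3008 protocol: start bit, then single-ended mode plus channel number
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    raw = ((reply[1] & 3) << 8) | reply[2]   # assemble the 10-bit result
    return raw * 3.3 / 1023                  # scale against an assumed 3.3V reference

print(f"Test point voltage: {read_channel(0):.3f} V")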
Rhode Island Computer Museum (RICM) is where The MagPi publisher Brian Jepson and friend Mike Thompson both volunteer. Mike is part of a twelve-year project to rebuild RICM’s own DEC PDP-9 and, after working on a different Flip-Chip tester there, he got in touch with Anders about his Raspberry Pi-based version. He’s now busily helping write the user manual for the tester unit.
Mike explains: “Testing early transistor-only Flip-Chips is incredibly complicated because the voltages are all negative, and the Flip-Chips must be tested with varying input voltages and different loads on the outputs.” There are no integrated circuits, just discrete transistors. Getting such an old computer running again is “quite a task” because of the sheer number of broken components on each PCB, and Flip-Chip boards hold lots of transistors and diodes, “all of which are subject to failure after 55+ years”.
Obstacles, of course
The Flip-Chip tester features 15 level-shifter boards. These step down the voltage so components with different power outputs and draws can operate alongside each other safely and without anything getting frazzled. Anders points out the disparity between the Flip-Chips’ 0 and -3V logic voltage levels and the +10 and -15V used as supply voltages. Huge efforts went into this level conversion to make it reliable and failsafe. Anders wrote the testing software himself, and built the hardware “from scratch” using parts from Mouser and custom-designed circuit boards. The project took around two years and cost around $500, of which the relays were a major part.
Anders favours Raspberry Pi because “it offers a complete OS, file system, and networking in a neat and well-packaged way”, and says it is “a very good software platform that you really just have to do minor tweaks on to get right”. He’s run the tester on Raspberry Pi 3B, 4, and 5, and says it should run on Raspberry Pi Zero as well, “but having Ethernet and the extra CPU power makes life easier”.
Although this is a fairly niche project for committed computer restorers, Anders believes his Flip-Chip tester can be built by anyone who can solder fairly small SMD components. Documenting the project so others can build it was quite a task, so it was quite helpful when Mike got in touch and was able to assist with the write-up. As a fellow computer restorer, Mike says the tester means getting RICM’s PDP-9 working again “won’t be such an overwhelming task. With the tester we can test and repair each of the boards instead of trying to diagnose a very broken computer as a whole.”
The MagPi #147 out NOW!
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Today we are releasing a new version of Raspberry Pi OS. This version includes a significant change, albeit one that we hope most people won’t even notice. So we thought we’d better tell you about it to make sure you do…
First, a brief history lesson. Linux desktops, like their Unix predecessors, have for many years used the X Window system. This is the underlying technology which displays the desktop, handles windows, moves the mouse, and all that other stuff that you don’t really think about because it (usually) just works. X is prehistoric in computing terms, serving us well since the early 80s. But after 40 years, cracks are beginning to show in the design of X.
As a result, many Linux distributions are moving to a new windowing technology called Wayland. Wayland has many advantages over X, particularly performance. Under X, two separate applications help draw a window:
the display server creates windows on the screen and gives applications a place to draw their content
the window manager positions windows relative to each other and decorates windows with title bars and frames.
Wayland combines these two functions into a single application called the compositor. Applications running on a Wayland system only need to talk to one thing, instead of two, to display a window. As you might imagine, this is a much more efficient way to draw application windows.
Wayland also provides a security advantage. Under X, all applications communicate back and forth with the display server; consequently, any application can observe any other application. Wayland isolates applications at the compositor level, so applications cannot observe each other.
We first started thinking about Wayland at Raspberry Pi around ten years ago; at that time, it was nowhere near ready to use. Over the last few years, we have taken cautious steps towards Wayland. When we released Bullseye back in 2021, we switched to a new X window manager, mutter, which could also be used as a Wayland compositor. We included the option to switch it to Wayland mode to see how it worked.
With the release of Bookworm in 2023, we replaced mutter with a new dedicated Wayland compositor called wayfire and made Wayland the default mode of operation for Raspberry Pi 4 and 5, while continuing to run X on lower-powered models. We spent a lot of time optimising wayfire for Raspberry Pi hardware, but it still didn’t run well enough on older Pis, so we couldn’t switch to it everywhere.
All of this was a learning experience – we learned more about Wayland, how it interacted with our hardware, and what we needed to do to get the best out of it. As we continued to work with wayfire, we realised it was developing in a direction that would make it less compatible with our hardware. At this point, we knew it wasn’t the best choice to provide a good Wayland experience for Raspberry Pis. So we started looking at alternatives.
This search eventually led us to a compositor called labwc. Our initial experiments were encouraging: we were able to use it in Raspberry Pi OS after only a few hours of work. Closer investigation revealed labwc to be a much better fit for the Raspberry Pi graphics hardware than wayfire. We contacted the developers and found that their future direction very much aligned with our own.
labwc is built on top of a system called wlroots, a set of libraries which provide the basic functionality of a Wayland system. wlroots has been developed closely alongside the Wayland protocol. Using wlroots, anyone who wants to write a Wayland compositor doesn’t need to reinvent the wheel; we can take advantage of the experience of those who designed Wayland, since they know it best.
So we made the decision to switch. For most of this year, we have been working on porting labwc to the Raspberry Pi Desktop. This has very much been a collaborative process with the developers of both labwc and wlroots: both have helped us immensely with their support as we contribute features and optimisations needed for our desktop.
After much optimisation for our hardware, we have reached the point where labwc desktops run just as fast as X on older Raspberry Pi models. Today, we make the switch with our latest desktop image: Raspberry Pi Desktop now runs Wayland by default across all models.
When you update an existing installation of Bookworm, you will see a prompt asking to switch to labwc the next time you reboot.
We recommend that most people switch to labwc.
Existing Pi 4 or 5 Bookworm installations running wayfire shouldn’t change in any noticeable way, besides the loss of a couple of animations which we haven’t yet implemented in labwc. Because we will no longer support wayfire with updates on Raspberry Pi OS, it’s best to adopt labwc as soon as possible.
Older Pis that currently use X should also switch to labwc. To ensure backwards compatibility with older applications, the labwc desktop ships with Xwayland, an X server implementation that runs on top of Wayland. labwc starts this virtual X server automatically for any application that isn’t compatible with Wayland. With Xwayland, you can continue to use older applications that you rely on while benefiting from the latest security and performance updates.
As with any software update, we cannot possibly test all possible configurations and applications. If you switch to labwc and experience an issue, you can always switch back to X. To do this, open a terminal window and type:
sudo raspi-config
This launches the command-line Raspberry Pi Configuration application. Use the arrow keys to select “6 Advanced Options” and hit ‘enter’ to open the menu. Select “A6 Wayland” and choose “W1 X11 Openbox window manager with X11 backend”. Hit ‘escape’ to exit the application; when you restart your device, your desktop should restart with X.
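If you would rather script the change, newer builds of raspi-config expose the same setting non-interactively. Assuming your build includes the do_wayland option, this one-liner selects the X11/Openbox backend:
sudo raspi-config nonint do_wayland W1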
We don’t expect this to be necessary for many people, but the option is there, just in case! Of course, if you prefer to stick with wayfire or X for any reason, the upgrade prompt offers you the option to do so – this is not a compulsory upgrade, just one that we recommend.
Improved touch screen support
While labwc is the biggest change to the OS in this release, it’s not the only one. We have also significantly improved support for using the Desktop with a touch screen. Specifically, Raspberry Pi Desktop now automatically shows and hides the virtual keyboard, and supports right-click and double-click equivalents for touch displays.
This change comes as a result of integrating the Squeekboard virtual keyboard. When the system detects a touch display, the virtual keyboard automatically displays at the bottom of the screen whenever it is possible to enter text. The keyboard also automatically hides when no text entry is possible.
This auto show and hide should work with most applications, but it isn’t supported by everything. For applications which do not support it, you can instead use the keyboard icon at the right end of the taskbar to manually toggle the keyboard on and off.
If you don’t want to use the virtual keyboard with a touch screen, or you want to use it without a touch screen and click on it with the mouse, you can turn it on or off in the Display tab of Raspberry Pi Configuration. The new virtual keyboard only works with labwc; it’s not compatible with wayfire or X.
In addition to the virtual keyboard, we added long press detection on touch screens to generate the equivalent of a right-click with a mouse. You can use this to launch context-sensitive menus anywhere in the taskbar and the file manager.
We also added double-tap detection on touch screens to generate a double-click. While this previously worked on X, it didn’t work in wayfire. Double-tap to double-click is now supported in labwc.
Better Raspberry Pi Connect integration
We’ve had a lot of very positive feedback about Raspberry Pi Connect, our remote access software that allows you to control your Raspberry Pi from any computer anywhere in the world. This release integrates Connect into the Desktop.
By default, you will now see the Connect icon in the taskbar at all times. Previously, this indicated that Connect was running. Now, the icon indicates that Connect is installed and ready to use, but is not necessarily running. Hovering the mouse over the icon brings up a tooltip displaying the current status.
You can now enable or disable Connect directly from the menu which pops up when the icon is clicked. Previously, this was an option in Raspberry Pi Configuration, but that option has been removed. Now, all the options to control Connect live in the icon menu.
If you don’t plan to use Connect, you can uninstall it from Recommended Software, or you can remove the icon from the taskbar by right-clicking the taskbar and choosing “Add / Remove Plugins…”.
Other things
This release includes some other small changes worth mentioning:
We rewrote the panel application for the taskbar at the top of the screen. In the previous version, even if you removed a plugin from the panel, it remained in memory. Now, when you remove a plugin, the panel never loads it into memory at all. Rather than all the individual plugins being part of a single application, each plugin is now a separate library. The panel only loads the libraries for the plugins that you choose to display on your screen. This won’t make much difference to many people, but can save you a bit of RAM if you remove several plugins. This also makes it easier to develop new plugins, both for us and third parties.
We introduced a new Screen Configuration tool, raindrop. This works exactly the same as the old version, arandr, and even looks similar. Under the hood, we rewrote the old application in C to improve support for labwc and touch screens. Because the new tool is native, performance should be snappier! Going forward, we’ll only maintain the new native version.
We did have some issues on the initial release yesterday, when some people found that the switch to labwc caused the desktop to fail to start. Fortunately, the issue has now been fixed, and it is safe to update using the process below, so we have reinstated the update prompt described above.
If you experience problems updating and see a black screen instead of a desktop, there’s a simple fix. At the black screen, press Ctrl + Alt + F2. Authenticate at the prompt and run the following command:
sudo apt install labwc
Finally, reboot with sudo reboot. This should restore a working desktop. We apologise to anyone who was affected by this.
To update an existing Raspberry Pi OS Bookworm install to this release, run the following commands:
sudo apt update
sudo apt full-upgrade
When you next reboot, you will see the prompt described above which offers the switch to labwc.
To switch to the new Screen Configuration tool, run the following commands:
sudo apt purge arandr
sudo apt install raindrop
The new on-screen keyboard can either be installed from Recommended Software – it’s called Squeekboard – or from the command line with:
sudo apt install squeekboard wfplug-squeek
We hope you like the new desktop experience. Or perhaps more accurately, we hope you won’t notice much difference! As always, your comments are very welcome below.
The AI HAT+ features the same best-in-class Hailo AI accelerator technology as our AI Kit, but now with a choice of two performance options: the 13 TOPS (tera-operations per second) model, priced at $70 and featuring the same Hailo-8L accelerator as the AI Kit, and the more powerful 26 TOPS model at $110, equipped with the Hailo-8 accelerator.
Designed to conform to our HAT+ specification, the AI HAT+ automatically switches to PCIe Gen 3.0 mode to maximise the full 26 TOPS of compute power available in the Hailo-8 accelerator.
Unlike the AI Kit, which connects its accelerator via an M.2 connector, the AI HAT+ integrates the Hailo accelerator chip directly onto the main PCB. This change not only simplifies setup but also offers improved thermal dissipation, allowing the AI HAT+ to handle demanding AI workloads more efficiently.
What can you do with the 26 TOPS model over the 13 TOPS model? The same, but more… You can run more sophisticated neural networks in real time, achieving better inference performance. The 26 TOPS model also allows you to run multiple networks simultaneously at high frame rates. For instance, you can perform object detection, pose estimation, and subject segmentation simultaneously on a live camera feed using the 26 TOPS AI HAT+.
Both versions of the AI HAT+ are fully backward compatible with the AI Kit. Our existing Hailo accelerator integration in the camera software stack works in exactly the same way with the AI HAT+. Any neural network model compiled for the Hailo-8L will run smoothly on the Hailo-8; while models specifically built for the Hailo-8 may not work on the Hailo-8L, alternative versions with lower performance are generally available, ensuring flexibility across different use cases.
After an exciting few months of AI product releases, we now offer an extensive range of options for running inferencing workloads on Raspberry Pi. Many such workloads – particularly those that are sparse, quantised, or intermittent – run natively on Raspberry Pi platforms; for more demanding workloads, we aim to be the best possible embedded host for accelerator hardware such as our AI Camera and today’s new Raspberry Pi AI HAT+. We are eager to discover what you make with it.
To help you get the best out of your Raspberry Pi 5, today we’re launching a range of Raspberry Pi-branded NVMe SSDs. They are available both on their own and bundled with our M.2 HAT+ as ready-to-use SSD Kits.
When we launched Raspberry Pi 5, almost exactly a year ago, I thought the thing people would get most excited about was the three-fold increase in performance over 2019’s Raspberry Pi 4. But very quickly it became clear that it was the other new features – the power button (!), and the PCI Express port – that had captured people’s imagination.
We’ve seen everything from Ethernet adapters, to AI accelerators, to regular PC graphics cards attached to the PCI Express port. We offer our own low-cost M.2 HAT+, which converts from our FPC standard to the standard M.2 M-key format, and there are a wide variety of third-party adapters which do basically the same thing. We’ve also released an AI Kit, which bundles the M.2 HAT+ with an AI inference accelerator from our friends at Hailo.
But the most popular use case for the PCI Express port on Raspberry Pi 5 is to attach an NVMe solid-state disk (SSD). SSDs are fast; faster even than our branded A2-class SD cards. If no-compromises performance is your goal, you’ll want to run Raspberry Pi OS from an SSD, and Raspberry Pi SSDs are the perfect choice.
The entry-level 256GB drive is priced at $30 on its own, or $40 as a kit; its 512GB big brother is priced at $45 on its own, or $55 as a kit. Both densities offer minimum 4KB random read and write performance of 40k IOPS and 70k IOPS respectively. The 256GB SSD and SSD Kit are available to buy today, while the 512GB variants are available to pre-order now for shipping by the end of November.
So, there you have it: a cost-effective way to squeeze even more performance out of your Raspberry Pi 5. Enjoy!
If you’ve got your hands on the Raspberry Pi AI Camera that we launched a few weeks ago, you might be looking for a bit of help to get up and running with it – it’s a bit different from our other camera products. We’ve raided our documentation to bring you this Getting started guide. If you work through the steps here you’ll have your camera performing object detection and pose estimation, even if all this is new to you. Then you can dive into the rest of our AI Camera documentation to take things further.
Here we describe how to run the pre-packaged MobileNet SSD (object detection) and PoseNet (pose estimation) neural network models on the Raspberry Pi AI Camera.
Prerequisites
We’re assuming that you’re using the AI Camera attached to either a Raspberry Pi 4 or a Raspberry Pi 5. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.
First, make sure that your Raspberry Pi runs the latest software. Run the following command to update:
sudo apt update && sudo apt full-upgrade
The AI Camera has an integrated RP2040 chip that handles neural network model upload to the camera, and we’ve released a new RP2040 firmware that greatly improves upload speed. AI Cameras shipping from now onwards already have this update, and if you have an earlier unit, you can update it yourself by following the firmware update instructions in this forum post. This should take no more than one or two minutes, but please note before you start that it’s vital nothing disrupts the process. If it does – for example, if the camera becomes disconnected, or if your Raspberry Pi loses power – the camera will become unusable and you’ll need to return it to your reseller for a replacement. Cameras with the earlier firmware are entirely functional, and their performance is identical in every respect except for model upload speed.
Install the IMX500 firmware
In addition to updating the RP2040 firmware if required, the AI camera must download runtime firmware onto the IMX500 sensor during startup. To install these firmware files onto your Raspberry Pi, run the following command:
sudo apt install imx500-all
This command:
installs the /lib/firmware/imx500_loader.fpk and /lib/firmware/imx500_firmware.fpk firmware files required to operate the IMX500 sensor
places a number of neural network model firmware files in /usr/share/imx500-models/
installs the IMX500 post-processing software stages in rpicam-apps
installs the Sony network model packaging tools
NOTE: The IMX500 kernel device driver loads all the firmware files when the camera starts, and this may take several minutes if the neural network model firmware has not been previously cached. The demos we’re using here display a progress bar on the console to indicate firmware loading progress.
Reboot
Now that you’ve installed the prerequisites, restart your Raspberry Pi:
sudo reboot
Run example applications
Once all the system packages are updated and firmware files installed, we can start running some example applications. As mentioned earlier, the Raspberry Pi AI Camera integrates fully with libcamera, rpicam-apps, and Picamera2. This blog post concentrates on rpicam-apps, but you’ll find more in our AI Camera documentation.
The examples on this page use post-processing JSON files located in /usr/share/rpi-camera-assets/.
Object detection
The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. imx500_mobilenet_ssd.json contains the configuration parameters for the IMX500 object detection post-processing stage using the MobileNet SSD neural network.
imx500_mobilenet_ssd.json declares a post-processing pipeline that contains two stages:
imx500_object_detection, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
object_detect_draw_cv, which draws bounding boxes and labels on the image
The MobileNet SSD tensor requires no significant post-processing on your Raspberry Pi to generate the final output of bounding boxes. All object detection runs directly on the AI Camera.
The following command runs rpicam-hello with object detection post-processing:
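rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30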
You can configure the imx500_object_detection stage in many ways.
For example, max_detections defines the maximum number of objects that the pipeline will detect at any given time. threshold defines the minimum confidence value required for the pipeline to consider any input as an object.
The raw inference output data of this network can be quite noisy, so this stage also performs some temporal filtering and applies hysteresis. To disable this filtering, remove the temporal_filter config block.
Pose estimation
The PoseNet neural network performs pose estimation, labelling key points on the body associated with joints and limbs. imx500_posenet.json contains the configuration parameters for the IMX500 pose estimation post-processing stage using the PoseNet neural network.
imx500_posenet.json declares a post-processing pipeline that contains two stages:
imx500_posenet, which fetches the raw output tensor from the PoseNet neural network
plot_pose_cv, which draws line overlays on the image
The AI Camera performs basic detection, but the output tensor requires additional post-processing on your host Raspberry Pi to produce final output.
The following command runs rpicam-hello with pose estimation post-processing:
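rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30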
You can configure the imx500_posenet stage in many ways.
For example, max_detections defines the maximum number of bodies that the pipeline will detect at any given time. threshold defines the minimum confidence value required for the pipeline to consider input as a body.
Picamera2
For examples of image classification, object detection, object segmentation, and pose estimation using Picamera2, see the picamera2 GitHub repository.
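To give you a flavour of the API before you dive in, here is a minimal, illustrative sketch of driving the AI Camera from Picamera2. It assumes the IMX500 helper class bundled with recent Picamera2 releases and the MobileNet SSD model installed by imx500-all; the real demos in the repository add the post-processing needed to turn these raw tensors into boxes and labels.

from picamera2 import Picamera2
from picamera2.devices import IMX500

# Loading a model ties the camera to that network; upload happens at start-up
imx500 = IMX500("/usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk")

picam2 = Picamera2(imx500.camera_num)
picam2.start(picam2.create_preview_configuration(), show_preview=True)

while True:
    # Output tensors arrive alongside each frame's metadata
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata)
    if outputs is not None:
        print([output.shape for output in outputs])  # raw tensors, ready for post-processing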
Most of the examples use OpenCV for some additional processing. To install the dependencies required to run OpenCV, run the following command:
sudo apt install python3-opencv python3-munkres
Now download the picamera2 repository to your Raspberry Pi to run the examples. You’ll find example files in the root directory, with additional information in the README.md file.
To run YOLOv8 object detection, run the object detection demo script from the repository, pointing it at the YOLOv8 network installed by the imx500-all package (exact options may vary between repository versions):
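python imx500_object_detection_demo.py --model /usr/share/imx500-models/imx500_network_yolov8n_pp.rpk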
To try pose estimation in Picamera2, run the following script from the repository:
python imx500_pose_estimation_higherhrnet_demo.py
To explore further, including how things work under the hood and how to convert existing models to run on the Raspberry Pi AI Camera, see our documentation.
In the latest issue of The MagPi, Raspberry Pi Documentation Lead Nate Contino shows you how to attach a Raspberry Pi Pico-series device and start development with the new VS Code extension.
The following tutorial assumes that you are using a Pico-series device; some details may differ if you use a different Raspberry Pi microcontroller-based board. Pico-series devices are built around microcontrollers designed by Raspberry Pi itself. Development on the boards is fully supported with both a C/C++ SDK, and an official MicroPython port. This article talks about how to get started with the SDK, and walks you through how to build, install, and work with the SDK toolchain.
VS Code running on a Raspberry Pi computer. This IDE (integrated development environment) has an extension for Pico-series devices
To install Visual Studio Code (known as VS Code for short) on Raspberry Pi OS or Linux, run the following commands:
$ sudo apt update
$ sudo apt install code
On macOS and Windows, you can install VS Code from magpi.cc/vscode. On macOS, you can also install VS Code with brew using the following command:
$ brew install --cask visual-studio-code
The Raspberry Pi Pico VS Code extension helps you create, develop, run, and debug projects in Visual Studio Code. It includes a project generator with many templating options, automatic toolchain management, one-click project compilation, and offline documentation of the Pico SDK. The VS Code extension supports all Raspberry Pi Pico-series devices.
Creating a project in VS Code
Install dependencies
On Raspberry Pi OS and Windows no dependencies are needed.
Most Linux distributions come preconfigured with all of the dependencies needed to run the extension. However, some distributions may require additional dependencies.
The extension requires the following:
Python 3.9 or later
Git
Tar
A native C and C++ compiler (the extension supports GCC)
You can install these with:
$ sudo apt install python3 git tar build-essential
On macOS
To install all requirements for the extension on macOS, run the following command:
$ xcode-select --install
This installs the following dependencies:
Git
Tar
A native C and C++ compiler (the extension supports GCC and Clang)
Install the extension
You can find the extension in the VS Code Extensions Marketplace. Search for the Raspberry Pi Pico extension, published by Raspberry Pi. Click the Install button to add it to VS Code.
You can find the store entry at magpi.cc/vscodeext. You can find the extension source code and release downloads at magpi.cc/picovscodegit. When installation completes, check the Activity sidebar (by default, on the left side of VS Code). If installation was successful, a new sidebar section appears with a Raspberry Pi Pico icon, labelled “Raspberry Pi Pico Project”.
Create code to blink the LED on a Pico 2 board
Load and debug a project
The VS Code extension can create projects based on the examples provided by Pico Examples. For an example, we’ll walk you through how to create a project that blinks the LED on your Pico-series device:
In the VS Code left sidebar, select the Raspberry Pi Pico icon, labelled Raspberry Pi Pico Project.
Select New Project from Examples.
In the Name field, select the blink example.
Choose the board type that matches your device.
Specify a folder where the extension can generate files. VS Code will create the new project in a sub-folder of the selected folder.
Click Create to create the project. The extension will now download the SDK and the toolchain, install them locally, and generate the new project. The first project may take five to ten minutes to install the toolchain. VS Code will ask you whether you trust the authors because we’ve automatically generated the .vscode directory for you. Select yes.
The CMake Tools extension may display some notifications at this point. Ignore and close them.
Pico’s Micro USB connector makes sending code easy
On the left Explorer sidebar in VS Code, you should now see a list of files. Open blink.c to view the blink example source code in the main window. The Raspberry Pi Pico extension adds some capabilities to the status bar at the bottom right of the screen:
Compile. Compiles the sources and builds the target UF2 file. You can copy this binary onto your device to program it.
Run. Finds a connected device, flashes the code into it, and runs that code.
The extension sidebar also contains some quick access functions. Click on the Pico icon in the side menu and you’ll see Compile Project. Hit Compile Project and a terminal tab will open at the bottom of the screen displaying the compilation progress.
Compile and run blink
To run the blink example:
Hold down the BOOTSEL button on your Pico-series device while plugging it into your development device using a Micro USB cable to force it into USB Mass Storage Mode.
Press the Run button in the status bar or the Run Project button in the sidebar. You should see the terminal tab at the bottom of the window open. It will display information concerning the upload of the code. Once the code uploads, the device will reboot, and you should see the following output:
The device was rebooted to start the application.
Your blink code is now running. If you look at your device, the LED should blink twice every second.
The new Raspberry Pi Pico 2 has upgraded capabilities over the original model
Make a code change and re-run
To check that everything is working correctly, click on the blink.c file in VS Code. Navigate to the definition of LED_DELAY_MS at the top of the code:
Change the 250 (in ms, a quarter of a second) to 100 (a tenth of a second):

#ifndef LED_DELAY_MS
#define LED_DELAY_MS 100
#endif
Disconnect your device, then reconnect while holding the BOOTSEL button just as you did before.
Press the Run button in the status bar or the Run Project button in the sidebar. You should see the terminal tab at the bottom of the window open. It will display information concerning the upload of the code. Once the code uploads, the device will reboot, and you should see the following output:
The device was rebooted to start the application.
Your blink code is now running. If you look at your device, the LED should flash faster, five times every second.
Top tip
Read the online guide
This tutorial also features in the Raspberry Pi datasheet Getting started with Pico, which also includes information on using Raspberry Pi’s Debug Probe.
The summer, and Louis Wood’s internship with our Maker in Residence, was creeping to a close without his final build making it off the ground. But as if by magic, on his very last day, Louis got his handmade drone flying.
3D-printed CAD design
The journey of building a custom drone began with designing in CAD software. My initial design was fully 3D-printed with an enclosed structure and cantilevered arms to support point forces. The honeycomb lid provided cooling, and the enclosure allowed for embedded XT-60 and MR-30 connections, creating a clean and integrated look. Inside, I ensured all electrical components were rigidly mounted to avoid unwanted movement that could destabilise the flight.
Testing quickly revealed that 3D-printed frames were brittle, often breaking during crashes. Moreover, the limitations of my printer’s build area meant that motor placement was cramped. To overcome these issues, I CNC-routed a new frame from 4 mm carbon fibre, increasing the wheelbase for better stability. Using Carveco software, I generated toolpaths and cut the frame on a WorkBee CNC in our Maker Lab. After two hours, I had a sturdy, assembled frame ready for electronics.
Not one, not two, but three Raspberry Pis
For the drone’s brain, I used a Raspberry Pi Pico 2 connected to an MPU6050 gyroscope for real-time orientation data and an IBUS protocol receiver for streamlined control inputs. Initially, I faced issues with signal processing due to the delay of handling five separate PWM signals. Switching to IBUS sped up the loop frequency tenfold, which greatly improved flight response. The Pico handled PID (Proportional-Integral-Derivative) calculations for stability, and a 4-in-1 ESC managed the motor signals. The drone also carries a Raspberry Pi Zero with a Camera Module 2 and an analogue VTX for real-time FPV (first-person view) flying.
All coming together in the Maker Lab at Pi Towers
Programming was based on Tim Hanewich’s Scout flight controller code, implementing a ‘rate’ mode controller that uses PID values to maintain desired angular velocities. Fine-tuning the PID gains was essential; improper settings could lead to instability and dangerous oscillations. I followed a careful tuning process, starting with low values for each parameter and slowly increasing them.
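To make the ‘rate’ mode idea concrete, here is a minimal single-axis PID update in Python. It is illustrative only: the structure follows the textbook controller, and none of the gains or timing values come from Louis’s build or from the Scout code.

class RatePID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rate: float, measured_rate: float, dt: float) -> float:
        # Error is the stick-commanded angular velocity minus the gyro's measurement
        error = target_rate - measured_rate
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis; these gains are placeholders for illustration
roll_pid = RatePID(kp=0.5, ki=0.05, kd=0.01)
motor_correction = roll_pid.update(target_rate=30.0, measured_rate=27.5, dt=0.002)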
To make the process safer, I constructed a testing rig to isolate each axis and simulate flight conditions. This allowed me to achieve a rough tune before moving on to actual flight tests, ultimately ensuring the drone’s safe and stable performance.
AI models are adept at distinguishing one winged creature from another. This #MagPiMonday, Rosie Hattersley goes beyond the buzz.
Once attracted to liquid in a Petri dish, VespAI identifies any Asian hornets and automatically alerts researchers who trace them back to their nest
Fun fact that might get you a point in the local pub quiz: Vespa, Piaggio’s iconic scooter, is Italian for wasp, which its buzzing engine sounds a bit like. Less fun fact: nature’s counterpart to the speedy two-wheeler has an aggressive variant that has been seen in increasing numbers across western Europe, and which poses a direct threat to bees, one of its key food sources. Bees are great for biodiversity; Asian hornets (the largest type of eusocial wasp) are not. But only particular hornet species pose such a threat: most citizen reports of Asian hornets turn out to be native species, and a key challenge is making sure that existing hornet species are not destroyed on this mistaken assumption. To combat misinformation and alarm at the so-called ‘killer’ hornet (itself a subset of wasp), academics at the University of Exeter have developed VespAI, a detector that positively identifies Asian hornets and shows where new colonies of the invasive hornet Vespa velutina nigrithorax have begun to spread. The system works by drawing the insects to a pad impregnated with tasty (to wasps) smelling foodstuffs.
Dr Thomas O’Shea-Wheller, Juliet Osborne, and Peter Kennedy
Considerate response
VespAI provides a nonharmful alternative to traditional trapping surveys and can also be used for monitoring hornet behaviour and mapping distributions of both the Asian hornet (Vespa velutina) and European hornet (Vespa crabro), which is protected in some countries. “Live hornets can be caught and tracked back to the nest, which is the only effective way to destroy them,” explains the team’s research paper.
Non-Asian hornets are discounted, meaning non-invasive native species are not destroyed in a bid to eradicate the destructive newcomers
Creepy feeling
VespAI features a camera positioned above a bait station that detects insects as they land to feed and gets to work establishing whether the curious mite is, in fact, an Asian hornet. The Exeter team developed the AI algorithm in Python, using YOLO image detection models. These identify whether Asian hornets are present and, if so, send an alert to users. Raspberry Pi proved a great choice because of its compact size, ability to run the hornet recognition algorithm, real-time clock, and support for peripherals such as an external battery. The prototype bait station design was made with items that the team had at hand in their lab, including a squirrel baffle for the weather shield, Petri dishes and sponges to hold hornet attractant, and a beehive stand for the monitor to rest on.
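The team’s exact code lives on GitHub rather than being reproduced here; as an illustration of the detect-and-alert loop described above, this hedged Python sketch uses the ultralytics YOLO API, with the weights file, class name, confidence threshold, and send_alert() helper all hypothetical.

from ultralytics import YOLO

def send_alert(box) -> None:
    # Hypothetical stand-in for VespAI's researcher notification
    print("Asian hornet detected at", box.xyxy.tolist())

model = YOLO("vespai_hornets.pt")             # hypothetical trained weights
results = model("bait_station_frame.jpg")     # one frame from the camera over the bait pad

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    if label == "vespa_velutina" and float(box.conf) > 0.8:
        send_alert(box)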
The system is inactive unless an insect of the correct size is detected on the bait station
Design challenges included optimising the hornet detection algorithm for use on Raspberry Pi. “An AI algorithm may work well during training or when validated in the lab. However, field deployment is essential to expose it to potentially unforeseen scenarios that may return errors”, they note. The project also involved developing a monitor with an integrated camera, processor, and peripherals while minimising power consumption. To this end, the VespAI team is currently optimising their software to run on Raspberry Pi Zero, having watched footage of the AntVideoRecord device monitoring leafcutter ant (Acromyrmex lundi) foraging trails and been impressed by its ability to run for extended periods remotely due to its low power consumption.
Asian hornets have rapidly spread from southern Europe and are now increasing in numbers in the UK
The Raspberry Pi-enabled setup is “intended to support national surveillance efforts, thus limiting hornet incursions into new regions,” explains Dr Thomas O’Shea-Wheller, a research fellow in the university’s Environment and Sustainability Institute. He and his colleagues have been working on the AI project since 2022, conducting additional fieldwork this summer with the National Bee Unit and the Government of Jersey (Channel Islands) mapping new locations and fine-tuning its accessibility to potential users ahead of a planned commercial version.
Given Raspberry Pi’s extensive and enthusiastic user base, they hope sharing their code on GitHub will help expand the number of VespAI detection stations and improve surveillance and reporting of hornet species.
This article originally featured in issue 146 of The MagPi magazine.
The MagPi #146 out NOW!
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Learn how to recreate all of the best projects from HackSpace magazine with the Book of Making 2025, on sale now at £14.
We had so much fun making HackSpace magazine (and we hope you had fun reading it). It’s been a couple of months now since we incorporated HackSpace into a bigger, brighter, better version of The MagPi. While the standalone magazine may have gone from the shelves, it’s still on the immortal internet, where you can download every issue for free. And if that’s not enough to cater for your desire to make semi-useful things out of home electronics, microcontrollers, 3D printers and the like, there’s the Book of Making 2025 to scratch your itch, on sale today in all good bookshops and online from the Raspberry Pi Press store.
Book of Making 2025 distills the essence of HackSpace magazine down to our favourite maker projects. Whether you want to build a rocket or hot air balloon, learn 3D-printed mechanical engineering, or control the world around you with a Raspberry Pi Pico, there’s something for you here.
This book is full of projects perfect for an hour, afternoon, or weekend; be inspired by the amazing community projects you’ll find in its pages and make your own creations using step-by-step guides.
You’ll learn how to:
Work with microcontrollers and electronic circuits
Design for 2D and 3D fabrication methods and make them a reality
Create amazing things with everyday items
…and loads more!
Hackspaces and makerspaces have exploded in popularity the world over, as more and more people want to make things and learn in the process. Written by makers for makers, this book features a diverse range of projects to sink your teeth into. Grab some duct tape, fire up a microcontroller, ready a 3D printer, and hack the world around you!
The new and improved MagPi magazine now houses one of my favourite sections of the late great HackSpace magazine: Top Projects. The feature showcases five or six spectacular builds using Raspberry Pi, and this was our favourite from the latest issue.
Do you want a portable mini modular computer based on Raspberry Pi 5? If so, you’re in luck. A small outfit (boasting one-and-a-half people) called Soulcircuit is working on one right now, called the Pilet (it was called Consolo, but is now called Pilet, which according to the maker “reflects the project’s aim to appeal to a wider global audience”).
Two 8000mAh batteries give the device a claimed seven-hour lifespan, which if true will put a lot of computing power in your pocket for a productive day’s work. The basic unit houses a Raspberry Pi 5 and a touchscreen, running a full-fat version of the Linux operating system (it looks like Debian with a KDE desktop, which wouldn’t really have been practical with any model of Raspberry Pi until now).
Soulcircuit claims that the Pilet is “built by open-source software for the open-source community,” and credits KiCad, FreeCAD, Blender, Linux, Raspberry Pi, and KDE. As we’ve seen so many times though, it’s not enough just to have the right software; a device this good takes expertise and imagination, and if it can come in at the expected price of under $200, we’re sure it’ll be popular with open-source geeks who want to get work done but also quite like leaving the house every now and then.
The MagPi #146 out NOW!
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Today we’re happy to announce a couple of new accessories that we think will make a big difference to your experience with Raspberry Pi. With the latest release of Raspberry Pi OS, Raspberry Pi 5 can make use of the extra performance available from Class A2 SD cards; to help you take advantage of this, we are introducing our own range of high-quality, low-cost Raspberry Pi SD Cards. And we’re releasing the Raspberry Pi Bumper, a cute little silicone cover to protect the base and edges of your Raspberry Pi 5.
Raspberry Pi SD Cards
As many of you will know first-hand, your choice of SD card makes a huge difference to your Raspberry Pi experience. Historically, we’ve worked with our Approved Reseller partners to test and endorse third-party SD cards. But as cards have become more sophisticated, and particularly with the advent of Class A2 cards, this process has become increasingly cumbersome.
To ensure you have the best possible experience at the lowest possible cost, we’ve worked with our partner Longsys to develop a range of branded Raspberry Pi SD Cards. These Class A2 cards offer exceptional random read and write throughput across the entire range of Raspberry Pi computers, and when used on Raspberry Pi 5 support command queueing for even higher performance.
From today, our Approved Resellers will only promote Raspberry Pi SD Cards alongside Raspberry Pi computers, and you can be assured of their quality.
Class A2 SD Cards: harder, better, faster, stronger
SD cards which support Application Performance Class A2, such as our new Raspberry Pi SD Cards, enable faster read and write operations, and Raspberry Pi 5 incorporates hardware features which allow it to make the most of this extra performance. To enable these features, you will need to use the latest release of Raspberry Pi OS, or update your Raspberry Pi OS install with the latest packages. Run the following command to update:
sudo apt update && sudo apt full-upgrade
How exactly do Class A2 cards achieve better performance? Read on!
What is CQHCI?
The SD Host Controller Interface (SDHCI) specification standardises the piece of hardware (the host controller) which controls communication with the SD card. On Raspberry Pi computers, the host controller lives inside the Broadcom application processor. The Command Queueing Host Controller Interface (CQHCI) extends SDHCI with an extra set of control registers, and a CQ engine which takes over from the legacy host controller when a suitable card is detected.
Cards must be explicitly put into command queueing (CQ) mode, after which a new set of SD commands becomes available and many of the existing SD commands become invalid. The new commands decouple the request to read or write a card sector from the response of the card. Each read or write operation is tagged, with up to 32 tags in use across both reads and writes. The card can choose the order in which it returns responses to the commands, and may optionally buffer write data rather than committing it immediately to flash.
By allowing the flash controller to effectively “see into the future”, command queueing lets it hide more of the latency associated with accessing disparate NAND flash pages. This results — at least in theory — in better throughput for random I/O workloads of the sort generated by Raspberry Pi OS.
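If you want to sanity-check the effect on your own card, you can time random reads directly. The sketch below is a minimal Python example, assuming a large test file has already been written to the card at a hypothetical path. Note that unless you defeat the Linux page cache (rebooting between runs is the crude way), repeated reads may be served from RAM rather than the card; a purpose-built tool such as fio will give more trustworthy numbers.

import os, random, time

# Hypothetical pre-made test file on the SD card, e.g. 1 GiB of data
PATH = "/home/pi/testfile.bin"
BLOCK = 4096   # 4 KiB blocks, typical of random I/O from Raspberry Pi OS
N = 2048       # number of reads to time

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
# Pick N random block-aligned offsets within the file
offsets = [random.randrange(0, size // BLOCK) * BLOCK for _ in range(N)]

start = time.monotonic()
for offset in offsets:
    os.pread(fd, BLOCK, offset)
elapsed = time.monotonic() - start
os.close(fd)

print(f"{N * BLOCK / elapsed / 1e6:.1f} MB/s random read")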
CQ support first landed in eMMC devices with JEDEC specification JESD84-B51, in 2015. The SD specification equivalent landed some time later with SD v6.00, in 2017. However, at the time of the Raspberry Pi 5 launch in 2023, Linux only supported CQHCI on eMMC devices — so we were leaving performance on the table.
In early 2024 I set about implementing the missing CQ support for SD cards.
How do you use CQHCI?
Carefully parsing the SD specification led me to develop a dependency chain of optional card features, all of which need to be supported if CQ mode is to be used. These are, in order (sketched in code after the list):
The card must support Extension Register access, a generic method of accessing optional features via 512-byte pages, each tagged with a type identifying which feature extension the page refers to
The card must support the Performance Enhancement extension registers
In the Performance Enhancement extension, the card must support Write Caching
As a consequence of Write Caching support, the card must also support the Power extension registers and at a minimum support Power-Off notifications
The card must declare the queue depth required to meet Class A2 performance — from 2 to 32 tags
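To make the shape of that chain concrete, here is a rough Python model of the eligibility check. The field names are invented for illustration; the real kernel code parses raw extension register pages in C rather than anything like this dictionary.

# Hypothetical model of the CQ-mode eligibility check described above
def can_enable_cq(card):
    if not card.get("extension_registers"):
        return False    # no generic Extension Register access
    perf = card.get("performance_enhancement")
    if perf is None:
        return False    # no Performance Enhancement extension
    if not perf.get("write_cache"):
        return False    # Write Caching is a prerequisite
    power = card.get("power_extension")
    if power is None or not power.get("power_off_notification"):
        return False    # Power-Off notification is required as a consequence
    # Finally, the card must declare a usable CQ queue depth
    return 2 <= perf.get("cq_queue_depth", 0) <= 32

# Example: a card that ticks every box
print(can_enable_cq({
    "extension_registers": True,
    "performance_enhancement": {"write_cache": True, "cq_queue_depth": 16},
    "power_extension": {"power_off_notification": True},
}))  # -> True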
As Linux already supported CQ with eMMC devices, all I had to do was find out where the SD implementation differed — and there were a few such cases.
During normal operation the host operating system sometimes needs to issue “meta-ops” that don’t directly transfer data but do related things, such as recalibrating the host-to-card data path delays, requesting card status as a proxy for card removal, and doing flash maintenance operations such as signalling block discard.
For eMMC devices, most meta-ops are performed by issuing command CMD6 with a 32-bit argument. CQHCI supports injecting these while in CQ mode by designating the “top” tag in the controller for performing DCMDs (direct commands). With SD cards, however, the commands that perform meta-ops generally require us to halt the CQ engine and issue a non-CQ command using the regular SD host controller registers.
Once these differences were ironed out, I had a workable Linux driver, which was pushed to rpi-update. I created a testing thread in the forums for the adventurous, and set about evaluating my extensive collection of retail cards.
How well do SD cards implement CQ mode?
In a very hit-and-miss fashion.
SanDisk cards, in particular the Extreme and Extreme Pro product lines, were my first choice — and they performed well. However, other manufacturers’ offerings suffered from one or more common deficiencies that precluded CQ mode operation, or caused them to flake out in use:
Not declaring Power-Off notification support despite implementing the extension
Hanging on receipt of a cache flush request after CQ mode had been activated then deactivated
Cards not correctly implementing the “CQ enable” extension register bit — if I wrote a 1, I would still read back 0 forever
There was even one type of card that claimed Class A2 support but ignored any request to read the extension registers to probe for any of these features!
The Raspberry Pi kernel filters out cards that fail these tests, either during feature probing or with an explicit quirk that matches the card identifier. If you find an A2-branded card that misbehaves on a Raspberry Pi 5, then please report it in the above-mentioned forum thread.
Write caching + surprise removal = badness
One potential pitfall of enabling CQ mode is that it provides cards with new opportunities to corrupt your filesystem if power is removed unexpectedly. In CQ mode, hosts should honour the requirement to maintain the card’s power supply, and only remove it after a Power-Off notification is sent; this gives the flash controller an opportunity to commit all outstanding writes to flash. For battery-powered hosts with concealed SD slots, such as phones, that is an easy contract to fulfil — requesting device shutdown or uncovering the slot can trigger a Power-Off notification. Raspberry Pi, with its exposed SD slot and pluggable PSU, has a harder time providing this guarantee.
With multiple writes in flight, or multiple posted notifications of pending writes, we can no longer guarantee the order in which writes get committed to flash. If power is removed unexpectedly, an arbitrary collection of recent writes may not have been committed, rather than strictly the n most recent writes; this greatly complicates the task of making the filesystem resilient to corruption. The Raspberry Pi kernel sidesteps this problem by limiting the maximum number of posted writes in CQ mode to one. While in theory this may reduce sequential write throughput, the cards I’ve tested see at most a 2–3% reduction in performance.
Introducing Longsys
Once it became clear that Class A2 SD cards offer a significant performance uplift when operating in CQ mode on Raspberry Pi 5, we started discussions with several card OEMs, with the goal of qualifying a cost-effective offering that would work well across every generation of Raspberry Pi computer.
We settled on Longsys as our vendor after working with their engineering team to align their cards’ declared feature sets with our requirements; to prove that the cards were robust by automatically performing over 100,000 surprise power cycles under I/O-heavy load; and to tune the cards to get the best out of Raspberry Pi 5.
While best performance on Raspberry Pi 5 was our primary goal, the non-CQ performance of these cards is still stonkingly fast, and you will generally see a significant uplift in performance on older Raspberry Pi computers.
Raspberry Pi Bumper for Raspberry Pi 5
Today’s other accessory launch brings you the Raspberry Pi Bumper: the simple casing solution you never knew you needed, and already a firm favourite here at Pi Towers. It’s a snap-on silicone base that unfussily protects the base and edges of your Raspberry Pi 5, and the surface you’re putting it down on, and also makes it easier to use the power button. It’s compatible with the Raspberry Pi Active Cooler, and will set you back a meagre $3.
And there you are. Two unglamorous, yet excellent, accessories that we wonder how we managed without. We hope you like them.
This canny way to transfer analogue film to digital was greatly improved by using Raspberry Pi, as Rosie Hattersley discovered in issue 145 of The MagPi.
Gugusse is a French term meaning something ‘quite flimsy’, explains software engineer and photography fan Denis-Carl Robidoux. The word seemed apt to describe the 3D-printed project: a “flimsy and purely mechanical machine to transfer film.”
The Gugusse Roller uses a Raspberry Pi HQ Camera and Raspberry Pi 4 to import and digitise analogue film footage. Image credit: Al Warner
Denis-Carl created Gugusse as a volunteer at the Montreal museum where his girlfriend works. He was “their usual pro bono volunteer guy for anything special with media, [and] they asked me if I could transfer some rolls of 16mm film to digital.” Dissatisfied with the resulting Gugusse Roller mechanism, he eventually decided to set about improving upon it with a little help from Raspberry Pi. Results from the Gugusse Roller’s digitisation process can be admired on YouTube.
New and improved
Denis-Carl brought decades of Linux coding (“since the era when you had to write your own device drivers to make your accessories work with it”), and a career making drivers for jukeboxes and high-level automation scripts, to the digitisation conundrum. Raspberry Pi clearly offered potential: “Actually, there was no other way to get a picture of this quality at this price level for this DIY project.” However, the Raspberry Pi Camera Module v2 Denis-Carl originally used wasn’t ideal for the macro photography approach and alternative lenses involved in transferring film. The module design was geared up for a lens in close proximity to the camera sensor, and Bayer mosaics aligned for extremities of incoming light were at odds with his needs. “But then came Raspberry Pi HQ Camera, which didn’t have the Bayer mosaic alignment issue and was a good 12MP, enough to perform 4K scans.”
Gugusse Roller fan Al Warner built his own version Image credit: Al Warner
Scene stealer
Denis-Carl always intended the newer Gugusse Roller design to be sprocketless, since this would allow it to scan any film format. This approach meant the device needed to detect the film’s sprocket holes optically: “I managed this with an incoming light at 45 degrees and a light-sensitive resistor placed at 45 degrees but in the opposite direction.” It was “a Eureka moment” when he finally made it work. Once the tension is set, the film scrolls smoothly past the HQ Camera, which captures each frame as a DNG file once the system detects the controlling arms are correctly aligned and after an interval for any vibration to dissipate.
Version 3.1 of Denis-Carl’s Gugusse Roller PCB
The Gugusse Roller uses Raspberry Pi 4 to control the HQ Camera, three stepper motors, and three GPIO inputs. So far it has scanned thousands of rolls of film, including trailers of classics such as Jaws, and other, lesser-known treasures. The idea has also caught the imagination of more than a dozen followers who have gone on to build their own Gugusse Roller using Denis-Carl’s instructions — check out other makers’ builds on Facebook.
Denis-Carl Robidoux beside his Gugusse Roller film digitiser
Who needs to laboriously shuffle their own deck when Raspberry Pi can do it for you? In the latest issue of The MagPi, our recent intern, Louis Wood, tells us all about the nifty LEGO card shuffler he designed during his summer spent in the Maker Lab.
Maker and Cambridge engineering undergraduate Louis Wood first encountered Raspberry Pi while looking for a low-cost microcontroller that could be programmed with Python for an A-level project. Inspiring plenty of envy, he’s just spent six whole weeks ensconced in Raspberry Pi HQ’s very own maker space building a range of Raspberry Pi projects, including a LEGO card shuffler. Basing it around the LEGO Build HAT helped him evolve and improve upon a design that he and his Queens’ College Cambridge friends, Lucas Hoffman and Emily Wang, devised. The card shuffler idea was their response to a design and build challenge, based around a LEGO NXT system, to demonstrate an aspect of engineering science. The dual-motor design was in need of some reworking, which Louis undertook while working as an intern at Raspberry Pi Towers alongside Maker in Residence Toby Roberts.
A MicroPython script takes shuffling out of your hands by spinning these wheels alternately and pushing cards into the shuffled pile at random
Quirky and cool
Louis used a LEGO Spike education kit with Raspberry Pi’s LEGO Build HAT to create a simpler but more robust design. The kit includes cycle motors, which he attached directly to the Build HAT’s four connectors. “The Build HAT made it pretty easy to pick up all the motors and plug them in.” He then programmed Raspberry Pi 4 over SSH, “which made it easy to tweak code.”
The MicroPython code produces either a one or a zero and spins either the left or right motor accordingly: “When the motor turns on, the wheel spins a few cards into the middle.” The motors run on a loop, each powering on for a second or two, pushing cards from each side and randomly shuffling them into a central pile until the Build HAT colour sensors detect the black base of either card bay. The card shuffler then skips that side and only runs the opposite motor for a while to clear the rest of the cards. Once it notices it’s done shuffling, it stops.
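The article doesn’t reproduce Louis’ code, but the core loop is easy to picture. Here is a minimal sketch using the buildhat Python library normally used to drive the Build HAT from Raspberry Pi OS; the port letters, speeds, and timings are guesses, and the real script will differ.

import random
from buildhat import Motor, ColorSensor

left, right = Motor('A'), Motor('B')
left_base, right_base = ColorSensor('C'), ColorSensor('D')

def bay_empty(sensor):
    # The black crate base only becomes visible once that bay runs out of cards
    return sensor.get_color() == 'black'

while not (bay_empty(left_base) and bay_empty(right_base)):
    # Pick a side at random, skipping a side whose bay is already empty
    motor = left if random.getrandbits(1) else right
    if motor is left and bay_empty(left_base):
        motor = right
    elif motor is right and bay_empty(right_base):
        motor = left
    # Spin briefly, flicking a few cards past the barrier into the central pile
    motor.run_for_seconds(1.5, speed=50)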
The build took a couple of hours, and Louis spent a similar time coding and tweaking the build. “The hardest thing was making it so that it doesn’t just spit out the whole side [of cards] at once,” he says.
His simple-but-effective barrier is positioned such that only a single card at a time can (usually) be shuffled along by the motor. The setup doesn’t always work flawlessly, occasionally requiring the user to deftly flick a card back into place, but Louis aims to improve the design by moving the two card-holding sides further apart to prevent blockages.
The dynamic Raspberry Pi 4 × Build HAT duo at work
The Build HAT came into its own thanks to the colour sensors that Raspberry Pi 4 used to detect whether there were still cards awaiting shuffling. The white background of the cards contrasted with the black base of the crates he’d created, which was visible only when the stack of cards was depleted. Other card decks — such as Uno ones, which usually have a black background — could be shuffled too, as long as the card holder base colour was changed.
Makers gonna make
Two weeks into his internship, Louis had already created and written about a ‘Pixie’ tube clock and had been building “a Raspberry Pi mount and cooling system for one of the engineers upstairs, so you can sort of be running eight Raspberry Pis at the same time, fans, and an enclosure,” as well as a remote control based on the brand-new Pico 2.
Louis’ Pixie clock (Pi-powered Nixie clock) has since been repurposed as a TikTok ‘likes’ counter
Given this prodigious rate of design, we asked whether an engineering career or one as a maker is in his future. “I’d like to be a maker, but I think it’s quite hard…” Louis said. “To be a maker YouTuber takes a lot of work and time, I think… probably a bit risky.”
Read the full story, including some extra tips and projects from maker Louis Wood, in The MagPi #146.
People have been using Raspberry Pi products to build artificial intelligence projects for almost as long as we’ve been making them. As we’ve released progressively more powerful devices, the range of applications that we can support natively has increased; but in any generation there will always be some workloads that require an external accelerator, like the Raspberry Pi AI Kit, which we launched in June.
The AI Kit is an awesomely powerful piece of hardware, capable of performing thirteen trillion operations per second. But it is only compatible with Raspberry Pi 5, and requires a separate camera module to capture visual data. We are very excited therefore to announce a new addition to our camera product line: the Raspberry Pi AI Camera.
The AI Camera is built around a Sony IMX500 image sensor with an integrated AI accelerator. It can run a wide variety of popular neural network models, with low power consumption and low latency, leaving the processor in your Raspberry Pi free to perform other tasks.
Key features of the Raspberry Pi AI Camera include:
12 MP Sony IMX500 Intelligent Vision Sensor
Sensor modes: 4056×3040 at 10fps, 2028×1520 at 30fps
1.55 µm × 1.55 µm cell size
78-degree field of view with manually adjustable focus
Integrated RP2040 for neural network and firmware management
The AI Camera can be connected to all Raspberry Pi models, including Raspberry Pi Zero, using our regular camera ribbon cables.
Using Sony’s suite of AI tools, existing neural network models using frameworks such as TensorFlow or PyTorch can be converted to run efficiently on the AI Camera. Alternatively, new models can be designed to take advantage of the AI accelerator’s specific capabilities.
Under the hood
To make use of the integrated AI accelerator, we must first upload a model. On older Raspberry Pi devices this process uses the I2C protocol, while on Raspberry Pi 5 we are able to use a much faster custom two-wire protocol. The camera end of the link is managed by an on-board RP2040 microcontroller; an attached 16MB flash device caches recently used models, allowing us to skip the upload step in many cases.
Once the sensor has started streaming, the IMX500 operates as a standard Bayer image sensor, much like the one on Raspberry Pi Camera Module 3. An integrated Image Signal Processor (ISP) performs basic image processing steps on the sensor frame (principally Bayer-to-RGB conversion and cropping/rescaling), and feeds the processed frame directly into the AI accelerator. Once the neural network model has processed the frame, its output is transferred to the host Raspberry Pi together with the Bayer frame over the CSI-2 camera bus.
Integration with Raspberry Pi libcamera
A key benefit of the AI Camera is its seamless integration with our Raspberry Pi camera software stack. Under the hood, libcamera processes the Bayer frame using our own ISP, just as it would for any sensor.
We also parse the neural network results to generate an output tensor, and synchronise it with the processed Bayer frame. Both of these are returned to the application during libcamera’s request completion step.
The Raspberry Pi camera frameworks — Picamera2 and rpicam-apps, and indeed any libcamera-based application — can retrieve the output tensor, correctly synchronised with the sensor frame. Here’s an example of an object detection neural network model (MobileNet SSD) running under rpicam-apps and performing inference on a 1080p video at 30fps.
This demo uses the postprocessing framework in rpicam-apps to generate object bounding boxes from the output tensor and draw them on the image. This stage takes no more than 300 lines of code to implement. An equivalent application built using Python and Picamera2 requires many fewer lines of code.
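For a flavour of the Picamera2 route, here is a rough sketch of reading the output tensor from each frame’s metadata. It assumes the IMX500 helper class that ships with recent Picamera2 releases, and the model path is illustrative; see our model zoo and example scripts for real file names and complete applications.

from picamera2 import Picamera2
from picamera2.devices import IMX500

# Illustrative path: the model zoo installs firmware files like this one
imx500 = IMX500("/usr/share/imx500-models/imx500_network_ssd_mobilenetv2.rpk")
picam2 = Picamera2(imx500.camera_num)
picam2.start(picam2.create_preview_configuration())

while True:
    # The output tensor arrives synchronised with the frame it was computed from
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata)
    if outputs is not None:
        # For an SSD-style detector: boxes, scores, and class indices
        print([output.shape for output in outputs])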
Another example below shows a pose estimation neural network model (PoseNet) performing inference on a 1080p video at 30fps.
Although these examples were recorded using a Raspberry Pi 4, they run with the same inferencing performance on a Raspberry Pi Zero!
Together with Sony, we have released a number of popular visual neural network models optimised for the AI Camera in our model zoo, along with visualisation example scripts using Picamera2.
Which product should I buy?
Should you buy a Raspberry Pi AI Kit, or a Raspberry Pi AI Camera? The AI Kit has higher theoretical performance than the AI Camera, and can support a broader range of models, but is only compatible with Raspberry Pi 5. The AI Camera is more compact, has a lower total cost if you don’t already own a camera, and is compatible with all models of Raspberry Pi.
Ultimately, both products provide great acceleration performance for common models, and both have been optimised to work smoothly with our camera software stack.
Getting started and going further
Check out our Getting Started Guide. There you’ll find instructions on installing the AI Camera hardware, setting up the software environment, and running the examples and neural networks in our model zoo.
Sony’s AITRIOS Developer site has more technical resources on the IMX500 sensor, in particular the IMX500 Converter and IMX500 Package documentation, which will be useful for users who want to run custom-trained networks on the AI Camera.
We’ve been inspired by the incredible AI projects you’ve built over the years with Raspberry Pi, and your hard work and inventiveness encourages us to invest in the tools that will help you go further. The arrival of first the AI Kit, and now the AI Camera, opens up a whole new world of opportunities for high-resolution, high-frame-rate, high-quality visual AI: we don’t know what you’re going to build with them, but we’re sure it will be awesome.
In the latest issue of The MagPi, out today, we talk to David Miles, who designed a Raspberry Pi Pico kit to bring junked joysticks back to life.
One of the many great things about the EMF Camp events is the swap shop tent, where all kinds of things are brought to be sold and exchanged. On one of my many visits there, I found a slightly worn (but very heavy) pair of joysticks which looked as if they had been part of a professional simulator at some point. In this article, I’ll talk about how I reverse-engineered them to create a fully fledged flight simulator controller. Along the way, I happened to create a Pico program that makes it easy to use any input device as a USB joystick.
Figure 1: The joysticks as I found them in the EMF Camp swap shop
Getting started
Figure 1 shows the joysticks as I found them. Ultra Electronics is a manufacturer of devices for the Ministry of Defence in the UK, so this looked like something interesting. My hope was to try and get them working so I could use them with a flight simulator for a plane that used joysticks like these. This meant I had two challenges:
Getting data out of the joystick
Making something which connects to a PC as a game controller
Follow the signals
To discover how to get data out of the joystick, I had a look at the wires that came out of it. The main unit has two plugs on the end of a (surprisingly long) wire — what looks like a nine-pin RS232 serial connector, and a 15-pin game port connector. The secondary joystick (the one on the right in Figure 1) has a 25-pin connector which plugs into the primary one, which suggests that it just contains switches, and that the first is the brains of the operation. The connector types fit the late-’90s feel of the hardware and give clues as to how we can talk to it, but we can take a look inside to confirm some assumptions.
Figure 2 shows the view inside the device. There are four screws holding the top part of the joystick inside the case. After removing these, the whole stick and gimbal assembly lifts out and we can see this circuit board. Some of the components on here give us some pointers on how this works.
Figure 2: The PCB at the heart of the joystick. Look at all those carefully adjusted and sealed calibration resistors!
There’s a chip with a label covering it. This is normally a sign of some sort of microcontroller, and indicates that there’s more going on in here than simple switches and potentiometers. This is backed up by some chips with ‘ADC’ on them — this stands for ‘Analogue to Digital Converter’, a part used to turn an analogue value (e.g. the position of a potentiometer in a joystick) into a digital value that can be sent to a computer.
Finally, in the middle, there is a chip with ‘MAX232’ written on it. This family of chips is used to convert the 5V logic levels you often see in microelectronics to the higher voltage levels used in RS232 communications. This is another sign that the nine-pin connector is probably a regular RS232 connector I can use to get data from the joystick.
This leaves the 15-pin connector. Is it a game port? Game port connectors were often used in the ’90s to connect a joystick to your PC (confusingly, often using a socket on your sound card). They allowed you to connect two joysticks to your computer, each with two buttons. We can see that this joystick has more than four buttons — doing a quick count, we seem to have about 30, so maybe it’s being used for something else.
Figure 3: All the wires have unique colour combinations
If we pull the back off the connector, we can see that only two pins are connected, using red and black wires. If we check those pins against the game port specification, we see that they are ground and 5V.
This fits in with our earlier findings. There’s no way to power a device over a regular serial connection, but using a game port alongside a serial connector would allow you to connect to a PC and transfer power and data.
Plugging it in
Now that we think we know how this works, we can have a go at connecting it to a computer. To do this, I used a regular USB-to-serial converter and a power adapter connected to a USB power bank.
After connecting power, I could see a steady 100mA power draw, which felt reassuring, so I hooked up the serial connection and tried a few different configurations to see if I could receive anything sensible. Eventually, I found one which gave me a regular stream of data which changed as I moved the controls and pressed buttons. It turns out that these joysticks work at 9600 baud (roughly 960 characters per second) and send 8-bit data.
Figure 4: This is how Windows sees the controller. The Raspberry Pi has a similar interface for joysticks
Now that I had data, the next thing was to decode it. I started by writing a Python program to read from the joystick and write it out. By starting to build something on the PC, I would have something I could then use on a Pico with CircuitPython.
Working through the controls on the joystick, I found that each button corresponds to a single bit in the data, and each axis (the X and Y movements of the sticks) corresponds to a byte in the output, ranging from -128 to 127 and encoded using two’s complement.
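A first-pass decoder on the PC might look something like the sketch below, using pyserial. The packet length and field offsets are invented placeholders standing in for whatever the real stream turned out to contain.

import serial  # pyserial

PACKET_LEN = 8  # placeholder: the real length comes from watching the stream

def to_signed(byte):
    # Two's complement: values above 127 represent negatives
    return byte - 256 if byte > 127 else byte

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, bytesize=8)
while True:
    packet = port.read(PACKET_LEN)
    x = to_signed(packet[0])   # stick X axis, -128..127
    y = to_signed(packet[1])   # stick Y axis, -128..127
    buttons = int.from_bytes(packet[2:6], "little")  # one bit per button
    print(x, y, f"{buttons:032b}")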
Making a game controller
Now that I could read from the joystick, I needed to present it to the computer as a proper game controller that could be used by programs such as Microsoft Flight Simulator. Figure 4 shows what I was aiming for: I wanted all the abilities of the joystick to be exposed for use on the PC.
A Raspberry Pi Pico was the perfect thing to use here as it has great IO options and support for working in Python. I needed to configure it to announce itself as a game controller over USB, and then adapt my code to run on the Pico and send the appropriate messages to the PC.
Fortunately, CircuitPython has great support for making custom devices like this. First, we need to define how we want our device to appear over USB, and write some Python to make sure this is registered whenever the Pico connects over USB.
Figure 5: The system laid out on the desk. At the top is the power supply provided by a USB battery, at the left is a Raspberry Pi Pico connected to the PC using USB, in the middle is the RS232 interface, and on the right is the game port adapter
This involves digging into the world of USB HID descriptors. HID stands for Human Interface Device, and covers a wide range of gadgets which you can connect over USB, including mice, keyboards, and game controllers, even going as far as volume controls and exercise equipment.
These descriptors contain information about the device, such as what class it is and which types of data it will send to the PC. By default, CircuitPython supports only a mouse and keyboard. We need to send a descriptor which contains the different axes, buttons, and hats the controller supports.
The full specification for this is on the official www.usb.org site, but we can extend the Adafruit example to increase the number of buttons to 32, and to add an additional entry for the joystick’s hats.
Putting the boot in
When building CircuitPython projects, you will normally write Python in the code.py file. Code in this file runs after the Pico has initialised USB, so it’s too late to change anything about the device. To do this, we need to look at the lesser-known boot.py file. This file runs when CircuitPython first boots, and before USB is configured, which means that you can make changes to the USB devices that are registered.
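As a hedged illustration, a boot.py along the following lines registers a gamepad before USB comes up. The descriptor is a trimmed variant of Adafruit’s published custom-gamepad example (16 buttons and four axes rather than the 32 buttons and hats described above), so treat it as a starting point rather than the actual code from this project.

# boot.py — runs before USB is configured, so new devices can be registered
import usb_hid

# Trimmed gamepad HID report descriptor: 16 buttons plus four 8-bit axes
GAMEPAD_DESCRIPTOR = bytes((
    0x05, 0x01,              # Usage Page (Generic Desktop)
    0x09, 0x05,              # Usage (Game Pad)
    0xA1, 0x01,              # Collection (Application)
    0x85, 0x04,              #   Report ID (4)
    0x05, 0x09,              #   Usage Page (Button)
    0x19, 0x01, 0x29, 0x10,  #   Buttons 1-16
    0x15, 0x00, 0x25, 0x01,  #   Logical range 0-1
    0x75, 0x01, 0x95, 0x10,  #   16 fields of 1 bit each
    0x81, 0x02,              #   Input (Data, Variable, Absolute)
    0x05, 0x01,              #   Usage Page (Generic Desktop)
    0x15, 0x81, 0x25, 0x7F,  #   Logical range -127..127
    0x09, 0x30, 0x09, 0x31,  #   Usage (X), Usage (Y)
    0x09, 0x32, 0x09, 0x35,  #   Usage (Z), Usage (Rz)
    0x75, 0x08, 0x95, 0x04,  #   4 fields of 8 bits each
    0x81, 0x02,              #   Input (Data, Variable, Absolute)
    0xC0,                    # End Collection
))

gamepad = usb_hid.Device(
    report_descriptor=GAMEPAD_DESCRIPTOR,
    usage_page=0x01,          # Generic Desktop
    usage=0x05,               # Game Pad
    report_ids=(4,),          # matches the Report ID in the descriptor
    in_report_lengths=(6,),   # 2 button bytes + 4 axis bytes
    out_report_lengths=(0,),  # the host sends us nothing back
)
# Expose only the gamepad (keyboard and mouse could be added to the tuple too)
usb_hid.enable((gamepad,))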
Figure 6: The EMF joystick with its virtual counterpart. This looks like a close match to the real thing
The program in the code.py file reads data packets from the joystick, decodes values from the data, and sends USB messages corresponding to the state of the joystick. The present version is in two parts. The first reads the data into a buffer, and the second pulls values out of this buffer and sends USB messages.
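Condensed, and assuming the boot.py sketch above plus the placeholder packet layout from earlier, code.py could be as simple as this; the UART pins are also placeholders.

# code.py — read joystick packets over UART and forward them as HID reports
import board
import busio
import usb_hid

uart = busio.UART(board.GP0, board.GP1, baudrate=9600)  # via the MAX232 module
gamepad = next(d for d in usb_hid.devices if d.usage == 0x05)  # our Game Pad

while True:
    packet = uart.read(8)  # placeholder packet length, as before
    if packet is None or len(packet) < 8:
        continue
    # Report layout must match the descriptor: 2 button bytes, then X, Y, Z, Rz
    report = bytes((packet[2], packet[3], packet[0], packet[1], 0, 0))
    gamepad.send_report(report)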
Bringing it all together
Finally, we need to integrate our serial-reading code with our game controller code. As the Pico only supports 3.3V logic levels, we can use our own MAX232 to map the RS232 voltages to something the Pico can handle. To do this, I used an integrated module with a DB-9 connector on one side and a pin header on the other, connected to the Pico. I could also power the joystick from the Pico’s 5V supply.
Now that I had the two halves of the project working together, it was a case of going through the buttons on the device and mapping them across to the controls defined for the controller. This was one of the more tedious parts of the project, but it was a good chance to make sure everything was working correctly.
Figure 7: The joystick in action
Finally, I’ve got everything I need to use the joystick to control a plane running on the PC. Here I’m using it to control the McDonnell Douglas F/A-18 included with Flight Simulator, which has a similar stick to this, but with two hats swapped. I still don’t know precisely which plane (if any) this joystick is based on. The AV-8 Harrier has those hats in the correct order, but has an extra button just below them on the face of the joystick.
I’ve not been able to find any images of joysticks similar to the second stick anywhere, so if you have any idea where that could have come from, please let me know!
Read the full story, including some extra tips from the maker David Miles, in The MagPi #146.
The Hazard3 RISC-V cores on the RP2350 were designed by Raspberry Pi’s own Luke Wren in his spare time — and as they’re open source, you can download the design files yourself and start poking around in the very same chip that will eventually be in use on millions of units out in the wild. As Eben Upton puts it: “In adding Hazard3 to RP2350, we’re aiming to give software developers a chance to experiment with the RISC-V architecture in a stable, well-supported environment, and to popularise Hazard3 as a clean, open core, suitable for verbatim use in other devices, or as a basis for further development.” Luke’s reflections first appeared in issue 145 of The MagPi.
I’ve been doing logic design in my spare time since I was a student. It’s highly addictive, and I think it’s more accurate to say I’m a hobbyist who works in chip design than a chip designer with a hobby! Hazard3 is an open-source processor design that anyone can put in their chip and use to run RISC-V code anywhere. You can also run it on an FPGA board, or run the simulator on your own machine. It’s all built using open-source tools like yosys, nextpnr and gtkwave.
The best way to get started is to get an FPGA board and just get hacking. Writing RTL [register transfer level] is a bit mind-bending at first — you can think of it like a C program where all of the statements execute at once, rather than sequentially — but that kick of seeing your own hardware come to life keeps you going. Start by blinking an LED, and keep going.
Hazard3 is 100% my own design. It’s a fork of Hazard5, the processor I designed for RISCBoy, my open-source competitor to the Game Boy Advance. Hazard5 is a five-stage pipeline — and therefore has many hazards: data flow, control flow, and structural — and a hazard is also a kind of ‘risk’, a nod to the RISC-V instruction set.
Hazard5 was meant to run at the highest possible frequency on an iCE40 FPGA, so I could run the RISCBoy graphics core at a higher frequency too. Hazard3 on the other hand is a production-grade processor which delivers as much performance as possible in its small area envelope and within the range of frequencies I expect to see on microcontroller designs. It’s a productionised version of Hazard5 with a shorter pipeline, hardware debug, and some security and memory protection features that people expect in real systems.
From forking Hazard5 to having Hazard3 running CoreMark took less than a week. From that point until the first RP2350 tapeout was around two years, working on it on-and-off throughout. There is still ongoing maintenance work, and plans for future expansion — it will never be ‘finished’, just transition from development to stable releases.
Before I started working on RISCBoy I had a project called Tarantula which was an eight-thread barrel processor implementing the Armv6-M instruction set, because that was the ISA I was most familiar with at the time, having written some Assembly during a summer internship. I abandoned the project because I realised I would never be able to share it with anybody, and I don’t think I even have that source code any more.
That experience changed how I looked at things from that point forward. When I decided I wanted to build a games console from scratch, including the processor, I looked around at the instruction sets available at the time (this was around 2018), and there were a few interesting ones — Hitachi SuperH had just become much less legally restrictive — but RISC-V stood out as an instruction set I could implement fairly easily.
The base instruction set is quite clean and simple, and you can add more complexity from a menu of extensions. I could share that with other people, and they could actually use it, and I could program using a real production-grade compiler like GCC or LLVM.
That was a long time ago, and RISC-V has come a long way since, both technically and as a community. There are other instruction sets that have become more open in the wake of RISC-V but I think it’s clear where the momentum is. It’s easy to criticise some of the technical decisions made in the base ISA — did we really need 31 link registers? — but the community is the most important thing in my eyes.
I am excited about RISC-V because it lets you perform your mad-scientist architecture experiments on top of a clean and standard architecture. If you look at something like CHERI, which is a super-exciting development in the embedded security space, those folks have just gone and written a spec, and you can just go and implement it — no need to wait for it to be served on a plate.