We’re pleased to announce that as of today, our RP2350 microcontroller is now available via JLCPCB’s fantastic fast-turn PCB assembly service.
RP2350 is our latest high-performance, secure microcontroller, offering unparalleled levels of processing and flexibility at its very affordable price point. Now that rapid assembly of RP2350-based boards is available from JLCPCB, prototyping and initial production of your designs is straightforward and speedy.
Before users submit a design to JLC, we’re asking that they do the following:
Use the custom Abracon polarised inductor if the design uses the on-chip SMPS
Familiarise themselves with the RP2350 datasheet, including the chip errata
This is to help ensure the designs will function well across temperature and process variations. We also provide a helpful reference design in KiCAD format here.
We’re huge fans of JLC’s PCB services and it’s been great working with them to bring RP2350 into their inventory of processors. To learn more about their service, visit the JLCPCB website.
Initially, the RP2350A and B package versions are available via JLCPCB. The RP2354A and B versions (the package versions with stacked flash) will be available at JLCPCB, as well as other distributors and authorised resellers, later this year.
Visit our RP2350 page to learn more about the RP2350 family of microcontrollers.
Visit the JLCPCB website to learn more or submit a design to JLC’s PCB service.
A competition for space-bound students resulted in a tiny, can-sized, Raspberry Pi-powered satellite. Rob Zwetsloot boldly takes a look at it.
What would you do if you had to create a satellite the size of a drinks can? The yearly CanSat competition for students in their teens asks this question, and many teams have answered — including LittleBlueDot.
Satellites are constructed to fit the same space as a can of soft drink for the competition
“The challenge for students is to fit all the major subsystems found in a satellite, such as power, sensors, and a communication system, into this minimal volume,” the team tell us. They came third in the country for their final build. As the competition instructions explain, “After building their CanSat, teams will be invited to launch events across the UK to launch their CanSats on small rockets, with their CanSats returning to Earth using a parachute designed by the students. Teams are set a primary mission of measuring air pressure and air temperature during the CanSat’s descent, with data being transmitted to the students’ ground station.”
They also needed to design a secondary mission, which in the case of LittleBlueDot included taking photos of the ground below to map it. “The idea of mapping large areas, including foreign bodies, came up when we were discussing potential asteroid mining in the future,” the team say. “And also improving efficiency in agriculture, both fields where large benefits could be seen from mapping land cheaply.”
Trial and error
For the project, Raspberry Pi was an obvious choice for the team — while a microcontroller would be able to handle the environmental recording and transmitting requirements, a Raspberry Pi computer allowed for on-board image processing. The team then got to work building and refining.
“Initially, a very basic CanSat was made to help visualise the size and space that was available to be worked with,” they explain. “Different ways to secure the Can’s inner electronics in an accessible way were explored. In V0, there were two bodies: a screw lid with an attached compartment behind, and the main module itself.”
The V1 build went from a vertical orientation to horizontal to accommodate a larger gap between the cameras. Across V1 and V2 builds, different ways of wiring up and loading the circuit were explored, and clear acrylic discs were added to protect the cameras from moisture and reduce their drag.
“In V1, the parachute was attached via four straight vertical holes,” the team continue. “V2 featured a more reliable solution, using four M5 nuts inset into the walls of the Can to secure the paracord in place and put the strain on the parachute rather than on the Can itself.”
The design was iterated on several times via 3D prints
After some issues at the regional launch, a V3 was created to better fit all the components they required.
“The Can was simplified by removing the inner module and trays [for the electronics], and a friction fit was used to directly mount components to the inside of the CanSat,” the team say. “During testing of the temperature readings, it was found that heat from the internal components was affecting the readings being taken. To mitigate this, fans were added for cooling, and vents were installed on both sides of the CanSat using a honeycomb grid to allow air flow. The strength of the vents was tested in Fusion 360, and they still passed the stress tests.”
With this, they were ready for the national launch, where they competed in the finals.
Mapping with data
As well as cameras, the CanSat had temperature and pressure sensors, an IMU (inertial measurement unit), a magnetometer, and GPS. These were used to calculate altitude and orientation.
“The two on-board cameras took photos of the ground simultaneously,” the team explain. “This meant that an FFT [fast Fourier transform] taken of an image from the first camera would give a wave that was a translation of the wave an FFT would give for the second camera. This translation would vary based on the orientation of the Can, the distance between the two cameras, the altitude of the Can, and finally the actual altitude of points on the ground. Given values for the first three variables, the fourth could be calculated using trigonometry.”
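The shift between the two camera views can be recovered with exactly this kind of FFT trick. As a toy illustration (not the team's actual code), the one-dimensional sketch below uses phase correlation: the cross-power spectrum of two signals is normalised so that a pure shift shows up as a single spike whose position is the translation.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a small demo)."""
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * cmath.pi * k * i / n) for i in range(n))
            for k in range(n)]

def phase_correlate(a, b):
    """Estimate d such that b[i] == a[(i - d) % n], via the cross-power spectrum."""
    fa, fb = dft(a), dft(b)
    # Normalise each cross-spectrum term to unit magnitude, keeping only the phase
    ratio = []
    for ca, cb in zip(fa, fb):
        c = ca * cb.conjugate()
        ratio.append(c / abs(c) if abs(c) > 1e-9 else 0)
    # Transforming the pure phase ramp produces a spike at the shift
    spike = dft(ratio)
    return max(range(len(spike)), key=lambda m: abs(spike[m]))

signal = [1, 5, 2, 8, 3, 9, 4, 7]
shifted = [signal[(i - 3) % 8] for i in range(8)]
print(phase_correlate(signal, shifted))  # 3
```

In the real project the same idea extends to two-dimensional images, and the size of the shift, together with the camera separation and the Can's altitude and orientation, yields ground height by trigonometry, as the team describe.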
The team came third overall in the competition. And the data? Sadly, due to a safety quick-release switch being released during launch, they were only able to get one set of images. Hopefully they can get it all working for another launch.
We’re partial to a 3D printer around here. The Maker Lab at Pi Towers has a nice collection of various types and sizes to serve the unique needs of our engineers, so we’re pretty good at figuring them out across a range of brands. When we saw Form 4, the newest 3D printer from Formlabs, we figured it would be especially easy to get our heads around, seeing as it’s built on Raspberry Pi Compute Module 4.
Printing for professionals
While some printer brands focus on building machines to support the quick and easy home printing jobs lots of makers need, Formlabs has always been more focused on industrial customers — they were the first company to build a 3D printer capable of achieving professional part quality at an affordable price. Turning to our Compute Module 4 to base their newest machine around was a no-brainer as they looked to increase the speed, quality, and success rate of printing for their flagship line, providing a reliable, high-power solution capable of meeting the needs of businesses.
Formlabs was founded in 2011 and, these days, we see their printers used in all sorts of industries, including engineering, manufacturing, automotive, aerospace, and medical. All Formlabs printers across the range have various apps running in the background to move motors, regulate temperatures, log critical events, and so on. The new Form 4 would also need to run two high-resolution displays and a camera simultaneously, so more CPU, RAM, and graphics capabilities were required. Enter Raspberry Pi Compute Module 4.
Story time
Formlabs was initially most familiar with Raspberry Pi’s popularity with makers and hobbyists, and investigated whether the devices were also suitable for industrial applications, checking that they met needs regarding security, supply, ease of use, and, of course, price. The Compute Module line satisfied all their requirements.
Our Product Information Portal provides business customers and professional users with access to white papers, guides, compliance reports, and other information to help keep product development moving along at pace. Formlabs harnessed all of the above and managed to hit its time-to-market target. We do love a good success story.
There’s a much longer story behind Formlabs’ new Compute Module 4-based machine if you’d like to read it. You’ll find all sorts of juicy detail about the design, development, and journey to market, so if you’re into your printers or are curious about how Raspberry Pi supported this industrial use case, give our recent case study a read.
Accurate maps of intricate cave systems help improve the safety of intrepid divers. In issue 150 of The MagPi, Rosie Hattersley hears about Raspberry Shake’s contribution.
Richard Wylde describes himself as “a sort of physicist and engineer living between the business and academic worlds” whose passion for cave diving is “closer to an obsession than a hobby”. He is co-founder of Terahertz, an advanced engineering company which, among other impressive achievements, developed remote sensing instruments for the European Space Agency’s EarthCARE mission. Richard is also one of several experienced cave explorers involved in mapping the subterranean network of cenotes [sinkholes] in Yucatan, Mexico. “The caves are stunningly beautiful and [mapping them] is technically difficult,” he says. “A lot of effort goes into staying alive.” Acoustic and magnetic mapping can help plot the location and direction of these unexplored passageways, improving safety for all who visit them — an endeavour made more robust using Raspberry Shake, a Raspberry Pi-based device more commonly used to detect earthquakes.
Cave measurements are made manually underwater using a compass and tape measure
The maps are also useful for dive guides keen to show off the speleothems (mineral deposits such as stalactites and stalagmites), and for developers to know whether building on a particular area is possible and permissible. Their dives also reveal the effects of developments such as golf courses, which are built by clearing jungles, use nitrates to maintain their greens, and may also be drawing water from the aquifers.
Distinguished company
Richard often dives with renowned cave explorer Fred Devos in Mexico’s Quintana Roo region, which has no overground rivers. Mapping its subterranean cave network is “incredibly dangerous and physically challenging”.
Team Raspberry Shake’s kit, including a Raspberry Shake acoustic seismograph, oxygen tanks, and maps of remote and barely accessible stretches of cave systems
Exploring the caves involves following taut lines of string with knots every ten feet to mark the way, just like Theseus in the Greek myth. Richard mentions the trust and focus needed to accurately read a compass and count out distances travelled ten feet at a time based on how many knots you’ve passed. Visibility and physical resilience are factors too — if you’re exhausted from a lengthy dive, you probably aren’t noticing arrows or counting knots accurately. “It’s milk of magnesia down there when the bubbles hit the ceiling.”
Richard explains the process: “We mark the depth, the distance, the azimuth, and the angle to the next station” — often simply where the line is wrapped around a rock. Painted arrows help ensure divers don’t get lost, but some caves have more than one entrance, or arrows pointing in more than one direction.
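Turning those station-to-station measurements into a map is straightforward trigonometry. As a simplified sketch (the team's real workflow uses dedicated tools such as Ariane), each leg's distance, azimuth, and vertical angle can be accumulated into coordinates like this:

```python
import math

def next_station(station, distance, azimuth_deg, inclination_deg):
    """Dead-reckon the next survey station from distance (m), compass azimuth
    (degrees clockwise from north), and vertical angle (degrees up from horizontal)."""
    x, y, z = station
    az = math.radians(azimuth_deg)
    incl = math.radians(inclination_deg)
    horizontal = distance * math.cos(incl)
    return (x + horizontal * math.sin(az),   # east
            y + horizontal * math.cos(az),   # north
            z + distance * math.sin(incl))   # vertical change

# A 10 m leg heading due east and level:
east, north, depth = next_station((0.0, 0.0, 0.0), 10.0, 90.0, 0.0)
print(round(east, 6), round(north, 6), round(depth, 6))  # 10.0 0.0 0.0
```

Chaining legs in this way accumulates compass and tape errors, which is why, as Richard notes, GPS fixes at cave entrances are used to correct the survey afterwards.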
Richard Wylde, Fred Devos, and colleagues published a booklet mapping the cave system at Actun Koh
The maps are written on specially printed paper and include geographical features such as cave openings, changes in cave and water depth, and height. Relating this information to the outside world requires a way of referring it to the surface and getting a GPS position from it. “The map is linked to an absolute position by taking the line out of the cave entrance and accessing a GPS coordinate in an area with few obstructions to the sky,” says Richard. “In caves which have more than one entrance, it is possible to ascertain and correct for the build-up of errors by taking GPS measurements at the entrances. Programs such as Ariane [a widely used mapping tool] can then be used to distribute the correction through the map.”
Instrumental improvements
The team previously used a fluxgate magnetometer to match above-ground and subterranean locations at the Sagitario cenote. In the summer of 2024, Richard and his cave-mapping colleagues trialled a new means of confirming their findings, using both the Raspberry Shake 1D vertical motion seismograph and a far more sensitive acoustic magnetometer. After hacking through the jungle to reach the site, the team placed the Raspberry Shake and acoustic magnetometer directly above where divers believed the cave was located.
Golfing greens are treated with nitrates that poison the cenotes with algae
“Raspberry Shake helped confirm our findings and add a degree of accuracy that was not previously possible,” says Richard. The results were promising enough that the team ordered an RS3D three-axis model for their planned return trip in early 2025. This time, the Raspberry Shake will be placed in an IP67 waterproof box, and the team hopes the additional measurements will allow direction to be determined from the relative amplitudes of the disturbance in the X, Y, Z frames.
Richard, Sam, and Chris embark on a cave dive at Actun Koh
For 2025’s Official Raspberry Pi Handbook, PJ Evans created an entire feature exploring the world of Raspberry Pi home automation projects. We especially liked this one, which shows you how to take control of your home, and your privacy, with a Raspberry Pi 4.
Home automation is not only useful but can also be a great deal of fun, particularly when setting up cool automations or connecting different devices together in new ways. It can also help boost the energy efficiency and security of your home; there are a wealth of practical reasons to start experimenting with this technology. Although there are different vendor-specific automation systems out there, we prefer one that doesn’t ‘lock’ you into one provider. One such platform is Home Assistant (home-assistant.io), free open-source software designed with flexibility and independence in mind. Home Assistant is a huge topic, but here we’ll look at the basics of setting up a server to get you started on your automation journey.
Prepare your Raspberry Pi
Although Home Assistant isn’t strictly an operating system in its own right, it is available as a Raspberry Pi image that significantly reduces the work a user has to do to get up and running. Home Assistant is intended to run on a Raspberry Pi as the sole service. It is possible to run Home Assistant alongside other apps and services, but we’re keeping to the true path here. Home Assistant works with Raspberry Pi 3, but we strongly recommend using a Raspberry Pi 4 for the best performance. You should also use a wired Ethernet connection for setup and to ensure reliability. Home Assistant is headless, so no monitor or keyboard is needed.
As your confidence and knowledge grow, you can create more complex dashboards. You can even incorporate video feed
Write the Home Assistant image
Luckily for us, you can write the latest stable Home Assistant image directly from the Raspberry Pi Imager. Insert a fast SD card of 32GB or more into your computer. In Imager, select Choose OS > Other specific-purpose OS > Home assistants and home automation > Home Assistant > Home Assistant OS 9.5 (RPI 4/400 or RPI 3, as needed).
You’ll now get a ready-to-boot image. Insert the card into your Raspberry Pi, make sure you’ve got a wired network connection, and power up. After a few minutes, try to connect to http://homeassistant:8123 in your web browser.
Replace your porch light with a smart light bulb and you can trigger it at sundown all year round
Initial setup
Time to grab your favourite beverage. You’ll see an initial setup screen stating that it will take about 20 minutes before you can proceed. Soon it will be automatically replaced with the first stage of setup. Provide your name, username, and choice of password. On the next screen, there will be some questions about the server’s location. It’s important to set this accurately if you want to take advantage of sun-up/down times. Finally, Home Assistant will ‘look’ around your network for any existing smart devices and let you know what it’s found. Don’t worry if something doesn’t appear; it will probably just need manual configuration later on.
Add integrations
Home Assistant refers to smart platforms as ‘integrations’. For instance, if you have Philips Hue or IKEA Trådfri smart lights, it will add an integration for them and then a ‘device’ for each light it finds. It will also create a default dashboard for you based on what’s been found. Not all integrations can be found automatically, so you can browse available integrations and add them yourself or install third-party add-ons. Integrations include lights, security devices, media centres, printers, and mains power controllers — you can even make your own. At this point, it’s best to start exploring.
Combine devices to create clever energy-saving automation. Got solar panels? When it’s sunny, switch the washing machine and all the lights on
Customisation
One of Home Assistant’s greatest strengths is customisation. The dashboard system (‘Lovelace’) allows you to arrange your device control and set automations however you like them. By default, Home Assistant automates the layout, but we recommend you disable that. Click the three dots in the top right of the screen, followed by ‘Edit dashboard’. You’ll be asked if you want to take control of layout. Do so, and then you can create your perfect layout. You can resize, restyle, add graphs, tabs, and badges. Don’t be intimidated; start small and build things up as you become more familiar.
Next steps
Congratulations, you now have a running Home Assistant server. The capabilities of this service can seem overwhelming at times, but with a little reading on home-assistant.io and some digging around the menus, you’ll soon be taking control of your home. Once you’ve added the ability to switch devices on and off, or monitor things like printer ink levels, move on to Automations. These allow certain actions to happen based on events. For example, you can have a motion sensor turn on certain lights around the house. Check out Settings > Automations and have a play.
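If you later want to script the server from outside the dashboard, Home Assistant also exposes a REST API. Here is a minimal sketch using only Python's standard library; the access token and entity name are placeholders you would replace with your own (tokens are created from your Home Assistant profile page):

```python
import json
import urllib.request

def build_service_call(base_url, domain, service, token, data):
    """Build a POST request for Home Assistant's REST API service endpoint."""
    url = f"{base_url}/api/services/{domain}/{service}"
    body = json.dumps(data).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {token}",  # long-lived access token
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Placeholder token and entity_id: substitute your own values
req = build_service_call("http://homeassistant:8123", "light", "turn_on",
                         "YOUR_LONG_LIVED_TOKEN", {"entity_id": "light.porch"})
# urllib.request.urlopen(req) would send it; omitted here as this is a sketch
print(req.full_url)  # http://homeassistant:8123/api/services/light/turn_on
```

The same endpoint pattern works for any integration's services, such as switches or media players, which makes it easy to hook Home Assistant into your own scripts.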
The Official Raspberry Pi Handbook 2025
Dive into the world of Raspberry Pi with this huge book of tutorials, project showcases, guides, product reviews, and much more. With 200 pages packed full of maker goodness, you’ll also find inspiration for your Raspberry Pi Zero 2 W, Raspberry Pi 4, or any other Raspberry Pi model you have — there’s something for everyone.
Having looked to see how blood pressure monitors operate, Miloš Rašić has been hard at work trying to improve their accuracy. David Crookes conducted this interview for the special 150th anniversary issue of our official magazine.
Keeping track of blood pressure is crucial for maintaining good health, especially when managing heart-related conditions. Electrical engineer Miloš Rašić knows this only too well. “Like most older people, my grandma suffers from elevated blood pressure, so a digital pressure monitor is something that is being used daily in the household,” he says. But he also noticed the machines can be flawed.
Besides the main PCB, which is based around a Raspberry Pi Pico W, there is an air pump and valve, GX12 connectors, buttons, an 18650 battery, NeoPixel LEDs, an OLED display, and some other smaller parts
“Different monitors have provided widely different measurements and their performance was highly dependent on their battery level, which is not a good thing,” he explains. “So for my master’s thesis project, I wanted to explore digital blood pressure monitors and discover how they work.” This led him to develop a cardiography signal measuring device based around a Raspberry Pi Pico W.
Conducting experiments
When Miloš approached his project, he had a list of requirements in mind, chief among them being safety. “The device had to have optical isolation when connected to a PC and be battery-powered or have an isolated power supply,” he says.
As a priority, it needed to measure blood pressure. “This included measuring the air pressure inside an arm cuff, controlling a small air pump, and controlling an electromagnetic valve,” he adds. Miloš also wanted the device to use a well-supported microcontroller unit with wireless capabilities, hence the use of a Raspberry Pi Pico W. “It provided everything I needed in a small package and was supported by a large community, which meant everything would be easy to troubleshoot,” he says.
The main device casing as well as the PPG clamp have been 3D printed using a Creality K1C. The models can be downloaded from Printables
Along the way, Miloš began to add more features, including a stethoscope and the ability to take an ECG measurement. By using a photoplethysmography (PPG) clamp, he also figured the device could detect blood volume changes in the microvascular bed of tissue and that, combined, these sensors would be able to give a better insight into a person’s heart health.
And yet he was clear from the start that he wasn’t going to create a medical device. Instead, the ultimate aim was to take readings and conduct experiments to discover an optimal algorithm for measuring blood pressure. “The whole area of blood pressure monitors was a curiosity for me and I wanted to demystify it a bit and generally have a platform which other people can experiment with,” he explains. “So I created a setup that can be used for experimenting with new methods of analysing cardiography signals.”
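To give a flavour of the kind of algorithm being explored, below is a minimal sketch of the classic oscillometric method used by most cuff-based monitors. The fixed-ratio constants are illustrative textbook values, not Miloš's; finding better ones is exactly the sort of experiment his platform enables.

```python
def oscillometric_estimate(cuff_pressure, pulse_amplitude,
                           sys_ratio=0.55, dia_ratio=0.85):
    """Estimate (systolic, MAP, diastolic) in mmHg from samples taken while the
    cuff deflates: cuff_pressure is decreasing, pulse_amplitude is the size of
    the pressure oscillation extracted at each sample."""
    peak = max(range(len(pulse_amplitude)), key=pulse_amplitude.__getitem__)
    mean_arterial = cuff_pressure[peak]  # MAP: cuff pressure at maximum oscillation
    # Systolic: on the high-pressure side, where amplitude falls to sys_ratio of peak
    systolic = next(cuff_pressure[i] for i in range(peak, -1, -1)
                    if pulse_amplitude[i] <= sys_ratio * pulse_amplitude[peak])
    # Diastolic: on the low-pressure side, at dia_ratio of peak
    diastolic = next(cuff_pressure[i] for i in range(peak, len(pulse_amplitude))
                     if pulse_amplitude[i] <= dia_ratio * pulse_amplitude[peak])
    return systolic, mean_arterial, diastolic

pressures  = [180, 170, 160, 150, 140, 130, 120, 110, 100, 90, 80, 70, 60]
amplitudes = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9, 1.0, 0.9, 0.85, 0.7, 0.5, 0.3, 0.1]
print(oscillometric_estimate(pressures, amplitudes))  # (150, 120, 100)
```

The sensitivity of those ratios to the individual and to battery-starved pumps is one reason different commercial monitors can disagree, which is the very problem that prompted the project.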
To connect the stethoscope to the system, the earphones were removed and a small piezo microphone was then connected to an amplifier circuit
Heart of the build
To fulfil his ambition, he got to work designing the PCB before looking at the other necessary components, such as the pump, valve, battery, and connectors. Some parts were simple enough — for example, the air pressure cuff, which you’ve likely seen on a visit to a GP or hospital. “This is the only sensor most commercial devices use, and the estimations using it are good enough for most cases,” Miloš says. But others required more work.
The ECG sensor to record heart activity was an important part of the build. “I wanted to extract the pulses from the air pressure signal and for the ECG to be my reference measurement so that I knew the algorithm was working properly,” he says. For this, Miloš included a custom layout of the AD8232 IC on the PCB (AD8232 is an integrated signal conditioning block for ECG measurement applications), allowing measurements to be taken.
The pressure sensor calibration apparatus was created so that constant pressure can be maintained in the system
Miloš also made a PPG clamp using a MikroE Oxi5 Click board that communicated with the rest of the system over I2C. “The PPG clamp is often used to measure blood oxygen saturation, but since it works by detecting the changes in blood flow in the finger, it’s a very useful sensor when it’s used in combination with the arm cuff,” Miloš says. “Since the arm cuff cuts off circulation in the arm, and then slowly lowers the air pressure inside until the circulation is established again, by using the PPG we can have a precise detection of when the laminar flow has been established again, which is the moment that the air pressure inside the arm cuff is equal to the diastolic air pressure.”
Finally, an old analogue stethoscope was added. Miloš combined this with a small piezo microphone, turning the stethoscope into an electronic device. “A stethoscope is used when doing manual blood pressure measurements, and since [this] is still the gold standard for non-invasive methods, I wanted to see how the signal on the stethoscope looks during this process and if I could draw any conclusions from it,” Miloš reveals.
Pressure’s on
To make sense of the data, Miloš decided the project would need a graphical interface. “This would have a live view of all of the measured signals and the capability of recording all of the data into a CSV file,” he says. It required a hefty dose of programming; Python was used to code the GUI, handling the graphical interface, the communication with the device, and the data logging capabilities. Python was also used to analyse the recorded signals, while the firmware was written in C++, “so that it runs as fast as possible on the Pico,” Miloš explains.
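The data-logging half of such a GUI is conceptually simple. As a rough sketch using Python's standard library (the column names and sample values here are illustrative, not the project's actual schema), timestamped sensor rows can be appended to a CSV file like this:

```python
import csv
import os
import time

FIELDS = ["timestamp", "cuff_pressure_mmHg", "ecg_mV", "ppg_raw"]  # illustrative columns

def log_samples(path, samples):
    """Append timestamped rows of (pressure, ecg, ppg) readings to a CSV file."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:  # new file: write the header row first
            writer.writerow(FIELDS)
        for pressure, ecg, ppg in samples:
            writer.writerow([f"{time.time():.3f}", pressure, ecg, ppg])

log_samples("readings.csv", [(142.1, 0.82, 51234), (141.7, 0.79, 51198)])
print(open("readings.csv").readline().strip())  # timestamp,cuff_pressure_mmHg,ecg_mV,ppg_raw
```

CSV keeps the recordings trivially loadable into the same Python analysis scripts, which is presumably why the project chose it for its data exports.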
A custom four-layer PCB was developed, using Raspberry Pi Pico W as the microcontroller
With everything working, Miloš designed a case. “I needed to see the rough space required for everything, which allowed me to design a case with mounting points for each of those things,” he says. “On the top, there is a lid that has NeoPixel LEDs and a small OLED display that can be programmed to show information to the user.”
Since then, he’s been using the project to conduct many tests, and you can see the results of those on Miloš’ GitHub page. The project has also been made open source because he hopes it will help others with their own projects. “It can give them a head start so they don’t have to develop their electronics from scratch if all they want to do is, for example, signal analysis,” he says. “This is why I’ve also included some data that I’ve recorded with this device if anyone wants to use just that without ever having any contact points with the hardware!”
Of course, you shouldn’t use home-made tools to diagnose medical problems; Miloš made it clear from the start that he wasn’t creating a medical device.
A very rewarding thing about designing affordable hardware is watching young makers grow up with it, sometimes taking their interest in computing to university and beyond. We’ve seen kids who began playing around with Raspberry Pi go on to use our devices in their professional lives — in fact, that’s how a couple of our own engineers started out. In issue 149 of The MagPi, we spoke to the three siblings behind the GurgleApps YouTube channel. They’ve been sharing STEM projects for ten years now.
We’ve been covering projects from the team of siblings who make up GurgleApps for a long time — most recently their Colour Word Clock (image below) — and they themselves have been using a Raspberry Pi since the year it came out. In fact, it helped turn them into the makers they are today.
“Making became a part of our lives largely due to the influence of our parents, who filled our home with electronics, science, and coding projects,” the GurgleApps trio tell us. “Funnily enough, we weren’t hooked immediately — we had all this amazing equipment and knowledge at home, but took it for granted. The real spark came when Caleb received his first Raspberry Pi in 2012. Our dad playfully ‘forgot’ to tell us about the startx command, so we spent the first month working solely in the terminal, using simple commands like top and programming in Vi (a text editor) to create quiz and adventure games — without realising there was a graphical interface! It was rather frustrating for us at the time, but as our dad reminded us, it was nothing compared to his old ZX Spectrum.”
How did you start making videos together?
We started making videos together somewhat accidentally in 2015. It all kicked off with a prank on our dad where we used a Raspberry Pi to SSH into his computer and close the app he was working on. Amélie demonstrated the prank using simple shell commands, while Caleb handled the filming. Since we were too young for social media, we posted the video on our parents’ account. Unexpectedly, it went viral, gathering 1.4 million views! The overwhelming support inspired us to create more content, leading to the birth of our channel, GurgleApps.
During the COVID-19 pandemic, we noticed that many students — including us — were missing out on hands-on science experiments. We started recreating school physics experiments at home and sharing tutorials on our channel. This allowed others to keep learning and exploring STEM subjects despite the circumstances. We’re dedicated to making STEM education accessible and fun for everyone.
What was your first group maker project?
Our first significant group project was creating the Pico Piano (watch below). We built it using a Raspberry Pi Pico microcontroller and designed our own circuit board right at home. To make the circuit board, we used a DIY method: drawing the circuit design on a copper board with Sharpies and then etching it using ferric chloride. This hands-on process was both challenging and exciting, as it combined electronics, coding, and a bit of chemistry.
How has the channel affected your lives?
Running our YouTube channel has taught us a wide range of skills — from presenting and video editing to live-streaming and valuable maker and business skills. Live streaming helped us handle mistakes on the fly and build confidence. We’ve also been guests on podcasts and other live streams, which allowed us to meet lots of fun and interesting people in the maker community.
Our STEM knowledge has deepened significantly. Supportive viewers often share their expertise; for example, one viewer spent hours teaching us about PCB manufacturing, and another pointed out an inaccuracy in our light gate calculations, helping us learn and improve.
Imitation is the sincerest form of flattery — Raspberry Pi 400 was inspired by the ZX Spectrum
What’s your favourite thing you’ve made together?
Our favourite project we’ve made together is definitely the Word Clock! It’s special to us because it was inspired by our very first word clock project with a tiny 8×8 display over ten years ago. We’ve evolved it into a kit that you can now buy, and we’ve made everything open source — even the 3D print files for the case are available. We spent months perfecting it and putting everything we’ve learned into making it something we’re really proud of. What’s even more exciting is seeing people hack it to do things we never dreamed of. Watching others take our creation, build upon it, and share their own versions has been incredibly rewarding. We’ve recently updated our custom-made RGB LED matrix display — a key component of our word clock — and hopefully it will be ready for purchase from our shop very soon!
To see more of the trio’s projects and tutorials, subscribe to GurgleApps on YouTube.
The MagPi #150 out NOW!
You can grab the latest issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
At Raspberry Pi, we have always been keen to do things that are more sustainable. Our products are intrinsically small and low-power, which makes them efficient in terms of resource usage (materials, shipping, electricity); Raspberry Pi devices can replace more resource-hungry solutions in many applications that would traditionally use a legacy x86 PC, such as digital signage. They also enable innovative approaches to improving the environment or people’s lives — including, of course, Raspberry Pi’s founding mission to make cheap computers and computing resources readily available to all to support learning.
When Raspberry Pi floated on the London Stock Exchange on 11 June 2024, we were proud to be awarded the LSE’s Green Economy Mark. The Mark recognises that not only are our products efficient in terms of resource usage, they also enable the displacement of other, less sustainable, technologies.
Raspberry Pi joins the London Stock Exchange
We’ve also worked to reduce the environmental impact of our manufacturing and shipping. Over time we have reduced our use of plastic packaging and shrunk product carton and shipper sizes, as well as working on more efficient production methods such as the new intrusive reflow soldering process for Raspberry Pi 5. The latter both saves energy and reduces the physical manufacturing footprint at the factory, while increasing throughput. As well as benefitting the environment, all of these initiatives improve our products’ cost structure, increasing efficiency and allowing us to keep our prices lower.
Now that we are a public limited company, we have formalised the role of sustainability within the business, with a Sustainability Committee to oversee activities like these. The committee monitors our performance using various metrics such as CO₂ emissions and materials use, comes up with future targets, and challenges the team to develop a strategy to achieve them. Raspberry Pi aims to be a leader in sustainable practices within the technology sector while maintaining our core focus on providing affordable and accessible technology for all. Below we explore the key principles that guide our sustainability strategy, and look at how we calculate our impact in terms of carbon emissions.
Guiding principles for sustainable practices
Raspberry Pi is firmly committed to sustainability alongside our responsibility to our shareholders. Our commitment is driven by the beliefs of our founders, our team, and the Raspberry Pi Foundation, our largest shareholder and a charity focused on digital skills education. Customers quite rightly value credible and meaningful sustainability initiatives, which often have an important positive impact on brand reputation and market share, and we are able to balance these activities with the need to perform in a price-sensitive market. It goes without saying that we will adhere to all UK sustainability laws and regulations transparently, and we will also monitor voluntary best practices and adopt them where we can.
As well as monitoring and reducing our own CO₂ footprint, we are conscious of emissions generated by the many third parties we rely on for supply of materials and services – components, shipping, and so on. Encouraging suppliers to reduce their environmental impact is central to Raspberry Pi’s approach, and we are beginning to engage with all our suppliers, asking them to quantify the carbon content in their products. Over time we will consider the “carbon cost” of what we purchase (broadly, the cost of an equivalent high-quality carbon offset), as well as the base item cost, when we do our costing calculations. So, for example, a diode from one manufacturer may appear cheaper than one from another manufacturer, but if the carbon cost of the first is higher, we would take this into account and might choose the second. Over time this will favour lower-carbon solutions and encourage manufacturers to supply lower-carbon products.
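The diode comparison can be made concrete with a toy calculation – all prices and carbon figures below are invented, and the offset price per kilogram of CO₂ is an assumption:

```python
# Toy illustration of carbon-adjusted costing: compare parts on base price
# plus the cost of offsetting their embodied carbon. All figures are invented.

def adjusted_cost(unit_price, embodied_kg_co2, offset_price_per_kg=0.10):
    """Base item cost plus an assumed high-quality offset price for its carbon."""
    return unit_price + embodied_kg_co2 * offset_price_per_kg

diode_a = adjusted_cost(0.010, 0.08)  # nominally cheaper, higher embodied carbon
diode_b = adjusted_cost(0.011, 0.02)  # nominally dearer, lower embodied carbon

# The nominally cheaper diode is the more expensive choice once carbon is counted
print(diode_a > diode_b)
```

Run over a whole Bill of Materials, this kind of comparison steadily tilts sourcing decisions towards lower-carbon suppliers.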
Raspberry Pi prioritises keeping our products affordable and accessible, empowering customers to make informed choices about offsetting their own carbon footprint. This allows us to maintain profitability and maximise our contribution to the Raspberry Pi Foundation’s charitable mission, which remains a core objective for the company.
Measuring emissions
It’s important that our sustainability efforts are both transparent and accountable. To this end, we are actively measuring our carbon footprint across what are known as Scope 1, 2, and 3 emissions, and we have developed a methodology to assess our environmental impact as accurately as possible.
Scope 1 and 2 emissions
Scope 1 emissions are the direct emissions from sources owned or controlled by a company, while Scope 2 emissions are indirect emissions from generating the energy that the company purchases. Scope 2 includes emissions from electricity, heating, and cooling that the company consumes; although the company doesn’t produce these emissions directly, it is still responsible for them because they result from its energy consumption.
For Raspberry Pi, Scope 1 and 2 are lumped together, since we don’t have any energy-generating assets that emit carbon. We measure these emissions via the energy bills for our various offices and shops.
Scope 3 emissions
Scope 3 emissions encompass all the other indirect emissions that occur in a company’s value chain. This is the broadest category; it includes emissions from upstream activities like the production and transportation of raw materials, as well as downstream activities like the use and disposal of products. Scope 3 emissions also cover employee commuting, business travel, and waste generated in a company’s operations.
We divide the measurement of Scope 3 emissions into two categories: emissions due to products that we make to sell, and other carbon emissions generated through our business activities. As we are a company that sells millions of computers and accessories every year, our product emissions are especially important to understand.
Understanding carbon emissions from Raspberry Pi products
Life Cycle Assessment (LCA) is a method for evaluating the environmental impact of a product throughout its life, from sourcing the materials used to make it through to disposing of it. We assess emissions for all our products across all of their LCA phases except for customer usage of the product. The diagram below shows the different phases in a product’s life cycle:
To assess the environmental impact of our products, we worked with our partner Inhabit to conduct a comprehensive study, following the Greenhouse Gas Protocol and ISO 14044:2006 standards. We used industry-leading tools from Inhabit together with the EcoInvent database to calculate the carbon footprint of products throughout their lifecycle. This involved carrying out a detailed analysis of a set of individual, representative products across our range, then applying the results to other, similar products.
Calculating individual product carbon emissions
How do we calculate the carbon emitted when a product is manufactured? In a nutshell, we first take every item in the product’s Bill of Materials (BOM) and find the carbon emitted during its production (called its embodied carbon), and add up all these emissions. We then take account of the carbon used to make the product in the factory as well as the carbon emitted during shipping (both shipping of the materials to make the product, and shipping of the finished product). Finally, we add on a number representing the carbon we expect will be emitted when the product is disposed of.
This may sound like a fairly simple step-by-step process, but finding the exact quantity of embodied carbon for every item of a product’s BOM is not easy – in most cases you can’t (yet) just ask a manufacturer what the embodied carbon is in their capacitor or diode or PCB or other widget. So how do we do it? The simple answer is averages: lots of them, in a database.
We map every item in a product’s BOM to a category in an officially recognised database which contains the average embodied carbon for many, many categories; there may be a category for the average small signal diode or capacitor, or for the production of steel plate, and so on. Often these figures are provided by mass – a certain quantity of carbon emitted per unit mass of the product – which means we need to know the mass of each component. Where there are no good matches for a particular item, we might need to look at that item’s own BOM, calculating the mass of the various materials in a connector, for example, and using yet more averages for the plastic, metals, and processes involved in order to estimate its embodied carbon.
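As a toy illustration of the arithmetic (not our real data – every category, mass, and factor below is invented, and real factors come from a recognised database such as EcoInvent), the per-product roll-up might look like this:

```python
# Illustrative sketch of the per-product calculation: sum the embodied carbon
# of every BOM item (emission factor per unit mass), then add factory energy,
# shipping, and end-of-life disposal. All numbers are invented placeholders.
bom = [
    # (component category, mass in kg, emission factor in kg CO2e per kg)
    ("capacitor", 0.0002, 25.0),
    ("diode",     0.0001, 30.0),
    ("pcb",       0.0150, 60.0),
]
embodied = sum(mass * factor for _, mass, factor in bom)

factory = 0.12       # kg CO2e per unit, from manufacturer data (assumed)
shipping = 0.08      # inbound materials plus outbound product (assumed)
end_of_life = 0.05   # average disposal figure from a database (assumed)

total = embodied + factory + shipping + end_of_life
print(round(total, 3))  # total kg CO2e per unit of this product
```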
Raspberry Pi hardware is manufactured at Sony’s facility in Pencoed, south Wales
For the carbon emitted in factory production, we can use real data provided by our contract manufacturer Sony, and for shipping we also have quite a lot of real data to hand. For product end-of-life, we have to turn to averages once more and add on the average carbon cost of disposal; again, these figures come from an officially recognised database.
As I explained above, we do these detailed calculations for a representative subset of our products, and then we scale the carbon footprint calculations proportionally for products with similar designs or varying sizes. We have also categorised certain items which sell in smaller quantities as “de minimis”: for these products, we have estimated their environmental impact by applying to them the average CO₂ emissions per dollar of revenue across our calculated products. This has allowed us to provide a comprehensive carbon footprint assessment for our entire product range.
Lastly, the total emissions per product are combined with our yearly sales number for each product to give us our final estimate of Scope 3 carbon emissions in the product category across our business.
Other Scope 3 emissions
I wrote earlier in this article that we divide Scope 3 carbon emissions into two categories: product emissions and other emissions. To calculate emissions for the other Scope 3 items, we take all the non-product transactions for the year from our accounting journals, and assign a category to each one. Then we can link this category to average emissions per pound spent; these average figures are drawn, once more, from the EcoInvent database. This allows us to convert the money we have spent with a business into a carbon emission figure.
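A sketch of that spend-based conversion, with invented categories and emission factors standing in for the EcoInvent figures:

```python
# Spend-based Scope 3 sketch: assign each non-product transaction a category,
# then multiply the spend by that category's average emission factor.
# Categories, spends, and factors below are invented for illustration.
factors = {  # kg CO2e per pound spent (assumed values)
    "travel": 0.45,
    "cloud services": 0.12,
    "office supplies": 0.30,
}
transactions = [  # (category, spend in GBP)
    ("travel", 1200.0),
    ("cloud services", 800.0),
    ("office supplies", 150.0),
]
emissions = sum(spend * factors[category] for category, spend in transactions)
print(round(emissions, 1))  # total kg CO2e attributed to this spending
```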
Combining all of our Scope 1, 2, and 3 emissions gives us a yearly emissions figure. For 2024, this will be reported in our inaugural annual report in April 2025.
Can we rely on the results? What do we do with them?
A good question to ask, given that our Scope 3 emissions calculations are built from a large number of approximations, is how accurate – and therefore how useful – all this is. We’re using approved standards for these processes, and other businesses like ours will necessarily be calculating their emissions in a similar way; the averages method is the best currently available. It’s important to bear in mind that the results are an approximation, and this is a reason to work towards better data in the future, not a reason to be discouraged.
Each Raspberry Pi Carbon Removal Credit offsets the embodied carbon of a Raspberry Pi computer
As well as measuring the carbon emitted today, our monitoring work allows us to put a stake in the ground and get the infrastructure in place to measure carbon emitted in the future, steadily improving our accuracy over time. We are talking to our suppliers about providing product carbon emissions figures, and building these into our design cycle. By developing more accurate models and automated data collection and carbon calculation systems, over time we can both produce more accurate numbers and reduce our carbon footprint!
Understanding product carbon emissions means we can come up with innovative solutions to help reduce emissions, such as our recently launched Carbon Removal Credits. We are taking our first steps in a long and important journey: in future articles here, we’ll delve deeper into the various initiatives we are undertaking to minimise our environmental impact and build a more sustainable future for computing. Stay tuned to learn more about our efforts in responsible sourcing, energy efficiency, and waste reduction, and find out how you can contribute to a greener Raspberry Pi ecosystem.
Raspberry Pi’s official magazine, The MagPi, has turned the big 150 and decided to mark the occasion in true maker style with a special feature celebrating 150 Raspberry Pi people and projects previously featured on its hallowed pages. Here, we’ve cherry-picked a few of our favourites. You can read the full feature, including Raspberry Pi appearances on TV, some famous makers, and excellent Pi-focused events, in The MagPi #150.
We Still Fax
People found creative ways to stay entertained in 2020. Enter We Still Fax, an intriguing theatrical project that interacts with an audience remotely using a fax machine. The core components of the show are the fax machine, Raspberry Pi, and Grandstream adapter, which translates a phone signal into an Ethernet signal and vice versa.
The Blueswarm team from Harvard University set out to explore how shoals of fish coordinate by building a swarm of underwater fish robots. Raspberry Pi Zero W was used to create multiple Bluebot fish-style robots that can be accessed remotely.
Taking gaming on a tiny screen to its extreme, maker James Brown responded to enquiries about whether his LEGO brick-embedded console could play the popular first-person shooter. With a 0.42-inch OLED, 4MB flash chip, and RP2040 microcontroller (as on Pico), it uses the latter’s second core to update the screen fast enough to create greyscale images and play video.
BrewPi was one of the first initiatives to recognise the power of Raspberry Pi for precision brewing. The BrewPi Spark 3 is a temperature controller that handles beer or wine fermentation with 0.1°C precision and sends data to an on‑board display.
Martin Spendiff and Vanessa Bradley updated a Goblin Teasmade with a Raspberry Pi Zero WH to produce their hot drink of choice… coffee! It uses a Grove ReSpeaker HAT and a speaker with a relay switch to replace the alarm. A script monitors Google Calendar, and if it sees a trigger phrase, it starts the boil cycle.
Greece’s NTUA School of Naval Architecture and Marine Engineering knew plenty about Raspberry Pi before selecting it for its underwater archaeology surveillance project, in which a self-powered submarine unit detects people or craft coming close to sensitive marine areas and sites of historic wrecks and alerts authorities to potential intruders.
A solar-powered sensor buoy that is “cheap to build, easy to run”, and provides continuous and reliable data. It helps study rising sea levels and was deployed in Grenada in the Caribbean for this job. It communicates via radio signals to a Raspberry Pi base station — something Raspberry Pi is very well suited to.
Art and technology can go hand-in-hand, especially with this Raspberry Pi Zero W-powered dress that shows how the wearer is feeling via the special EEG headband they wear and the images displayed on various (eye-catching) screens attached to the outfit.
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Raspberry Pi Touch Display 2 is a portrait-orientation LCD touchscreen designed for interactive projects like tablets, entertainment systems, and information dashboards. Here, our documentation lead Nate Contino shows you how to connect a Touch Display 2 to your Raspberry Pi, use an on-screen keyboard, and change your screen orientation.
Touch Display 2 running Raspberry Pi OS
Touch Display 2 connects to a Raspberry Pi using a DSI connector and a GPIO connector. Raspberry Pi OS provides touchscreen drivers with support for five-finger multi-touch and an on-screen keyboard, giving full functionality without the need to connect a keyboard or mouse.
Specifications
1280×720px resolution, 24-bit RGB display
155×88 mm active area
7-inch diagonal display
Powered directly by the host Raspberry Pi, requiring no separate power supply
Supports up to five points of simultaneous multi-touch
Touch Display 2 is compatible with all models of Raspberry Pi from Raspberry Pi 1B+ onwards, except Raspberry Pi Zero and Zero 2 W, which lack a DSI connector.
Figure 1: What’s in the box
The Touch Display 2 box contains the following parts (in left-to-right, top-to-bottom order in the Figure 1 image):
To connect a Touch Display 2 to a Raspberry Pi, use a Flat Flexible Cable (FFC) and a GPIO connector. The FFC you’ll use depends upon your Raspberry Pi model: for Raspberry Pi 5, use the included 22‑way to 15-way FFC; for any other Raspberry Pi model, use the included 15-way to 15-way FFC.
Raspberry Pi is connected to Touch Display 2 using the GPIO pins and a Flat Flexible Cable (FFC)
Once you have determined the correct FFC for your Raspberry Pi model, complete the following steps to connect your Touch Display 2 to your Raspberry Pi:
1. Disconnect your Raspberry Pi from power.
2. Lift the retaining clips on either side of the FFC connector on the Touch Display 2.
3. Insert one 15-way end of your FFC into the Touch Display 2 FFC connector, with the metal contacts facing upwards, away from the Touch Display 2.
4. While holding the FFC firmly in place, simultaneously push both retaining clips down on the FFC connector of the Touch Display 2.
5. Lift the retaining clips on either side of the DSI connector of your Raspberry Pi. This port should be marked with some variation of the term ‘DISPLAY’ or ‘DISP’. If your Raspberry Pi has multiple DSI connectors, prefer the port labelled ‘1’.
6. Insert the other end of your FFC into the Raspberry Pi DSI connector, with the metal contacts facing towards the Ethernet and USB-A ports.
7. While holding the FFC firmly in place, simultaneously push both retaining clips down on the DSI connector of your Raspberry Pi (see Figure 2 below).
8. Plug the GPIO connector cable into the port marked J1 on the Touch Display 2.
9. Connect the other (three-pin) end of the GPIO connector cable to pins 2, 4, and 6 of your Raspberry Pi’s GPIO. Connect the red cable (5 V power) to pin 2, and the black cable (ground) to pin 6. Viewed from above, with the Ethernet and USB-A ports facing down, these pins are located at the top right of the board, with pin 2 in the top right-most position (see Figure 3 below).
10. Optionally, use the included M2.5 screws to mount your Raspberry Pi to the back of the Touch Display 2:
    a. Align the four corner stand-offs of your Raspberry Pi with the four mount points that surround the FFC connector and J1 port on the back of the Touch Display 2, taking special care not to pinch the FFC.
    b. Insert the screws into the four corner stand-offs and tighten until your Raspberry Pi is secure.
11. Reconnect your Raspberry Pi to power. It may take up to one minute to initialise the Touch Display 2 connection and begin displaying to the screen.
Figure 2: A Raspberry Pi 5 connected and mounted to a Touch Display 2
Use an on-screen keyboard
Raspberry Pi OS Bookworm and later include the Squeekboard on-screen keyboard by default. When a touch display is attached, the on-screen keyboard should automatically show when it is possible to enter text and automatically hide when it is not possible to enter text.
For applications which do not support text entry detection, use the keyboard icon at the right-hand end of the taskbar to manually show and hide the keyboard.
You can also permanently show or hide the on‑screen keyboard in the Display tab of Raspberry Pi Configuration or the Display section of raspi-config.
In Raspberry Pi OS releases prior to Bookworm, use matchbox-keyboard instead. If you use the Wayfire desktop compositor, use wvkbd instead.
Figure 3: The GPIO connection to Touch Display 2
Change screen orientation
If you want to physically rotate the display, or mount it in a specific position, select Screen Configuration from the Preferences menu. Right-click on the touch display rectangle (likely DSI-1) in the layout editor, select Orientation, then pick the best option to fit your needs.
Rotate the screen without a desktop
To set the screen orientation on a device that lacks a desktop environment, edit the /boot/firmware/cmdline.txt configuration file to pass an orientation to the system. Add the following entry to the end of cmdline.txt:
video=DSI-1:720x1280@60,rotate=<rotation-value>
Replace the <rotation-value> placeholder with one of the following values, which correspond to the degree of rotation relative to the default on your display:
0
90
180
270
For example, a rotation value of 90 rotates the display 90 degrees clockwise. 180 rotates the display 180 degrees, or upside-down.
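As a concrete illustration, a cmdline.txt set up for a 90-degree clockwise rotation might end like this. Every parameter before the video= entry is a placeholder for whatever your existing file contains, and the whole file must remain a single line:

```
console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 rootwait video=DSI-1:720x1280@60,rotate=90
```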
Note: It’s not possible to rotate the DSI display separately from the HDMI one with cmdline.txt. When you use DSI and HDMI simultaneously, they share the same rotation value.
This tutorial is an excerpt from the Raspberry Pi Documentation. Our extensive documentation covers all of our hardware, software, accessories, and microcontrollers as well as Raspberry Pi OS. If you still have questions, try posting on our forums, which are full of Raspberry Pi enthusiasts and some of the finest nerds on the planet.
In the latest issue of The MagPi magazine, Andrew Gregory speaks to Senior Principal Hardware Engineer Dominic Plunkett about how the pieces of the Raspberry Pi Compute Module 5 puzzle came together. Read their conversation to learn more about the design process and the sort of products companies are building with CM5.
The MagPi: What’s changed between CM4 and CM5?
Dominic Plunkett: CM5 takes all of the goodness of Raspberry Pi 5 and puts it on the Compute Module. So we’ve got the BCM2712 Broadcom processor used on Raspberry Pi 5. We’ve got our I/O processor, RP1. That’s a whole extra chip on the board compared with CM4, and so that required a lot of effort to get it on there.
I’d set myself the challenge that the central processor wouldn’t move, so that anyone who has used a CM4 with any sort of heatsinking would be able to use the same setup with CM5. That gave me a huge challenge to try and get the RP1 on the board – for weeks it was hanging off the edge of the board, but eventually I managed to squeeze up the bits and get all the electronics on there correctly.
Want to make your own modified CM5 I/O board? Install KiCad, download the design files, and get cracking!
Compute Module 5 is basically a Raspberry Pi 5 without the connectors, so what’s stopping you from just taking Raspberry Pi 5 and sort of snipping off the bits of the PCB with the connectors on?
I can do exactly that, but it won’t be as small. Compute Module is significantly smaller than Raspberry Pi 5, and we also wanted to add things like on-board eMMC, so there’s extra technology to squeeze into the same area as Compute Module 4. In theory, yes, all you’re doing is cutting off the connectors, but there’s a lot of work to make that happen correctly.
So the challenge is to keep the same form factor as CM4?
Yes. It was possible to change the form factor, but that was something that I didn’t want to do, because that potentially affects backward compatibility. You could probably change form factor in small ways that won’t affect many people, but the second you make a change, you’re going to affect somebody.
Apart from the physical change in the shape of the heatsinking of the main processor, it is basically the same form factor. Some of the parts have moved on the board, but they shouldn’t affect end users.
But electrically, there have had to be some changes, because you’re trying to add new features. So there are some differences which means that it’s not 100% compatible. But for most people it will be a drop-in replacement, and we’re already seeing that people are using it within setups that were designed for CM4 with no problems.
We’ve added new features such as USB 3.0 that won’t work when CM5 is plugged into a carrier board designed for CM4, because CM4 didn’t have USB 3.0. That’s life.
If you want something 100% compatible, stay with CM4; CM4 is still in production and will remain in production for a number of years – 2030-something, and it may well be that we extend it beyond that so it remains available.
Compute Module 5
Raspberry Pi 5 16GB
To fit all of Raspberry Pi 5’s goodness in a much smaller footprint, the Compute Module 5 PCB has had to go to ten copper layers rather than the six on Raspberry Pi 5
So if a manufacturer wants to get the USB 3.0 functionality out of Compute Module 5, they either have to upgrade to the new carrier board, or design their own electronics, right?
Indeed. The Compute Module is designed for people who want to design their own board. My main aim for both CM4 and CM5 was to absorb as many of the bits that you need into the CM module, so all you need to do is put connectors on your board. So if you look at the Compute Module 5 IO Board, there is very little on there apart from connectors. We’re not talking difficult electronics on there. And that was the whole aim. We do the CM5 IO Board in KiCad, which is a freely downloadable CAD system, and the design files for the CM5 IO Board are freely available, so you can take the files, delete the bits you don’t want, move things around however you want, and design your own board.
What were the challenges in shrinking the functionality of Raspberry Pi 5 onto the CM5 shape?
It was the density, and it was getting RP1 onto the board – RP1 is actually a small chip, but as a proportion of the board, it’s made the electronics quite a bit denser.
So getting it onto the board sensibly was hard because there’s a lot of I/O – it’s our I/O chip, so there’s the USB 3.0 pairs that come out of there. There’s the MIPI pairs; the Ethernet comes out of it via a PHY. And then there’s all the PCIe to get into it, and all the GPIO to come out of it. So that area of the board is very dense, and it took a long time to be able to work out how to make it all fit.
The CM5 itself is now a ten-layer circuit board (Raspberry Pi 5 has six layers). So there’s ten layers of copper inside it, with quite a lot of ground planes, because all of these high-speed signals like USB 3.0 and PCIe have to be electrically matched on the circuit board. So you’ve got to do some quite accurate routing of the traces to make sure you get good signal integrity across the board.
The edge of the RP1 chip, which is on the end of the board, has all the USB 3.0 signals coming off. They can’t come out because there’s no board space, so they have to go down into the board and then be routed on an inner layer of the board. And so that’s quite dense at that corner of the board. And then you’re routing them on the inner layers. And you’ve also got the MIPI pairs in another layer, and then you’ve got Ethernet on the bottom layer. So there are a lot of signals trying to cross each other and route out and take up the same sort of space, and so you’re just trying to keep everything in three dimensions correctly spaced apart with the correct copper reference planes in the board there.
It took a while to work out with our board manufacturers just how it was going to work. And in the end, we actually made the circuit board 40 microns thicker than CM4 to make all the electric impedances correct. That extra thickness then allowed me to get the next part of the puzzle solved.
It’s a big puzzle-solving exercise that just requires a lot of juggling and a lot of looking at and working on it. It’s quite a dense little circuit board, this; it’s complex, but once you’ve sat at it for a couple of weeks, you start to get a feel of where things are happening, where things are dense… I usually concentrate on the hard bits first, so I’ll do a bit, then I’ll get to a point where I think, ‘Oh, I’m pretty sure I know how that area is going to route out now.’ So then I’ll go and do the next hardest bit, and I’ll come and finish that off once I’m sure I can get all the hard bits done, because if I can’t get the hard bits done, then I have to make a decision of what to change.
Was there anything that you were forced to leave off in the process of shrinking the goodness of Raspberry Pi 5 into the smaller size of Compute Module 5?
Very early on, we had an internal discussion about some of the signals, because we’ve got the 200-pin connectors and we knew we were going to have to change some signals there, as some of the signals don’t exist in the new world. So that freed up some pins. But then we had more signals that we wanted to put on the pins than there were pins available, and we had to decide what features were going to be included. So Raspberry Pi 5 has two USB 2.0 ports on the right-hand side, and they got left off. There was no signal space for those two USB 2.0 ports, so they don’t exist on CM5.
Some people will find that they would like some extra USB ports, but we have to balance and try and get a good product for everybody, and not just one person or one group of people. So the key thing is to make sure it’s good for a number of people, and there was a good level of backwards compatibility for our main customers as well.
You’ve got more USB overall available than you had on CM4. So CM4 had four MIPI ports, but Raspberry Pi 5 onwards only supports two MIPI ports. So that frees up two MIPI ports that we could reallocate for USB 3.0. And that’s exactly what we did.
So if you do plug a CM5 into a CM4 board, and you use one of the MIPI ports, then that can no longer be used for one of the cameras and one of the displays. But that’s life. We have to make some choices. And yes, those choices will be hard for some people, and I fully acknowledge that some people will find the choices that we made were not right for them. But as I say, CM4 is still available, and CM4 was obviously the right product when they designed their product around the CM4 board. It’s not going to become obsolete. But a lot of people will find that they can just drop in CM5 and get more processing performance.
If you have the on-board eMMC, that is significantly faster: faster than an SD card, and significantly faster than the eMMC that CM4 had. So we’ve made some other improvements as well. There’s more memory available – in future there’ll be a 16GB version.
There’s no 1GB version any more – if someone came along with an order for a few million of them I’m sure we’d consider it, but at the moment there isn’t going to be a 1GB version. In part that’s the inevitable march of progress. It’s also that we already have loads of products on the books, and we have to be rational and not overload ourselves with loads of different products that are just going to sit in inventory.
Where are Compute Modules turning up? What sort of products are companies building with them?
They get into all sorts of places because they are small, efficient compute power for people. And it becomes easy just to add your own I/O to your system, and you get all the goodness of Raspberry Pi. And because it uses the same software, you can do all your development on a Raspberry Pi 5 in advance of creating your custom board.
Read The MagPi #149
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. Plus you can get it via our app on Android or iOS.
Last but not least, you can subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Towards the end of last year, we band of merry social media folk had the genius idea to learn more about robotics — so off to the Pi Towers Maker Lab we went, armed with a dream and next to zero idea how difficult building a giant robotic LEGO figure would be.
Under the hood
So, how does our gigantic bright yellow LEGO figure do all their tricks?
Three servos are hidden inside the body: one to move the head and one in charge of each arm. A SparkFun Servo pHAT physically moves the three servos (the HAT is capable of controlling many more than three if you need it to). Running the show is a Raspberry Pi Zero 2 W connected to a Raspberry Pi Power Supply for juice.
Raspberry Pi Zero 2 W
As well as powering the servos, Raspberry Pi Zero 2 W runs a basic web server that allows you to access the LEGO figure from anywhere on your network. You’ll be able to play around with the individual buttons our Maker in Residence programmed to make the figure dance, lift its right or left arm, or swivel its head.
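The setup described above can be sketched as a minimal web server using only the Python standard library. This is an illustrative sketch, not the actual project code: the routes, channel numbers, and move_servo() placeholder are assumptions, and a real build would call the SparkFun Servo pHAT driver instead.

```python
# Minimal sketch of a web server that maps URLs to servo movements.
# move_servo() is a placeholder for the real SparkFun Servo pHAT driver call;
# routes and channel numbers are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVOS = {"/head": 0, "/left-arm": 1, "/right-arm": 2}  # assumed channel map

def move_servo(channel: int, degrees: float) -> float:
    # A real implementation would command the pHAT here (e.g. over I2C)
    return degrees

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in SERVOS:
            move_servo(SERVOS[self.path], 90)  # swing the servo to 90 degrees
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"moved")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

def run(port=8080):
    # Serve on all interfaces so the figure is reachable across the network
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Calling run() would then let any browser on the network trigger a head swivel or an arm lift by visiting an address such as http://&lt;pi-address&gt;:8080/head.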
Fluorescent filament
Every limb of the LEGO figure was printed on our trusty Ultimaker S5 printer and assembled by hand. The embossed Raspberry Pi logo on the t-shirt was achieved with an imported CAD file, before the whole t-shirt was 3D-printed as one piece. The eyes, eyebrows, and mouth were cut out on a Cricut machine and carefully stuck on to the head to achieve the appropriate level of jaunt.
Not an ominous greeter at all
Hard hands and servo schooling
Despite being lightly terrified by the technical difficulty of this project at the start, we found some really simple instructions from SparkFun on how to use their servos, and then it all came pretty easily. (Having our Maker in Residence on hand to translate for us laypeople may have been another invaluable bit of support. Probably. We’d have got there on our own in the end though, I’m sure…).
The LEGO figure’s hands also proved pretty tricky: they were hard to orientate for printing, so needed lots of supports. Luckily, snapping off 3D-printed supports is a favourite pastime of mine, so, silver linings.
Our Principal Software Engineer Graham Sanderson explains all the new boot path functionality supported on our RP2350 chip. This section from Graham was part of an in-depth Raspberry Pi Pico 2 feature including expert insights from our engineering team. It first appeared in the bumper Pico 2 launch issue of The MagPi, which you can download to read.
“When you power up the chip, you have to run some software, but the program that the user installs, their firmware, is stored in flash, so you have to run some code to be able to read flash before you can do anything else. That code is part of the boot ROM, so named because it runs at boot, and it’s stored in ROM.”
“On RP2040, the boot path is fairly simple — there is a program in flash, you go look for it, and then you run it. The rest of the boot ROM space is taken up by things like floating point math support, a variety of other useful runtime APIs, and of course the UF2 bootloader that enables the user to drag and drop programs onto the Pico, mounted as a USB drive, and make them run.”
“The RP2350 boot path supports a bunch of new functionality, with support for RISC-V as well as Arm processors, and particularly completely new support for secure boot on Arm. This requires us to verify that the program stored in flash is trusted to run on RP2350, by verifying a cryptographic signature. Additionally, we have hardened the boot code with the goal of making it impossible to run any user code that is not correctly signed, even in the hands of an attacker.”
“The RP2350 boot ROM also supports dividing the flash into multiple partitions so that you can keep multiple copies of a binary, or keep shared data or resources separate from the main program. Our focus, as ever, has been to make things powerful yet simple; therefore you can set up secure boot and have two (A/B) partitions, but still just drag and drop a UF2 to update your software. Dropping the UF2 will automatically target the partition that isn’t currently in use, before switching at the next boot, thus avoiding situations where you have only half written the program. If your new program version is not correctly signed, the old version will continue to boot. Support for A/B partitions makes it much easier for user code to perform over-the-air updates, for example reading a new version of itself from a web server: it can now write that new copy to an unused area of flash, rather than worrying about updating the part of flash it is itself running from!”
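The A/B update policy Graham describes can be sketched as a toy model (an illustration of the policy only, not the actual boot ROM logic):

```python
def write_update(partitions, active, image):
    """Write a new image to whichever of the two partitions is NOT active,
    so the currently running copy is never overwritten mid-update."""
    inactive = 1 - active
    partitions[inactive] = image
    return inactive


def select_boot(partitions, preferred, is_signed):
    """Prefer the freshly written partition; fall back to the other one if
    the new image fails its signature check, so a bad update never leaves
    the device unbootable."""
    if is_signed(partitions[preferred]):
        return preferred
    fallback = 1 - preferred
    if is_signed(partitions[fallback]):
        return fallback
    return None  # nothing bootable
```

For example, flashing an unsigned image to the inactive slot leaves the device booting the old, signed copy at the next reset; flashing a correctly signed one switches over.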
“Let’s not forget about the Raspberry Pi Pico SDK either. This has had a lot of enhancements, bug fixes, and new features, and of course, now supports both RP2040 and RP2350, as well as both Arm and RISC-V. Nonetheless, most people should only need to recompile their RP2040 program for RP2350 with minor, if any, changes. I can’t wait to see what people do with the new chip.”
Did you miss the results of the RP2350 Hacking Challenge? Read all about the winners, what they did, and why Eben Upton chose to achieve security through transparency.
We launched our second-generation microcontroller, RP2350, in August last year. Building on the success of its predecessor, RP2040, this adds faster processors, more memory, lower power states, and a security model built around Arm TrustZone for Cortex-M. Alongside our own Raspberry Pi Pico 2 board, and numerous partner boards, RP2350 also featured on the DEF CON badge, designed by Entropic Engineering, with firmware by our friend Dmitry Grinberg.
All chips have vulnerabilities, and most vendors’ strategy is not to talk about them. We consider this to be suboptimal, so instead, we entered into the DEF CON spirit by offering a one-month, $10,000 prize to the first person to retrieve a secret value from the one-time-programmable (OTP) memory on the device. Our aim was to smoke out weaknesses early, so that we could fix them before RP2350 became widely deployed in secure applications. This open approach to security engineering has been generally well received: call it “security through transparency”, in contrast with the “security through obscurity” philosophy of other vendors.
Nobody claimed the prize by the deadline, so in September we extended the deadline to the end of the year and doubled the prize to $20,000. Today, we’re pleased (ish) to announce that we received not one but four valid submissions, all of which require physical access to the chip, with varying degrees of intrusiveness. Outside of the contest, Thomas “stacksmashing” Roth and the team at Hextree also discovered a vulnerability, which we describe below.
So, without further ado, the winners are:
“Hazardous threes” – Aedan Cullen
RP2350’s antifuse OTP memory is a security-critical component: security configuration bits are stored in OTP and read early in the reset process. A state machine called the OTP PSM is responsible for these reads. Unfortunately, it turns out that the OTP PSM has an exploitable weakness.
The antifuse array is powered via the USB_OTP_VDD pin. To protect against power faults, the PSM uses “guard reads”: reads of known data very close to reads of security-critical data. A power fault should cause a mismatch in the known guard data, indicating that the associated security-critical read is untrustworthy. We use a single guard word: 0x333333.
However, the OTP may retain the last sensed read data during a power fault, and subsequent reads return the most-recently-read data from when power was good. This is not itself a flaw, but it interacts poorly with the choice of guard word. If USB_OTP_VDD is dropped precisely after a guard read has occurred, 0x333333 will be read until power is restored. Therefore, an attacker can overwrite security-critical configuration data with this value.
Image courtesy of Aedan Cullen
If the CRIT0 and CRIT1 words are replaced by 0x333333 during the execution of the OTP PSM, the RISCV_DISABLE and ARM_DISABLE bits will be set, and the DEBUG_DISABLE bit will be cleared. ARM_DISABLE takes precedence, so the chip leaves reset with the RISC-V cores running and debugging allowed, regardless of the actual configuration written in the fuses. Dumping secret data from the OTP is then straightforward.
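The interaction between the guard word and the retained-read behaviour can be illustrated with a toy model (a simplification for exposition, not RP2350’s actual OTP PSM):

```python
GUARD = 0x333333  # the single guard word used by the OTP PSM


class FaultyOTP:
    """Toy model of the retention behaviour described above: while power is
    faulted, reads keep returning the last word sensed when power was good."""

    def __init__(self, words):
        self.words = words
        self.power_good = True
        self.last = 0

    def read(self, addr):
        if self.power_good:
            self.last = self.words[addr]
        return self.last  # faulted reads repeat the retained value


def guarded_read(otp, addr):
    """Only trust the data read if the adjacent guard read matches."""
    if otp.read("guard") != GUARD:
        raise RuntimeError("power fault detected, data untrustworthy")
    return otp.read(addr)
```

Dropping power immediately after a genuine guard read leaves 0x333333 latched: the guard check still passes, yet the security-critical word also reads back as 0x333333 — exactly the substitution the attack exploits.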
More information can be found in Aedan’s GitHub repository here, and in his Chaos Communication Congress presentation here.
No mitigation is currently available for this vulnerability, which has been assigned erratum number E16. It is likely to be addressed in a future stepping of RP2350.
USB bootloader single-instruction fault with supply-voltage injection – Marius Muench
A foundational security feature of RP2350 is secure boot, which restricts the chip to only run code signed with a specific private key. If an attacker can bypass or break out of secure boot, they can run their own unsigned code, which can potentially dump secret data from the OTP.
Marius discovered a weakness in the boot ROM’s reboot API. This supports several different reboot modes, one of which is REBOOT_TYPE_PC_SP, which reboots and starts execution with a specific program counter and stack pointer. This can only be triggered from secure firmware already running on the chip, but if an attacker could trigger this boot mode externally, and with controlled parameters, we would start executing code at an attacker-supplied address – without verifying the signature of the code!
But how can one enter this boot mode, if it is only accessible to signed and verified firmware?
The answer (of course) is fault injection. By issuing a normal reboot command to the USB bootloader, and injecting a fault (in this case by glitching the supply voltage) so that an instruction is skipped at just the right time, it is possible to trick the reboot API into believing that REBOOT_TYPE_PC_SP was requested. If an attacker has loaded malicious code into RAM beforehand, this code can be executed and used to extract the secret.
An interesting aspect of this attack is that the code for accepting the reboot command is actually hardened against fault injection. Unfortunately, the function implementing the reboot logic itself assumes that the incoming parameters (including the requested boot mode) are sanitised. Due to an unlucky arrangement of instructions emitted by the compiler, injecting a fault which skips one out of two very specific instructions confuses the chip into rebooting to the hazardous boot type.
Marius says: “While this break may seem straightforward in retrospect, reality is quite different. Identifying and exploiting these types of issues is far from trivial. Overall, this hacking challenge was a multi-month project for me, with many dead-ends explored along the way and countless iterations of attack code and setups to confirm or refute potential findings. Nonetheless, I had plenty of fun digging deep into the intricacies of the new RP2350 microcontroller, and I would like to thank Raspberry Pi and Hextree for hosting the challenge!”
Several effective mitigations are available against this attack, which has been assigned erratum number E20. The most precise mitigation is to set the OTP flag BOOT_FLAGS0.DISABLE_WATCHDOG_SCRATCH, which disables the ability to reboot to a particular PC/SP, and can be used wherever that functionality is not required by application code.
Signature check single-instruction fault with laser injection – Kévin Courdesses
Kévin discovered an exploitable weakness in the secure boot path, just after the firmware to be validated has been loaded into RAM, and just before the hash function needed for the signature check is computed. Injecting a single precisely timed fault at this stage can cause the hash function to be computed over a different piece of data, controlled by the attacker. If that data is a valid signed firmware, the signature check will pass, and the attacker’s unsigned firmware will run!
Image courtesy of Kévin Courdesses
The most common method of introducing faults, seen in Marius’s attack, is to briefly pull down the supply voltage, introducing a “glitch” which causes the digital logic in the chip to misbehave. RP2350 contains glitch detector circuitry, which is designed to spot most voltage glitches and to purposely halt the chip in response. To permit the injection of faults without triggering the glitch detectors, Kévin built a custom laser fault injection system; this applies a brief pulse of laser light to the back of the die, which has been exposed by grinding away part of the package. And, although several technical compromises were necessary to keep the setup within a limited budget, it worked!
More information can be found in Kévin’s paper here.
No mitigation is available for this attack, which has been assigned erratum number E24. It is likely to be addressed in a future stepping of RP2350.
Extracting antifuse secrets from RP2350 by FIB/PVC – IOActive
OTP memories based on antifuses are widely used for storing small amounts of data (such as serial numbers, keys, and factory trimming) in integrated circuits because they are inexpensive and require no additional mask steps to fabricate. RP2350 uses an off-the-shelf antifuse memory block for storing secure boot keys and other sensitive configuration data.
Antifuses are widely considered to be a “high security” storage medium, meaning that they are significantly more difficult for an attacker to extract data from than other types of memory, such as flash or mask ROM. However, with this attack, IOActive has (almost) demonstrated that data bits stored in the RP2350 antifuse memory array can be extracted using a well-known semiconductor failure analysis technique: passive voltage contrast (PVC) with a focused ion beam (FIB).
Image courtesy of IOActive
The current form of the attack recovers the bitwise OR of two physically adjacent memory cells sharing common metal-1 contacts. However, with some per-bit effort it may be possible for an attacker to separate the even/odd cell values by taking advantage of the circuit-editing capabilities of the FIB.
IOActive has not yet tested the technique against other antifuse IP blocks or on other process nodes. Nonetheless, it is believed to have broad applicability to all antifuse-based memories. Dr Andrew Zonenberg, who led the technical team on this project along with Antony Moor, Daniel Slone, Lain Agan, and Mario Cop, commented: “Our team found a unique attack vector for reading data out of antifuse memory, which we intend to further develop. Those who rely on antifuse memory for confidentiality should immediately reassess their security posture.”
The suggested mitigation for this attack is to employ a “chaffing” technique, storing either {0, 1} or {1, 0} in each pair of bit cells, as the attack in its current form is unable to distinguish between these two states. To guard against a hypothetical version of the attack which uses circuit editing to distinguish between these states, it is recommended that keys and other secrets be stored as larger blocks of chaffed data, from which the secret is recovered by hashing.
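The chaffing idea can be sketched in a few lines (illustrative only; a real implementation would operate on OTP bit cells, and would hash larger chaffed blocks as described above):

```python
def chaff_encode(bits):
    """Store each secret bit b as the cell pair (b, 1-b).

    Every pair then contains exactly one programmed antifuse, so the
    bitwise OR of the two cells -- all the PVC attack in its current
    form recovers -- is always 1, revealing nothing about the secret.
    """
    return [(b, 1 - b) for b in bits]


def chaff_decode(pairs):
    """A legitimate reader with per-cell access recovers the first cell."""
    return [a for a, _ in pairs]


def pvc_readout(pairs):
    """What the current attack recovers: the OR of each adjacent pair."""
    return [a | b for a, b in pairs]
```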
Glitch detector evaluation, and OTP read double-instruction fault with EM injection – Hextree
We commissioned the Hextree team to evaluate the secure boot process, and the effectiveness of the redundancy coprocessor (RCP) and glitch detectors. They found that at the highest sensitivity setting, the glitch detectors can detect many voltage glitches; however, the rate of undetected glitches is still high enough to make attacks feasible with some effort.
The majority of their work focused on electromagnetic fault injection (EMFI), which delivers a high-voltage pulse to a small coil on top of the chip. This creates an electromagnetic field which collapses into the chip, allowing the injection of very localised faults which do not disturb the glitch detectors. Testing yielded multiple security-relevant results, notably that it is possible to corrupt values read from OTP by injecting faults very early in the boot process, and that random delays provided by the RCP are susceptible to side-channel measurements.
The team also found a path to bypass an aspect of the OTP protection of the chip using a double fault: the s_varm_crit_nsboot function, which locks down the OTP permissions prior to entering BOOTSEL mode, has two instructions which, when both are disturbed by precisely timed faults, can prevent an OTP page from being correctly locked, effectively allowing the user to read out and write to the OTP even when the chip configuration forbids this. The double fault can be triggered with reasonable reliability by EMFI.
Several effective mitigations are available against this attack, which has been assigned erratum number E21. The attack occurs when the device is running non-secure bootloader code, and the OTP keys are extracted via the PICOBOOT interface. The USB bootloader can be disabled by setting the OTP flags BOOT_FLAGS0.DISABLE_BOOTSEL_USB_PICOBOOT_IFC and BOOT_FLAGS0.DISABLE_BOOTSEL_USB_MSD_IFC, which mitigates this vulnerability at the cost of removing the ability to update firmware on the device over USB.
Image courtesy of NewAE and Fritz
We’d also like to express our gratitude to Colin O’Flynn and his team at NewAE for collaborating with both us and Thomas Roth / Hextree on this advanced silicon security research, as well as supporting us with their fantastic ChipWhisperer kit.
What’s next?
We’d like to thank everyone who participated in the challenge. While the rules specify a single $20,000 prize for the “best” attack, we were so impressed by the quality of the submissions that we have chosen to pay the prize in full for each of them.
As expected, we’ve learned a lot. In particular, we’ve revised downward our estimates of the effectiveness of our glitch detection scheme, of the difficulty of reliably injecting multiple faults even in the presence of timing uncertainty, and of the cost and complexity of laser fault injection. We’ll take these lessons into account as we work to harden future chips, and in anticipated future steppings of RP2350.
And while this hacking challenge is over, another one is about to start. As a component of the broader RP2350 security architecture, we’ve been working to develop an implementation of AES which is hardened against side-channel attacks (notably differential power analysis), and we’ll be challenging you to defeat it. Check back next week for more details.
All vendors have security vulnerabilities in their chips. We are unusual because we talk about them, and aim to fix them, rather than brushing them under the carpet. Security through transparency is here to stay.
We first announced Raspberry Pi 5 back in the autumn of 2023, with just two choices of memory density: 4GB and 8GB. Last summer, we released the 2GB variant, aimed at cost-sensitive applications. And today we’re launching its bigger sibling, the 16GB variant, priced at $120.
Why 16GB, and why now?
We’re continually surprised by the uses that people find for our hardware. Many of these fit into 8GB (or even 2GB) of SDRAM, but the threefold step up in performance between Raspberry Pi 4 and Raspberry Pi 5 opens up use cases like large language models and computational fluid dynamics, which benefit from having more storage per core. And while Raspberry Pi OS has been tuned to have low base memory requirements, heavyweight distributions like Ubuntu benefit from additional memory capacity for desktop use cases.
The optimised D0 stepping of the Broadcom BCM2712 application processor includes support for memories larger than 8GB. And our friends at Micron were able to offer us a single package containing eight of their 16Gbit LPDDR4X die, making a 16GB product feasible for the first time.
Carbon Removal Credits
We’re proud of the low environmental impact of Raspberry Pi computers. They are small and light, which translates directly into a small upfront carbon footprint for manufacturing, logistics and disposal. With an idle power consumption in the 2–3W range, and a fully loaded power consumption of less than 10W, replacing a legacy x86 PC with a Raspberry Pi typically results in a significant reduction in operating power consumption, and thus ongoing carbon footprint.
But while our upfront carbon footprint is small, it is not zero. So today, we’re launching Raspberry Pi Carbon Removal Credits, priced at $4, giving you the option to mitigate the emissions associated with the manufacture and disposal of a modern Raspberry Pi.
How does it work?
We commissioned Inhabit to conduct an independent assessment of the carbon footprint of manufacturing, shipping, and disposing of a Raspberry Pi 4 or 5, which came to 6.5kg of CO₂ equivalent. When you buy a Raspberry Pi Carbon Removal Credit from one of our Approved Resellers, we pay our friends at UNDO Carbon to begin capturing that quantity of CO₂ from the atmosphere using enhanced rock weathering (ERW) technology.
It’s that simple.
What is enhanced rock weathering?
As rain falls through the atmosphere, it combines with CO₂ to form carbonic acid. When this weak acid falls on mountains, forests and grassland, the CO₂ interacts with rocks and soil, mineralises, and is safely stored in solid carbonate form. The natural process of weathering already accounts for the removal of one billion tonnes of CO₂ from the atmosphere every year.
ERW accelerates this natural process by spreading crushed silicate rock (in our case, basalt) on agricultural land, increasing the surface area of the rock and therefore increasing its contact with CO₂. Overall, this reduces the timescales involved from millions of years to mere decades. Once the reaction takes place, the CO₂ is permanently locked away for 100,000+ years.
In addition to capturing CO₂, spreading basalt on agricultural land also brings with it significant co-benefits. Silicate rocks are mineral-rich; as they weather, they release nutrients such as magnesium, calcium and potassium, improving soil health and reducing the need for fertilisers. Trials with the University of Newcastle have shown an increase in crop yield following the application of crushed basalt rock. In addition, the alkaline bicarbonate ions captured during the ERW process are eventually washed out to sea, where they help to deacidify our oceans.
Generally, when you buy carbon offsets, you are paying for carbon capture which has taken place in the past (for example by planting and growing trees). When you buy Raspberry Pi Carbon Removal Credits, UNDO spreads basalt now, which then captures the rated quantity of carbon over, roughly, the next twenty years.
We’ve chosen ERW because we believe it’s a more rigorous, scalable, verifiable approach to carbon capture than traditional approaches like planting (or, more ridiculously, agreeing not to cut down) trees: quite simply, it’s our best shot at drawing down a material fraction of humanity’s carbon emissions in our lifetimes. But, as it is a relatively new technology, there is no pool of offsets corresponding to historical capture available for us to purchase.
So, we’re doing the next best thing: paying UNDO to start an irrevocable process of carbon capture which will continue over the next two decades and beyond. We hope that our embrace of ERW will help raise awareness of this world-changing technology, and perhaps inspire others to take their first steps with it.
Extracting an arresting array of sounds from a guitar became a mission for keen coder Gary. In the latest issue of The MagPi, he tells Rosie Hattersley how he built a Raspberry Pi-based expression pedal.
The MIDI Gesture Controller is a sort of musical expression pedal that rotates and rolls around a ball joint, providing six degrees of freedom
Guitarist and keen coder Gary Rigg says he always thought floor-based controllers — particularly expression pedals — should have a more prominent role. They are usually operated by pressing your foot down for a subtle or more obvious wah-wah or delay effect, but only in a single direction, also known as one degree of freedom (DOF).
You use your foot to “control the pitch of the pedal, and the pitch determines the parameter value.” Gary reasoned that adding degrees of freedom such as yaw (rotation around an axis) and roll to an expression pedal could extend its pitch parameters. He began pondering what new sounds could be achieved by redesigning how the humble foot pedal was operated. The result is the MIDI Gesture Controller, a Raspberry Pi Pico-based expression pedal that can control three parameters, “which ought to lead to more control while playing live.”
The Gesture Controller can be plugged into a PC as a MIDI control device and works with synthesizers and samplers
New musical direction
Gary hit upon a ball and socket setup, since these move through three or more planes of motion in multiple directions. He soon settled on a desk-based rotating puck design, realising that since the expression pedal did not necessarily need to be foot-operated, it could have several additional uses: “it works as well as a hand controller as a foot controller, so could be used for DJs or in a studio.” Camera controllers, stage lighting, and other non-musical applications also came to mind. Gary points out that MIDI is simply a protocol and could be swapped for something else, such as an HID controlling gameplay, for example. Sensor values are sent down a serial line, so the Gesture Controller could theoretically be used in “any situation needing a multi-axis controller.”
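At the heart of any such controller is the mapping from each axis angle onto MIDI’s 0–127 controller-value range. A minimal sketch of that step (the ±45° travel and the CC numbers here are placeholder assumptions, not Gary’s actual mapping):

```python
def angle_to_cc(angle_deg, lo=-45.0, hi=45.0):
    """Map a tilt angle in degrees onto the 0-127 MIDI CC value range,
    clamping at the assumed mechanical limits of the puck."""
    angle = max(lo, min(hi, angle_deg))
    return round((angle - lo) / (hi - lo) * 127)


def gesture_to_cc_messages(pitch, yaw, roll, ccs=(1, 2, 3)):
    """One (controller_number, value) pair per axis, ready to be wrapped
    in ControlChange messages by a MIDI library. The CC numbers are
    arbitrary placeholders."""
    return [(cc, angle_to_cc(a)) for cc, a in zip(ccs, (pitch, yaw, roll))]
```

Each loop iteration on the Pico would read the IMU, run this mapping, and emit only the CCs whose values changed, to keep MIDI traffic down.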
Give it a try
Gary uses Python regularly for his job as a software developer for websites and mobile devices. In “paid work land” he’s used Raspberry Pi for IoT projects to control lights and smart devices, in fire alarm panels, and alongside NFC cards and in MQTT Edge devices. As a hobbyist, Gary has created Raspberry Pi-based retro games consoles, set up sensors, and designed a Ghostbusters PKE Meter, so he is fairly confident with prototyping and seeing diverse projects through to completion.
Prototyping the MIDI Gesture Controller with Raspberry Pi Pico, which runs CircuitPython code
He made use of Adafruit’s MIDI library, and says programming in CircuitPython using Thonny IDE on Raspberry Pi Pico made a lot of sense: “an incredible bit of kit as a low-cost microcontroller, and being in Python-land feels like home.” He also found it to be the best value for money, and the most reliable board for his project. Other components — including the 6DOF AHRS IMU sensor, arcade joystick ball, 3D printer, and neoprene rubber for grip — were bought from The Pi Hut and other stores. The wiring setup was straightforward enough, with the IMU (inertial measurement unit) and yaw reset button connected to Raspberry Pi Pico.
Despite Gary’s years of experience as a computer scientist and software engineer, the MIDI Gesture Controller project took him several weeks to complete and provided plenty of challenges. Getting a smooth motion on the ball joint was particularly difficult. Having designed the casing in CAD software, Gary says he must have 3D-printed nearly 20 variants to get it right. Another challenge involved getting actual pitch, yaw, and roll values from the IMU. “It took a bit of effort, as did calibrating the ranges and limits of minimums and maximums.”
Gary’s YouTube video amply demonstrates the extra sound possibilities his Gesture Controller can generate
Gary first contemplated a multi-DOF expression pedal a few years ago; now the MIDI Gesture Controller is up and running, and he continues to tweak and improve it, planning to add a few extra features. He always likes to have a project on the go, is unafraid to try things, and is a big advocate for experimenting with designs in Tinkercad. A few years ago, he launched a Raspberry Pi-based Wi-Fi blocker that caught the press’ attention. The Kickstarter campaign wasn’t successful, but it was a fun project, and he still owns the trademark for a Wi-Fi ‘notspot’.
Season’s greetings! I set this up to auto-publish while I’m off sipping breakfast champagne, so don’t yell at me in the comments — I’m not really here.
I hope you’re having the best day, and if you unwrapped something made by Raspberry Pi for Christmas, I hope the following helps you navigate the first few hours with your shiny new device.
Power and peripherals
If you’ve received, say, a Raspberry Pi 5 or 500 on its own and have no idea what you need to plug it in, the product pages on raspberrypi.com often feature sensible suggestions for additional items you might need.
Scroll to the bottom of the Raspberry Pi 5 product page, for example, and you’ll find a whole ‘Accessories’ section featuring affordable things specially designed to help you get the best possible performance from your computer.
You can find all our hardware here, so have a scroll to find your particular Christmas gift.
Dedicated documentation
There are full instructions on how everything works if you know where to look. Our fancy documentation site holds the keys to all of your computing dreams.
Your one-stop shop for all your Raspberry Pi questions
If all the suggestions above aren’t working out for you, there are approx. one bajillion experts eagerly awaiting your questions on the Raspberry Pi forums. Honestly, I’ve barely ever seen a question go unanswered. You can throw the most esoteric, convoluted problem out there and someone will have experienced the same issue and be able to help. Lots of our engineers hang out in the forums too, so you may even get an answer direct from Pi Towers.
Be social
Outside of our official forums, you’ve all cultivated an excellent microcosm of Raspberry Pi goodwill on social media. Why not throw out a question or a call for project inspiration on our official Facebook, Threads, Instagram, TikTok, or “Twitter” account? There’s every chance someone who knows what they’re talking about will give you a hand.
Also, tag us in photos of your festive Raspberry Pi gifts! I will definitely log on to see and share those.
Again, we’re not really here, it’s Christmas!
I’m off again now to catch the new Wallace and Gromit that’s dropping on Christmas Day (BIG news here in the UK), but we’ll be back in early January to hang out with you all in the blog comments and on social.
Glad tidings, joy, and efficient digestion wished on you all.
This #MagPiMonday, we take a look at Md. Khairul Alam’s potentially life-changing project, which aims to use AI to assist people living with a visual impairment.
Technology has long had the power to make a big difference to people’s lives, and for those who are visually impaired, the changes can be revolutionary. Over the years, there has been a noticeable growth in the number of assistive apps. As well as JAWS — a popular computer screen reader for Windows — and software that enables users to navigate phones and tablets, there are audio-descriptive apps that use smart device cameras to read physical documents and recognise items in someone’s immediate environment.
Understanding the challenges facing people living with a visual impairment, maker and developer Md. Khairul Alam has sought to create an inexpensive, wearable navigation tool that will free up the user’s hands and describe what someone would see from their own eyes’ perspective. Based around a pair of spectacles, it uses a small camera sensor that gathers visual information which is then sent to a Raspberry Pi 1 Model B for interpretation. The user is able to hear an audio description of whatever is being seen.
There’s no doubting the positive impact this project could have on scores of people around the world. “Globally, around 2.2 billion people are living with a visual impairment, and 90% of them come from low-income countries,” Khairul says. “A low-cost solution for people living with a visual impairment is necessary to give them flexibility so they can easily navigate and, having carried out research, I realised edge computer vision can be a potential answer to this problem.”
Cutting edge
Edge computer vision is potentially transformative. It gathers visual data from edge devices such as a camera before processing it locally, rather than sending it to the cloud. Since information is being processed close to the data source, it allows for fast, real-time responses with reduced latency. This is particularly vital when a user is visually impaired and needs to be able to make rapid sense of the environment.
The connections are reasonably straightforward: plug the Xiao ESP32S3 Sense module into a Raspberry Pi
For his project, Khairul chose to use the Xiao ESP32S3 Sense module which, aside from a camera sensor and a digital microphone, has an integrated Xtensa-based ESP32-S3R8 SoC, 8MB of flash memory, and a microSD card slot. This was mounted onto the centre of a pair of spectacles and connected to a Raspberry Pi computer using a USB-C cable, with a pair of headphones then plugged into Raspberry Pi’s audio out port. With those connections made, Khairul could concentrate on the project’s software.
As you can imagine, machine learning is an integral part of this project; it needs to accurately detect and identify objects. Khairul used Edge Impulse Studio to train his object detection model. This tool is well equipped for building datasets and, in this case, one needed to be created from scratch. “When I started working on the project, I did not find any ready-made dataset for this specific purpose,” he tells us. “A rich dataset is very important for good accuracy, so I made a simple dataset for experimental purposes.”
To help test the device, Khairul has been using an inexpensive USB-C portable speaker
Object detection
Khairul initially concentrated on six objects, uploading 188 images to help the model identify items such as chairs, tables, beds, and basins. The more images he could take of an object, the greater the accuracy — but it posed something of a challenge. “For this type of work, I needed a unique and rich dataset for a good result, and this was the toughest job,” he explains. Indeed, he’s still working on creating a larger dataset, and these things take a lot of time; but upon uploading the model to the Xiao ESP32S3 Sense, it has already begun to yield some positive results.
When an object is detected, the module returns the object’s name and position. “After detecting and identifying the object, Raspberry Pi is then used to announce its name — Raspberry Pi has built-in audio support, and Python has a number of text-to-speech libraries,” Khairul says. The project uses a free software package called Festival, written by the Centre for Speech Technology Research at the University of Edinburgh. This converts the text to speech, which can then be heard by the user.
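The pipeline from detection to speech is short: format the object’s label as a sentence and pipe it to Festival’s text-to-speech mode (`festival --tts` reads text on stdin). Here is a minimal sketch of that idea; the “detected” phrasing and the injectable `run` parameter are illustrative choices, not details from the project:

```python
import subprocess

# Announce a detected object's label through Festival's text-to-speech mode.
# `festival --tts` reads plain text from stdin and speaks it aloud.
# Building the command separately keeps the logic easy to inspect and test
# on a machine without Festival installed.

def announcement_command():
    return ["festival", "--tts"]

def announce(label, run=subprocess.run):
    # e.g. announce("chair") would speak "chair detected"
    run(announcement_command(), input=f"{label} detected\n".encode())
```

On the device itself, `announce()` would simply be called with whatever label the object detection model returns.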
A tidier solution will be needed — including a waterproof case — for real-world situations
For convenience, all of this is currently being powered by a small rechargeable lithium-ion battery, which is connected by a long wire to enable it to sit in the user’s pocket. “Power consumption has been another important consideration,” Khairul notes, “and because it’s a portable device, it needs to be very power efficient.” Since Third Eye is designed to be worn, it also needs to feel right. “The form factor is a considerable factor — the project should be as compact as possible,” Khairul adds.
Going forward
Third Eye is still at the proof-of-concept stage, and improvements are already being identified. Khairul knows that the Xiao ESP32S3 Sense will eventually fall short of his ambitions as the project expands; with a larger machine learning model proving necessary, Raspberry Pi is likely to take on more of the workload.
“To be very honest, the ESP32S3 Sense module is not capable enough to respond using a big model. I’m just using it for experimental purposes with a small model, and Raspberry Pi can be a good alternative,” he says. “I believe for better performance, we may use Raspberry Pi for both inferencing and text-to-speech conversions. I plan to completely implement the system inside a Raspberry Pi computer in the future.”
Other potential future tweaks are also stacking up. “I want to include some control buttons so that users can increase and decrease the volume and mute the audio if required,” Khairul reveals. “A depth camera would also give the user important information about the distance of an object.” With the project shared on Hackster, it’s hoped the Raspberry Pi community could also assist in pushing it forward. “There is huge potential for a project such as this,” he says.
The MagPi #149 out NOW!
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Dip your toes into the world of PIO on Raspberry Pi 5 using PIOLib
The launch of Raspberry Pi 5 represented a significant change from previous models. Building chips that run faster and use less power, while continuing to support 3.3V I/O, presents real, exciting challenges. Our solution was to split the main SoC (System on Chip) in two — the compute half, and the I/O half — and put a fast interconnect (4-lane PCIe Gen 3) between them. The SoC on Raspberry Pi 5 is the Broadcom BCM2712, and the I/O processor (which used to be known in the PC world as the ‘southbridge’) is Raspberry Pi RP1.
Along with all the usual peripherals — USB, I2C, SPI, DMA, and UARTs — RP1 included something a bit more interesting. One of RP2040’s distinguishing features was a pair of PIO blocks, deceptively simple bits of Programmable I/O capable of generating and receiving patterns on a number of GPIOs. With sufficient cunning, users have been able to drive NeoPixel LEDs and HDMI displays, read from OneWire devices, and even connect to an Ethernet network.
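What makes PIO “deceptively simple” is that each state machine just steps through tiny instructions with cycle-exact timing, independently of the CPU. The following toy Python model illustrates that execution idea only; it is not real PIO assembly, and the instruction format here is invented for the sketch:

```python
# Toy model of a PIO-style state machine: each "instruction" drives a pin
# level for one cycle plus an optional number of stall cycles, and the
# program wraps back to the start, like PIO's .wrap directive. Real PIO
# programs are written in PIO assembly and run on the hardware.

def run_state_machine(program, cycles):
    """program: list of (pin_level, delay) tuples; returns one sample per cycle."""
    samples = []
    pc = 0
    while len(samples) < cycles:
        level, delay = program[pc]
        # The instruction takes 1 + delay cycles, pin held at `level` throughout.
        for _ in range(1 + delay):
            if len(samples) == cycles:
                break
            samples.append(level)
        pc = (pc + 1) % len(program)  # wrap to the start of the program
    return samples

# A 50% duty-cycle square wave: high for 2 cycles, low for 2 cycles.
square = [(1, 1), (0, 1)]
print(run_state_machine(square, 8))  # [1, 1, 0, 0, 1, 1, 0, 0]
```

The hardware version of this is exactly why PIO can hold WS2812 or HDMI timings without the CPU's involvement: the per-cycle behaviour is fixed by the program, not by software scheduling.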
RP1 is blessed with a single PIO block — almost identical to each of the two in RP2040 — with four state machines and a 32-entry instruction memory. However, apart from a few hackers out there, it has so far lain dormant; it would be great to make this resource available to users for their own projects, but there’s a catch.
Need for speed
The connection between RP1’s on-board Arm Cortex-M3 microcontrollers and the PIO hardware was made as fast as possible, but at the cost of making the PIO registers inaccessible over PCIe; the only exceptions are the state machine FIFOs — the input and output data pipes — which can be reached by DMA (direct memory access). This makes it impossible to control PIO directly from the host processors, so an alternative is required. One option would be to allow the uploading of code to run on the M3 cores, but there are a number of technical problems with that approach:
1. We need to “link” the uploaded code with what is already present in the firmware — think of it as knitting together squares to make a quilt (or a cardigan for Harry Styles). For that to work, the firmware needs a list of the names and addresses of everything the uploaded code might want to access, something that the current firmware doesn’t have.
2. Third-party code running on M3 cores presents a security risk — not in the sense that it might steal your data (although that might be possible…), but that by accident or design it could disrupt the operation of your Raspberry Pi 5.
3. Once the M3s have been opened up in that way, the door can’t be closed again, and that’s not a step we’re prepared to take.
Not like that, like this
For these reasons, we took a different path.
The latest RP1 firmware implements a mailbox interface: a simple mechanism for sending messages between two parties. The kernel has corresponding mailbox and firmware drivers, and an rp1-pio driver that presents an ioctl() interface to user space. The end result of adding all this software is the ability to write programs using the PIO SDK that can run in user space or in kernel drivers.
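The shape of that round trip can be sketched in a few lines of Python. This is a conceptual model only, with invented names; the real mechanism is the RP1 firmware at one end and the kernel’s mailbox, firmware, and rp1-pio drivers at the other:

```python
import queue
import threading

# Conceptual model of a mailbox interface: the "host" posts a request
# message, a "firmware" thread services it against a register map and
# posts a reply. All names here are invented for illustration.

class Mailbox:
    def __init__(self, registers):
        self.registers = dict(registers)   # stand-in for PIO registers
        self.requests = queue.Queue()
        self.replies = queue.Queue()
        threading.Thread(target=self._firmware, daemon=True).start()

    def _firmware(self):
        # The firmware side: service one request at a time, then reply.
        while True:
            op, reg, value = self.requests.get()
            if op == "write":
                self.registers[reg] = value
                self.replies.put(("ok", None))
            elif op == "read":
                self.replies.put(("ok", self.registers.get(reg)))

    def call(self, op, reg, value=None):
        # Every operation is a full round trip: post a message, block
        # until the reply arrives. This is where the latency comes from.
        self.requests.put((op, reg, value))
        return self.replies.get()

mbox = Mailbox({"SM0_CLKDIV": 0})
mbox.call("write", "SM0_CLKDIV", 125)
print(mbox.call("read", "SM0_CLKDIV"))  # ('ok', 125)
```

Note that even a single register read is a complete request/reply exchange, which is the key property behind the latency trade-off discussed next.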
Latency trade-off
Most of the PIOLib functions cause a message to be sent to the RP1 firmware, which performs the operation — possibly just a single I/O access — and replies. Although this makes it simple to run PIO programs on Raspberry Pi 5 (and the rest of the Raspberry Pi family), it does come at a cost. All that extra software adds latency; most PIOLib operations take at least 10 microseconds. For PIO software that just creates a state machine and then reads or writes data, this is no problem — the WS2812 LED and PWM code are good examples of this. But anything that requires close coupling between the state machine and driver software is likely to have difficulties.
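To get a feel for what the roughly 10-microsecond floor costs, compare pushing a frame of WS2812 data one FIFO word per call against kicking off a single DMA transfer of the whole buffer. The figures below are illustrative, not measurements:

```python
# Back-of-envelope cost of PIOLib's per-operation latency.
# CALL_OVERHEAD_US is a rough illustrative floor, not a measured value.

CALL_OVERHEAD_US = 10

def per_word_cost_us(n_words):
    """One PIOLib call per FIFO word: overhead scales with the data size."""
    return n_words * CALL_OVERHEAD_US

def batched_cost_us(n_words):
    """One call to start a DMA transfer of the whole buffer; the DMA then
    feeds the FIFO without further host involvement."""
    return CALL_OVERHEAD_US

frame = 300                     # e.g. 300 LEDs, one 32-bit word each
print(per_word_cost_us(frame))  # 3000 us of pure call overhead
print(batched_cost_us(frame))   # 10 us, regardless of frame size
```

This is why set-up-then-stream designs like the WS2812 and PWM examples work well, while anything chatty between driver and state machine struggles.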
The first official use of PIOLib is the new pwm-pio kernel driver. It presents a standard Linux PWM interface via sysfs, and creates a very stable PWM signal on any GPIO on the 40-pin header (GPIOs 0 to 27). You can configure up to four of these PWM interfaces on Raspberry Pi 5; the limit is the number of state machines. As with many peripherals, you create one with a Device Tree overlay:
dtoverlay=pwm-pio,gpio=7
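Once the overlay has loaded, the new channel is driven through the standard Linux sysfs PWM interface: export the channel, set period and duty cycle in nanoseconds, then enable it. Here’s a minimal Python sketch; the chip path is a parameter so the logic can be exercised against any directory, and on a real Raspberry Pi 5 it would be something like /sys/class/pwm/pwmchip0:

```python
from pathlib import Path

# Configure a sysfs PWM channel using the standard Linux PWM ABI:
#   <chip>/export        -> creates <chip>/pwmN
#   <chip>/pwmN/period   -> period in nanoseconds
#   <chip>/pwmN/duty_cycle -> on-time in nanoseconds
#   <chip>/pwmN/enable   -> "1" to start the output
# The chip path is passed in so this can be tested against any directory.

def set_pwm(chip: Path, channel: int, period_ns: int, duty_ns: int):
    pwm = chip / f"pwm{channel}"
    if not pwm.exists():
        (chip / "export").write_text(f"{channel}\n")
    # The period must be written before a non-zero duty cycle is accepted.
    (pwm / "period").write_text(f"{period_ns}\n")
    (pwm / "duty_cycle").write_text(f"{duty_ns}\n")
    (pwm / "enable").write_text("1\n")

# Example (on real hardware): a 1 kHz signal at 25% duty cycle.
# set_pwm(Path("/sys/class/pwm/pwmchip0"), 0, 1_000_000, 250_000)
```

Because this is the generic kernel PWM ABI, any existing sysfs PWM tooling should work with pwm-pio unchanged.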
One feature absent from this first release is interrupt support. RP1 provides two PIO interrupts, which can be triggered by the PIO instruction IRQ (interrupt request), and these could be used to trigger actions on the SoC.
Over time, we may discover that there are some common usage patterns — groups of the existing PIOLib functions that often appear together. Adding those groups to the firmware as single, higher-level operations may allow more complex PIO programs to run. These and other extensions are being considered.
To try this out, you’ll need:
The latest kernel (sudo apt update; sudo apt upgrade)
The latest EEPROM (see the ‘Advanced Options’ section of raspi-config)
I’ll leave you with a video of some flashing lights — two strings of WS2812 LEDs being driven from a Raspberry Pi 5. It’s beginning to look a bit festive!
Ah, the WOPR — or “War Operation Plan Response” for those who enjoy abbreviations that sound like a robot from the future, only less like a friend and more like an overzealous maths teacher.
The WOPR is the supercomputer from the 1983 movie WarGames. It doesn’t understand sarcasm, it can’t sense when it’s being pranked, and it certainly doesn’t know when it’s been told to “play a game” — much like our Maker in Residence, Toby, who built it to delight and entertain all visitors to the Pi Towers Maker Lab.
A script runs on boot, twinkling the NeoPixels in the traditional 1980s supercomputer colours: yellow and red.
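The frame-generation half of a twinkle effect like that is easy to sketch. This is illustrative only: the palette values and names are guesses at the effect, and on the real build each frame would be pushed out to the LEDs with a NeoPixel library rather than just generated:

```python
import random

# Sketch of a WOPR-style twinkle: each frame assigns every pixel either
# off, yellow, or red (the film's blinkenlights palette). Colour values
# and names are illustrative; driving real NeoPixels would need a library
# such as rpi_ws281x to push each frame to the strip.

PALETTE = [(0, 0, 0), (255, 180, 0), (255, 0, 0)]  # off, yellow, red

def twinkle_frame(n_pixels, rng=random):
    """Return one frame: a random palette colour per pixel."""
    return [rng.choice(PALETTE) for _ in range(n_pixels)]

print(twinkle_frame(8, random.Random(0)))
```

A boot-time loop would then call `twinkle_frame()` a few times per second and write each frame to the strip.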
Another script can be run to play a short clip from the film WarGames on the Touch Display 2 screen, explaining the WOPR. At the press of a button on the Touch Display, our faux WOPR also parrots famous lines from the film, such as: “Shall we play a game?” and “How about a nice game of chess?”
For those who wish to linger a little longer in the Maker Lab, Toby devised a game in which clips from 1980s films and music videos flash (a little too fast, in my opinion) up on the screen, with your job being to enthusiastically shout out where each clip is from.
Authentic enclosure
The body of the WOPR is a combination of 3D-printed plastics and laser-cut MDF painted in industrial grey, with Cricut silver lettering on the side. Everything is glued together, and a lot of sanding was required to make it appear as though it’s a sleek, fancy contraption from the future.