
Raspberry Pi Connect: new native panel plugin and connectivity testing

By: Paul
13 November 2024 at 18:43

The latest release of Raspberry Pi OS includes an all-new, native panel plugin for Raspberry Pi Connect, our secure remote access solution that allows you to connect to your Raspberry Pi desktop and command line directly from your web browser.

Since the launch of our public beta with screen sharing back in May, and the addition of remote shell access and support for older Raspberry Pi devices in June, we’ve been working on improving support and performance on as many Raspberry Pi devices as possible — from Raspberry Pi Zero to Raspberry Pi 5 — both when using Raspberry Pi OS with desktop and our Lite version.

By default, Raspberry Pi Connect will be installed but disabled, only becoming active for your current user if you choose ‘Turn On Raspberry Pi Connect’ from the menu bar or run rpi-connect on from the terminal.

If this is your first time trying the service, using the menu bar will open your browser to sign up for a free Raspberry Pi Connect account; alternatively, you can run rpi-connect signin from the terminal to print a unique URL that you can open on any device you like. Once signed up and signed in, you can then connect to your device either via screen sharing (if you’re using Raspberry Pi desktop) or via remote shell from your web browser on any computer.

You can now stop and disable the service for your current user by choosing ‘Turn Off Raspberry Pi Connect’ or running rpi-connect off from the terminal.
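
For reference, the whole lifecycle from the terminal looks like this (the sign-in URL below is a placeholder; the command prints a unique one for your device):

$ rpi-connect on
$ rpi-connect signin
Complete sign in by visiting https://connect.raspberrypi.com/verify/XXXX-XXXX
$ rpi-connect off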

With the latest release, version 2.1.0 (available via software update), we include a new rpi-connect doctor command that runs a series of connectivity tests to check that the service can establish connections properly. We make every effort to ensure you can connect to your device without having to make any networking changes or open ports in your firewall — but if you’re having issues, run the command like so:

$ rpi-connect doctor
✓ Communication with Raspberry Pi Connect API
✓ Authentication with Raspberry Pi Connect API
✓ Peer-to-peer connection candidate via STUN
✓ Peer-to-peer connection candidate via TURN

Full documentation for Raspberry Pi Connect can be found on our website, or via man rpi-connect in the terminal when installed on your device.

Updates on updates

We’ve heard from lots of users about the features they’d most like to see next, and we’ve tried to prioritise the things that will bring the largest improvements in functionality to the largest number of users. Keep an eye on this blog to see our next updates.

The post Raspberry Pi Connect: new native panel plugin and connectivity testing appeared first on Raspberry Pi.

Google Summer of Code 2024 results

7 November 2024 at 07:00

As we previously announced, the Rust Project participated in Google Summer of Code (GSoC) for the first time this year. Nine contributors worked tirelessly on their exciting projects for several months. The projects had various durations: some ended in August, while the last one concluded in the middle of October. Now that the final reports of all the projects have been submitted, we can happily announce that all nine contributors have passed the final review! That means that we have deemed all of their projects to be successful, even though some of them might not have fulfilled all of their original goals (but that was expected).

We had a lot of great interactions with our GSoC contributors, and based on their feedback, it seems that they were also quite happy with the GSoC program and learned a lot. We are, of course, also incredibly grateful for all their contributions - some of them have even continued contributing after their projects ended, which is really awesome. In general, we think that Google Summer of Code 2024 was a success for the Rust Project, and we look forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our project idea list.

Below you can find a brief summary of each of our GSoC 2024 projects, including feedback from the contributors and mentors themselves. You can find more information about the projects here.

Adding lint-level configuration to cargo-semver-checks

cargo-semver-checks is a tool designed for automatically detecting semantic versioning conflicts, which is planned to one day become a part of Cargo itself. The goal of this project was to enable cargo-semver-checks to ship additional opt-in lints by allowing users to configure which lints run in which cases, and whether their findings are reported as errors or warnings. Max achieved this goal by implementing a comprehensive system for configuring cargo-semver-checks lints directly in the Cargo.toml manifest file. He also extensively discussed the design with the Cargo team to ensure that it is compatible with how other Cargo lints are configured, and won't present a future compatibility problem for merging cargo-semver-checks into Cargo.
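
As a rough sketch of the idea, the configuration lives in a metadata table in Cargo.toml, with each lint mapped to the level it should be reported at. Treat the exact table and lint names below as illustrative; see the cargo-semver-checks documentation for the shipped syntax:

[package.metadata.cargo-semver-checks.lints]
# Report this lint as a hard error when it triggers:
enum_missing = "deny"
# Report this one as a warning instead of failing the check:
function_missing = "warn"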

Predrag, the author of cargo-semver-checks, who mentored Max on this project, was very happy with Max’s contributions, which even went beyond the original project scope:

He designed and built one of our most-requested features, and produced design prototypes of several more features our users would love. He also observed that writing quality CLI and functional tests was hard, so he overhauled our test system to make better tests easier to make. Future work on cargo-semver-checks will be much easier thanks to the work Max put in this summer.

Great work, Max!

Implementation of a faster register allocator for Cranelift

The Rust compiler can use various backends for generating executable code. The main one is of course the LLVM backend, but there are other backends, such as GCC, .NET or Cranelift. Cranelift is a code generator for various hardware targets, essentially something similar to LLVM. The Cranelift backend uses Cranelift to compile Rust code into executable code, with the goal of improving compilation performance, especially for debug (unoptimized) builds. Even though this backend can already be faster than the LLVM backend, we have identified that it was slowed down by the register allocator used by Cranelift.

Register allocation is a well-known compiler task where the compiler decides which registers should hold variables and temporary expressions of a program. Usually, the goal of register allocation is to perform the register assignment in a way that maximizes the runtime performance of the compiled program. However, for unoptimized builds, we often care more about the compilation speed instead.
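
To make the speed-versus-quality trade-off concrete, here is a deliberately naive sketch of a single-pass allocator in Rust. It is purely illustrative and is not fastalloc’s actual algorithm (which is far more sophisticated): it hands out registers in one pass and spills the rest, skipping the liveness analysis that a quality-focused allocator would use to reuse registers:

use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq)]
enum Location {
    Register(u8),   // one of a small, fixed set of machine registers
    StackSlot(u32), // spilled to memory on the stack
}

// Assign a location to each variable in a single pass, first come,
// first served. No liveness tracking means no register reuse -- bad
// code quality, but very little work per variable.
fn allocate(variables: &[&str], num_regs: u8) -> HashMap<String, Location> {
    let mut assignment = HashMap::new();
    let mut next_slot = 0;
    for (i, var) in variables.iter().enumerate() {
        let loc = if (i as u8) < num_regs {
            Location::Register(i as u8)
        } else {
            let slot = next_slot;
            next_slot += 1;
            Location::StackSlot(slot)
        };
        assignment.insert((*var).to_string(), loc);
    }
    assignment
}

fn main() {
    for (var, loc) in allocate(&["a", "b", "c", "d", "e"], 3) {
        println!("{var} -> {loc:?}");
    }
}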

Demilade thus proposed to implement a new Cranelift register allocator called fastalloc, with the goal of making it as fast as possible, at the cost of the quality of the generated code. He was very well prepared; in fact, he had a prototype implementation ready even before his GSoC project started! However, register allocation is a complex problem, and it took several months to finish the implementation and optimize it as much as possible. Demilade also made extensive use of fuzzing to make sure that his allocator is robust even in the presence of various edge cases.

Once the allocator was ready, Demilade benchmarked the Cranelift backend with both the original and his new register allocator using our compiler benchmark suite. And the performance results look awesome! With his faster register allocator, the Rust compiler executes up to 18% fewer instructions across several benchmarks, including complex ones like performing a debug build of Cargo itself. Note that this is an end-to-end improvement in the time needed to compile a whole crate, which is really impressive. If you would like to examine the results in more detail or even run the benchmark yourself, check out Demilade’s final report, which includes detailed instructions on how to reproduce the benchmark.

Apart from having the potential to speed up compilation of Rust code, the new register allocator can also be useful in other contexts, as it can be used in Cranelift on its own (outside the Cranelift codegen backend). What can we say other than that we are very happy with Demilade’s work! Note that the new register allocator is not yet available in the Cranelift codegen backend out of the box, but we expect that it will eventually become the default choice for debug builds, which will make compilation of Rust crates using the Cranelift backend faster in the future.

Improve Rust benchmark suite

This project was relatively loosely defined, with the overarching goal of improving the user interface of the Rust compiler benchmark suite. Eitaro tackled this challenge from various angles at once. He improved the visualization of runtime benchmarks, which were previously a second-class citizen in the benchmark suite, by adding them to our dashboard and by implementing historical charts of runtime benchmark results, which help us figure out how a given benchmark behaves over a longer time span.

Another improvement he worked on was embedding a profiler trace visualizer directly within the rustc-perf website. This was a challenging task, which required him to evaluate several visualizers and figure out how to include them within the source code of the benchmark suite in a non-disruptive way. In the end, he managed to integrate Perfetto into the suite’s website, and also performed various optimizations to improve the performance of loading compilation profiles.

Last, but not least, Eitaro also created a completely new user interface for the benchmark suite, which runs entirely in the terminal. Using this interface, Rust compiler contributors can examine the performance of the compiler without having to start the rustc-perf website, which can be challenging to deploy locally.

Apart from the mentioned contributions, Eitaro also made a lot of other smaller improvements to various parts of the benchmark suite. Thank you for all your work!

Move cargo shell completions to Rust

Cargo's completion scripts have been hand maintained and frequently broken when changed. The goal for this effort was to have the completions automatically generated from the definition of Cargo's command-line, with extension points for dynamically generated results.

shanmu took the prototype for dynamic completions in clap (the command-line parser used by Cargo), got it working and tested for common shells, and extended the parser to cover more cases. They then added extension points for CLIs to provide custom completion results that can be generated on the fly.

In the next phase, shanmu added this to nightly Cargo and added different custom completers to match what the handwritten completions do. As an example, with this feature enabled, when you type cargo test --test= and hit the Tab key, your shell will autocomplete all the test targets in your current Rust crate! If you are interested, see the instructions for trying this out. The link also lists where you can provide feedback.
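
The core idea behind dynamic completion is that the shell calls back into the binary itself to ask for candidates, instead of sourcing a large, hand-written completion script. The toy program below illustrates just that idea; it is not clap’s or Cargo’s actual completion protocol, and the flag and candidate names are made up:

use std::env;

// Candidates would normally be discovered on the fly, e.g. by listing
// the test targets of the current crate.
fn candidates() -> Vec<&'static str> {
    vec!["integration", "cli", "parser"]
}

fn main() {
    let args: Vec<String> = env::args().collect();
    // The shell invokes the binary with a special flag plus the word
    // currently being completed, and reads candidates from stdout.
    if args.get(1).map(String::as_str) == Some("--complete") {
        let prefix = args.get(2).map(String::as_str).unwrap_or("");
        for candidate in candidates().iter().filter(|c| c.starts_with(prefix)) {
            println!("{candidate}");
        }
    }
}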

You can also check out the relevant open issues to find out what is left before this can be stabilized.

Rewriting esoteric, error-prone makefile tests using robust Rust features

The Rust compiler has several test suites that make sure that it is working correctly under various conditions. One of these suites is the run-make test suite, whose tests were previously written using Makefiles. However, this setup posed several problems. It was not possible to run the suite on the Tier 1 Windows MSVC target (x86_64-pc-windows-msvc) and getting it running on Windows at all was quite challenging. Furthermore, the syntax of Makefiles is quite esoteric, which frequently caused mistakes to go unnoticed even when reviewed by multiple people.

Julien helped to convert the Makefile-based run-make tests into plain Rust-based tests, supported by a test support library called run_make_support. However, it was not a trivial "rewrite this in Rust" kind of deal. In this project, Julien:

  • Significantly improved the test documentation;
  • Fixed multiple bugs that were present in the Makefile versions and had gone unnoticed for years (some tests were never testing anything or silently ignored failures, so even if the subject being tested regressed, these tests would not have caught that);
  • Added to and improved the test support library API and implementation; and
  • Improved code organization within the tests to make them easier to understand and maintain.
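
For a flavor of the new style, here is a sketch of what a simple ported test can look like. It is based on the general shape of tests using run_make_support; consult the suite itself for the current API:

// Compile a source file with rustc, then execute the resulting
// binary; any failure along the way fails the test.
use run_make_support::{run, rustc};

fn main() {
    rustc().input("hello.rs").run();
    run("hello");
}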

Just to give you an idea of the scope of his work, he has ported almost 250 Makefile tests over the span of his GSoC project! If you like puns, check out the branch names of Julien's PRs, as they are simply fantestic.

As a result, Julien has significantly improved the robustness of the run-make test suite, and improved the ergonomics of modifying existing run-make tests and authoring new run-make tests. Multiple contributors have expressed that they were more willing to work with the Rust-based run-make tests over the previous Makefile versions.

The vast majority of run-make tests now use the Rust-based test infrastructure, with a few holdouts remaining due to various quirks. After these are resolved, we can finally rip out the legacy Makefile test infrastructure.

Rewriting the Rewrite trait

rustfmt is a Rust code formatter that is widely used across the Rust ecosystem thanks to its direct integration within Cargo. Usually, you just run cargo fmt and you can immediately enjoy a properly formatted Rust project. However, there are edge cases in which rustfmt can fail to format your code. That is not such an issue on its own, but it becomes more problematic when it fails silently, without giving the user any context about what went wrong. This is what was happening in rustfmt, as many functions simply returned an Option instead of a Result, which made it difficult to add proper error reporting.

The goal of SeoYoung's project was to perform a large internal refactoring of rustfmt that would allow tracking context about what went wrong during reformatting. In turn, this would enable turning silent failures into proper error messages that could help users examine and debug what went wrong, and could even allow rustfmt to retry formatting in more situations.

At first, this might sound like an easy task, but performing such large-scale refactoring within a complex project such as rustfmt is not so simple. SeoYoung needed to come up with an approach to incrementally apply these refactors, so that they would be easy to review and wouldn't impact the entire code base at once. She introduced a new trait that enhanced the original Rewrite trait, and modified existing implementations to align with it. She also had to deal with various edge cases that we hadn't anticipated before the project started. SeoYoung was meticulous and systematic with her approach, and made sure that no formatting functions or methods were missed.
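
In spirit, the change resembles the sketch below. The names are illustrative rather than rustfmt’s exact internal API:

// Before: a silent failure -- returning None says "couldn't format"
// without any context about why.
//
//     trait Rewrite {
//         fn rewrite(&self, context: &RewriteContext) -> Option<String>;
//     }

// After: failures carry enough information to explain what went wrong.
struct Span {
    lo: usize,
    hi: usize,
}

enum RewriteError {
    // The rewritten code would not fit within the maximum line width.
    ExceedsMaxWidth { span: Span },
    // A macro could not be reformatted.
    MacroFailure { span: Span },
}

type RewriteResult = Result<String, RewriteError>;

struct RewriteContext {
    max_width: usize,
}

trait Rewrite {
    fn rewrite_result(&self, context: &RewriteContext) -> RewriteResult;
}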

Ultimately, the refactor was a success! Internally, rustfmt now keeps track of more information related to formatting failures, including errors that it could not possibly report before, such as issues with macro formatting. It also has the ability to provide information about source code spans, which helps identify parts of code that require spacing adjustments when exceeding the maximum line width. We don't yet propagate that additional failure context as user facing error messages, as that was a stretch goal that we didn't have time to complete, but SeoYoung has expressed interest in continuing to work on that as a future improvement!

Apart from working on error context propagation, SeoYoung also made various other improvements that enhanced the overall quality of the codebase, and she was also helping other contributors understand rustfmt. Thank you for making the foundations of formatting better for everyone!

Rust to .NET compiler - add support for compiling & running cargo tests

As was already mentioned above, the Rust compiler can be used with various codegen backends. One of these is the .NET backend, which compiles Rust code to the Common Intermediate Language (CIL), which can then be executed by the .NET Common Language Runtime (CLR). This backend allows interoperability of Rust and .NET (e.g. C#) code, in an effort to bring these two ecosystems closer together.

At the start of this year, the .NET backend was already able to compile complex Rust programs, but it was still lacking certain crucial features. The goal of this GSoC project, implemented by Michał, who is in fact the sole author of the backend, was to extend the functionality of this backend in various areas. As a target goal, he set out to extend the backend so that it could be used to run tests using the cargo test command. Even though it might sound trivial, properly compiling and running the Rust test harness is non-trivial, as it makes use of complex features such as dynamic trait objects, atomics, panics, unwinding or multithreading. These features were especially tricky to implement in this codegen backend, because the LLVM intermediate representation (IR) and CIL have fundamental differences, and not all LLVM intrinsics have .NET equivalents.

However, this did not stop Michał. He worked on this project tirelessly, implementing new features, fixing various issues, and learning more about the compiler’s internals every day. He also documented his journey with (almost) daily updates on Zulip, which were fascinating to read. Once he reached his original goal, he moved the goalposts up another level and attempted to run the compiler’s own test suite using the .NET backend. This helped him uncover additional edge cases and also led to a refactoring of the whole backend that resulted in significant performance improvements.

By the end of the GSoC project, the .NET backend was able to properly compile and run almost 90% of the standard library core and std test suite. That is an incredibly impressive number, since the suite contains thousands of tests, some of which are quite arcane. Michał’s pace has not slowed down even after the project ended, and he is still continuously improving the backend. Oh, and did we mention that his backend also has experimental support for emitting C code, effectively acting as a C codegen backend?! Michał has been very busy over the summer.

We thank Michał for all his work on the .NET backend, as it was truly inspirational, and led to fruitful discussions that were relevant also to other codegen backends. Michał's next goal is to get his backend upstreamed and create an official .NET compilation target, which could open up the doors to Rust becoming a first-class citizen in the .NET ecosystem.

Sandboxed and deterministic proc macro using WebAssembly

Rust procedural (proc) macros are currently run as native code that gets compiled to a shared object which is loaded directly into the process of the Rust compiler. Because of this design, these macros can do whatever they want, for example arbitrarily access the filesystem or communicate through a network. This has not only obvious security implications, but it also affects performance, as this design makes it difficult to cache proc macro invocations. Over the years, there have been various discussions about making proc macros more hermetic, for example by compiling them to WebAssembly modules, which can be easily executed in a sandbox. This would also open the possibility of distributing precompiled versions of proc macros via crates.io, to speed up fresh builds of crates that depend on proc macros.
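
The prototype loads WebAssembly into the compiler through a shared object, but the appeal of WebAssembly itself is how little ambient authority an embedded module receives. The following is a loose sketch of that sandboxing property using the wasmtime crate, not the project’s actual code; the module path and the expand entry point are hypothetical:

use wasmtime::{Engine, Instance, Module, Store};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = Engine::default();
    // A hypothetical precompiled proc macro module.
    let module = Module::from_file(&engine, "my_macro.wasm")?;
    let mut store = Store::new(&engine, ());
    // No imports are provided, so the module cannot touch the
    // filesystem or the network at all -- it only sees what the
    // host chooses to pass in.
    let instance = Instance::new(&mut store, &module, &[])?;
    // A hypothetical entry point that would expand the macro on a
    // serialized TokenStream.
    let expand = instance.get_typed_func::<(), ()>(&mut store, "expand")?;
    expand.call(&mut store, ())?;
    Ok(())
}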

The goal of this project was to examine what it would take to implement WebAssembly module support for proc macros and to create a prototype of this idea. We knew this would be a very ambitious project, especially since Apurva did not have prior experience with contributing to the Rust compiler, and because proc macro internals are very complex. Nevertheless, some progress was made. With the help of his mentor, David, Apurva was able to create a prototype that can load WebAssembly code into the compiler via a shared object. Some work was also done to make use of the existing TokenStream serialization and deserialization code in the compiler’s proc_macro crate.

Even though this project did not fulfill its original goals and more work will be needed in the future to get a functional prototype of WebAssembly proc macros, we are thankful for Apurva's contributions. The WebAssembly loading prototype is a good start, and Apurva's exploration of proc macro internals should serve as a useful reference for anyone working on this feature in the future. Going forward, we will try to describe more incremental steps for our GSoC projects, as this project was perhaps too ambitious from the start.

Tokio async support in Miri

miri is an interpreter that can find possible instances of undefined behavior in Rust code. It is used across the Rust ecosystem, but previously it was not possible to run it on any non-trivial program that uses tokio (that is, one that ever awaits on anything), due to a fundamental missing feature: support for the epoll syscall on Linux (and similar APIs on other major platforms).

Tiffany implemented the basic epoll operations needed to cover the majority of the tokio test suite, by crafting pure libc code examples that exercised those epoll operations, and then implementing their emulation in miri itself. At times, this required refactoring core miri components like file descriptor handling, as they were originally not created with syscalls like epoll in mind.
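
A pure-libc probe of that kind might look roughly like the sketch below (Linux-only, using the libc crate, with error handling elided for brevity; the actual test programs live in miri’s test suite):

// Create an epoll instance, register the read end of a pipe, make it
// readable, and wait for the event -- the basic lifecycle that miri
// had to learn to emulate.
fn main() {
    unsafe {
        let epfd = libc::epoll_create1(0);
        let mut fds = [0i32; 2];
        libc::pipe(fds.as_mut_ptr());

        let mut ev = libc::epoll_event {
            events: libc::EPOLLIN as u32,
            u64: fds[0] as u64,
        };
        libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, fds[0], &mut ev);

        // Writing to the pipe makes the read end ready.
        libc::write(fds[1], b"x".as_ptr().cast(), 1);

        let mut events = [libc::epoll_event { events: 0, u64: 0 }; 1];
        let ready = libc::epoll_wait(epfd, events.as_mut_ptr(), 1, -1);
        assert_eq!(ready, 1);
        println!("read end of the pipe is ready");
    }
}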

Surprisingly to everyone (though probably not to tokio-internals experts), once these core epoll operations were finished, operations like async file reading and writing started working in miri out of the box! Due to the limitations of the non-blocking file operations offered by operating systems, tokio wraps these file operations in dedicated threads, which miri already supported.

Once Tiffany finished the project, including stretch goals like implementing async file operations, she contacted the tokio maintainers and worked with them to run miri on most tokio tests in CI. And we have good news: so far, no soundness problems have been discovered! Tiffany has become a regular contributor to miri, focusing on continuing to expand the set of supported file descriptor operations. We thank her for all her contributions!

Conclusion

We are grateful that we could have been a part of the Google Summer of Code 2024 program, and we would also like to extend our gratitude to all our contributors! We are looking forward to joining the GSoC program again next year.

Boost Your Pico Projects with the new Pico VS Code Extension

18 September 2024 at 16:56

A few months back, we quietly dropped the Pico VS Code project on GitHub. It didn’t take long before the feedback started pouring in. Since then, we’ve been listening and tweaking. Now, we’re excited to officially unveil the public beta of the Raspberry Pi Pico Visual Studio Code Extension!

What is Pico VS Code?

Pico VS Code is a Microsoft Visual Studio Code extension designed to make your life easier when creating, developing, and debugging projects for Raspberry Pi Pico-series boards. Whether you’re a total beginner or a seasoned pro, this tool is here to help you dive into Pico development with confidence and ease.

If you’ve ever tried to set up an embedded development environment, you know it’s no small feat. Beginners often find themselves tangled up in the complexities of build systems, SDKs, and toolchains. And let’s not even get started on cross-compilation; developing on one machine to run code on another introduces a whole new set of challenges.

Getting all the right configurations and installations in place can be intimidating for everyone, not just those new to the game. Even experienced developers can find themselves tangled in frustrating setup processes that eat into valuable development time.

pico-vscode New C/C++ Project wizard

That’s why we created the Pico Visual Studio Code extension: a user-friendly tool that simplifies the entire development process. We wanted to offer something that takes the guesswork out of setting up your environment, so you can start coding in an interface you’re already familiar with — Visual Studio Code — as quickly as possible.

With Pico VS Code, you won’t have to worry about the nitty-gritty details that trip up newcomers and sometimes stymie veterans. Instead, you’ll be able to focus on what really matters: bringing your Raspberry Pi Pico projects to life. Whether you’re working on your first blinking LED or a more complex project, Pico VS Code is there to help you get started and keep you moving forward.

How do I get Pico VS Code?

Prerequisites

To get started with the Pico VS Code extension, you’ll need to ensure that your development environment meets a few basic requirements. The extension is compatible with various platforms, including Raspberry Pi OS, Windows, macOS, and Linux, each with its own set of prerequisites.

All platforms require an install of Visual Studio Code version 1.92.1 or newer. For detailed instructions on setting up Pico VS Code on your platform, refer to the respective prerequisites outlined below.

Raspberry Pi OS

  • Ensure you are running a 64-bit distribution of Raspberry Pi OS.

Windows

  • Make sure you’re using an x86-based PC (not ARM64).

macOS

  • For macOS users, you can install all necessary dependencies by running the following command in your terminal:
xcode-select --install

Linux

  • Refer to the README.md on our GitHub page for a full list of required software. Many distros include the required software in the standard OS install.

Installing the Pico VS Code extension

You can install the extension in two ways. The first option is to install it directly from your editor’s marketplace. This provides a seamless integration and automatic updates. The second option is to download and manually install the package. This gives you more control and is helpful in restricted environments or when managing specific versions.

Install in your editor

You can download the Pico VS Code extension directly from the Visual Studio Code Marketplace within your editor.

The marketplace page of the Raspberry Pi Pico extension within VS Code
  1. Open the “Extensions” tab on the left side of VS Code (or press Ctrl+Shift+X on Windows/Linux, or Cmd+Shift+X on macOS).
  2. In the search bar at the top, type “pico-vscode”.
  3. Once you find the extension, click “Install”. Wait until the installation completes, and you’re ready to create a Raspberry Pi Pico project.

If you’re using a different VS Code-compatible editor, the extension is also available through the OpenVSX marketplace.

Manual installation

If you prefer, you can manually install the extension by downloading the package directly. Follow these steps:

  1. Visit the pico-vscode GitHub page and download the latest .vsix file from the release assets.
  2. To install the file in VS Code:
    • Open the “Extensions” panel as described earlier.
    • Click the three-dots menu (...) above the search bar and select “Install from VSIX…”.
    • Choose the .vsix file you just downloaded.

Alternatively, you can install the .vsix file via the terminal using the following command:

code --install-extension raspberry-pi-pico-<version>.vsix

Building a blink example with Pico VS Code

To create a project based on a blink example, select “New Project From Example” in the Pico sidebar panel added by the extension. Then, search for the example you want to use — in our case, “blink” — in the project name field. Click the “Create” button to generate a project from the template.

The blink example generated by pico-vscode

Once the project is generated, it will automatically open. When a Pico project is opened, the extension configures the build system for you based on the SDK version, board type, and other settings you’ve selected. After the progress bar disappears, you can compile the project by clicking the “Compile” button in the bottom right corner. This will open an output panel where you can follow the build progress. Once completed, you should see a line like this: [61/61] Linking CXX executable blink.elf.
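
The source you just compiled is essentially the canonical blink from the pico-examples repository. For reference, it looks roughly like this (the exact generated code may differ depending on your SDK version and board):

#include "pico/stdlib.h"

int main() {
    // PICO_DEFAULT_LED_PIN is provided by the SDK for boards whose
    // onboard LED is wired to a regular GPIO pin.
    const uint LED_PIN = PICO_DEFAULT_LED_PIN;
    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);
    while (true) {
        gpio_put(LED_PIN, 1);
        sleep_ms(250);
        gpio_put(LED_PIN, 0);
        sleep_ms(250);
    }
}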

When it comes to uploading firmware to your Raspberry Pi Pico board, you have two options. Both require you to connect your Pico in BOOTSEL mode. To put your Pico into BOOTSEL mode, connect your Pico board to your host computer while holding the BOOTSEL button.

The easiest way to get your firmware up and running is the “Run” button. Connect your Pico board in BOOTSEL mode, hit the “Run” button, and the firmware will upload automatically. You’ll know it’s working when the tiny LED near the USB connector starts blinking as the board disconnects itself.

In addition to the blink.elf executable, you’ll find a blink.uf2 file in the build directory within your project. If you prefer to manually flash firmware, drag and drop this UF2 file onto your Pico board in BOOTSEL mode.1 Once the file has been copied onto the board, it will automatically dismount and start running the blink project. You can confirm this by observing the tiny LED near the USB connector. It should start blinking as soon as the board disconnects itself.

For debugging directly within VS Code, the extension adds configuration to your project that allows you to use the debug panel and set breakpoints, just as you would when debugging other C/C++ projects on your computer. To run in debug mode with a Debug Probe, press F5. For detailed instructions on how to correctly wire your board for debugging, refer to the Getting Started guide on our website.2

Integrated offline documentation

When developing bare-metal code for the Pico, you often need to reference an API or check hardware specifications. To streamline your workflow, we’ve integrated the documentation directly into VS Code. This allows you to quickly access the information you need without leaving your editor — or requiring an internet connection.

A VS Code window showcasing a Pico project on the left, with the offline documentation for the ADC open on the right

To access the documentation, navigate to the Raspberry Pi Pico panel in your sidebar and select the topic you want to explore. The documentation will open within the editor, so you can position it wherever you need, just like any other file. This way, you can keep your code and reference materials side by side.

How to update the project configuration

A new SDK revision has been released, and you want to take advantage of its awesome new features in your Pico project. No problem — the extension has you covered. If you need to switch the target board, change the selected SDK, or adjust any other properties configured during project creation, the extension provides commands to update your project settings in just a few clicks.

For example, to switch the SDK version, you’ll find a UI element in the status bar at the bottom of the VS Code window, or in the Raspberry Pi Pico Project quick access panel in your sidebar, displaying your currently selected version. Clicking this will open a simple dialog where you can choose the new SDK version you want to use. Once selected, the extension will automatically reconfigure your project to use the new version. For optimal IntelliSense functionality, we recommend reloading your window after changing any project settings to ensure all extensions are aware of the new configuration. After the changes have been applied, the extension sends you a notification with a button that will reload the current window.

If you need to update settings other than the SDK version or board type, you can access additional commands via the VS Code command palette, which you can open with the keyboard shortcut Ctrl+Shift+P (or Cmd+Shift+P on macOS). Type “Raspberry Pi Pico,” and you’ll see a list of all available commands provided by the Pico extension. This makes it easy to adjust your project configuration as needed.

It also supports MicroPython

For beginners or developers who want to get their projects up and running on a Pico as quickly as possible, MicroPython is an excellent choice.3 It is a lean and efficient implementation of the Python 3 programming language, specifically designed to run on microcontrollers and in constrained environments. It includes a small subset of the Python standard library, making it a powerful yet lightweight option for embedded development.4

To create a Pico project using MicroPython instead of C/C++, select “New MicroPython Project”. You can find this button either in our Quick Access panel, located in your sidebar, or by running the New Pico Project command and selecting MicroPython as the language. This will launch the familiar project creation wizard, now tailored for setting up a MicroPython project. Choose the location for your project folder and set a name for your project. When you click “Create”, the extension generates the new project and opens it for you, just like with a C/C++ project. But instead of using C/C++, your new project uses the MicroPico extension to run your code on the board and manage project configurations.
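
To give you an idea of what such a project contains, here is a minimal blink script in MicroPython, like the one shown in the screenshot below. This is a generic sketch rather than the exact template the extension generates; it assumes a Pico W, where the onboard LED is addressed by the name "LED" (on an original Pico, use GPIO 25 instead):

from machine import Pin
import time

led = Pin("LED", Pin.OUT)  # onboard LED on a Pico W

while True:
    led.toggle()     # flip the LED state
    time.sleep(0.5)  # wait half a second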

A newly created MicroPython project: a Raspberry Pi Pico W running the blink.py script

With MicroPython, you can quickly start prototyping and experimenting with your Raspberry Pi Pico-series device, making it an ideal option for both newcomers and seasoned developers alike.

Next steps

For more detailed information on using the Pico VS Code extension, including a comprehensive list of settings and additional guidance, visit our GitHub project page. It’s a great resource for getting the most out of the extension.

If you’re new to developing Pico projects, don’t forget to check out the Getting Started guide we mentioned earlier—it’s packed with helpful tips to get you up and running.

If you’re looking to create a project that leverages the advanced features of Pico-series devices — such as I2C, PIO, or enabling stdio support—be sure to explore the “New C/C++ Project” interface. This tool allows you to customize your project setup to suit your needs, so you can dive into development quickly and efficiently.

  1. https://www.raspberrypi.com/documentation/microcontrollers/c_sdk.html#your-first-binaries ↩︎
  2. https://www.raspberrypi.com/news/raspberry-pi-debug-probe-a-plug-and-play-debug-kit-for-12/ ↩︎
  3. https://www.raspberrypi.com/documentation/microcontrollers/micropython.html#what-is-micropython ↩︎
  4. https://micropython.org ↩︎

The post Boost Your Pico Projects with the new Pico VS Code Extension appeared first on Raspberry Pi.
