Announcing Rust 1.83.0

The Rust team is happy to announce a new version of Rust, 1.83.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.83.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.83.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.83.0 stable

New const capabilities

This release includes several large extensions to what code running in const contexts can do. This refers to all code that the compiler has to evaluate at compile-time: the initial value of const and static items, array lengths, enum discriminant values, const generic arguments, and functions callable from such contexts (const fn).
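To make those contexts concrete, here is a small sketch (the names are illustrative) that touches each of them:

// Each of these positions is a const context: the compiler evaluates
// the expressions at compile time.
const LEN: usize = 4;                   // initializer of a `const` item
static TABLE: [u8; LEN] = [0; LEN];     // `static` initializer and array length

const fn square(x: usize) -> usize {    // callable from const contexts
    x * x
}

#[allow(dead_code)]
enum Flag {
    A = square(3) as isize,             // enum discriminant value
}

struct Buf<const N: usize>([u8; N]);
type Small = Buf<{ square(2) }>;        // const generic argument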

References to statics. So far, const contexts except for the initializer expression of a static item were forbidden from referencing static items. This limitation has now been lifted:

static S: i32 = 25;
const C: &i32 = &S;

Note, however, that reading the value of a mutable or interior mutable static is still not permitted in const contexts. Furthermore, the final value of a constant may not reference any mutable or interior mutable statics:

static mut S: i32 = 0;

const C1: i32 = unsafe { S };
// error: constant accesses mutable global memory

const C2: &i32 = unsafe { &S };
// error: encountered reference to mutable memory in `const`

These limitations ensure that constants are still "constant": the value they evaluate to, and their meaning as a pattern (which can involve dereferencing references), will be the same throughout the entire program execution.

That said, a constant is permitted to evaluate to a raw pointer that points to a mutable or interior mutable static:

static mut S: i32 = 64;
const C: *mut i32 = &raw mut S;

Mutable references and pointers. It is now possible to use mutable references in const contexts:

const fn inc(x: &mut i32) {
    *x += 1;
}

const C: i32 = {
    let mut c = 41;
    inc(&mut c);
    c
};

Mutable raw pointers and interior mutability are also supported:

use std::cell::UnsafeCell;

const C: i32 = {
    let c = UnsafeCell::new(41);
    unsafe { *c.get() += 1 };
    c.into_inner()
};

However, mutable references and pointers can only be used inside the computation of a constant; they cannot become a part of the final value of the constant:

const C: &mut i32 = &mut 4;
// error[E0764]: mutable references are not allowed in the final value of constants

This release also ships with a whole bag of new functions that are now stable in const contexts (see the end of the "Stabilized APIs" section).

These new capabilities and stabilized APIs unblock an entire new category of code to be executed inside const contexts, and we are excited to see how the Rust ecosystem will make use of this!

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.83.0

Many people came together to create Rust 1.83.0. We couldn't have done it without all of you. Thanks!

Rust 2024 call for testing

Rust 2024 call for testing

We've been hard at work on Rust 2024. We're thrilled about how it has turned out. It's going to be the largest edition since Rust 2015. It has a great many improvements that make the language more consistent and ergonomic, that further our relentless commitment to safety, and that will open the door to long-awaited features such as gen blocks, let chains, and the never (!) type. For more on the changes, see the nightly Edition Guide.

As planned, we recently merged the feature-complete Rust 2024 edition to the release train for Rust 1.85. It has now entered nightly beta.[1]

You can help right now to make this edition a success by testing Rust 2024 on your own projects using nightly Rust. Migrating your projects to the new edition is straightforward and mostly automated. Here's how:

  1. Install the most recent nightly with rustup update nightly.
  2. In your project, run cargo +nightly fix --edition.
  3. Edit Cargo.toml and change the edition field to say edition = "2024" and, if you have a rust-version specified, set rust-version = "1.85".
  4. Run cargo +nightly check to verify your project now works in the new edition.
  5. Run some tests, and try out the new features!

(More details on how to migrate can be found here and within each of the chapters describing the changes in Rust 2024.)
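As an illustration of step 3, the edition-related part of a Cargo.toml would end up looking roughly like this (the package name and version are placeholders):

[package]
name = "my-crate"        # placeholder
version = "0.1.0"
edition = "2024"
rust-version = "1.85"    # only if you already specify a rust-version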

If you encounter any problems or see areas where we could make the experience better, tell us about it by filing an issue.

Coming next

Rust 2024 will enter the beta channel on 2025-01-09, and will be released to stable Rust with Rust 1.85 on 2025-02-20.

  1. That is, it's still in nightly (not in the beta channel), but the edition items are frozen in a way similar to it being in the beta channel, and as with any beta, we'd like wide testing.

The wasm32-wasip2 Target Has Reached Tier 2 Support

Introduction

In April of this year we posted an update about Rust's WASI targets to the main Rust blog. In it we covered the rename of the wasm32-wasi target to wasm32-wasip1, and the introduction of the new wasm32-wasip2 target as a "tier 3" target. This meant that while the target was available as part of rust-lang/rustc, it was not guaranteed to build. We're pleased to announce that this has changed in Rust 1.82.

For those unfamiliar with WebAssembly (Wasm) components and WASI 0.2, here is a quick, simplified primer:

  • Wasm is a (virtual) instruction format for programs to be compiled into (think: x86).
  • Wasm Components are a container format and type system that wrap Core Wasm instructions into typed, hermetic binaries and libraries (think: ELF).
  • WASI is a reserved namespace for a collection of standardized Wasm component interfaces (think: POSIX header files).

For a more detailed explanation see the WASI 0.2 announcement post on the Bytecode Alliance blog.

What's new?

Starting with Rust 1.82 (2024-10-17), the wasm32-wasip2 (WASI 0.2) target has reached tier 2 platform support in the Rust compiler. Among other things, this means it is guaranteed to build and is now available to install via Rustup using the following command:

rustup target add wasm32-wasip2

Up until now, Rust users writing Wasm Components had to rely on tools (such as cargo-component) which target the WASI 0.1 target (wasm32-wasip1) and package it into a WASI 0.2 Component via a post-processing step. Now that wasm32-wasip2 is available to everyone via Rustup, tooling can begin to directly target WASI 0.2 without the need for additional post-processing.

This also means that ecosystem crates can begin targeting WASI 0.2 directly for platform-specific code. WASI 0.1 did not have support for sockets; now that we have a stable tier 2 platform available, crate authors can finally start writing WASI-compatible network code. To target WASI 0.2 from Rust, authors can use the following cfg attribute:

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {
    // items go here
}

To target the older WASI 0.1 target, Rust also accepts target_env = "p1".
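Putting the two together, a crate supporting both WASI versions might gate its platform-specific modules roughly like this:

#[cfg(all(target_os = "wasi", target_env = "p1"))]
mod wasip1 {
    // WASI 0.1-specific code goes here (no socket support on this target).
}

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {
    // WASI 0.2-specific code goes here, e.g. networking.
}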

Standard Library Support

The WASI 0.2 Rust target reaching tier 2 platform support is in a way just the beginning: the target is now supported and stable, but support in the stdlib for WASI 0.2 APIs is still limited. The WASI 0.2 specification defines APIs for timers, files, and sockets, for example, but if you try to use the stdlib APIs for these today, you'll find they don't yet work.

We expect to gradually extend the Rust stdlib with support for WASI 0.2 APIs throughout the remainder of this year into the next. That work has already started, with rust-lang/rust#129638 adding native support for std::net in Rust 1.83. We expect more of these PRs to land through the remainder of the year.
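As a rough sketch of what that support unlocks once it is in your toolchain: ordinary, portable std::net code like the following should compile for wasm32-wasip2 (whether it runs correctly still depends on the WASI 0.2 runtime you target):

use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Plain, portable std::net code; nothing WASI-specific needed.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let stream = stream?;
        println!("accepted connection from {:?}", stream.peer_addr());
    }
    Ok(())
}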

This doesn't need to stop users from using WASI 0.2 today, though. The stdlib is great because it provides portable abstractions, usually built on top of an operating system's libc or equivalent. If you want to use WASI 0.2 APIs directly today, you can either use the wasi crate directly, or generate your own WASI bindings from the WASI specification's interface types using wit-bindgen.

Conclusion

The wasm32-wasip2 target is now installable via Rustup. This makes it possible for the Rust compiler to directly compile to the Wasm Components format targeting the WASI 0.2 interfaces. There is now also a way for crates to add WASI 0.2 platform support by writing:

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {}

We're excited for Wasm Components and WASI 0.2 to have reached this milestone within the Rust project, and are excited to see what folks in the community will be building with it!

gccrs: An alternative compiler for Rust

This is a guest post from the gccrs project, at the invitation of the Rust Project, to clarify the relationship with the Rust Project and the opportunities for collaboration.

gccrs is a work-in-progress alternative compiler for Rust being developed as part of the GCC project. GCC is a collection of compilers for various programming languages that all share a common compilation framework. You may have heard about gccgo, gfortran, or g++, which are all binaries within that project, the GNU Compiler Collection. The aim of gccrs is to add support for the Rust programming language to that collection, with the goal of having the exact same behavior as rustc.

First and foremost, gccrs was started as a project because it is fun. Compilers are incredibly rewarding pieces of software, and are great fun to put together. The project was started back in 2014, before Rust 1.0 was released, but was quickly put aside due to the shifting nature of the language back then. Around 2019, work on the compiler started again, led by Philip Herron and funded by Open Source Security and Embecosm. Since then, we have kept steadily progressing towards support for the Rust language as a whole, and our team has kept growing with around a dozen contributors working regularly on the project. We have participated in the Google Summer of Code program for the past four years, and multiple students have joined the effort.

The main goal of gccrs is to provide an alternative option for compiling Rust. GCC is an old project, as it was first released in 1987. Over the years, it has accumulated numerous contributions and support for multiple targets, including some not supported by LLVM, the main backend used by rustc. A practical example of that reach is the homebrew Dreamcast scene, where passionate engineers develop games for the Dreamcast console. Its processor architecture, SuperH, is supported by GCC but not by LLVM. This means that Rust is not able to be used on those platforms, except through efforts like gccrs or the rustc-codegen-gcc backend - whose main differences will be explained later.

GCC also benefits from the decades of software written in unsafe languages. As such, a large number of safety features have been developed for the project as external plugins, or even within the project as static analyzers. These analyzers and plugins are executed on GCC's internal representations, meaning that they are language-agnostic, and can thus be used on all the programming languages supported by GCC. Likewise, many GCC plugins are used for increasing the safety of critical projects such as the Linux kernel, which has recently gained support for the Rust programming language. This makes gccrs a useful tool for analyzing unsafe Rust code, and more generally Rust code which has to interact with existing C code. We also want gccrs to be a useful tool for rustc itself by helping flesh out the Rust specification effort with a unique viewpoint - that of a tool trying to replicate another's functionality, oftentimes through careful experimentation and source reading where the existing documentation did not go into enough detail. We are also in the process of developing various tools around gccrs and rustc, for the sole purpose of ensuring gccrs is as correct as rustc - which could help in discovering surprising behavior, unexpected functionality, or unspoken assumptions.

We would like to point out that our goal in aiding the Rust specification effort is not to turn it into a document for certifying alternative compilers as "Rust compilers" - while we believe that the specification will be useful to gccrs, our main goal is to contribute to it, by reviewing and adding to it as much as possible.

Furthermore, the project is still "young", and still requires a huge amount of work. There are a lot of places to make your mark, and a lot of easy things to work on for contributors interested in compilers. We have strived to create a safe, fun, and interesting space for all of our team and our GSoC students. We encourage anyone interested to come chat with us on our various communication platforms, and offer mentorship for you to learn how to contribute to the project and to compilers in general.

Maybe more importantly, however, there are a number of things that gccrs is NOT for. The project has multiple explicit non-goals, which we value just as highly as our goals.

The most crucial of these non-goals is for gccrs not to become a gateway for an alternative or extended Rust-like programming language. We do not wish to create a GNU-specific version of Rust, with different semantics or slightly different functionality. gccrs is not a way to introduce new Rust features, and will not be used to circumvent the RFC process - which we will be using, should we want to see something introduced to Rust. Rust is not C, and we do not intend to introduce subtle differences in the standard by making some features available only to gccrs users. We know about the pain caused by compiler-specific standards, and have learned from the history of older programming languages.

We do not want gccrs to be a competitor to the rustc_codegen_gcc backend. While both projects will effectively achieve the same goal, which is to compile Rust code using the GCC compiler framework, there are subtle differences in what each of these projects will unlock for the language. For example, rustc_codegen_gcc makes it easy to benefit from all of rustc's amazing diagnostics and helpful error messages, and makes Rust easily usable on GCC-specific platforms. On the other hand, it requires rustc to be available in the first place, whereas gccrs is part of a separate project entirely. This is important for some users and core Linux developers for example, who believe that having the ability to compile the entire kernel (C and Rust parts) using a single compiler is essential. gccrs can also offer more plugin entrypoints by virtue of it being its own separate GCC frontend. It also allows Rust to be used on GCC-specific platforms with an older GCC where libgccjit is not available. Nonetheless, we are very good friends with the folks working on rustc_codegen_gcc, and have helped each other multiple times, especially in dealing with the patch-based contribution process that GCC uses.

All of this ties into a much more global goal, which we could summarize as the following: We do not want to split the Rust ecosystem. We want gccrs to help the language reach even more people, and even more platforms.

To ensure that, we have taken multiple measures to make sure the values of the Rust project are respected and exposed properly. One of the features we feel most strongly about is the addition of a very annoying command line flag to the compiler, -frust-incomplete-and-experimental-compiler-do-not-use. Without it, you are not able to compile any code with gccrs, and the compiler will output the following error message:

crab1: fatal error: gccrs is not yet able to compile Rust code properly. Most of the errors produced will be the fault of gccrs and not the crate you are trying to compile. Because of this, please report errors directly to us instead of opening issues on said crate's repository.

Our github repository: https://github.com/rust-gcc/gccrs

Our bugzilla tracker: https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=__open__&component=rust&product=gcc

If you understand this, and understand that the binaries produced might not behave accordingly, you may attempt to use gccrs in an experimental manner by passing the following flag:

-frust-incomplete-and-experimental-compiler-do-not-use

or by defining the following environment variable (any value will do)

GCCRS_INCOMPLETE_AND_EXPERIMENTAL_COMPILER_DO_NOT_USE

For cargo-gccrs, this means passing

GCCRS_EXTRA_ARGS="-frust-incomplete-and-experimental-compiler-do-not-use"

as an environment variable.

Until the compiler can compile correct Rust and, most importantly, reject incorrect Rust, we will be keeping this command line option in the compiler. The hope is that it will prevent users from potentially annoying existing Rust crate maintainers with issues about code not compiling, when it is most likely our fault for not having implemented part of the language yet. Our goal of creating an alternative compiler for the Rust language must not have a negative effect on any member of the Rust community. Of course, this command line flag is not to the taste of everyone, and there has been significant pushback to its presence... but we believe it to be a good representation of our main values.

In a similar vein, gccrs separates itself from the rest of the GCC project by not using a mailing list as its main mode of communication. The compiler we are building will be used by the Rust community, and we believe we should make it easy for that community to get in touch with us and report the problems they encounter. Since Rustaceans are used to GitHub, this is also the development platform we have been using for the past five years. Similarly, we use a Zulip instance as our main communication platform, and encourage anyone wanting to chat with us to join it. Note that we still have a mailing list, as well as an IRC channel (gcc-rust@gcc.gnu.org and #gccrust on oftc.net), where all are welcome.

To further ensure that gccrs does not create friction in the ecosystem, we want to be extremely careful about the finer details of the compiler, which to us means reusing rustc components where possible, sharing effort on those components, and communicating extensively with Rust experts in the community. Two Rust components are already in use by gccrs: a slightly older version of polonius, the next-generation Rust borrow-checker, and the rustc_parse_format crate of the compiler. There are multiple reasons for reusing these crates, with the main one being correctness. Borrow checking is a complex topic and a pillar of the Rust programming language. Having subtle differences between rustc and gccrs regarding the borrow rules would be annoying and unproductive to users - but by making an effort to start integrating polonius into our compilation pipeline, we help ensure that the results we produce will be equivalent to rustc. You can read more about the various components we use (and those we plan to reuse) here. We would also like to contribute to the polonius project itself and help make it better if possible. This cross-pollination of components will obviously benefit us, but we believe it will also be useful for the Rust project and ecosystem as a whole, and will help strengthen these implementations.

Reusing rustc components could also be extended to other areas of the compiler: Various components of the type system, such as the trait solver, an essential and complex piece of software, could be integrated into gccrs. Simpler things such as parsing, as we have done for the format string parser and inline assembly parser, also make sense to us. They will help ensure that the internal representation we deal with will correspond to the one expected by the Rust standard library.

On a final note, we believe that one of the most important steps we could take to prevent breakage within the Rust ecosystem is to further improve our relationship with the Rust community. The amount of help we have received from Rust folks is great, and we think gccrs can be an interesting project for a wide range of users. We would love to hear about your hopes for the project and your ideas for reducing ecosystem breakage or lowering friction with the crates you have published. We had a great time chatting about gccrs at RustConf 2024, and everyone's interest in the project was heartwarming. Please get in touch with us if you have any ideas on how we could further contribute to Rust.

Google Summer of Code 2024 results

As we have previously announced, the Rust Project participated in Google Summer of Code (GSoC) for the first time this year. Nine contributors have been tirelessly working on their exciting projects for several months. The projects had various durations; some of them ended in August, while the last one concluded in the middle of October. Now that the final reports of all the projects have been submitted, we can happily announce that all nine contributors have passed the final review! That means that we have deemed all of their projects to be successful, even though they might not have fulfilled all of their original goals (but that was expected).

We had a lot of great interactions with our GSoC contributors, and based on their feedback, it seems that they were also quite happy with the GSoC program and that they had learned a lot. We are of course also incredibly grateful for all their contributions - some of them have even continued contributing after their project has ended, which is really awesome. In general, we think that Google Summer of Code 2024 was a success for the Rust Project, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our project idea list.

Below you can find a brief summary of each of our GSoC 2024 projects, including feedback from the contributors and mentors themselves. You can find more information about the projects here.

Adding lint-level configuration to cargo-semver-checks

cargo-semver-checks is a tool designed for automatically detecting semantic versioning conflicts, which is planned to one day become a part of Cargo itself. The goal of this project was to enable cargo-semver-checks to ship additional opt-in lints by allowing users to configure which lints run in which cases, and whether their findings are reported as errors or warnings. Max achieved this goal by implementing a comprehensive system for configuring cargo-semver-checks lints directly in the Cargo.toml manifest file. He also extensively discussed the design with the Cargo team to ensure that it is compatible with how other Cargo lints are configured, and won't present a future compatibility problem for merging cargo-semver-checks into Cargo.

Predrag, who is the author of cargo-semver-checks and who mentored Max on this project, was very happy with his contributions that even went beyond his original project scope:

He designed and built one of our most-requested features, and produced design prototypes of several more features our users would love. He also observed that writing quality CLI and functional tests was hard, so he overhauled our test system to make better tests easier to make. Future work on cargo-semver-checks will be much easier thanks to the work Max put in this summer.

Great work, Max!

Implementation of a faster register allocator for Cranelift

The Rust compiler can use various backends for generating executable code. The main one is of course the LLVM backend, but there are other backends, such as GCC, .NET or Cranelift. Cranelift is a code generator for various hardware targets, essentially something similar to LLVM. The Cranelift backend uses Cranelift to compile Rust code into executable code, with the goal of improving compilation performance, especially for debug (unoptimized) builds. Even though this backend can already be faster than the LLVM backend, we have identified that it was slowed down by the register allocator used by Cranelift.

Register allocation is a well-known compiler task where the compiler decides which registers should hold variables and temporary expressions of a program. Usually, the goal of register allocation is to perform the register assignment in a way that maximizes the runtime performance of the compiled program. However, for unoptimized builds, we often care more about the compilation speed instead.

Demilade has thus proposed to implement a new Cranelift register allocator called fastalloc, with the goal of making it as fast as possible, at the cost of the quality of the generated code. He was very well prepared; in fact, he had a prototype implementation ready even before his GSoC project had started! However, register allocation is a complex problem, and it took several months to finish the implementation and optimize it as much as possible. Demilade also made extensive use of fuzzing to make sure that his allocator is robust even in the presence of various edge cases.

Once the allocator was ready, Demilade benchmarked the Cranelift backend both with the original and his new register allocator using our compiler benchmark suite. And the performance results look awesome! With his faster register allocator, the Rust compiler executes up to 18% fewer instructions across several benchmarks, including complex ones like performing a debug build of Cargo itself. Note that this is an end-to-end performance improvement of the time needed to compile a whole crate, which is really impressive. If you would like to examine the results in more detail or even run the benchmark yourself, check out Demilade's final report, which includes detailed instructions on how to reproduce the benchmark.

Apart from having the potential to speed up compilation of Rust code, the new register allocator can also be useful for other use-cases, as it can be used in Cranelift on its own (outside the Cranelift codegen backend). What can we say other than that we are very happy with Demilade's work! Note that the new register allocator is not yet available in the Cranelift codegen backend out-of-the-box, but we expect that it will eventually become the default choice for debug builds and that it will thus make compilation of Rust crates using the Cranelift backend faster in the future.

Improve Rust benchmark suite

This project was relatively loosely defined, with the overarching goal of improving the user interface of the Rust compiler benchmark suite. Eitaro tackled this challenge from various angles at once. He improved the visualization of runtime benchmarks, which were previously a second-class citizen in the benchmark suite, by adding them to our dashboard and by implementing historical charts of runtime benchmark results, which help us figure out how a given benchmark behaves over a longer time span.

Another improvement he worked on was embedding a profiler trace visualizer directly within the rustc-perf website. This was a challenging task, which required him to evaluate several visualizers and figure out how to include them within the source code of the benchmark suite in a non-disruptive way. In the end, he managed to integrate Perfetto within the suite website, and also performed various optimizations to improve the performance of loading compilation profiles.

Last, but not least, Eitaro also created a completely new user interface for the benchmark suite, which runs entirely in the terminal. Using this interface, Rust compiler contributors can examine the performance of the compiler without having to start the rustc-perf website, which can be challenging to deploy locally.

Apart from the mentioned contributions, Eitaro also made a lot of other smaller improvements to various parts of the benchmark suite. Thank you for all your work!

Move cargo shell completions to Rust

Cargo's completion scripts have been hand maintained and frequently broken when changed. The goal for this effort was to have the completions automatically generated from the definition of Cargo's command-line, with extension points for dynamically generated results.

shanmu took the prototype for dynamic completions in clap (the command-line parser used by Cargo), got it working and tested for common shells, and extended the parser to cover more cases. They then added extension points for CLIs to provide custom completion results that can be generated on the fly.

In the next phase, shanmu added this to nightly Cargo and added different custom completers to match what the handwritten completions do. As an example, with this feature enabled, when you type cargo test --test= and hit the Tab key, your shell will autocomplete all the test targets in your current Rust crate! If you are interested, see the instructions for trying this out. The link also lists where you can provide feedback.

You can also check out the following issues to find out what is left before this can be stabilized:

Rewriting esoteric, error-prone makefile tests using robust Rust features

The Rust compiler has several test suites that make sure that it is working correctly under various conditions. One of these suites is the run-make test suite, whose tests were previously written using Makefiles. However, this setup posed several problems. It was not possible to run the suite on the Tier 1 Windows MSVC target (x86_64-pc-windows-msvc) and getting it running on Windows at all was quite challenging. Furthermore, the syntax of Makefiles is quite esoteric, which frequently caused mistakes to go unnoticed even when reviewed by multiple people.

Julien helped to convert the Makefile-based run-make tests into plain Rust-based tests, supported by a test support library called run_make_support. However, it was not a trivial "rewrite this in Rust" kind of deal. In this project, Julien:

  • Significantly improved the test documentation;
  • Fixed multiple bugs that were present in the Makefile versions that had gone unnoticed for years -- some tests were never testing anything or silently ignored failures, so even if the subject being tested regressed, these tests would not have caught that;
  • Added to and improved the test support library API and implementation; and
  • Improved code organization within the tests to make them easier to understand and maintain.

Just to give you an idea of the scope of his work, he has ported almost 250 Makefile tests over the span of his GSoC project! If you like puns, check out the branch names of Julien's PRs, as they are simply fantestic.

As a result, Julien has significantly improved the robustness of the run-make test suite, and improved the ergonomics of modifying existing run-make tests and authoring new run-make tests. Multiple contributors have expressed that they were more willing to work with the Rust-based run-make tests over the previous Makefile versions.

The vast majority of run-make tests now use the Rust-based test infrastructure, with a few holdouts remaining due to various quirks. After these are resolved, we can finally rip out the legacy Makefile test infrastructure.

Rewriting the Rewrite trait

rustfmt is a Rust code formatter that is widely used across the Rust ecosystem thanks to its direct integration within Cargo. Usually, you just run cargo fmt and you can immediately enjoy a properly formatted Rust project. However, there are edge cases in which rustfmt can fail to format your code. That is not such an issue on its own, but it becomes more problematic when it fails silently, without giving the user any context about what went wrong. This is what was happening in rustfmt, as many functions simply returned an Option instead of a Result, which made it difficult to add proper error reporting.

The goal of SeoYoung's project was to perform a large internal refactoring of rustfmt that would allow tracking context about what went wrong during reformatting. In turn, this would enable turning silent failures into proper error messages that could help users examine and debug what went wrong, and could even allow rustfmt to retry formatting in more situations.

At first, this might sound like an easy task, but performing such large-scale refactoring within a complex project such as rustfmt is not so simple. SeoYoung needed to come up with an approach to incrementally apply these refactors, so that they would be easy to review and wouldn't impact the entire code base at once. She introduced a new trait that enhanced the original Rewrite trait, and modified existing implementations to align with it. She also had to deal with various edge cases that we hadn't anticipated before the project started. SeoYoung was meticulous and systematic with her approach, and made sure that no formatting functions or methods were missed.

Ultimately, the refactor was a success! Internally, rustfmt now keeps track of more information related to formatting failures, including errors that it could not possibly report before, such as issues with macro formatting. It also has the ability to provide information about source code spans, which helps identify parts of code that require spacing adjustments when exceeding the maximum line width. We don't yet propagate that additional failure context as user facing error messages, as that was a stretch goal that we didn't have time to complete, but SeoYoung has expressed interest in continuing to work on that as a future improvement!

Apart from working on error context propagation, SeoYoung also made various other improvements that enhanced the overall quality of the codebase, and she was also helping other contributors understand rustfmt. Thank you for making the foundations of formatting better for everyone!

Rust to .NET compiler - add support for compiling & running cargo tests

As was already mentioned above, the Rust compiler can be used with various codegen backends. One of these is the .NET backend, which compiles Rust code to the Common Intermediate Language (CIL), which can then be executed by the .NET Common Language Runtime (CLR). This backend allows interoperability of Rust and .NET (e.g. C#) code, in an effort to bring these two ecosystems closer together.

At the start of this year, the .NET backend was already able to compile complex Rust programs, but it was still lacking certain crucial features. The goal of this GSoC project, implemented by Michał, who is in fact the sole author of the backend, was to extend the functionality of this backend in various areas. As a target goal, he set out to extend the backend so that it could be used to run tests using the cargo test command. Even though it might sound trivial, properly compiling and running the Rust test harness is non-trivial, as it makes use of complex features such as dynamic trait objects, atomics, panics, unwinding or multithreading. These features were especially tricky to implement in this codegen backend, because the LLVM intermediate representation (IR) and CIL have fundamental differences, and not all LLVM intrinsics have .NET equivalents.

However, this did not stop Michał. He has been working on this project tirelessly, implementing new features, fixing various issues and learning more about the compiler's internals every new day. He has also been documenting his journey with (almost) daily updates on Zulip, which were fascinating to read. Once he reached his original goal, he moved the goalpost up to another level and attempted to run the compiler's own test suite using the .NET backend. This helped him uncover additional edge cases and also led to a refactoring of the whole backend that resulted in significant performance improvements.

By the end of the GSoC project, the .NET backend was able to properly compile and run almost 90% of the standard library core and std test suite. That is an incredibly impressive number, since the suite contains thousands of tests, some of which are quite arcane. Michał's pace has not slowed down even after the project ended, and he is still continuously improving the backend. Oh, and did we already mention that his backend also has experimental support for emitting C code, effectively acting as a C codegen backend?! Michał has been very busy over the summer.

We thank Michał for all his work on the .NET backend, as it was truly inspirational, and led to fruitful discussions that were also relevant to other codegen backends. Michał's next goal is to get his backend upstreamed and create an official .NET compilation target, which could open up the doors to Rust becoming a first-class citizen in the .NET ecosystem.

Sandboxed and deterministic proc macro using WebAssembly

Rust procedural (proc) macros are currently run as native code that gets compiled to a shared object which is loaded directly into the process of the Rust compiler. Because of this design, these macros can do whatever they want, for example arbitrarily access the filesystem or communicate through a network. This has not only obvious security implications, but it also affects performance, as this design makes it difficult to cache proc macro invocations. Over the years, there have been various discussions about making proc macros more hermetic, for example by compiling them to WebAssembly modules, which can be easily executed in a sandbox. This would also open the possibility of distributing precompiled versions of proc macros via crates.io, to speed up fresh builds of crates that depend on proc macros.

The goal of this project was to examine what it would take to implement WebAssembly module support for proc macros and create a prototype of this idea. We knew this would be a very ambitious project, especially since Apurva did not have prior experience with contributing to the Rust compiler, and because proc macro internals are very complex. Nevertheless, some progress was made. With the help of his mentor, David, Apurva was able to create a prototype that can load WebAssembly code into the compiler via a shared object. Some work was also done to make use of the existing TokenStream serialization and deserialization code in the compiler's proc_macro crate.

Even though this project did not fulfill its original goals and more work will be needed in the future to get a functional prototype of WebAssembly proc macros, we are thankful for Apurva's contributions. The WebAssembly loading prototype is a good start, and Apurva's exploration of proc macro internals should serve as a useful reference for anyone working on this feature in the future. Going forward, we will try to describe more incremental steps for our GSoC projects, as this project was perhaps too ambitious from the start.

Tokio async support in Miri

miri is an interpreter that can find possible instances of undefined behavior in Rust code. It is being used across the Rust ecosystem, but previously it was not possible to run it on any non-trivial programs (those that ever await on anything) that use tokio, due to a fundamental missing feature: support for the epoll syscall on Linux (and similar APIs on other major platforms).

Tiffany implemented the basic epoll operations needed to cover the majority of the tokio test suite, by crafting pure libc code examples that exercised those epoll operations, and then implementing their emulation in miri itself. At times, this required refactoring core miri components like file descriptor handling, as they were originally not created with syscalls like epoll in mind.

Surprisingly to everyone (though probably not to tokio-internals experts), once these core epoll operations were finished, operations like async file reading and writing started working in miri out of the box! Due to limitations of the non-blocking file operations offered by operating systems, tokio wraps these file operations in dedicated threads, which was already supported by miri.

Once Tiffany had finished the project, including stretch goals like implementing async file operations, she proceeded to contact tokio maintainers and worked with them to run miri on most tokio tests in CI. And we have good news: so far no soundness problems have been discovered! Tiffany has become a regular contributor to miri, focusing on continuing to expand the set of supported file descriptor operations. We thank her for all her contributions!

Conclusion

We are grateful that we could have been a part of the Google Summer of Code 2024 program, and we would also like to extend our gratitude to all our contributors! We are looking forward to joining the GSoC program again next year.

Next Steps on the Rust Trademark Policy

As many of you know, the Rust language trademark policy has been the subject of an extended revision process dating back to 2022. In 2023, the Rust Foundation released an updated draft of the policy for input following an initial survey about community trademark priorities from the previous year along with review by other key stakeholders, such as the Project Directors. Many members of our community were concerned about this initial draft and shared their thoughts through the feedback form. Since then, the Rust Foundation has continued to engage with the Project Directors, the Leadership Council, and the wider Rust project (primarily via all@) for guidance on how to best incorporate as much feedback as possible.

After extensive discussion, we are happy to circulate an updated draft with the wider community today for final feedback. An effective trademark policy for an open source community should reflect our collective priorities while remaining legally sound. While the revised trademark policy cannot perfectly address every individual perspective on this important topic, its goal is to establish a framework to help guide appropriate use of the Rust trademark and reflect as many common values and interests as possible. In short, this policy is designed to steer our community toward a shared objective: to maintain and protect the integrity of the Rust programming language.

The Leadership Council is confident that this updated version of the policy has addressed the prevailing concerns about the initial draft and honors the variety of voices that have contributed to its development. Thank you to those who took the time to submit well-considered feedback for the initial draft last year or who otherwise participated in this long-running process to update our policy to continue to satisfy our goals.

Please review the updated Rust trademark policy here, and share any critical concerns you might have via this form by November 20, 2024. The Foundation has also published a blog post which goes into more detail on the changes made so far. The Leadership Council and Project Directors look forward to reviewing concerns raised and approving any final revisions prior to an official update of the policy later this year.

October project goals update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as flagship goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

The biggest elements of our goal are solving the "send bound" problem via return-type notation (RTN) and adding support for async closures. This month we made progress towards both. For RTN, @compiler-errors extended the return-type notation implementation and landed support for using RTN in self-types like where Self::method(..): Send. He also authored a blog post with a call for testing explaining what RTN is and how it works. For async closures, the lang team reached a preliminary consensus on the async Fn syntax, with the understanding that it will also include some "async type" syntax. This rationale was documented in RFC #3710, which is now open for feedback. The team held a design meeting on Oct 23 and @nikomatsakis will be updating the RFC with the conclusions.
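For readers unfamiliar with RTN, here is a minimal sketch of the kind of bound being discussed; note that, as of this update, it requires a nightly compiler and (we assume here) the return_type_notation feature gate:

#![feature(return_type_notation)]

trait HealthCheck {
    async fn check(&mut self) -> bool;
}

// `HC::check(..): Send` constrains the future returned by `check`,
// which is what lets callers move that future across threads.
fn spawn_check<HC>(hc: HC)
where
    HC: HealthCheck + Send + 'static,
    HC::check(..): Send,
{
    let _ = hc; // e.g. hand `hc` off to a work-stealing executor here
}

fn main() {}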

We have also been working towards a release of the dynosaur crate that enables dynamic dispatch for traits with async functions. This is intended as a transitionary step before we implement true dynamic dispatch. The next steps are to polish the implementation and issue a public call for testing.

With respect to async drop experiments, @nikomatsakis began reviews. It is expected that reviews will continue for some time as this is a large PR.

Finally, no progress has been made towards async WG reorganization. A meeting was scheduled but deferred. @tmandry is currently drafting an initial proposal.

We have made significant progress on resolving blockers to Linux building on stable. Support for struct fields in the offset_of! macro has been stabilized. The final naming for the "derive-smart-pointer" feature has been decided as #[derive(CoercePointee)]; @dingxiangfei2009 prepared PR #131284 for the rename and is working on modifying the rust-for-linux repository to use the new name. Once that is complete, we will be able to stabilize. We decided to stabilize support for references to statics in constants (the pointers-refs-to-static feature) and are now awaiting a stabilization PR from @dingxiangfei2009.

Rust for Linux (RfL) is one of the major users of the asm-goto feature (and inline assembly in general) and we have been examining various extensions. @nbdd0121 authored a hackmd document detailing RfL's experiences and identifying areas for improvement. This led to two immediate action items: making target blocks safe-by-default (rust-lang/rust#119364) and extending const to support embedded pointers (rust-lang/rust#128464).

Finally, we have been finding an increasing number of stabilization requests at the compiler level, and so @wesleywiser and @davidtwco from the compiler team have started attending meetings to create a faster response. One of the results of that collaboration is RFC #3716, authored by Alice Ryhl, which proposes a method to manage compiler flags that modify the target ABI. Our previous approach has been to create distinct targets for each combination of flags, but the number of flags needed by the kernel makes that impractical. Authoring the RFC revealed more such flags than previously recognized, including those that modify LLVM behavior.

The Rust 2024 edition is progressing well and is on track to be released on schedule. The major milestones include preparing to stabilize the edition by November 22, 2024, with the actual stabilization occurring on November 28, 2024. The edition will then be cut to beta on January 3, 2025, followed by an announcement on January 9, 2025, indicating that Rust 2024 is pending release. The final release is scheduled for February 20, 2025.

The priorities for this edition have been to ensure its success without requiring excessive effort from any individual. The team is pleased with the progress, noting that this edition will be the largest since Rust 2015, introducing many new and exciting features. The process has been carefully managed to maintain high standards without the need for high-stress heroics that were common in past editions. Notably, the team has managed to avoid cutting many items from the edition late in the development process, which helps prevent wasted work and burnout.

All priority language items for Rust 2024 have been completed and are ready for release. These include several key issues and enhancements. Additionally, there are three changes to the standard library, several updates to Cargo, and an exciting improvement to rustdoc that will significantly speed up doctests.

This edition also introduces a new style edition for rustfmt, which includes several formatting changes.

The team is preparing to start final quality assurance crater runs. Once these are triaged, the nightly beta for Rust 2024 will be announced, and wider testing will be solicited.

Rust 2024 will be stabilized in nightly in late November 2024, cut to beta on January 3, 2025, and officially released on February 20, 2025. More details about the edition items can be found in the Edition Guide.

Goals with updates

  • camelid has started working on using the new lowering schema for more than just const parameters, which once done will allow the introduction of a min_generic_const_args feature gate.
  • compiler-errors has been working on removing the eval_x methods on Const that do not perform proper normalization and are incompatible with this feature.
  • Posted the September update.
  • Created more automated infrastructure to prepare the October update, utilizing an LLM to summarize updates into one or two sentences for a concise table.
  • No progress has been made on this goal.
  • The goal will be closed as consensus indicates stabilization will not be achieved in this period; it will be revisited in the next goal period.
  • No major updates to report.
  • Preparing a talk for next week's EuroRust has taken away most of the free time.
  • Key developments: With the PR for supporting implied super trait bounds landed (#129499), the current implementation is mostly complete in that it allows most code that should compile, and should reject all code that shouldn't.
  • Further testing is required, with the next steps being improving diagnostics (#131152), and fixing more holes before const traits are added back to core.
  • A work-in-progress pull request is available at https://github.com/weihanglo/cargo/pull/66.
  • The use of wasm32-wasip1 as a default sandbox environment is unlikely due to its lack of support for POSIX process spawning, which is essential for various build script use cases.
  • The Autodiff frontend was merged, including over 2k LoC and 30 files, making the remaining diff much smaller.
  • The Autodiff middle-end is likely getting a redesign, moving from a library-based to a pass-based approach for LLVM.
  • Significant progress was made with contributions by @x-hgg-x, improving the resolver test suite in Cargo to check feature unification against a SAT solver.
  • This was followed by porting the test cases that tripped up PubGrub to Cargo's test suite, laying the groundwork to prevent regression on important behaviors when Cargo switches to PubGrub and preparing for fuzzing of features in dependency resolution.
  • The team is working on a consensus for handling generic parameters, with both PRs currently blocked on this issue.
  • Attempted stabilization of -Znext-solver=coherence was reverted due to a hang in nalgebra, with subsequent fixes improving but not fully resolving performance issues.
  • No significant changes to the new solver have been made in the last month.
  • GnomedDev pushed rust-lang/rust#130553, which replaced an old Clippy infrastructure with a faster one (string matching into symbol matching).
  • Inspections into Clippy's type sizes and cache alignment are being started, but nothing fruitful yet.
  • The linting behavior was reverted until an unspecified date.
  • The next steps are to decide on the future of linting and to write the never patterns RFC.
  • The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged.
  • Work on the frontend feature is in progress.
  • Key developments in the 'Scalable Polonius support on nightly' project include fixing test failures due to off-by-one errors from old mid-points, and ongoing debugging of test failures with a focus on automating the tracing work.
  • Efforts have been made to accept variations of issue #47680, with potential adjustments to active loans computation and locations of effects. Amanda has been cleaning up placeholders in the work-in-progress PR #130227.
  • rust-lang/cargo#14404 and rust-lang/cargo#14591 have been addressed.
  • Waiting on time to focus on this in a couple of weeks.
  • Key developments: Added the cases in the issue list to the UI test to reproduce the bug or verify the non-reproducibility.
  • Blockers: none.
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue.
  • Students from the CMU Practicum Project have started writing function contracts that include safety conditions for some unsafe functions in the core library, and verifying that safe abstractions respect those pre-conditions and are indeed safe.
  • Help is needed to write more contracts, integrate new tools, review pull requests, or participate in the repository discussions.
  • Progress has been made in matching rustc suggestion output within annotate-snippets, with most cases now aligned.
  • The focus has been on understanding and adapting different rendering styles for suggestions to fit within annotate-snippets.

Goals without updates

The following goals have not received updates in the last month:

Announcing Rust 1.82.0

The Rust team is happy to announce a new version of Rust, 1.82.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.82.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.82.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.82.0 stable

cargo info

Cargo now has an info subcommand to display information about a package in the registry, fulfilling a long-standing request just shy of its tenth anniversary! Several third-party extensions like this have been written over the years, and this implementation was developed as cargo-information before merging into Cargo itself.

For example, here's what you could see for cargo info cc:

cc #build-dependencies
A build-time dependency for Cargo build scripts to assist in invoking the native
C compiler to compile native C code into a static archive to be linked into Rust
code.
version: 1.1.23 (latest 1.1.30)
license: MIT OR Apache-2.0
rust-version: 1.63
documentation: https://docs.rs/cc
homepage: https://github.com/rust-lang/cc-rs
repository: https://github.com/rust-lang/cc-rs
crates.io: https://crates.io/crates/cc/1.1.23
features:
  jobserver = []
  parallel  = [dep:libc, dep:jobserver]
note: to see how you depend on cc, run `cargo tree --invert --package cc@1.1.23`

By default, cargo info describes the package version in the local Cargo.lock, if any. As you can see, it will indicate when there's a newer version too, and cargo info cc@1.1.30 would report on that.

Apple target promotions

macOS on 64-bit ARM is now Tier 1

The Rust target aarch64-apple-darwin for macOS on 64-bit ARM (M1-family or later Apple Silicon CPUs) is now a tier 1 target, indicating our highest guarantee of working properly. As the platform support page describes, every change in the Rust repository must pass full tests on every tier 1 target before it can be merged. This target was introduced as tier 2 back in Rust 1.49, making it available in rustup. This new milestone puts the aarch64-apple-darwin target on par with the 64-bit ARM Linux and the X86 macOS, Linux, and Windows targets.

Mac Catalyst targets are now Tier 2

Mac Catalyst is a technology by Apple that allows running iOS applications natively on the Mac. This is especially useful when testing iOS-specific code, as cargo test --target=aarch64-apple-ios-macabi --target=x86_64-apple-ios-macabi mostly just works (in contrast to the usual iOS targets, which need to be bundled using external tooling before they can be run on a native device or in the simulator).

The targets are now tier 2, and can be downloaded with rustup target add aarch64-apple-ios-macabi x86_64-apple-ios-macabi, so now is an excellent time to update your CI pipeline to test that your code also runs in iOS-like environments.

Precise capturing use<..> syntax

Rust now supports use<..> syntax within certain impl Trait bounds to control which generic lifetime parameters are captured.

Return-position impl Trait (RPIT) types in Rust capture certain generic parameters. Capturing a generic parameter allows that parameter to be used in the hidden type. That in turn affects borrow checking.

In Rust 2021 and earlier editions, lifetime parameters are not captured in opaque types on bare functions and on functions and methods of inherent impls unless those lifetime parameters are mentioned syntactically in the opaque type. E.g., this is an error:

//@ edition: 2021
fn f(x: &()) -> impl Sized { x }
error[E0700]: hidden type for `impl Sized` captures lifetime that does not appear in bounds
 --> src/main.rs:1:30
  |
1 | fn f(x: &()) -> impl Sized { x }
  |         ---     ----------   ^
  |         |       |
  |         |       opaque type defined here
  |         hidden type `&()` captures the anonymous lifetime defined here
  |
help: add a `use<...>` bound to explicitly capture `'_`
  |
1 | fn f(x: &()) -> impl Sized + use<'_> { x }
  |                            +++++++++

With the new use<..> syntax, we can fix this, as suggested in the error, by writing:

fn f(x: &()) -> impl Sized + use<'_> { x }

Previously, correctly fixing this class of error required defining a dummy trait, conventionally called Captures, and using it as follows:

trait Captures<T: ?Sized> {}
impl<T: ?Sized, U: ?Sized> Captures<T> for U {}

fn f(x: &()) -> impl Sized + Captures<&'_ ()> { x }

That was called "the Captures trick", and it was a bit baroque and subtle. It's no longer needed.

There was a less correct but more convenient fix that was often used, called "the outlives trick". The compiler even previously suggested doing this. That trick looked like this:

fn f(x: &()) -> impl Sized + '_ { x }

In this simple case, the trick is exactly equivalent to + use<'_> for subtle reasons explained in RFC 3498. However, in real life cases, this overconstrains the bounds on the returned opaque type, leading to problems. For example, consider this code, which is inspired by a real case in the Rust compiler:

struct Ctx<'cx>(&'cx u8);

fn f<'cx, 'a>(
    cx: Ctx<'cx>,
    x: &'a u8,
) -> impl Iterator<Item = &'a u8> + 'cx {
    core::iter::once_with(move || {
        eprintln!("LOG: {}", cx.0);
        x
    })
//~^ ERROR lifetime may not live long enough
}

We can't remove the + 'cx, since the lifetime is used in the hidden type and so must be captured. Neither can we add a bound of 'a: 'cx, since these lifetimes are not actually related and it won't in general be true that 'a outlives 'cx. If we write + use<'cx, 'a> instead, however, this will work and have the correct bounds.
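
Putting that together, the fix suggested above looks like this (the same example, with the overly restrictive + 'cx bound replaced by + use<'cx, 'a>):

struct Ctx<'cx>(&'cx u8);

fn f<'cx, 'a>(
    cx: Ctx<'cx>,
    x: &'a u8,
) -> impl Iterator<Item = &'a u8> + use<'cx, 'a> {
    core::iter::once_with(move || {
        eprintln!("LOG: {}", cx.0);
        x
    })
}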

There are some limitations to what we're stabilizing today. The use<..> syntax cannot currently appear within traits or within trait impls (but note that there, in-scope lifetime parameters are already captured by default), and it must list all in-scope generic type and const parameters. We hope to lift these restrictions over time.

Note that in Rust 2024, the examples above will "just work" without needing use<..> syntax (or any tricks). This is because in the new edition, opaque types will automatically capture all lifetime parameters in scope. This is a better default, and we've seen a lot of evidence about how this cleans up code. In Rust 2024, use<..> syntax will serve as an important way of opting-out of that default.
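
A minimal sketch of that opt-out (not from the announcement; the function and names are made up): with an empty use<> list, the opaque type captures nothing, so the caller can drop the borrow of the argument while still holding the returned value.

// Hypothetical Rust 2024 example: without `+ use<>` the anonymous lifetime of
// `x` would be captured by default; with it, the returned iterator does not
// borrow from `x` at all.
fn indices(x: &[u8]) -> impl Iterator<Item = usize> + use<> {
    0..x.len()
}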

For more details about use<..> syntax, capturing, and how this applies to Rust 2024, see the "RPIT lifetime capture rules" chapter of the edition guide. For details about the overall direction, see our recent blog post, "Changes to impl Trait in Rust 2024".

Native syntax for creating a raw pointer

Unsafe code sometimes has to deal with pointers that may dangle, may be misaligned, or may not point to valid data. A common case where this comes up is repr(packed) structs. In such a case, it is important to avoid creating a reference, as that would cause undefined behavior. This means the usual & and &mut operators cannot be used, as those create a reference -- even if the reference is immediately cast to a raw pointer, it's too late to avoid the undefined behavior.

For several years, the macros std::ptr::addr_of! and std::ptr::addr_of_mut! have served this purpose. Now the time has come to provide a proper native syntax for this operation: addr_of!(expr) becomes &raw const expr, and addr_of_mut!(expr) becomes &raw mut expr. For example:

#[repr(packed)]
struct Packed {
    not_aligned_field: i32,
}

fn main() {
    let p = Packed { not_aligned_field: 1_82 };

    // This would be undefined behavior!
    // It is rejected by the compiler.
    //let ptr = &p.not_aligned_field as *const i32;

    // This is the old way of creating a pointer.
    let ptr = std::ptr::addr_of!(p.not_aligned_field);

    // This is the new way.
    let ptr = &raw const p.not_aligned_field;

    // Accessing the pointer has not changed.
    // Note that `val = *ptr` would be undefined behavior because
    // the pointer is not aligned!
    let val = unsafe { ptr.read_unaligned() };
}

The native syntax makes it more clear that the operand expression of these operators is interpreted as a place expression. It also avoids the term "address-of" when referring to the action of creating a pointer. A pointer is more than just an address, so Rust is moving away from terms like "address-of" that reaffirm a false equivalence of pointers and addresses.

Safe items with unsafe extern

Rust code can use functions and statics from foreign code. The type signatures of these foreign items are provided in extern blocks. Historically, all items within extern blocks have been unsafe to use, but we didn't have to write unsafe anywhere on the extern block itself.

However, if a signature within the extern block is incorrect, then using that item will result in undefined behavior. Would that be the fault of the person who wrote the extern block, or the person who used that item?

We've decided that it's the responsibility of the person writing the extern block to ensure that all signatures contained within it are correct, and so we now allow writing unsafe extern:

unsafe extern {
    pub safe static TAU: f64;
    pub safe fn sqrt(x: f64) -> f64;
    pub unsafe fn strlen(p: *const u8) -> usize;
}

One benefit of this is that items within an unsafe extern block can be marked as safe to use. In the above example, we can call sqrt or read TAU without using unsafe. Items that aren't marked with either safe or unsafe are conservatively assumed to be unsafe.
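
For illustration (a sketch, not from the announcement; it assumes the declarations above actually resolve to real symbols at link time), the safe items can then be used like ordinary safe Rust, while strlen still requires an unsafe block:

fn main() {
    // `sqrt` and `TAU` were declared `safe`, so no `unsafe` block is needed.
    println!("sqrt(TAU) = {}", sqrt(TAU));

    // `strlen` was declared `unsafe`, so the caller must uphold its contract.
    let len = unsafe { strlen(c"hello".as_ptr().cast()) };
    println!("len = {len}");
}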

In future releases, we'll be encouraging the use of unsafe extern with lints. Starting in Rust 2024, using unsafe extern will be required.

For further details, see RFC 3484 and the "Unsafe extern blocks" chapter of the edition guide.

Unsafe attributes

Some Rust attributes, such as no_mangle, can be used to cause undefined behavior without any unsafe block. If this were regular code we would require them to be placed in an unsafe {} block, but so far attributes have not had comparable syntax. To reflect the fact that these attributes can undermine Rust's safety guarantees, they are now considered "unsafe" and should be written as follows:

#[unsafe(no_mangle)]
pub fn my_global_function() { }

The old form of the attribute (without unsafe) is currently still accepted, but might be linted against at some point in the future, and will be a hard error in Rust 2024.

This affects the following attributes:

  • no_mangle
  • link_section
  • export_name
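
A brief illustration (the symbol and section names here are made up): the other listed attributes take the same unsafe(...) wrapper.

#[unsafe(export_name = "my_c_symbol")]
pub extern "C" fn exported() {}

#[unsafe(link_section = ".my_section")]
pub static DATA: [u8; 4] = *b"1234";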

For further details, see the "Unsafe attributes" chapter of the edition guide.

Omitting empty types in pattern matching

Patterns which match empty (a.k.a. uninhabited) types by value can now be omitted:

use std::convert::Infallible;
pub fn unwrap_without_panic<T>(x: Result<T, Infallible>) -> T {
    let Ok(x) = x; // the `Err` case does not need to appear
    x
}

This works with empty types such as a variant-less enum Void {}, or structs and enums with a visible empty field and no #[non_exhaustive] attribute. It will also be particularly useful in combination with the never type !, although that type is still unstable at this time.

There are some cases where empty patterns must still be written. For reasons related to uninitialized values and unsafe code, omitting patterns is not allowed if the empty type is accessed through a reference, pointer, or union field:

pub fn unwrap_ref_without_panic<T>(x: &Result<T, Infallible>) -> &T {
    match x {
        Ok(x) => x,
        // this arm cannot be omitted because of the reference
        Err(infallible) => match *infallible {},
    }
}

To avoid interfering with crates that wish to support several Rust versions, match arms with empty patterns are not yet reported as "unreachable code" warnings, despite the fact that they can be removed.

Floating-point NaN semantics and const

Operations on floating-point values (of type f32 and f64) are famously subtle. One of the reasons for this is the existence of NaN ("not a number") values which are used to represent e.g. the result of 0.0 / 0.0. What makes NaN values subtle is that more than one possible NaN value exists. A NaN value has a sign (that can be checked with f.is_sign_positive()) and a payload (that can be extracted with f.to_bits()). However, both the sign and payload of NaN values are entirely ignored by == (which always returns false). Despite very successful efforts to standardize the behavior of floating-point operations across hardware architectures, the details of when a NaN is positive or negative and what its exact payload is differ across architectures. To make matters even more complicated, Rust and its LLVM backend apply optimizations to floating-point operations when the exact numeric result is guaranteed not to change, but those optimizations can change which NaN value is produced. For instance, f * 1.0 may be optimized to just f. However, if f is a NaN, this can change the exact bit pattern of the result!
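
As a small illustration (not from the announcement) of what "sign" and "payload" mean here:

fn main() {
    let nan = 0.0_f64 / 0.0;
    // NaN is never equal to anything, not even itself.
    assert!(nan != nan);
    // Its sign and bit pattern (which contains the payload) can still be
    // observed; exactly which bit pattern you get is what the new rules govern.
    println!("sign positive: {}", nan.is_sign_positive());
    println!("bits: {:#018x}", nan.to_bits());
}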

With this release, Rust standardizes on a set of rules for how NaN values behave. This set of rules is not fully deterministic, which means that the result of operations like (0.0 / 0.0).is_sign_positive() can differ depending on the hardware architecture, optimization levels, and the surrounding code. Code that aims to be fully portable should avoid using to_bits and should use f.signum() == 1.0 instead of f.is_sign_positive(). However, the rules are carefully chosen to still allow advanced data representation techniques such as NaN boxing to be implemented in Rust code. For more details on what the exact rules are, check out our documentation.

With the semantics for NaN values settled, this release also permits the use of floating-point operations in const fn. Due to the reasons described above, operations like (0.0 / 0.0).is_sign_positive() (which will be const-stable in Rust 1.83) can produce a different result when executed at compile-time vs at run-time. This is not a bug, and code must not rely on a const fn always producing the exact same result.
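
For example, ordinary floating-point arithmetic can now appear in a const fn (a minimal sketch, not from the announcement):

const fn fahrenheit_to_celsius(f: f64) -> f64 {
    (f - 32.0) * 5.0 / 9.0
}

// Evaluated entirely at compile time.
const BODY_TEMP_C: f64 = fahrenheit_to_celsius(98.6);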

Constants as assembly immediates

The const assembly operand now provides a way to use integers as immediates without first storing them in a register. As an example, we implement a syscall to write by hand:

use std::ffi::c_int;

const WRITE_SYSCALL: c_int = 0x01; // syscall 1 is `write`
const STDOUT_HANDLE: c_int = 0x01; // `stdout` has file handle 1
const MSG: &str = "Hello, world!\n";

let written: usize;

// Signature: `ssize_t write(int fd, const void buf[], size_t count)`
unsafe {
    core::arch::asm!(
        "mov rax, {SYSCALL} // rax holds the syscall number",
        "mov rdi, {OUTPUT}  // rdi is `fd` (first argument)",
        "mov rdx, {LEN}     // rdx is `count` (third argument)",
        "syscall            // invoke the syscall",
        "mov {written}, rax // save the return value",
        SYSCALL = const WRITE_SYSCALL,
        OUTPUT = const STDOUT_HANDLE,
        LEN = const MSG.len(),
        in("rsi") MSG.as_ptr(), // rsi is `buf *` (second argument)
        written = out(reg) written,
    );
}

assert_eq!(written, MSG.len());

Output:

Hello, world!

Playground link.

In the above, a statement such as LEN = const MSG.len() populates the format specifier LEN with an immediate that takes the value of MSG.len(). This can be seen in the generated assembly (the value is 14):

lea     rsi, [rip + .L__unnamed_3]
mov     rax, 1    # rax holds the syscall number
mov     rdi, 1    # rdi is `fd` (first argument)
mov     rdx, 14   # rdx is `count` (third argument)
syscall # invoke the syscall
mov     rax, rax  # save the return value

See the reference for more details.

Safely addressing unsafe statics

This code is now allowed:

static mut STATIC_MUT: Type = Type::new();
extern "C" {
    static EXTERN_STATIC: Type;
}
fn main() {
     let static_mut_ptr = &raw mut STATIC_MUT;
     let extern_static_ptr = &raw const EXTERN_STATIC;
}

In an expression context, STATIC_MUT and EXTERN_STATIC are place expressions. Previously, the compiler's safety checks were not aware that the raw ref operator did not actually affect the operand's place, treating it as a possible read or write to a pointer. No unsafety is actually present, however, as it just creates a pointer.

Relaxing this may cause problems where some unsafe blocks are now reported as unused if you deny the unused_unsafe lint, but they are now only useful on older versions. Annotate these unsafe blocks with #[allow(unused_unsafe)] if you wish to support multiple versions of Rust, as in this example diff:

 static mut STATIC_MUT: Type = Type::new();
 fn main() {
+    #[allow(unused_unsafe)]
     let static_mut_ptr = unsafe { std::ptr::addr_of_mut!(STATIC_MUT) };
 }

A future version of Rust is expected to generalize this to other expressions which would be safe in this position, not just statics.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.82.0

Many people came together to create Rust 1.82.0. We couldn't have done it without all of you. Thanks!

WebAssembly targets: change in default target-features

The Rust compiler has recently upgraded to using LLVM 19 and this change accompanies some updates to the default set of target features enabled for WebAssembly targets of the Rust compiler. Beta Rust today, which will become Rust 1.82 on 2024-10-17, reflects all of these changes and can be used for testing.

WebAssembly is an evolving standard where extensions are being added over time through a proposals process. WebAssembly proposals reach maturity, get merged into the specification itself, get implemented in engines, and remain this way for quite some time before producer toolchains (e.g. LLVM) update to enable these sufficiently-mature proposals by default. In LLVM 19 this has happened with the multi-value and reference-types proposals for the LLVM/Rust target features multivalue and reference-types. These are now enabled by default in LLVM, which transitively means they are enabled by default for Rust as well.

WebAssembly targets for Rust now have improved documentation about WebAssembly proposals and their corresponding target features. This post is going to review these changes and go into depth about what's changing in LLVM.

WebAssembly Proposals and Compiler Target Features

WebAssembly proposals are the formal means by which the WebAssembly standard itself is evolved over time. Most proposals need toolchain integration in one form or another, for example new flags in LLVM or the Rust compiler. The -Ctarget-feature=... mechanism is used to implement this today: it signals to LLVM and the Rust compiler which WebAssembly proposals are enabled or disabled.

There is a loose coupling between the name of a proposal (often the name of the GitHub repository of the proposal) and the feature name LLVM/Rust use. For example, there is the multi-value proposal but a multivalue feature.
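
As an aside (a sketch, not from the post), Rust code can check at compile time whether a given target feature was enabled for the current build, using the feature names rather than the proposal names:

fn main() {
    if cfg!(target_feature = "multivalue") {
        println!("built with the multi-value proposal enabled");
    }
    if cfg!(target_feature = "reference-types") {
        println!("built with the reference-types proposal enabled");
    }
}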

The lifecycle of the implementation of a feature in Rust/LLVM typically looks like:

  1. A new WebAssembly proposal is created in a new repository, for example WebAssembly/foo.
  2. Eventually Rust/LLVM implement the proposal under -Ctarget-feature=+foo
  3. Eventually the upstream proposal is merged into the specification, and WebAssembly/foo becomes an archived repository
  4. Rust/LLVM enable the -Ctarget-feature=+foo feature by default but typically retain the ability to disable it as well.

The reference-types and multivalue target features in Rust have now reached step (4), and this post explains the consequences of that change.

Enabling Reference Types by Default

The reference-types proposal to WebAssembly introduced a few new concepts to WebAssembly, notably the externref type which is a host-defined GC resource that WebAssembly cannot access but can pass around. Rust does not have support for the WebAssembly externref type and LLVM 19 does not change that. WebAssembly modules produced from Rust will continue to not use the externref type nor have a means of being able to do so. This may be enabled in the future (e.g. a hypothetical core::arch::wasm32::Externref type or similar), but it will most likely only be done on an opt-in basis and will not affect preexisting code by default.

Also included in the reference-types proposal, however, was the ability to have multiple WebAssembly tables in a single module. In the original version of the WebAssembly specification only a single table was allowed and this restriction was relaxed with the reference-types proposal. WebAssembly tables are used by LLVM and Rust to implement indirect function calls. For example function pointers in WebAssembly are actually table indices and indirect function calls are a WebAssembly call_indirect instruction with this table index.
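
As a tiny sketch (not from the post), an ordinary Rust function pointer call is what typically lowers to call_indirect on WebAssembly:

fn add_one(x: i32) -> i32 {
    x + 1
}

// On wasm targets, `f(x)` is compiled (absent devirtualization) to a
// `call_indirect` through the module's function table.
fn call(f: fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    assert_eq!(call(add_one, 41), 42);
}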

With the reference-types proposal, the binary encoding of call_indirect instructions was updated. Prior to the reference-types proposal, call_indirect was encoded with a fixed zero byte in its instruction (required to be exactly 0x00). This fixed zero byte was relaxed to a 32-bit LEB to indicate which table the call_indirect instruction was using. For those unfamiliar, LEB is a way of encoding multi-byte integers such that smaller integers take fewer bytes. For example, the 32-bit integer 0 can be encoded as 0x00 with a LEB. LEBs also allow "overlong" encodings, so the integer 0 can equally be encoded as 0x80 0x00.
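
To make the encoding concrete, here is a minimal sketch (not from the post) of unsigned LEB128 encoding: each byte carries 7 bits of the value, and the high bit marks whether more bytes follow.

fn uleb128(mut value: u32) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte); // final byte: high bit clear
            break;
        }
        out.push(byte | 0x80); // more bytes follow: high bit set
    }
    out
}

fn main() {
    assert_eq!(uleb128(0), [0x00]);
    assert_eq!(uleb128(624485), [0xe5, 0x8e, 0x26]);
    // Overlong encodings such as 0x80 0x00 also decode to 0, which is what
    // lets a fixed-width slot be reserved and filled in later.
}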

LLVM's support of separate compilation of source code to a WebAssembly binary means that when an object file is emitted it does not know the final index of the table that is going to be used in the final binary. Before reference-types there was only one option, table 0, so 0x00 was always used when encoding call_indirect instructions. After reference-types, however, LLVM will emit an over-long LEB of the form 0x80 0x80 0x80 0x80 0x00 which is the maximal length of a 32-bit LEB. This LEB is then filled in by the linker with a relocation to the actual table index that is used by the final module.

When putting all of this together, it means that with LLVM 19, which has the reference-types feature enabled by default, any WebAssembly module with an indirect function call (which is almost always the case for Rust code) will produce a WebAssembly binary that cannot be decoded by engines and tooling that do not support the reference-types proposal. It is expected that this change will have a low impact due to the age of the reference-types proposal and breadth of implementation in engines. Given the multitude of WebAssembly engines, however, it's recommended that any WebAssembly users test out Rust 1.82 beta and see if the produced module still runs on their engine of choice.

LLVM, Rust, and Multiple Tables

One interesting point worth mentioning is that despite the reference-types proposal enabling multiple tables in WebAssembly modules this is not actually taken advantage of at this time by either LLVM or Rust. WebAssembly modules emitted will still have at most one table of functions. This means that the over-long 5-byte encoding of index 0 as 0x80 0x80 0x80 0x80 0x00 is not actually necessary at this time. LLD, LLVM's linker for WebAssembly, wants to process all LEB relocations in a similar manner which currently forces this 5-byte encoding of zero. For example when a function calls another function the call instruction encodes the target function index as a 5-byte LEB which is filled in by the linker. There is quite often more than one function so the 5-byte encoding enables all possible function indices to be encoded.

In the future LLVM might start using multiple tables as well. For example LLVM may have a mode in the future where there's a table-per-function type instead of a single heterogenous table. This can enable engines to implement call_indirect more efficiently. This is not implemented at this time, however.

For users who want a minimally-sized WebAssembly module (e.g. if you're in a web context and sending bytes over the wire) it's recommended to use an optimization tool such as wasm-opt to shrink the size of the output of LLVM. Even before this change with reference-types it's recommended to do this as wasm-opt can typically optimize LLVM's default output even further. When optimizing a module through wasm-opt these 5-byte encodings of index 0 are all shrunk to a single byte.

Enabling Multi-Value by Default

The second feature enabled by default in LLVM 19 is multivalue. The multi-value proposal to WebAssembly enables functions, for example, to have more than one return value. WebAssembly instructions are additionally allowed to have more than one result as well. This proposal is one of the first to be merged into the WebAssembly specification after the original MVP and has been implemented in many engines for quite some time.

The consequences of enabling this feature by default in LLVM are more minor for Rust, however, than enabling the reference-types feature by default. LLVM's default C ABI for WebAssembly code is not changing even when multivalue is enabled. Additionally, Rust's extern "C" ABI for WebAssembly is not changing either and continues to match LLVM's (or strives to; differences from LLVM are considered bugs to fix). Despite this, the change may still affect Rust users.

Rust has for some time supported an experimental extern "wasm" ABI on Nightly, which exposed the ability to define a function in Rust that returned multiple values (i.e. used the multi-value proposal). Due to infrastructural changes and refactorings in LLVM itself, this feature of Rust has been removed and is no longer supported on Nightly at all. As a result, there is no longer any way to write a function in Rust that returns multiple values at the WebAssembly function type level.

In summary, this change is not expected to affect any Rust code in the wild unless you were using the Nightly extern "wasm" feature, in which case you'll need to drop it and use extern "C" instead. Supporting WebAssembly multi-return functions in Rust is a broader topic than this post can cover, but at this time it's an area that's ripe for contribution from suitably motivated contributors.

Aside: ABI Stability and WebAssembly

While on the topic of ABIs and the multivalue feature, it's perhaps worth also going over what ABIs mean for WebAssembly. The current definition of the extern "C" ABI for WebAssembly is documented in the tool-conventions repository, and this is what Clang implements for C code as well. LLVM implements enough lowering support for WebAssembly to cover all of this. The extern "Rust" ABI is not stable on WebAssembly, as is the case for all Rust targets, and is subject to change over time. There is no reference documentation at this time for what extern "Rust" is on WebAssembly.

The extern "C" ABI, what C code uses by default as well, is difficult to change because stability is often required across different compiler versions. For example WebAssembly code compiled with LLVM 18 might be expected to work with code compiled by LLVM 20. This means that changing the ABI is a daunting task that requires version fields, explicit markers, etc, to help prevent mismatches.

The extern "Rust" ABI, however, is subject to change over time. A great example of this could be that when the multivalue feature is enabled the extern "Rust" ABI could be redefined to use the multiple-return-values that WebAssembly would then support. This would enable much more efficient returns of values larger than 64-bits. Implementing this would require support in LLVM though which is not currently present.

This all means that actually using multiple returns in functions, i.e. the WebAssembly feature that multivalue enables, is still out on the horizon and not implemented. First, LLVM will need to implement complete lowering support to generate WebAssembly functions with multiple returns, and then extern "Rust" can be changed to use this once it is fully supported. In the yet-further-still future, C code might be able to change, but that will take quite some time due to its cross-version-compatibility story.

Enabling Future Proposals to WebAssembly

This is not the first time that a WebAssembly proposal has gone from off-by-default to on-by-default in LLVM, nor will it be the last. For example, LLVM already enables the sign-extension proposal by default, which MVP WebAssembly did not have. It's expected that in the not-too-distant future the nontrapping-fp-to-int proposal will likely be enabled by default. These changes are currently not made with strict criteria in mind (e.g. N engines must have this implemented for M years), and some breakage may happen.

If you're using a WebAssembly engine that does not support the modules emitted by Rust 1.82 beta and LLVM 19 then your options are:

  • Try seeing if the engine you're using has any updates available to it. You might be using an older version which didn't support a feature but a newer version supports the feature.
  • Open an issue to raise awareness that a change is causing breakage. This could either be done on your engine's repository, the Rust repository, or the WebAssembly tool-conventions repository. It's recommended to first search to confirm there isn't already an open issue though.
  • Recompile your code with features disabled, more on this in the next section.

The general assumption behind enabling new features by default is that it's a relatively hassle-free operation for end users while bringing performance benefits for everyone (e.g. nontrapping-fp-to-int will make float-to-int conversions more optimal). If updates end up causing hassle it's best to flag that early on so rollout plans can be adjusted if needed.

Disabling on-by-default WebAssembly proposals

For a variety of reasons you might be motivated to disable on-by-default WebAssembly features: for example, maybe your engine is difficult to update or doesn't support a new feature. Disabling on-by-default features is unfortunately not the easiest task. It is notably not sufficient to use -Ctarget-feature=-sign-ext to disable a feature for just your own project's compilation, because the Rust standard library, shipped in precompiled form, is still compiled with the feature enabled.

To disable an on-by-default WebAssembly proposal, you currently need to use Cargo's -Zbuild-std feature. For example:

$ export RUSTFLAGS=-Ctarget-cpu=mvp
$ cargo +nightly build -Zbuild-std=panic_abort,std --target wasm32-unknown-unknown

This will recompile the Rust standard library, in addition to your own code, with the "MVP CPU", which is LLVM's placeholder for having all WebAssembly proposals disabled. This will disable sign-ext, reference-types, multi-value, etc.

September Project Goals Update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Prepare Rust 2024 Edition (tracked in #117)

The Rust 2024 edition is on track to be stabilized on Nightly by Nov 28 and to reach stable as part of Rust v1.85, to be released Feb 20, 2025.

Over the last month, all the "lang team priority items" have landed and are fully ready for release, including migrations and chapters in the Nightly version of the edition guide:

Overall:

  • 13 items are fully ready for Rust 2024.
  • 10 items are fully implemented but still require documentation.
  • 6 items still need implementation work.

Keep in mind, there will be items that are currently tracked for the edition that will not make it. That's OK, and we still plan to ship the edition on time and without those items.

Async Rust Parity (tracked in #105)

We are generally on track with our marquee features:

  1. Support for async closures is available on Nightly and the lang team arrived at a tentative consensus to keep the existing syntax (written rationale and formal decision are in progress). We issued a call for testing as well which has so far uncovered no issues.
  2. Partial support for return-type notation is available on Nightly with the remainder under review.

In addition, dynamic dispatch for async functions and experimental async drop work both made implementation progress. Async WG reorganization has made no progress.

Read the full details on the tracking issue.

Stabilize features needed by Rust for Linux (tracked in #116)

We have stabilized extended offset_of syntax and agreed to stabilize Pointers to Statics in Constants. Credit to @dingxiangfei2009 for driving these forward. 💜

Implementation work proceeds for arbitrary self types v2, derive smart pointer, and sanitizer support.

RFL on Rust CI is implemented but still waiting on documented policy. The first breakage was detected (and fixed) in #129416. This is the mechanism working as intended, although it would also be useful to better define what to do when breakage occurs.

Selected updates

Begin resolving cargo-semver-checks blockers for merging into cargo (tracked in #104)

@obi1kenobi has been working on laying the groundwork to enable manifest linting in their project. They have set up the ability to test how CLI invocations are interpreted internally, and can now snapshot the output of any CLI invocation over a given workspace. They have also designed the expansion of the CLI and the necessary Trustfall schema changes to support manifest linting. As of the latest update, they have a working prototype of manifest querying, which enables SemVer lints such as detecting the accidental removal of features between releases. This work is not blocked on anything, and while there are no immediate opportunities to contribute, they indicate there will be some in future updates.

Expose experimental LLVM features for automatic differentiation and GPU offloading (tracked in #109)

@ZuseZ4 has been focusing on automatic differentiation in Rust, with their first two upstreaming PRs for the rustc frontend and backend merged, and a third PR covering changes to rustc_codegen_llvm currently under review. They are especially proud of getting a detailed LLVM-IR reproducer from a Rust developer for an Enzyme core issue, which will help with debugging. On the GPU side, @ZuseZ4 is taking advantage of recent LLVM updates to rustc that enable more GPU/offloading work. @ZuseZ4 also had a talk about "When unsafe code is slow - Automatic Differentiation in Rust" accepted for the upcoming LLVM dev meeting, where they'll present benchmarks and analysis comparing Rust-Enzyme to the C++ Enzyme frontend.

Extend pubgrub to match cargo's dependency resolution (tracked in #110)

@Eh2406 has achieved the milestone of having the new PubGrub resolver and the existing Cargo resolver accept each other's solutions for all crate versions on crates.io, which involved fixing many bugs related to optional dependencies. Significant progress has also been made in speeding up the resolution process, with over 30% improvements to the average performance of the new resolver, and important changes to allow the existing Cargo resolver to run in parallel. They have also addressed some corner cases where the existing resolver would not accept certain records, and added a check for cyclic dependencies. The latest updates focus on further performance improvements, with the new resolver now taking around 3 hours to process all of crates.io, down from 4.3 hours previously, and a 27% improvement in verifying lock files for non-pathological cases.

Optimizing Clippy & linting

@blyxyas has been working on improving Clippy, the Rust linting tool, with a focus on performance. They have completed a medium-sized objective to use ControlFlow in more places, and have integrated a performance-related issue into their project. A performance-focused PR has also been merged, and they are remaking their benchmarking tool (benchv2) to help with ongoing efforts. The main focus has been on resolving rust-lang/rust#125116, which is now all green after some work. Going forward, they are working on moving the declare_clippy_lint macro to a macro_rules implementation, and have one open proposal-level issue with the performance project label. There are currently no blockers to their work.

Completed goals

The following goals have been completed:

Stalled or orphaned goals

Several goals appear to have stalled or not received updates:

One goal is still waiting for an owner:

Conclusion

This is a brief summary of the progress towards a subset of the 2024 project goals. There is a lot more information available on the website, including the motivation for each goal, as well as detailed status updates. If you'd like more detail, please do check it out! You can also subscribe to individual tracking issues (or the entire rust-project-goals repo) to get regular updates.

The current set of goals target the second half of 2024 (2024H2). Next month we also expect to begin soliciting goals for the first half of 2025 (2025H1).
