
Today — 22 December 2024

Raspberry Pi CM5 review with different cooling solutions (and camera tribulations)

22 December 2024 at 17:30
Raspberry Pi CM5 IO board cooling heatsink active cooler

On the day of the Raspberry Pi CM5 release, I published a mini review of the Raspberry Pi Development Kit for CM5 showing how to assemble the kit and boot Raspberry Pi OS, and I also ran the sbc-bench benchmark to evaluate performance. Sadly, the Broadcom BCM2712 CPU throttled during the test, meaning cooling was not optimal when the CM5 IO board was inside the IO Case and the Compute Module 5 was cooled only by the fan. So today, I’ll repeat the same test with other cooling solutions, namely the official Raspberry Pi Cooler for CM5 (a heatsink only) and EDATEC’s CM5 active cooler, which is similar to the active cooler for the Raspberry Pi 5 but designed for the compute module. But before that, I’ll do some housecleaning, so to speak, since last time I booted Raspberry Pi OS from an NVMe SSD and I noticed the camera did [...]

The post Raspberry Pi CM5 review with different cooling solutions (and camera tribulations) appeared first on CNX Software - Embedded Systems News.

Flatpak XDG-Desktop-Portal 1.19.1 Brings USB Portal & Notification v2 Portal

22 December 2024 at 08:19
Debuting as a new development release today was XDG-Desktop-Portal 1.19.1, the portal front-end service for Flatpak sandboxed apps and other desktop containment frameworks. The XDG-Desktop-Portal 1.19.1 milestone exposes new and expanded portal capabilities for dealing with various hardware devices and APIs...
Yesterday — 21 December 2024

Rockchip RK3588 mainline Linux support – Current status and future work for 2025

21 December 2024 at 11:30
Rockchip RK3588 mainline Linux status

The Rockchip RK3588 is one of the most popular Arm SoCs for single board computers, and while good progress has been made with regard to mainline U-Boot and Linux support, the SoC is quite complex and it takes time to port all of its features, even though it was first teased in 2020 and the first Rockchip RK3588 SBCs were introduced in 2022. While the simpler Rockchip RK3566 and RK3568 SoCs are already fairly well supported in mainline Linux, more work is needed to upstream code, and as noted before in posts and comments here, Collabora keeps track of the status on GitLab. The company recently posted an article about the progress and future plans related to upstream Linux support for the Rockchip RK3588.

Rockchip RK3588 mainline Linux progress in 2024:
Linux 6.7 kernel – Network support on the Radxa ROCK 5B using a 2.5GbE PCIe controller.
Linux 6.8 kernel – [...]

The post Rockchip RK3588 mainline Linux support – Current status and future work for 2025 appeared first on CNX Software - Embedded Systems News.

TrueNAS Electric Eel Performance Sizzles

21 December 2024 at 06:41

After a successful release and the fastest software adoption in TrueNAS history, TrueNAS SCALE 24.10 “Electric Eel” is now widely deployed. The update to TrueNAS 24.10.1 has delivered the quality needed for general usage. Electric Eel’s performance is also up to 70% better than TrueNAS 13.0 and Cobia, and ahead of Dragonfish, which previously provided dramatic performance improvements of 50% more IOPS and 1000% better metadata performance. This blog dives into how we test and optimize TrueNAS Electric Eel performance.

While the details can get technical, you don’t have to handle everything yourself. TrueNAS Enterprise appliances come pre-configured and performance-tested, so you can focus on your workloads with confidence that your system is ready to deliver. For our courageous and curious Community members, we’ve outlined the steps to defining, building, and testing a TrueNAS system to meet performance requirements.

Step 1: Setting Your Performance Target

Performance targets are typically defined using a combination of bandwidth (measured in GB/s) and IOPS (Input/Output Operations Per Second). For video editing and backups, the individual file and IO sizes are large, but the number of IOPS is typically low. When supporting virtualization or transactional databases, the IO size is much smaller, but significantly more IOPS are needed.

Bandwidth needs are often best estimated by looking at file sizes and transfer time expectations. High-resolution video files can range from 1 GB to several hundred GB in size. When multiple editors are reading directly from files on the storage, bandwidth needs can easily reach 10 GB/s or more; in the opposite direction, a business may have a specific time window within which all backup jobs must complete.

IOPS requirements can be more challenging to pin down, but are often expressed as an expectation from a software vendor or end user in terms of responsiveness. If a database query needs to return in less than 1 ms, one might think that 1,000 IOPS is the minimum – but that query might trigger authentication, a table lookup, and an audit or access log update in addition to returning the data itself, so a single query can be responsible for a factor of 10 or more in generated IOPS. Consider the size of the IO that will be sent as well – smaller IO sizes may only be able to be stored on or read from a smaller number of disks in your array.
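To make that amplification concrete, here is a minimal back-of-the-envelope sketch in Go. The query rate and the factor of 10 are hypothetical placeholders drawn from the example above, not measured values.

package main

import "fmt"

func main() {
	// Hypothetical inputs – substitute figures from your own application.
	queriesPerSecond := 1000.0 // sustained query rate the application expects
	iosPerQuery := 10.0        // auth + table lookup + audit log + data, as above

	requiredIOPS := queriesPerSecond * iosPerQuery
	fmt.Printf("back-end IOPS needed: %.0f\n", requiredIOPS) // prints 10000
}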

Client count and concurrency also impact performance. If a single client requires a given amount of bandwidth or IOPS, but only a handful of clients will access your NAS simultaneously, the requirements can be fulfilled with a much smaller system than if ten or a hundred clients are concurrently making those same demands.

Typically, systems that need more IOPS may also need lower latency. It’s essential to determine whether reliable and consistent sub-millisecond latency or low cost per TB is more important, and find the ideal configuration.
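The link between IOPS, latency, and concurrency can be made precise with Little’s Law, which relates throughput to the number of IOs kept in flight:

\text{IOPS} = \frac{\text{outstanding IOs}}{\text{average latency}}

A single outstanding IO completing in 1 ms can never exceed 1,000 IOPS, while 16 outstanding IOs at the same per-IO latency can sustain up to 16,000 IOPS.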

After deciding on your performance target, it’s time to move on to selecting your media and platform.

Step 2: Choosing Your Media

Unlike many other storage systems, TrueNAS supports all-flash (SSD), hard drive (HDD), and hybrid (mixed SSD and HDD) configurations. The choice of media also determines the system’s capacity and price point.

With current technology, SSDs best meet high IOPS needs. NVMe SSDs are even faster and becoming increasingly economical. TrueNAS deploys with SSDs up to 30TB in size today, with larger drives planned for availability in the future. Each of these high-performance NVMe SSDs can deliver well over 1 GB/s and over 10,000 IOPS.

Hard drives provide the best cost per TB for capacity, but are limited in two performance dimensions: sustained bandwidth is typically around 100 MB/s for many drives, and IOPS are around 100. The combination of OpenZFS’s transactional behavior and adaptive caching technology allows these drives to be aggregated into larger, better-performing systems. The TrueNAS M60 can support over 1,000 HDDs to deliver 10 GB/s and 50,000 IOPS from as low as $60/TB. For high-capacity storage, magnetic hard drives offer an unbeatable cost per TB.
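As a rough illustration of how these per-drive numbers add up – and why real systems fall short of the naive sum – here is a small Go sketch. The per-drive figures mirror the ballpark numbers above; real pools lose throughput to parity, vdev layout, and controller or network limits, so treat the output as an upper bound, not a prediction.

package main

import "fmt"

// drive holds ballpark per-device performance figures from the text above.
type drive struct {
	name     string
	mbPerSec float64 // sustained bandwidth per drive, in MB/s
	iops     float64 // sustained random IOPS per drive
}

func main() {
	media := []drive{
		{"7200rpm HDD", 100, 100},
		{"NVMe SSD", 1000, 10000},
	}
	for _, d := range media {
		for _, n := range []int{24, 100, 1000} {
			// Naive ceiling: parity and controller/network limits
			// will reduce this substantially in practice.
			fmt.Printf("%4d x %-11s -> %7.1f GB/s, %10.0f IOPS\n",
				n, d.name, float64(n)*d.mbPerSec/1000, float64(n)*d.iops)
		}
	}
}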

When your performance target is consistent sub-millisecond latency, and IOPS numbers are critical, systems like the all-NVMe TrueNAS F100 bring 24 NVMe drives. With directly connected NVMe drives, there’s no added latency or PCI Express switching involved, giving you maximum performance. With a 2U footprint, and the ability to expand with up to six additional NVMe-oF (NVMe over Fabric) 2U shelves, the F100 is the sleek, high-performance sports car to the M60’s box truck – lighter, nimble, and screaming fast, but at the cost of less “cargo capacity.”

While TrueNAS and OpenZFS cannot make HDDs faster than all-Flash, the Adaptive Replacement Cache (ARC) and optional read cache (L2ARC) and write log (SLOG) devices can help make sure each system meets its performance targets. Read more about these devices in the links to the TrueNAS Documentation site above, or tune in to the TrueNAS Tech Talk (T3) Podcast episode, where the iX engineering team gives some details about where and when these cache devices can help increase performance.

Step 3: Choosing the Platform

After selecting suitable media, the next step to achieving a performance target is selecting the proper hardware platform. Choose a platform balanced with the CPU, memory size, HBAs, network ports, and media drives needed to achieve the target performance level. When designing your system, also consider power delivery and cooling requirements to ensure overall stability.

The number and type of storage media selected may drive your platform decisions in a certain direction. A system designed for high-bandwidth backup ingest with a hundred spinning disks will have a drastically different design from one that needs a few dozen NVMe devices. Each system will only perform as fast as its slowest component; software cannot fix a significant hardware limitation.

Each performance level has different platforms for different capacity and price points. Our customers typically choose the platforms based on the bandwidth and capacity required now or in the future. For systems where uptime and availability are crucial, platforms supporting High Availability (HA) are typically required.

 

TrueNAS Platforms and Bandwidth

Community users can build their own smaller systems using the same principles. Resources such as the TrueNAS Hardware Guide and the TrueNAS Community Forums offer excellent guidance for system component selection.

A key feature of TrueNAS is that all of our systems run the same software, from our all-NVMe F-series to the compact Mini line. While TrueNAS Enterprise and High-Availability systems carry some additional, hardware-specific functionality, the same key features and protocols are supported by TrueNAS Community Edition. There’s no need to re-learn or use a different interface – simply build or buy the hardware platform that supports your performance and availability requirements, and jump right into the same familiar interface that users around the world already know and love.

Step 4: Configuring a Test Lab

Few users have the opportunity to build a full test lab to run a comprehensive performance test suite. The TrueNAS Engineering team maintains a performance lab for our customers and for the benefit of the broader TrueNAS community user base.

There are three general categories of tests that the TrueNAS team runs:

Single Client: A single client (Linux, Windows, Mac) connects via a higher-speed LAN (faster than the target bandwidth by 50%) to the NAS. The test suite (e.g., fio) runs on the client. This approach often tests the client operating system and software implementation as much as the NAS, and IOPS and bandwidth results are frequently client-limited. For example, a client may be restricted to less than 3 GB/s even though the NAS itself has been verified as capable of greater than 10 GB/s total. TCP and storage session protocols (iSCSI, NFS, SMB) can limit the client’s performance, but this test is important to conduct as it is a realistic use case.

Multi-client: Given that each client is usually restricted to 2-3GB/s, a system capable of 10 or 20 GB/s needs more than 10 clients to test a NAS simultaneously. The only approach is to have a lab with tens of virtual or physical clients running each of the protocols. Purely synthetic tests like fio are used, as well as more complicated real-world workload tests like virtualization and software-build tests. The aggregate bandwidth and IOPS served to all clients are the final measures of success in this test.

Client Scalability: The last class of tests is needed to simulate use cases with thousands of clients accessing the same NAS. Thousands of users in a school, university, or large company may use a shared storage system, typically via SMB. How the NAS handles those thousands of TCP connections and sessions is important to scalability and reliable operation. To set up this test, we’ve invested in virtualizing thousands of Windows Active Directory (AD) and SMB clients.

Step 5: Choosing a Software Test Suite

There are many test suites out there. Unfortunately, most are for testing individual drives. We recommend the following to get useful results:

Test with a suite that is intended for NAS systems. Synthetic tests like fio fall into this category, providing many options for identifying performance issues.

Do not test by copying data. Copying data goes through a different client path than reading and writing data. Depending on your client, copying data can be very single-threaded and latency-sensitive. Using dd or copying folders will give you poor measurements compared with fio, and in this scenario you may be testing your copy software, not the NAS.

Pick a realistic IO size for your workload. The storage industry previously fixated on 4KB IOPS because applications like Oracle would use this size IO – but unless you’re using Oracle or a similar transactional database, it’s likely your standard IO size is between 32 KB and 1 MB. Test with that to assess your bandwidth and IOPS.

Look at queue depth. A local SSD will often perform better than a network share because of latency differences. Unless you use 100GbE networking, the network will restrict bandwidth and add latency. Storage systems overcome latency issues by increasing “queue depth”, the number of simultaneous outstanding IOs. If your workload allows for multiple outstanding IOs, increase the testing queue depth; much like adding more lanes to a highway, latency remains mostly the same, but throughput and IOPS can increase (see the sketch after this list).

Make sure your network is solid. Ensure that the network path between testing clients and your NAS is reliable, with no packet loss, jitter, or retransmissions. Network interruptions or errors impact TCP performance and reduce bandwidth. Testing over lossy media such as Wi-Fi is not recommended.
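fio remains the right tool for real measurements, but to make the queue depth concept concrete, here is a toy Go sketch that keeps a fixed number of reads outstanding against a file on the share under test. The path, block size, and depth are hypothetical knobs, and without O_DIRECT the client page cache will inflate the numbers – treat this as an illustration, not a benchmark.

package main

import (
	"fmt"
	"math/rand"
	"os"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		path       = "/mnt/nas/testfile" // hypothetical file on the share under test
		blockSize  = 32 * 1024           // 32 KB, a realistic IO size per the text
		queueDepth = 16                  // outstanding IOs kept in flight
		duration   = 10 * time.Second
	)

	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	info, err := f.Stat()
	if err != nil {
		panic(err)
	}
	blocks := info.Size() / blockSize
	if blocks == 0 {
		panic("test file is smaller than one block")
	}

	var ops atomic.Int64
	deadline := time.Now().Add(duration)
	var wg sync.WaitGroup
	for i := 0; i < queueDepth; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			buf := make([]byte, blockSize)
			for time.Now().Before(deadline) {
				// ReadAt is safe for concurrent use on a single *os.File.
				off := rand.Int63n(blocks) * blockSize
				if _, err := f.ReadAt(buf, off); err != nil {
					return
				}
				ops.Add(1)
			}
		}()
	}
	wg.Wait()
	fmt.Printf("~%.0f IOPS at queue depth %d (client page cache may inflate this)\n",
		float64(ops.Load())/duration.Seconds(), queueDepth)
}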

In the TrueNAS performance labs, we run these tests across a range of platforms and media. Our goals are to confidently measure and predict the performance of Enterprise systems, and to ensure optimizations across the TrueNAS hardware and software stack. We can also experiment with tuning options for specific workloads to offer best practices for our customers and community.

Electric Eel Delivers Real Performance Improvements

Electric Eel benefits from improvements in OpenZFS, Linux, Samba, and of course optimizations in TrueNAS itself. Systems with an existing hardware bottleneck may not see obvious performance changes, but larger systems need software whose performance scales with the hardware, such as increasing CPU core and drive counts.

TrueNAS 24.10 builds on the 24.04 base and increases performance for basic storage services. We have typically measured up to a 70% IOPS improvement for all three major storage protocols (iSCSI, SMB, and NFS) when compared to TrueNAS 13.0. The improvement was measured on an identical hardware configuration, implying that the previous level of performance can be achieved with 30% fewer drives and processor cores in a budget-constrained use case.

iSCSI Mixed Workload with VDIv2 Benchmark

These performance gains are the result of tuning at each level of the software stack. The Linux OS has improved management of threads and interrupts, the iSCSI stack has lower latency and better parallelism, and code paths in OpenZFS 2.3 have made their own improvements to parallelism and latency. In the spirit of open source, the TrueNAS Engineering team helped contribute to the iSCSI and OpenZFS endeavours, ensuring that community members of both upstream projects can benefit.

We also observed more than 50% performance improvement from changing media from SAS SSDs to NVMe SSDs. Platforms like the all-NVMe F-Series can deliver 150% more performance than the previous generation of SAS-based storage.

Other highlights of the Electric Eel testing include:

Exceeding 20GB/s read performance on the F100 for all three storage protocols. The storage protocols all behave similarly over TCP. Write performance is about half as much due to the need to write both to the SLOG device and to the pool for data integrity.

Exceeding 250K IOPS for 32KB block sizes on the F100. 32KB is a typical block size for virtualization workloads or more modern databases. This performance was observed over all three primary storage protocols.

Exceeding 2.5GB/s on a single client for each storage protocol (SMB, NFS, iSCSI) for read, write, and mixed R/W workloads. The F-Series is the lowest latency and offers the greatest throughput, but other platforms are typically above 2GB/s.

Each platform met its performance target across all three primary storage protocols, which is a testament not only to OpenZFS’s tremendous scalability, but also to the refinement of its implementation within TrueNAS to extract maximum performance.

Future Performance Improvements

Electric Eel includes an experimental version of OpenZFS Fast Dedup. After confirming stability and performance, we plan to introduce new TrueNAS product configurations for optimal use of this feature. The goal of this testing is to allow Fast Dedup to have a relatively low impact on performance if the system is well configured.

The upcoming OpenZFS 2.3 release (planned for availability with TrueNAS 25.04 “Fangtooth”) also includes Direct IO for NVMe, which enables even higher maximum bandwidths when using high-performance storage devices with workloads that don’t benefit as strongly from caching. Tests for this feature are still pending completion, so stay tuned for future updates and information on the upcoming TrueNAS 25.04 as we move forward with development.

The TrueNAS Apps ecosystem has moved to a Docker back end, which has significantly reduced base CPU load and memory overhead. This reduced overhead has enabled better performance for systems running Apps like MinIO and Syncthing. While we don’t have quantified measurements in terms of bandwidth and IOPS, our community users have reported an overall positive perceived impact.

Evolution of TrueNAS

Given the quality, security, performance, and App improvements, we recommend that new TrueNAS users start their journey with “Electric Eel” to benefit from the latest changes. We will begin shipping TrueNAS 24.10 as the default software installed on our TrueNAS products in Q1 2025.

Electric Eel is already more popular than Dragonfish and CORE 13.0, and nearly all new deployments should use TrueNAS 24.10. Current TrueNAS CORE users can elect to remain on CORE or upgrade to Electric Eel. Performance now exceeds 13.0, and the software is expected to mature further in 2025.

Join the Growing SCALE Community

With the release of TrueNAS SCALE 24.10, there’s never been a better time to join the growing TrueNAS community. Download the SCALE 24.10 installer or upgrade from within the TrueNAS web UI and experience True Data Freedom. Then, ensure you’ve signed up for the newly relaunched TrueNAS Community Forums to share your experience. The TrueNAS Software Status page advises which TrueNAS version is right for your systems.

The post TrueNAS Electric Eel Performance Sizzles appeared first on TrueNAS - Welcome to the Open Storage Era.

Go Developer Survey 2024 H2 Results

20 December 2024 at 07:00

The Go Blog

Go Developer Survey 2024 H2 Results

Alice Merrick
20 December 2024

Background

Go was designed with a focus on developer experience, and we deeply value the feedback we receive through proposals, issues, and community interactions. However, these channels often represent the voices of our most experienced or engaged users, a small subset of the broader Go community. To ensure we’re serving developers of all skill levels, including those who may not have strong opinions on language design, we conduct this survey once or twice a year to gather systematic feedback and quantitative evidence. This inclusive approach allows us to hear from a wider range of Go developers, providing valuable insights into how Go is used across different contexts and experience levels. Your participation is critical in informing our decisions about language changes and resource allocation, ultimately shaping the future of Go. Thank you to everyone who contributed, and we strongly encourage your continued participation in future surveys. Your experience matters to us.

This post shares the results of our most recent Go Developer Survey, conducted from September 9–23, 2024. We recruited participants from the Go blog and through randomized prompts in the VS Code Go plug-in and GoLand IDE, allowing us to recruit a more representative sample of Go developers. We received a total of 4,156 responses. A huge thank you to all those who contributed to making this possible.

Along with capturing sentiments and challenges around using Go and Go tooling, our primary focus areas for this survey were on uncovering sources of toil, challenges to performing best practices, and how developers are using AI assistance.

Highlights

  • Developer sentiment towards Go remains extremely positive, with 93% of survey respondents saying they felt satisfied while working with Go during the prior year.
  • Ease of deployment and an easy-to-use API/SDK were respondents’ favorite things about using Go on the top three cloud providers. First-class Go support is critical to keeping up with developer expectations.
  • 70% of respondents were using AI assistants when developing with Go. The most common uses were LLM-based code completion, writing tests, generating Go code from natural language descriptions, and brainstorming. There was a significant discrepancy between what respondents said they wanted to use AI for last year, and what they currently use AI for.
  • The biggest challenge for teams using Go was maintaining consistent coding standards across their codebase. This was often due to team members having different levels of Go experience and coming from different programming backgrounds, leading to inconsistencies in coding style and adoption of non-idiomatic patterns.

Contents

Overall satisfaction

Overall satisfaction remains high in the survey with 93% of respondents saying they were somewhat or very satisfied with Go during the last year. Although the exact percentages fluctuate slightly from cycle to cycle, we do not see any statistically significant differences from our 2023 H2 or 2024 H1 Surveys when the satisfaction rate was 90% and 93%, respectively.

Chart of developer satisfaction with Go

The open comments we received on the survey continue to highlight what developers like most about using Go, for example, its simplicity, the Go toolchain, and its promise of backwards compatibility:

“I am a programming languages enjoyer (C-like) and I always come back to Go for its simplicity, fast compilation and robust toolchain. Keep it up!”

“Thank you for creating Go! It is my favorite language, because it is pretty minimal, the development cycle has rapid build-test cycles, and when using a random open source project written in Go, there is a good chance that it will work, even 10 years after. I love the 1.0 compatibility guarantee.”

Development environments and tools

Developer OS

Consistent with previous years, most survey respondents develop with Go on Linux (61%) and macOS (59%) systems. Historically, the proportion of Linux and macOS users has been very close, and we didn’t see any significant changes from the last survey. The randomly sampled groups from JetBrains and VS Code were more likely (33% and 36%, respectively) to develop on Windows than the self-selected group (16%).

Chart of operating systems respondents use when developing Go software
Chart of operating systems respondents use when developing Go software, split by different sample sources

Deployment environments

Given the prevalence of Go for cloud development and containerized workloads, it’s no surprise that Go developers primarily deploy to Linux environments (96%).

Chart of operating systems respondents deploy to when developing Go software

We included several questions to understand what architectures respondents are deploying to when deploying to Linux, Windows or WebAssembly. The x86-64 / AMD64 architecture was by far the most popular choice for those deploying to both Linux (92%) and Windows (97%). ARM64 was second at 49% for Linux and 21% for Windows.

Linux architecture usage
Windows architecture usage

Not many respondents deployed to WebAssembly (only about 4% of overall respondents), but 73% of those who do said they deploy to JS and 48% to WASI Preview 1.

WebAssembly architecture usage

Editor awareness and preferences

We introduced a new question on this survey to assess awareness and usage of popular editors for Go. When interpreting these results, keep in mind that 34% of respondents came to the survey from VS Code and 9% of respondents came from GoLand, so it is more likely for them to use those editors regularly.

VS Code was the most widely used editor, with 66% of respondents using it regularly, and GoLand was the second most used at 35%. Almost all respondents had heard of both VS Code and GoLand, but respondents were much more likely to have at least tried VS Code. Interestingly, 33% of respondents said they regularly use 2 or more editors. They may use different editors for different tasks or environments, such as using Emacs or Vim via SSH, where IDEs aren’t available.

Level of familiarity with each editor

We also asked a question about editor preference, the same as we have asked on previous surveys. Because our randomly sampled populations were recruited from within VS Code or GoLand, they are strongly biased towards preferring those editors. To avoid skewing the results, we show the data for the most preferred editor here from the self-selected group only. 38% preferred VS Code and 35% preferred GoLand. This is a notable difference from the last survey in H1, when 43% preferred VS Code and 33% preferred GoLand. A possible explanation could be in how respondents were recruited this year. This year the VS Code notification began inviting developers to take the survey before the Go blog entry was posted, so a larger proportion of respondents came from the VS Code prompt who might have otherwise come from the blog post. Because we only show the self-selected respondents in this chart, data from respondents recruited via the VS Code prompt are not represented here. Another contributing factor could be the slight increase in those who prefer “Other” (4%). The write-in responses suggest there is increased interest in editors like Zed, which made up 43% of the write-in responses.

Code editors respondents most prefer to use with Go

Code analysis tools

The most popular code analysis tool was gopls, which was knowingly used by 65% of respondents. Because gopls is used under-the-hood by default in VS Code, this is likely an undercount. Following closely behind, golangci-lint was used by 57% of respondents, and staticcheck was used by 34% of respondents. A much smaller proportion used custom or other tools, which suggests that most respondents prefer common established tools over custom solutions. Only 10% of respondents indicated they don’t use any code analysis tools.

Code analysis tools respondents use with Go

Go in the Clouds

Go is a popular language for modern cloud-based development, so we typically include survey questions to help us understand which cloud platforms and services Go developers are using. In this cycle, we sought to learn about preferences and experiences of Go developers across cloud providers, with a particular focus on the largest cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. We also included an additional option for “Bare Metal Servers” for those who deploy to servers without virtualization.

Similar to previous years, almost half of respondents (50%) deploy Go programs to Amazon Web Services. AWS is followed by self-owned or company-owned servers (37%), and Google Cloud (30%). Respondents who work at large organizations are a little more likely to deploy to self-owned or company-owned servers (48%) than those who work at small-to-medium organizations (34%). They’re also a little more likely to deploy to Microsoft Azure (25%) than small-to-medium organizations (12%).

Cloud providers where respondents deploy Go software

The most commonly used cloud services were AWS Elastic Kubernetes Service (41%), AWS EC2 (39%), and Google Cloud GKE (29%). Although we’ve seen Kubernetes usage increase over time, this is the first time we’ve seen EKS become more widely used than EC2. Overall, Kubernetes offerings were the most popular services for AWS, Google Cloud, and Azure, followed by VMs and then Serverless offerings. Go’s strengths in containerization and microservices development naturally align with the rising popularity of Kubernetes, as it provides an efficient and scalable platform for deploying and managing these types of applications.

Cloud platforms where respondents deploy Go software

We asked a follow-up question of respondents who deployed Go code to the top three cloud providers (Amazon Web Services, Google Cloud, and Microsoft Azure) about what they liked most about deploying Go code to each cloud. The most popular response across the different providers was actually Go’s performance and language features rather than anything about the cloud provider itself.

Other common reasons were:

  • Familiarity with the given cloud provider compared to other clouds
  • Ease of deployment of Go applications on the given cloud provider
  • The cloud provider’s API/SDK for Go is easy to use
  • The API/SDK is well documented

Other than familiarity, the top favorite things highlight the importance of having first-class support for Go to keep up with developer expectations.

It was also fairly common for respondents to say they don’t have a favorite thing about their cloud provider. Based on write-in responses from a previous version of the survey, this often meant that they did not interact directly with the cloud. In particular, respondents who use Microsoft Azure were much more likely to say that “Nothing” was their favorite thing (51%) compared to AWS (27%) or Google Cloud (30%).

What respondents liked most about each of the top 3 Clouds

AI assistance

The Go team hypothesizes that AI assistance has the potential to relieve developers of tedious and repetitive tasks, allowing them to focus on more creative and fulfilling aspects of their work. To gain insights into areas where AI assistance could be most beneficial, we included a section in our survey to identify common developer toil.

The majority of respondents (70%) are using AI assistants when developing with Go. The most common usage of AI assistants was in LLM-based code completion (35%). Other common responses were writing tests (29%), generating Go code from a natural language description (27%), and brainstorming ideas (25%). There was also a sizable minority (30%) of respondents who had not used any LLM for assistance in the last month.

Most common tasks used with AI assistance

Some of these results stood out when compared to findings from our 2023 H2 survey, where we asked respondents for the top 5 use cases they would like to see AI/ML support Go developers. Although a couple of new responses were introduced in the current survey, we can still do a rough comparison between what respondents said they wanted AI support for, and what their actual usage was like. In that previous survey, writing tests was the most desired use case (49%). In our latest 2024 H2 survey, about 29% of respondents had used AI assistants for this in the last month. This suggests that current offerings are not meeting developer needs for writing tests. Similarly, in 2023, 47% of respondents said they would like suggestions for best practices while coding, while only 14% a year later said they are using AI assistance for this use case. 46% said they wanted help catching common mistakes while coding, and only 13% said they were using AI assistance for this. This could indicate that current AI assistants are not well-equipped for these kinds of tasks, or that they’re not well integrated into developer workflows or tooling.

It was also surprising to see such high usage of AI for generating Go code from natural language and brainstorming, since the previous survey didn’t indicate these as highly desired use cases. There could be a number of explanations for these differences. While previous respondents might not have explicitly wanted AI for code generation or brainstorming initially, they might be gravitating towards these uses because they align with the current strengths of generative AI—natural language processing and creative text generation. We should also keep in mind that people are not necessarily the best predictors of their own behavior.

Tasks used AI assistance in 2024 compared to those wanted in 2023

We also saw some notable differences in how different groups responded to this question. Respondents at small to medium sized organizations were a little more likely to have used LLMs (75%) compared to those at large organizations (66%). There could be a number of reasons why, for example, larger organizations may have stricter security and compliance requirements and concerns about the security of LLM coding assistants, the potential for data leakage, or compliance with industry-specific regulations. They also may have already invested in other developer tools and practices that already provide similar benefits to developer productivity.

Most common tasks used with AI assistance by org size

Go developers with less than 2 years of experience were more likely to use AI assistants (75%) compared to Go developers with 5+ years of experience (67%). Less experienced Go developers also used them for more tasks on average (3.50). Although all experience levels tended to use LLM-based code completion, less experienced Go developers were more likely to use AI assistance for tasks related to learning and debugging, such as explaining what a piece of Go code does, resolving compiler errors, and debugging failures in their Go code. This suggests that AI assistants are currently providing the greatest utility to those who are less familiar with Go. We don’t know how AI assistants affect learning or getting started on a new Go project, something we want to investigate in the future. However, all experience levels had similar rates of satisfaction with their AI assistants, around 73%, so new Go developers are not more satisfied with AI assistants, despite using them more often.

Most common tasks used with AI assistance by experience with Go

To learn more about their AI assistant usage, we asked some follow-up questions of respondents who reported using AI assistance for at least one task related to writing Go code. The most commonly used AI assistants were ChatGPT (68%) and GitHub Copilot (50%). When asked which AI assistant they used most in the last month, ChatGPT and Copilot were about even at 36% each, so although more respondents used ChatGPT, it wasn’t necessarily their primary assistant. Participants were similarly satisfied with both tools (73% satisfied with ChatGPT vs. 78% with GitHub Copilot). The highest satisfaction rate for any AI assistant was Anthropic Claude, at 87%.

Most common AI assistants used
Most common primary AI assistants used

Challenges for teams using Go

In this section of the survey, we wanted to understand which best practices or tools should be better integrated into developer workflows. Our approach was to identify common problems for teams using Go. We then asked respondents which challenges would bring them the most benefit if they were “magically” solved for them. (This was so that respondents would not focus on particular solutions.) Common problems that would provide the most benefit if they were solved would be considered candidates for improvement.

The most commonly reported challenges for teams were maintaining consistent coding standards across their Go codebase (58%), identifying performance issues in a running Go program (58%), and identifying resource usage inefficiencies in a running Go program (57%).

Most common challenges for teams

21% of respondents said their team would benefit most from maintaining consistent coding standards across their Go codebase. This was the most common response, making it a good candidate to address. In a follow-up question, we got more details as to why specifically this was so challenging.

Most benefit to solve

According to the write-in responses, many teams face challenges maintaining consistent coding standards because their members have varying levels of experience with Go and come from different programming backgrounds. This led to inconsistencies in coding style and the adoption of non-idiomatic patterns.

“There’s lots of polyglot engineers where I work. So the Go written is not consistent. I do consider myself a Gopher and spend time trying to convince my teammates what is idiomatic in Go”—Go developer with 2–4 years of experience.

“Most of the team members are learning Go from scratch. Coming from the dynamically typed languages, it takes them a while to get used to the new language. They seem to struggle maintaining the code consistency following the Go guidelines.”—Go developer with 2–4 years of experience.

This echoes some feedback we’ve heard before about teammates who write “Gava” or “Guby” due to their previous language experiences. Although static analysis was a class of tool we had in mind to address this issue when we came up with this question, we are currently exploring different ways we might address this.

Single Instruction, Multiple Data (SIMD)

SIMD, or Single Instruction, Multiple Data, is a type of parallel processing that allows a single CPU instruction to operate on multiple data points simultaneously. This facilitates tasks involving large datasets and repetitive operations, and is often used to optimize performance in fields like game development, data processing, and scientific computing. In this section of the survey we wanted to assess respondents’ needs for native SIMD support in Go.
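As a concrete illustration of the kind of kernel that native SIMD support would accelerate, here is a minimal Go sketch. Since Go has no SIMD intrinsics today (the premise of this survey section), a scalar loop like this is at the mercy of the compiler’s limited auto-vectorization; code that needs guaranteed vectorization typically drops down to hand-written assembly or cgo.

package main

import "fmt"

// axpy computes dst[i] = a*x[i] + y[i], one element per iteration.
// A SIMD version could process 4-16 float32 lanes per instruction.
func axpy(dst []float32, a float32, x, y []float32) {
	for i := range dst {
		dst[i] = a*x[i] + y[i]
	}
}

func main() {
	x := []float32{1, 2, 3, 4}
	y := []float32{10, 20, 30, 40}
	dst := make([]float32, len(x))
	axpy(dst, 2, x, y)
	fmt.Println(dst) // [12 24 36 48]
}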

The majority of respondents (89%) say they work on projects where performance optimizations are crucial at least some of the time. 40% said they work on such projects at least half the time. This held true across different organization sizes and experience levels, suggesting that performance is an important issue for most developers.

How often respondents work on performance-critical software

About half of respondents (54%) said they are at least a little familiar with the concept of SIMD. Working with SIMD often requires a deeper understanding of computer architecture and low-level programming concepts, so unsurprisingly we find that less experienced developers were less likely to be familiar with SIMD. Respondents with more experience and who worked on performance-crucial applications at least half the time were the most likely to be familiar with SIMD.

Familiarity with SIMD

For those who were at least slightly familiar with SIMD, we asked some follow-up questions to understand how respondents were affected by the absence of native SIMD support in Go. Over a third, about 37%, said they had been impacted: 17% of respondents said they had been limited in the performance they could achieve in their projects, 15% said they had to use another language instead of Go to achieve their goals, and 13% said they had to use non-Go libraries when they would have preferred to use Go libraries. Interestingly, respondents who were negatively impacted by the absence of native SIMD support were a little more likely to use Go for data processing and AI/ML. This suggests that adding SIMD support could make Go a better option for these domains.

Impacts of lack of native Go support for SIMD
What impacted respondents build with Go

Demographics

We ask similar demographic questions during each cycle of this survey so we can understand how comparable the year-over-year results may be. For example, if we saw changes in who responded to the survey in terms of Go experience, it’d be very likely that other differences in results from prior cycles were due to this demographic shift. We also use these questions to provide comparisons between groups, such as satisfaction according to how long respondents have been using Go.

We didn’t see any significant changes in levels of experience among respondents during this cycle.

Experience levels of respondents

There are differences in the demographics of respondents according to whether they came from The Go Blog, the VS Code extension, or GoLand. The population who responded to survey notifications in VS Code skews toward less experience with Go; we suspect this is a reflection of VS Code’s popularity with new Go developers, who may not be ready to invest in an IDE license while they’re still learning. With respect to years of Go experience, the respondents randomly selected from GoLand are more similar to our self-selected population who found the survey through the Go Blog. Seeing consistencies between samples allows us to more confidently generalize findings to the rest of the community.

Experience with Go by survey source

In addition to years of experience with Go, we also measured years of professional coding experience. Our audience tends to be a pretty experienced bunch, with 26% of respondents having 16 or more years of professional coding experience.

Overall levels of professional developer experience

The self-selected group was even more experienced than the randomly selected groups, with 29% having 16 or more years of professional experience. This suggests that our self-selected group is generally more experienced than our randomly selected groups and can help explain some of the differences we see in this group.

Levels of professional developer experience by survey source

We found that 81% of respondents were fully employed. When we look at our individual samples, we see a small but significant difference within our respondents from VS Code, who are slightly more likely to be students. This makes sense given that VS Code is free.

Employment status
Employment status by survey source

Similar to previous years, the most common use cases for Go were API/RPC services (75%) and command line tools (62%). More experienced Go developers reported building a wider variety of applications in Go. This trend was consistent across every category of app or service. We did not find any notable differences in what respondents are building based on their organization size. Respondents from the random VS Code and GoLand samples did not display significant differences either.

What respondents build with Go

Firmographics

We heard from respondents at a variety of different organizations. About 29% worked at large organizations with 1,001 or more employees, 25% were from midsize organizations of 101–1,000 employees, and 43% worked at smaller organizations with fewer than 100 employees. As in previous years, the most common industry people work in was technology (43%) while the second most common was financial services (13%).

Organization sizes where respondents work
Industries respondents work in

As in previous surveys, the most common location for survey respondents was the United States (19%). This year we saw a significant shift in the proportion of respondents coming from Ukraine, from 1% to 6%, making it the third most common location for survey respondents. Because we only saw this difference among our self-selected respondents, and not in the randomly sampled groups, this suggests that something affected who discovered the survey, rather than a widespread increase in Go adoption across all developers in Ukraine. Perhaps there was increased visibility or awareness of the survey or the Go Blog among developers in Ukraine.

Where respondents are located

Methodology

We announce the survey primarily through the Go Blog, where it is often picked up on various social channels like Reddit, or Hacker News. We also recruit respondents by using the VS Code Go plugin to randomly select users to whom we show a prompt asking if they’d like to participate in the survey. With some help from our friends at JetBrains, we also have an additional random sample from prompting a random subset of GoLand users to take the survey. This gave us two sources we used to compare the self-selected respondents from our traditional channels and help identify potential effects of self-selection bias.

57% of survey respondents “self-selected” to take the survey, meaning they found it on the Go blog or other social Go channels. People who don’t follow these channels are less likely to learn about the survey from them, and in some cases, they respond differently than people who do closely follow them. For example, they might be new to the Go community and not yet aware of the Go blog. About 43% of respondents were randomly sampled, meaning they responded to the survey after seeing a prompt in VS Code (25%) or GoLand (11%). Over the period of September 9–23, 2024, there was roughly a 10% chance users of the VS Code plugin would have seen this prompt. The prompt in GoLand was similarly active between September 9–20. By examining how the randomly sampled groups differ from the self-selected responses, as well as from each other, we’re able to more confidently generalize findings to the larger community of Go developers.

Chart of different sources of survey respondents

How to read these results

Throughout this report we use charts of survey responses to provide supporting evidence for our findings. All of these charts use a similar format. The title is the exact question that survey respondents saw. Unless otherwise noted, questions were multiple choice and participants could only select a single response choice; each chart’s subtitle will tell the reader if the question allowed multiple response choices or was an open-ended text box instead of a multiple choice question. For charts of open-ended text responses, a Go team member read and manually categorized all of the responses. Many open-ended questions elicited a wide variety of responses; to keep the chart sizes reasonable, we condensed them to a maximum of the top 10-12 themes, with additional themes all grouped under “Other”. The percentage labels shown in charts are rounded to the nearest integer (e.g., 1.4% and 0.8% will both be displayed as 1%), but the length of each bar and row ordering are based on the unrounded values.

To help readers understand the weight of evidence underlying each finding, we included error bars showing the 95% confidence interval for responses; narrower bars indicate increased confidence. Sometimes two or more responses have overlapping error bars, which means the relative order of those responses is not statistically meaningful (i.e., the responses are effectively tied). The lower right of each chart shows the number of people whose responses are included in the chart, in the form “n = [number of respondents]”. In cases where we found interesting differences in responses between groups, (e.g., years of experience, organization size, or sample source) we showed a color-coded breakdown of the differences.
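The post doesn’t spell out the exact formula behind these error bars, but a common choice for a response proportion \(\hat{p}\) from \(n\) respondents is the normal approximation:

\hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}

With n = 4,156, the 93% satisfaction figure, for instance, would carry an interval of roughly ±0.8 percentage points; smaller subgroups produce correspondingly wider bars.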

Closing

Thanks for reviewing our semi-annual Go Developer Survey! And many thanks to everyone who shared their thoughts on Go and everyone who contributed to making this survey happen. It means the world to us and truly helps us improve Go.

— Alice (on behalf of the Go team at Google)

Progress and Future Plans for Upstream Support of Rockchip RK3588

21 December 2024 at 03:29
As 2024 concludes, the Rockchip RK3588 platform has seen substantial progress in upstream support. Collabora’s latest announcement highlights advancements in kernel integration, hardware enablement, and foundational software, driven by the open-source community. Among recent kernel advancements, the release of the 6.7 Linux kernel was a significant milestone, providing network support for boards such as the […]

$6.80 LILYGO T7-C6 Board Leverages RISC-V Single-Core Processor & 4MB Integrated Flash Memory

21 December 2024 at 02:49
The LILYGO T7-C6 is a compact development board built around the ESP32-C6-MINI-1 module, offering versatile features designed for IoT and wireless communication applications. The board is available with either an onboard PCB antenna or an external antenna and supports modern wireless protocols, including 2.4 GHz Wi-Fi 6, Bluetooth 5 (LE), and IEEE 802.15.4. The ESP32-C6-MINI-1 […]