
Arm Adopting New “Cristal intelligence” to Drive Innovation and Boost Productivity

5 February 2025 at 01:48

Initial findings from Arm’s upcoming AI Barometer survey reveal that over 90 percent of global business leaders have put AI into practice in some way. The widespread, fast adoption of AI into commercial environments is already transforming how businesses operate, enabling a range of productivity and efficiency improvements. A McKinsey report estimated that AI’s impact on productivity could add trillions of dollars to the global economy.

Introducing Cristal intelligence

The ability of AI to drive business improvements is part of the reason why OpenAI and the SoftBank Group announced a new partnership to develop and market Advanced Enterprise AI called “Cristal intelligence.” This features the latest and most advanced models developed by OpenAI, with all SoftBank Group companies gaining priority access.

Arm will be one of the companies that benefit from Cristal intelligence, adopting the technologies to drive innovation and boost productivity across the business. It will help to transform existing management and operational practices through automating everyday tasks, allowing employees to focus more on creative and strategic decision-making. Moreover, existing AI tools, like ChatGPT Enterprise, will be available to all Arm employees.

The rise of AI agents

At the core of Cristal intelligence are OpenAI's models, which are evolving from systems capable of reasoning to new AI agents that can execute tasks independently. Arm, OpenAI and SoftBank have a shared vision to enable these AI agents at scale to make every worker more effective and productive, while empowering them to solve ever more complex problems. Looking ahead, Cristal intelligence's AI agents will lay the groundwork for even more advanced systems that can learn and adapt to any company's needs.

Rene Haas, Arm CEO, speaking at SoftBank Group event

Arm is the compute platform for AI

Alongside the internal adoption of Cristal intelligence, the Arm compute platform provides the performance, power-efficiency and scalability to meet the high computing demands of new AI agents, from cloud to edge. For developers, Arm’s KleidiAI libraries will allow their AI agents’ workloads to seamlessly run across all major frameworks and common software operating systems with no or minimal development work required. As Rene Haas, Arm CEO, says: “We are at the forefront of the AI evolution, and our high-performance, power-efficient compute is going to be critical to advancing Cristal intelligence.”

Moreover, the pervasiveness of the Arm compute platform, which touches 100 percent of the connected global population, is supporting the ongoing democratization of AI. As a result, new AI tools and technologies, like Cristal intelligence, are more accessible to all employees and job types – beyond AI researchers and data scientists – who can then integrate these into their everyday workflows, from financial reporting to customer service queries.

Driving the AI revolution across businesses

The rise of AI is providing unprecedented improvement opportunities across all industries. As new models evolve and business investment continues to grow, AI will be commonplace throughout the workplaces of the future, driving innovation and productivity.

Arm is committed to working with OpenAI and SoftBank to unleash the full potential of AI, with Cristal intelligence helping to revolutionize how businesses operate globally. We cannot wait to see the full benefits of the technologies in action across the Arm business and beyond, with the Arm compute platform at the heart of these improvements.



What Innovations Did Arm Deliver Between December 2024 and January 2025?

3 February 2025 at 23:23

During the recent festive season, Arm continued to be at the forefront of technological innovation, driving advancements across AI, IoT, gaming, and developer tools. From empowering developers at Google DevFest Lagos to unveiling next-generation compilers for safety-critical systems, the Arm Editorial Team has recapped the various innovation highlights and contributions made by Arm between December 2024 and January 2025.  

Empowering developers with AI at Google DevFest Lagos

With the Arm compute platform at the heart of all AI experiences, we are empowering developers to build efficient and scalable AI-powered applications. Shola Akinrolie, Senior Manager of the Arm Developer Program, highlights Arm’s participation in Google DevFest Lagos, Africa’s largest developer-focused event, showcasing how various innovations, such as Arm instances on the Google Cloud Platform and AI frameworks like MediaPipe and ExecuTorch, are driving advancements in AI. 

Image: Google DevFest in Lagos

The event featured workshops on building generative AI applications and how to leverage Arm-based servers, demonstrating Arm’s commitment to supporting the developer community with cutting-edge tools and resources. 

Video: Arm at AI Expo Africa in 2024

Building vision-enabled devices to fuel IoT innovations

Many IoT devices today are equipped with vision capabilities that transform how they interact with their environments by allowing them to interpret their surroundings. Diya Soubra, Director of IoT Solutions, discusses how this advancement enhances device interaction and simplifies implementation for use cases like parking space detection.

Through a combination of various Arm technologies including the Arm Cortex-M85 CPU, Arm Mali-C55 ISP, and Arm Ethos-U85 NPU, the Arm Corstone-320 is playing a critical role in developing secure, AI-capable IoT systems.  

Showcasing real-world applications of Arm technology

Developers are using Arm-based platforms to tackle real-world challenges with innovative solutions. Fidel Makatia, a PhD student in Electrical Engineering at Texas A&M University and an Arm Ambassador, recaps the IEEE Arm Community Technothon held at Texas A&M University, where participants showcased innovative projects built on Arm technology.  

Image: A bicycle anti-theft presentation and demo 

Other highlights include a smart cat feeder using ML algorithms, a 3D printer ecosystem with internet connectivity, and an IoT fridge for remote temperature control.  

Arm’s role in enhancing player experiences and game design

A key use case for AI is gaming, where the technology is helping enhance player experiences by creating more dynamic and immersive environments, along with improving game design.

Ian Bolton, Staff Developer Relations Manager, explores the future of AI in gaming and explains Arm’s role in making this more advanced gaming experience a reality through processor technologies, which offer the necessary performance and efficiency to support AI-driven innovations in gaming. 

Experiencing MKSU Hackfest 2024 through the eyes of an Arm ambassador

Participating in events like the MKSU Hackfest provides developers with valuable insights and hands-on experience with Arm’s tools and resources, enhancing their skills to create innovative solutions in fields like IoT and AI. Nicabed Gathaba K, AI Engineer and Arm Ambassador, explains how this event empowered students and developers to explore embedded systems, smart solutions, and sustainable tech advancements.

Image: MKSU Hackfest

Unveiling the next-generation compilers for safety-critical and high-performance embedded systems

With a long history of creating embedded compilers for safety development at Arm, we understand the importance of stable, minimal-change updates for reliable and secure software development. To that end, Paul Black, Director of Product Management, introduced Arm Compiler for Embedded FuSa 6.22LTS, designed for functional safety development in automotive systems, medical devices, and other safety-critical applications. This compiler aims to reduce development costs and risks by providing a TÜV SÜD-certified toolchain, along with a comprehensive Qualification Kit that includes a Development Process report, Safety Manual, and Testing report.

Paul also introduces the Arm Toolchain for Embedded (ATfE), the seventh generation Arm embedded C/C++ cross-compiler, designed to meet the evolving needs of high-performance Arm-based embedded projects.  

Boosting developer productivity with Arm Performance Studio 2024.6

The latest Arm Performance Studio 2024.6 release includes quality-of-life improvements and bug fixes, such as enhanced Frame Advisor visualizations and an optimized Timeline view renderer in Streamline. Peter Harris, Technical Product Director and Distinguished Engineer, highlights these updates and explains how they significantly improve the efficiency and usability of the tool, enabling developers to optimize their software for a wider range of platforms, from embedded devices to servers.

Revolutionizing technical training with Arm’s AI-enhanced learning for partners

Arm has significantly improved its technical training for partners by leveraging AI technologies, including AI-enhanced search capabilities and accurate AI-produced transcripts. Matt Rushton, Director of Product Management, discusses the evolution of Arm’s technical training, highlighting the shift from live, in-person sessions to live virtual and on-demand video training.  

Image: Arm On-Demand active users 2023 vs 2024 

Arm has also expanded access to its on-demand platform, removing fees and lifting user limits for partners, ensuring broader availability of technical training resources. 

Using Machine Learning for song translations with Arm

Machine learning (ML) has immense potential in creative applications, and building such projects delivers practical insights into developing and deploying ML models for real-world use cases. Virginia Cangelosi, Graduate Engineer, explores how to create an ML pipeline to translate songs from English to Mandarin in a multi-part blog series. Part 1 covers building the ML pipeline using open-source models, while Part 2 discusses the challenges and solutions for porting this pipeline to Android devices.

Running Llama Models with PyTorch on Arm Servers 

KleidiAI has made running Llama 3.1 and 3.2 models with PyTorch more efficient, especially on an Arm-based AWS instance. With Torchchat and Streamlit, the models can be operated through a user-friendly web interface, simplifying your machine-learning workflow.  

Gabriel Peterson, Senior ML Engineer, and a Developer Evangelist, guides us through a learning path on how to run a large language model chatbot with PyTorch using KleidiAI on Arm servers.

Video: Run an LLM chatbot with PyTorch using Kleidi on Arm servers

Streamlining your workflow with Arm Developer Evangelists

Docker Buildx and GitHub Actions workflows can automate the creation of multi-architecture Docker images. Avin Zarlez, Staff Software Engineer, shows how to combine the two to automate these builds. Inspired by Arm’s Learning Path, this tutorial provides practical insights and leverages example code available on GitHub to streamline your workflow.

Video: Multi-architecture Docker image builds

In another YouTube video, Avin talks about KubeArchInspect, an open-source tool from Arm, which is designed to automate compatibility checks for Kubernetes containers. This tutorial explains how to use it to analyze images in the source registry and identify available architectures, simplifying the initial steps of migration.  

Video: Checking Arm compatibility in Kubernetes containers using KubeArchInspect

Finally, Avin, Gabriel, and Michael Hall, Principal Software Engineer and Developer Evangelist, present highlights from the latest Learning Paths available on learn.arm.com. These include setting up a virtual large language model (LLM) using Hugging Face, performing sentiment analysis on Arm-based EKS clusters, and quick installation guides for Helm, Docker, and other developer tools.  

Video: Highlights from the latest learning paths

They also cover creating and training PyTorch models, building an Android app with MediaPipe for facial and gesture recognition, managing development environments with Daytona, deploying .NET apps on Arm-based VMs, and efficiently encoding videos using VVenC H266 on Arm servers. 



Arm at Davos 2025: Driving AI Transformation Through Academic and Industry Collaborations

28 January 2025 at 16:00

The annual meeting of the World Economic Forum brings together world leaders to address key global and regional challenges, with technology a vital part of these discussions. This year’s meeting at Davos focused on “Collaboration for the Intelligent Age” and the promises of AI innovation, with Arm leaders invited to give perspective on this theme.

During the week-long event, Arm recognized the importance of collaboration between universities and industry as part of the ongoing development of AI. Universities are at the forefront of AI research, helping to drive advancements that will define the next generation of applications used in the real world. Meanwhile, industry provides the infrastructure, guidance and tools that accelerate this research and ensure it delivers true benefits to society and businesses.

Arm hosted a panel with world-leading academics in the field of AI, which explored a wide variety of topics, including what makes a successful academic and industry partnership, how to align research in the lab with real-world AI applications, and how to prepare students (the future workforce) for an AI-driven world.

Accelerating AI innovation with university and industry partnerships

As Ami Badani, Arm Chief Marketing Officer, noted at the start of the panel session, Arm has a proud track record of collaborations and engagements with academic institutions to both drive technology-based innovation and train the technology workforce of the future. Just one example of these collaborations is when the United States and Japan announced two new university-corporate AI Partnerships worth $110 million, which involved support from private sector companies, including Arm, Microsoft and SoftBank.

Farnam Jahanian, President, Carnegie Mellon University, called out these partnerships, stating that the AI transformation will be amplified through collaborations between academic institutions, private sector and public sector. Through one of the university-corporate AI partnerships, Carnegie Mellon University is working with Keio University to bring together faculty, researchers and students at the forefront of AI to collaborate on new ideas and solutions.

Kohei Itoh, President, Keio University, reflected on the IBM Quantum Hub which started in 2018 and has led to gradual technological developments in the field of quantum computing. While progress has been incremental, Kohei stated that the support of private sector companies, like IBM, has been crucial to the research and will help to accelerate the deployment of practical applications involving quantum computing in the future.

The third panelist Eric Xing, President of the Mohamed bin Zayed University of Artificial Intelligence, the world’s first ever AI university, outlined the unique dual role that universities play as the “inventor and educator of technologies”, which highlights their importance in the development of technologies and the workforce of the future. 

Video: The full panel session at Davos 2025

The transformative impact of AI

Unsurprisingly, all panelists were in broad agreement on the potential transformative impact of AI. Jahanian reflected on AI being one of the most “fundamental intellectual developments of our time”, with an “undeniable” impact across every aspect of the economy.

This statement was supported by Xing who described AI as “the new engine of a future economy and technology.” He reflected on the potential of AI to provide profound societal impacts, with one example being the acceleration of life-changing scientific breakthroughs, like the development of new medicines and vaccines.

2025, “the year of practicality for AI”

However, AI is not just about its enormous potential for future life-changing innovations. With the World Economic Forum’s view that 2025 will be the “year of practicality for AI”, the panel discussed the practical applications that can benefit from AI in the short-term. Jahanian talked about the importance of AI being integrated into existing applications to deliver commercial improvements in business workflows. Referring to these AI developments as “low hanging fruit”, Jahanian highlighted the significant benefit to businesses in terms of costs and wider efficiencies.

We are already seeing the integration of AI into existing internal and external business applications and operations. Badani noted that the initial findings from Arm’s upcoming AI Barometer survey reveal that over 90 percent of global business leaders have put AI into practice in some way. This is a huge figure that highlights the quick adoption of AI into commercial environments.

Meanwhile, Itoh reflected on the ability of AI to support the collection of vast amounts of data, particularly for biological and medical research, with this helping to accelerate the research process and, ultimately, bring new breakthroughs to light quicker.

Preparing the future workforce

A startling figure stated by Badani during the panel session was that, according to the World Economic Forum, there will be 170 million new jobs created by 2030. However, meeting this figure requires universities to deliver courses and training that prepare students for these future careers.

Jahanian noted that AI is leading to a “re-imagining of the curriculum” across universities. This is not just around technical engineering-based skills, but also key foundational skills that teach students how to collaborate and communicate with AI-based tools.

Itoh commented on his university’s own experience where students are learning and adopting AI tools far quicker than the teachers. In some cases, this is leading to students becoming the teachers, educating others in classes about how to properly use these tools.

Finally, Xing noted the ability of AI to create personalized learning for students, with this helping to improve their educational outcomes as they prepare for future careers.

Unlocking the full transformative opportunities from AI

The research work of universities provides a glimpse into the endless possibilities from the ongoing evolution and rollout of AI technologies. Alongside our industry partners, Arm’s partnerships with universities aim to accelerate the development of this research, helping to move innovations from the lab to real-world applications at a faster rate, so society can feel the true benefits of the AI transformation.

With Arm providing the compute platform for AI, we are working to unlock the enormous potential of AI for good, with industry and university partnerships absolutely vital to this overall mission.


Arm Chiplet System Architecture Makes New Strides in Accelerating the Evolution of Silicon 

21 January 2025 at 23:00

AI has the potential to steer a new industrial revolution, permeating all markets unlike anything we’ve seen before. To get there, we must be able to address the wide variation in AI workloads across a diverse set of markets. For example, latency requirements change depending on the application: the response times for a system designed to help researchers simulate proteins will be different to the response times required for a system built to operate in a passenger vehicle. This wide range of compute requirements means we need to provide more than one type of compute solution, each optimized for specific market needs. The increasing demand for custom silicon, combined with the costs and complexities of silicon production, is driving the trend toward wider adoption of chiplets.

Video: What are chiplets explainer

Advancing the chiplet ecosystem through standardization and collaboration 

The reuse of specialized chiplets to create multiple custom systems-on-chip (SoCs) can deliver systems with better performance and lower power consumption, at lower overall design cost compared to monolithic chips. However, without industry-wide standards and frameworks, variations in chiplets could lead to compatibility issues that ultimately slow innovation. To address this fragmentation, last year Arm introduced the Chiplet System Architecture (CSA). The CSA provides a set of system partitioning and chiplet connectivity standards that have been co-developed with the ecosystem, aligning the industry on the foundational choices of building chiplets. Using the CSA, new chiplet designs can be created with confidence that they can be adapted and reused in any compliant system, expediting chiplet-based system innovation while reducing the risk of fragmentation.

CSA’s first public specification is now available, supported by 60 leading players 

Today, it’s my pleasure to share a significant milestone for CSA: The first public specification is now available. Representatives from over 60 companies are now engaged with the CSA, contributing to and applying standards in silicon strategies across multiple market segments. These companies include ADTechnology, Alphawave Semi, AMI, Cadence, Jaguar Micro, Kalray, Rebellions, Siemens, Synopsys, and others.  

The breadth of engagement from these innovative technology companies forms the foundation of an Arm-based chiplet ecosystem set to revolutionize system design, making SoCs more flexible, accessible, and cost-effective while reducing the risk of fragmentation. With a public spec now available, designers have a shared understanding of how to define and connect chiplets into composable SoCs that can address the variance in AI workloads and ensure silicon is fit for specific markets.  

Several vendors engaging with the CSA are already building solutions as part of Arm Total Design, an ecosystem dedicated to frictionless delivery of custom silicon powered by Arm Neoverse Compute Subsystems (CSS). To date, Arm Total Design has seen success in the deployment of chiplet-based compute subsystems that enable market-specific strategies including: 

  • Tailoring AI workloads for diverse markets: Alphawave Semi has customers that require performant chips for AI workloads, including networking, edge computing, storage, and security. By combining a chiplet powered by Arm Neoverse CSS with proprietary I/O dies, Alphawave Semi can use AMBA CHI C2C to connect accelerators tailored to the specific needs of each market. Custom SKUs for specific markets are derived from a standard base, amortizing the cost of the compute die whilst maintaining the flexibility to build multiple systems. 
  • Revolutionizing large-scale AI training and inference workloads: ADTechnology, Samsung Foundry, Rebellions and Arm have combined technologies to create an AI CPU chiplet platform for training and inference of large-scale AI workloads in the datacenter, with an estimated 2-3x efficiency advantage for GenAI workloads (Llama3.1 405B parameter LLMs). This multi-vendor chiplet platform combines Rebellions’ REBEL AI accelerator with coherent NPUs using the AMBA CHI C2C interconnect, and is built with a Neoverse CSS V3-powered compute chiplet from ADTechnology. Thanks to the standardization work done with the CSA to date, this compute chiplet can now be implemented with Samsung Foundry 2nm Gate-All-Around (GAA) advanced process technology.

The role of chiplets in custom silicon for growing AI workloads  

These are just a couple of examples of how CSA can help address the wide variation of workloads being driven by AI across all markets, from infrastructure to automotive and consumer technologies. We believe the Arm-based chiplet ecosystem is uniquely positioned to meet the challenge of growing AI demands in all markets, leveraging the flexibility of the Arm compute platform, the seamless communication enabled by standards like AMBA CHI C2C, and the integration enabled by CSA. As the ecosystem around CSA continues to grow, so does the collaboration on standards and the impact we can make as an industry to significantly reduce fragmentation and enable faster development and deployment of custom silicon solutions.

To access the first public CSA spec at no cost, please visit this link.

Webinar on the Chiplet Marketplace

To learn more about enabling a marketplace of chiplet-based custom silicon, Arm is hosting a webinar with the Open Compute Project (OCP).


Arm: The Partner of Choice for Academic Engagements

15 January 2025 at 18:34

At Arm, we recognize the importance of the latest research findings from academia and how they can help to shape future technology roadmaps. As a global semiconductor ambassador, we play a key role in academic engagements, helping to refine the commercial relevance of academic research.

Our approach is multifaceted, involving broad engagements with entire departments and focused collaborations with individual researchers. This ensures that we are not only advancing the field of computing but also fostering the talent that will lead the industry in the future.

Pioneering research and strategic investments in academia

A prime example of our broad engagement is our long-standing relationship with the University of Cambridge’s Department of Computer Science and Technology. We’ve announced critical investment into the Department’s new CASCADE (Computer Architecture and Semiconductor Design) Centre. To realize the potential of AI through next-gen processor designs, this initiative will fund 15 PhD students over the next five years who will undertake groundbreaking work in intent-based programming.

Morello is a research and prototyping program to create a more secure hardware architecture for next-generation Arm devices.

Meanwhile, our work with the Morello program continues to push the boundaries of secure computing. This is a research initiative aimed at creating a more secure hardware architecture for future Arm devices. The program is based on the CHERI (Capability Hardware Enhanced RISC Instructions) model, which has been developed in collaboration with the University of Cambridge since 2015. By implementing CHERI architectural extensions, Morello aims to mitigate memory safety vulnerabilities and enhance the overall security of devices.

In the United States, our membership in the SRC JUMP2.0 program, a public-private partnership alongside DARPA (Defense Advanced Research Projects Agency) and other noted semiconductor companies, enables us to support pathfinding research across new and emerging technical challenges. One notable investment is the PRISM Centre (Processing with Intelligent Storage and Memory), which is led by the University of California San Diego, where we are deeply engaged in advancing the computing field.

Fuelling innovation through strategic PhD investments

Arm’s broad academic engagements are complemented by specific investments in emerging research areas, where the commercial impact is still being defined. PhD studentships are ideal for these exploratory studies, providing the necessary timeframe to progress ideas and early-stage concepts toward potential commercial viability. Examples include:

Shaping the future through research and technology

In areas where challenges are just being identified, Arm convenes workshops with academic thought leaders to scope future use cases and the fundamental experimental work needed to advance the field. Moreover, our white papers on Ambient Intelligence and the Metaverse are helping the academic community develop future research programs, acting as a springboard for further innovation.

Given our position in the ecosystem, we are often invited to provide thought leadership at academic conferences. Highlights from this year include:

  • A keynote by Rob Dimond, a System Architect and Fellow at Arm, at the DATE conference in Valencia, a major event for our industry and academia.

Reinforcing academic engagements by investing in future talent

Investing in PhDs is not just about research; it’s about nurturing the future talent pipeline for our industry. We also engage with governments and funding agencies to ensure that university research funding is targeted appropriately.

For instance, Andrea Kells sits on the EPSRC (UK) Science Advisory Team for Information and Communication Technologies and the Semiconductor Research Corporation (US) Scientific Advisory Board, which both link with the Arm Government Affairs team on research investments.

Check out Andrea’s webinar on advances and challenges in semiconductor design

Expanding global collaborations to drive technological marvels

Arm’s commitment to academic engagements spans the globe, reflecting our dedication to fostering innovation worldwide. In Asia for instance, we have initiated collaborations with leading institutions to explore new frontiers in semiconductor technology. Our partnership with the National University of Singapore focuses on developing power-efficient computing solutions, which are crucial for the next generation of mobile and IoT devices.

In Europe, beyond our engagements in the U.K. and Spain, we are also working with the Technical University of Munich on advanced research in quantum computing. This collaboration aims to address some of the most challenging problems in computing today, paving the way for breakthroughs that could revolutionize the industry.

At the World Economic Forum’s Davos event, Arm hosted a panel session to discuss the collaboration and innovation between universities and industry in the age of AI. The session, hosted by Arm’s Chief Marketing Officer Ami Badani, featured prominent academic experts including Farnam Jahanian, President of Carnegie Mellon University; Kohei Itoh, President of Keio University; and Eric Xing, President of Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI university.

The panel explored a wide range of topics, from AI-based research collaborations to how universities prepare students for future careers, highlighting the crucial role of academia and industry in driving the future of AI and technological innovation.

Bridging academics and the industry for a brighter future

Arm is committed to bridging academia and industry, driving innovation and supporting the next generation of technology leaders. Our investments in academic engagements not only advance the field of semiconductor technology but also ensure that we remain at the forefront of technological progress.

As we continue to nurture upcoming talent, support groundbreaking research, and foster global collaborations, we are shaping the future of computing.

For more information

For more details about Arm’s academic engagements and partnerships, contact Andrea Kells, Arm’s Research Ecosystem Director, at Andrea.Kells@arm.com.


Redefining Audio Experiences with Eclipsa Audio by Google, Samsung, Arm and the Alliance for Open Media

9 January 2025 at 01:00

Imagine sitting in your living room and watching a movie. As the helicopter flies overhead, you can hear it moving seamlessly above you, and then from one side of the room to the other, creating a truly immersive experience. This is now possible because of Eclipsa Audio, based on Immersive Audio Model and Formats (IAMF), a new open-source audio technology that uses fast and efficient processing for a variety of common consumer products – from high-end cinema systems featuring premium TVs to entry-level mobile devices and TVs.

The technology was developed by Google, Samsung, Arm, and the Alliance for Open Media (the organization behind the popular AV1 video format), and was launched at the Consumer Electronics Show (CES) 2025.

What is Eclipsa Audio? 

Eclipsa Audio is a multi-channel audio surround format that leverages IAMF to produce an immersive listening experience. It will revolutionize the way we experience sound by spreading audio vertically as well as horizontally. This creates a three-dimensional soundscape that closely mimics natural settings, bringing movies, TV shows, and music to life.  

Eclipsa Audio dynamically adjusts audio levels for different scenes, ensuring optimal sound quality. Additionally, it offers customization features that allow listeners to tweak the sound to their preferences, helping to ensure that every listening experience is personalized and unique. 

An Eclipsa Audio bitstream can contain up to 28 input channels, which are rendered to a set of output speakers or headphones. These input channels can be fixed, like a microphone in an orchestra, or dynamic, like a helicopter moving through a sound field in an action movie. 

Eclipsa Audio also features binaural rendering, which is essential for mobile applications when delivering immersive audio through headphones. Finally, the new audio technology supports content creation across consumer devices, enabling users to create their own immersive audio experiences. 

How Arm worked with Google during IAMF development

Arm has been a strategic partner throughout the development of IAMF, working closely with Google’s team to optimize the technology’s performance. Our contributions focused on enhancing the efficiency of the Opus codec and the IAMF library (libiamf), ensuring that it delivers the best possible performance on Arm CPUs that are pervasive across today’s mobile devices and TVs. 

Arm CPUs have included the NEON SIMD extension since 2005, and it has evolved significantly since then, providing remarkable performance boosts for DSP tasks like audio and video processing. For IAMF specifically, Arm’s engineers have focused on optimizations that allow real-time decoding of complex bitstreams with minimal CPU usage, ensuring reliable performance even when CPUs are busy processing other elements of the experience. This is particularly important for mobile applications where power efficiency is crucial.

Performance Enhancements

Arm has been upstreaming patches to the opus-codec and libiamf, focusing on floating point implementations for optimal performance. These enhancements include: 

  • NEON Intrinsic Optimizations: Supporting various Arm architectures (armv7+neon, armv8-a32, armv8-a64, and armv9), these optimizations speed up float-to-int16 conversion and soft clipping, and provide CPU-specific optimizations for matrix multiplication and channel unzipping in multi-channel speaker output (a conceptual sketch of the conversion step follows this list).
  • Performance Improvements: Significant performance uplifts were observed across different speaker configurations (Stereo, 5.1, 9.1.6) on devices like Quartz64 and Google Pixel 7. For instance, 9.1.6 output showed over 160% improvement on the Arm Cortex-A55 CPU cores in the Pixel 7.
  • Decoding Efficiency: After optimizations, all test files decode in less than 16% of real-time on AArch64 and less than 23% on Arm32, making them highly efficient on the Cortex-A55.

Core Technologies

IAMF supports several codecs, including LPCM, AAC, FLAC, and Opus. Opus, being the most modern codec, is likely to be the preferred choice. We have further optimized Opus for the Arm architecture, ensuring it performs efficiently within the IAMF framework. 

The IAMF library (libiamf) decodes IAMF bitstreams and produces speaker output for various sound systems. Arm introduced a framework for CPU specializations, using compile-time feature detection to ensure the library is optimized for the platform it runs on. 

By optimizing key components like the Opus codec and libiamf library, our engineers ensured that IAMF delivers unparalleled performance on Arm CPUs. This not only enhances the user experience but also demonstrates the value of our technology in cutting-edge applications. 

IAMF’s open standard approach, supported by the Alliance for Open Media, aligns with Arm’s vision of broad accessibility and innovation. This partnership highlights our role in driving the future of immersive audio, making high-quality sound experiences available across a wide range of existing and future devices, from high-end home cinema systems to entry-level mobile devices and TVs. 

The future of immersive audio is here

IAMF represents a significant leap forward in immersive audio technology, offering a versatile and high-quality audio experience using AI and deep-learning techniques coupled with robust performance optimizations that are supported by Arm.  

The future of immersive audio is here and is now more accessible than ever. Whether you’re a casual listener or an audiophile, IAMF promises to transform audio experiences, bringing you closer to the action than ever before. 


Arm-powered NVIDIA Project DIGITS Puts High-Performance AI in the Hands of Millions of Developers 

7 January 2025 at 11:30

One of the most exciting trends we see today is the rapid expansion and availability of AI-based applications and features across a variety of edge devices. As AI continues to grow and advance, it is crucial that AI researchers, data scientists, developers and students have access to high performance compute that can be used to develop or run the latest models, whether they are language, vision or multi-modal. With the pace of AI innovation moving faster than ever, we need to enable access to this performance beyond the cloud at the edge, bringing new capabilities directly to developers.

Putting game-changing AI performance at every developer’s fingertips

A big step towards the vision of developing and deploying AI everywhere is NVIDIA Project DIGITS, a personal AI supercomputer announced during NVIDIA founder and CEO Jensen Huang’s keynote at the Consumer Electronics Show (CES) 2025 today (Monday 6th January 2025). The Project DIGITS Linux-based system featuring Arm-powered CPU cores makes it possible for every AI developer to have a high performance AI system on their desk.

NVIDIA Project DIGITS is powered by the NVIDIA GB10 Grace Blackwell Superchip, bringing together the NVIDIA Grace CPU and NVIDIA Blackwell GPU with the latest-generation CUDA cores and fifth-generation Tensor Cores connected via NVLink®-C2C chip-to-chip interconnect and 128GB of unified memory. The NVIDIA Grace CPU features our leading-edge, highest performance Arm Cortex-X and Cortex-A technology, with 10 Arm Cortex-X925 and 10 Cortex-A725 CPU cores. The NVIDIA GB10 delivers up to one petaflop¹ (1000 TFLOPs) of AI computing performance at FP4 precision, enabling developers to prototype, fine-tune and run inferencing with large AI models and work in conjunction with the cloud or data center.

The value of the Arm compute platform

Leveraging the ubiquitous Arm compute platform allows new AI models and applications to run more efficiently and faster at the edge. In consumer technology markets, our CPU technologies are found at the heart of today’s edge devices and designed to target the most performant devices entering the market, whether that’s the latest Arm CPUs such as those used in Project DIGITS, or as part of Arm Compute Subsystems (CSS) for Client. All of these technologies are optimized for maximum performance and peak efficiency across real-world applications and workloads.

The Arm compute platform also offers the flexibility to use different computational engines for different AI use cases. In NVIDIA Project DIGITS, the Arm-based NVIDIA Grace CPU and NVIDIA Blackwell GPU serve complementary roles, enabling developers to use these components for a variety of workloads. This heterogeneous computing approach is essential to achieving maximum AI performance, while managing memory utilization and power consumption.

“Our collaboration with Arm on the GB10 Superchip will fuel the next generation of innovation in AI, combining NVIDIA’s AI expertise with Arm’s scalable compute platform to deliver exceptional performance and efficiency,” said Ashish Karandikar, VP of SoC Products at NVIDIA. “Now, with the introduction of Project DIGITS, every AI developer and researcher can have a powerful supercomputer at their fingertips.”

Unlocking software innovation

For developers, it is critical to have a fully integrated hardware and software AI platform. Project DIGITS uses the open-source Linux operating system, and users can access an extensive library of NVIDIA AI software, including software development tools, libraries, frameworks and AI models available in the NVIDIA NGC catalog and the NVIDIA Developer portal to accelerate their generative AI workflows. 

Arm’s presence in datacenters with NVIDIA Grace Hopper and Grace Blackwell provides a consistent platform architecture across both datacenter and edge environments, allowing developers to seamlessly use the same set of tools for AI application development. Moreover, Arm has been supporting and driving critical work in open-source developer communities to enable the software needed to deploy AI everywhere. As a result, over 20 million software developers worldwide are building their applications on the Arm compute platform, enabling a growing open-source community that is innovating at a rapid scale.

The ideal platform for high performance AI compute

Arm is the world’s leading, most pervasive compute platform for AI now and in the future, making it an ideal platform for the GB10 Superchip used in Project DIGITS. The result is a powerful PC desktop platform that can run large AI models of up to 200B parameters, which has not been possible until now. This influence across the AI ecosystem delivers flexible, performant and power-efficient AI capabilities to millions of developers worldwide. Working with NVIDIA and our leading software ecosystem, we cannot wait to see the next generation of highly innovative AI applications deployed.

¹ This is a petaflop based on FP4, as referenced here.


The Tech Trends to Look Out For at CES 2025

19 December 2024 at 00:12

Each year kicks off with the Consumer Electronics Show (CES), which showcases the latest and greatest tech innovations from the world’s technology companies, big and small. At CES 2024, AI took center stage, with attendees demoing their latest AI-based tech solutions, including many of Arm’s partners from automotive, consumer technology and IoT markets.

At CES 2025, we anticipate that AI will remain front and center at the event, as it continues to expand and grow at a rapid rate. In fact, Ami Badani, Arm’s Chief Marketing Officer, will be talking with industry leaders, including Meta and NVIDIA, on how to power a sustainable AI revolution. However, we also expect to see more specific tech trends emerging that will set the tone for innovation for the rest of the year.

This blog outlines these trends, and how Arm and our partners are playing leading roles across each one. These include:

  • Autonomous driving innovations;
  • More AI coming to the car;
  • Accelerating automotive software development;
  • AI-powered smart home devices, including the TV;
  • Momentum around Arm-based PCs and laptops;
  • Driving XR tech adoption; and
  • The rise of high-performance edge AI.

Autonomous driving innovations

2024 saw various technology innovations that are set to take us closer to fully-fledged autonomous vehicles on the roads. The collaboration between Arm and Nuro is helping to accelerate this future, with the Nuro Driver™ integrating Arm’s Automotive Enhanced (AE) solutions for more intelligent and advanced autonomous experiences in cars. NVIDIA will bring Arm Neoverse-V3AE to its upcoming DRIVE Thor for next-generation software-defined vehicles (SDVs). Several leading OEMs have already announced plans to adopt the chipset for their automotive solutions, including BYD, Nuro, XPENG, Volvo and Zeekr. We expect CES 2025 to highlight the latest technology solutions and collaborations that will define the future of autonomous driving in the year ahead and beyond.

Video outlines the Arm Nuro partnership

More AI coming to the car to enhance the driver experience

Across in-vehicle infotainment (IVI) and advanced driver assistance systems (ADAS), there have been various OEM innovations in the past year, with AI models being integrated into these systems. For example, Mercedes-Benz is using ChatGPT for intelligent virtual assistants within its vehicles. It will be fascinating to see the broad range of OEM innovations on display at CES 2025, with 94 percent of global automakers using Arm-based technology for automotive applications. This is alongside the top 15 automotive semiconductor suppliers in the world adopting Arm technologies in their silicon solutions.

As a technology thought leader in the automotive industry, Dipti Vachani, Arm’s SVP and GM for the Automotive Line of Business, will be participating in a CES 2025 panel with leading OEMs, including BMW, Honda and Rivian, as well as Nuro, on revolutionizing the future of driving through unleashing the power of AI. The panel will discuss the technological impacts of AI on future vehicle designs.

However, hardware innovation is only as strong as the software that runs on it, which is why we are looking forward to AWS, Elektrobit, LeddarTech, and Plus.AI highlighting their latest AI-enabled solutions at CES 2025. AWS will be showcasing its new generative AI-powered voice-based user guide for inside the vehicle, which runs on virtual hardware in the cloud before running on the physical instance in the car. The chatbot-based solution allows users to interact with the car on its features and dashboard information, with the AI small language model (SLM) being continuously kept up-to-date via software.

Other AI-based in-vehicle demos include the US debut of Elektrobit’s first functional safety compliant Linux operating system (OS) for automotive applications, the EB corbos Linux, which has been named a 2025 Honoree in Vehicle Tech and Advanced Mobility in the CES 2025 Innovation Awards. LeddarTech, which is already optimizing its ADAS perception and fusion algorithms by utilizing the latest Arm AE solutions and virtual platforms, will display its latest LeddarVision software for SDVs. Meanwhile, Plus.AI will be highlighting its latest AI-based autonomous driving software solutions, demonstrating how its technology stack can scale across all levels of autonomy for passenger cars and commercial vehicles, running on any Arm-based hardware.

Magnus Östberg, Chief Software Officer at Mercedes-Benz, sits down with Arm’s Dipti Vachani to explore the role of software in transforming the automotive industry.

Accelerating automotive software development

As the automotive industry evolves to introduce more SDVs on the road, accelerating software development is becoming critical. In 2024, as part of our launch of new automotive technologies, we announced a range of new virtual platforms from our partners. These are transforming the silicon design, development and deployment process, as the virtual platforms allow our partners to develop and test their software before physical hardware is ready. This accelerates development times and leads to a faster time-to-market.

Tata Technologies will be presenting its cloud-to-car demo, which showcases technologies from all four members of the SDV Alliance – which was launched at CES 2024 – that run both on physical Arm-based hardware and on virtual platforms running in an AWS Graviton-powered cloud instance. Meanwhile, AWS will also be showcasing its Graviton G4 hosted Arm RD-1AE reference implementation running on a Corellium virtual platform. Finally, QNX is using CES 2025 to show how developers can create their own innovative cross-platform solutions through its highly accessible software.

The value of ecosystem collaborations in automotive

At CES 2025, we expect to see a range of automotive partners highlighting the value of ecosystem collaborations to support the development and deployment of software in vehicles. This includes Mapbox, a leading platform for powering location experiences for automakers such as BMW, General Motors, Rivian and Toyota, which recently launched its own virtual platform solution, the Virtual Head Unit (VHU), in partnership with Arm and Corellium. The solution empowers leading automakers to expedite the integration, testing, and validation of their navigation systems.

There will also be a range of SOAFEE members highlighting their latest Blueprints at the event. LG will be introducing the LG PICCOLO, which enhances its Battery Management System (BMS) from a solution that has limited update capabilities to one that can be continuously updated and customized with new scenarios at any time. We have been working with LG to integrate BMS and LG PICCOLO into the cloud virtual platform for the Arm RD1-AE, allowing for virtual validation, lower costs and a quicker time-to-market before deployment to the vehicles. In addition, Tier IV and Denso will showcase their SOAFEE Open AD kit Blueprints for autonomous driving, and Red Hat will highlight its mixed-critical demo to improve security and safety in SDVs.

Video highlighting the Arm Red Hat partnership on software for SDVs

AI-powered smart home devices, including the TV

Previous CES events have demonstrated the possibilities of true integration across smart home devices and applications, like heating, lighting and security, with the TV at the center of these experiences. This is likely to continue at CES 2025, as the smart home effectively becomes a “smart assistant” that adjusts settings in the home based on user preferences, from temperature and light settings to playing music.

It will also be interesting to see the range of new AI-powered features and applications in next-generation TVs on display at CES. This started with picture quality enhancements and content recommendations, but AI in the TV is now powering a range of new use cases, including health and fitness through body tracking via the smart camera. CES 2025 is likely to unearth yet more fascinating AI use cases for the TV, including new immersive experiences.  

Moreover, as with previous CES events, the latest premium TVs will be on full display. These include new leading-edge Arm-powered TVs from LG, Hisense, Samsung and TCL. CES 2024’s “showstopper” TV product was LG’s transparent TV, so it will be interesting to see what will take the crown in 2025.

The LG TV display at CES 2024

Momentum around Arm-based PCs and laptops

In 2024, there was significant progress in the Windows on Arm (WoA) ecosystem, with the most widely used PC and laptop applications now providing Arm-native versions. Most recently, Google released an Arm-native version of Google Drive for WoA. This continuous momentum means WoA is an increasingly attractive area of tech for the wider ecosystem. We also expect a range of hardware for AI PCs to be highlighted at the event. This includes MediaTek’s Kompanio SoCs for Chromebook devices that are increasingly adopting new AI-based features.

Driving XR tech adoption

2024 saw significant XR tech innovation, with new AR smart glasses, like Snap’s fifth-generation Spectacles, Meta’s next-generation Ray-Ban, and Meta’s Orion smart glasses, being launched and announced. Hardware advancements, including touch screens and camera miniaturization, as well as software improvements in applications and operating systems, have created opportunities for XR wearable devices to become more mainstream.

CES 2025 will provide the perfect platform to highlight further innovation in the XR space, whether this is new wearable devices or supporting tech and apps. For example, SoftBank-backed ThinkAR will be showcasing its range of wearable devices, including AI smart glasses and wearable AI assistants. Meanwhile, there will be AI updates to current generation XR wearable products, like Meta’s Ray-Ban AR smart glasses.

Concept for future AR smart glasses

The rise of high-performance edge AI

CES 2024 saw a range of low power IoT products from Arm partners showcasing edge AI capabilities, enabling use cases like presence, face and gesture detection, and natural language processing. At CES 2025, we expect a step-up in edge AI through higher performance use cases on IoT devices, like localized decision-making, real-time data processing and responses, and autonomous navigation. These are particularly beneficial for applications servicing primary industries, as well as smart cities, industrial IoT and robotics, where quick responses to environments are crucial for functionality and safety.

Looking at the shortlist for the CES 2025 Innovation Awards, there are a range of innovative Arm-powered tech products across IoT industries that are showcasing advanced edge AI use cases. For industrial IoT and robotics, R2C2 ARIII is a robot brain that enhances autonomous industrial inspection, while DeepRobotics is demoing its Lynx four-legged robotic dog for diverse terrains. Elsewhere, SoftBank-backed Aizip is highlighting its on-device edge AI application for high-accuracy fish counting in underwater environments.

CES runs on Arm

With unmatched scale that touches 100 percent of the connected global population, we fully expect the Arm compute platform to feature heavily across many of the technologies on display at CES 2025. We will be kicking off the new year by showing the world that Arm is at the heart of AI experiences, with CES running on Arm-powered technology.

To get the latest Arm CES 2025 updates visit here.

We look forward to meeting with you at the event!


Accelerating Cloud Innovation with AWS Graviton4 Processors, Powered by Arm Neoverse

2 December 2024 at 22:00

The cloud computing landscape is undergoing a dramatic transformation, driven by the explosive growth of AI. As AI applications become increasingly sophisticated and demanding, the need for powerful, efficient, and cost-effective computing solutions has never been greater. Customers deploying their workloads in the cloud are rethinking what infrastructure they need to meet the requirements of these modern workloads. Their requirements range from achieving better performance and reduced costs to achieving new benchmarks in energy efficiency for regulatory or sustainability goals.

Arm and AWS have a long-standing collaboration aimed at providing specialized silicon and compute, paving the way for a more efficient, sustainable, and powerful cloud. This week at AWS re:Invent 2024, you’ll see more evidence of how Graviton4 marks a significant leap forward, empowering developers and businesses to unlock the full potential of their cloud workloads.

Exceptional Performance Benefits

The latest Arm Neoverse V2-based AWS Graviton4 processors provide up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than previous-generation Graviton3 processors. Thanks to these advantages, we are now seeing significant adoption of AWS Graviton processors in the ecosystem and by customers.

The Arm Neoverse V2 platform includes new capabilities of the Armv9 architecture, such as high-performance floating-point and vector instruction support, with features like SVE/SVE2, Bfloat16, and Int8 MatMul delivering strong performance for AI/ML and HPC workloads.

AI/ML Workloads

To further drive adoption of AI workloads, Arm launched Arm Kleidi earlier this year, collaborating with leading AI frameworks and the software ecosystem to ensure the full ML stack can benefit from out-of-the-box inference performance optimizations on Arm, allowing developers to build their workloads without needing extra Arm-specific expertise. We’ve showcased how these optimizations in PyTorch enable running LLMs such as Llama 3 70B and Llama 3.1 8B on AWS Graviton4 with significantly improved tokens/sec and time-to-first-token metrics.

Llama LLMs on AWS Graviton4

The performance metrics are documented in detail in these blogs on LLM inferencing with PyTorch and Llama 3 on Graviton4.
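
For a feel of what this looks like from the developer's side, here is a minimal sketch that times token generation with PyTorch and Hugging Face Transformers; no Arm-specific code is required, since the optimized kernels are picked up through the framework. The model is a small openly available one chosen purely for illustration, not the configuration behind the published benchmark numbers.

```python
# Minimal sketch: timing LLM token generation with PyTorch on an Arm server.
# The model is a small illustrative choice, not the benchmark configuration.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

prompt = "Explain why memory bandwidth matters for LLM inference."
inputs = tokenizer(prompt, return_tensors="pt")

start = time.time()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
elapsed = time.time() - start

new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(tokenizer.decode(output[0], skip_special_tokens=True))
print(f"{new_tokens / elapsed:.1f} tokens/sec")
```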

HPC and EDA Workloads

For HPC workloads, Graviton4 marks a significant leap forward in capability compared to Graviton3E, providing 16% more main-memory bandwidth per core and double the L2 cache per vCPU. These improvements matter for HPC application performance, which is often memory-bandwidth bound, and AWS has achieved benefits across these areas as shown below.

For EDA workloads, Graviton4 delivers up to 37% higher performance over Graviton3 for RTL simulation workloads as measured by production runs conducted by Arm’s engineering teams.

HPC and EDA workloads benefits on AWS Graviton4

Ecosystem Adoption

Over the last few years, we have seen a continual ramp in adoption across the software ecosystem with end customers deploying a wide range of cloud workloads on AWS Graviton processors. Customers are saving money, seeing better performance, and improving their carbon and sustainability footprints. Here are a few examples:

Ecosystem benefits of adopting AWS Graviton3, powered by Arm Neoverse

Upcoming AWS re:Invent 2024

If you are visiting AWS re:Invent 2024, you can check out the following key sessions on a wide range of topics related to AWS Graviton processors. For the full list of more than 60 sessions on AWS Graviton, check out the event’s official agenda.

Key AWS Graviton sessions at re:Invent 2024

Get ready to harness the power of Graviton

We believe the future of cloud computing is undoubtedly Arm-powered, and are proud to support AWS in placing Graviton at the forefront of this revolution. Arm continues to invest in further strengthening our software ecosystem and removing any friction for developers to build on Arm – and to access all the performance and efficiency benefits the Arm compute platform delivers.

Developer resources

Here are some key resources and avenues to engage directly with us and AWS Graviton teams:

To arrange a meet-up with an Arm representative at AWS re:Invent 2024, please contact sw-ecosystem@arm.com

The post Accelerating Cloud Innovation with AWS Graviton4 Processors, Powered by Arm Neoverse appeared first on Arm Newsroom.

Beyond the Newsroom: Exploring Tech Innovations from Arm in November 2024

2 December 2024 at 17:00

As we move into the era of advanced computing, Arm is leading the charge with groundbreaking tech innovations. November 2024 has been a month of significant strides in technology innovation, particularly in AI, machine learning (ML), Arm Neoverse-based Kubernetes clusters, and system-on-chip (SoC) architecture.   

The Arm Editorial Team has highlighted the cutting-edge tech innovations that happened at Arm in November 2024 – all of which are set to shape the next generation of intelligent, secure, and high-performing computing systems. 

Harnessing SystemReady to drive software interoperability on Arm-based hardware

Arm’s SystemReady program ensures interoperability and standardization across Arm-based devices. Dong Wei, Standards Architect and Fellow at Arm, talks about how the program benefits the industry by reducing software fragmentation, lowering development costs, and enabling faster deployment of applications on a wide range of Arm hardware. Meanwhile, Pere Garcia, Technical Director at Arm, discusses the certification benefits of SystemReady, including how it simplifies software deployment across different Arm devices, reduces fragmentation, and enhances the reliability of embedded systems.   

Building safe, secure, and versatile software with Rust on Arm

The Rust programming language enhances safety and security in software development for Arm-based systems by eliminating common programming errors at compile time. In the first blog of this three-part Arm Community series, Jonathan Pallant, Senior Engineer and Trainer at Ferrous Systems, explains why Rust’s unique blend of safety, performance, and productivity has gained attention from government security agencies and the White House.

This blend delivers a robust toolchain for mission-critical applications, reduces vulnerabilities, and improves overall software reliability. Part 2 and Part 3 from Jonathan Pallant provide more insights.  

Boosting efficiency with Arm Performance Libraries 24.10

Arm Performance Libraries 24.10 enhance math libraries for 64-bit Arm processors, boosting efficiency in numerical applications with improved matrix-matrix multiplication and Fast Fourier Transforms. Chris Goodyer, Director, Technology Management at Arm, highlights significant speed improvements, including a 100x speed-up in Mersenne Twister random number generation. This offers faster, more accurate computations for engineering, scientific, and ML applications across Linux, macOS, and Windows.  

Enabling real-time sentiment analysis on Arm Neoverse-based Kubernetes clusters

Real-time sentiment analysis on Arm Neoverse-based Kubernetes clusters enables businesses to efficiently process social media data for actionable insights. Na Li, ML Solutions Architect at Arm, demonstrates how AWS Graviton instances with tools like Apache Spark, Elasticsearch, and Kibana provide a scalable, cost-effective framework adaptable across cloud providers to enhance analytics performance and energy efficiency. 

Enhancing control flow integrity with PAC and BTI on AArch64

Enabling Pointer Authentication (PAC) and Branch Target Identification (BTI) on the AArch64 architecture strengthens control flow integrity and reduces gadget space, making software more resilient to attacks. In the first blog of this three-part Arm Community series, Bill Roberts, Principal Software Engineer at Arm, explores how these features enhance security, minimize vulnerabilities, and improve software reliability. Part 2 and Part 3 provide further insights around PAC and BTI. 

Harnessing Arm’s Scalable Vector Extension (SVE) in C#

.NET 9’s support for the Arm Scalable Vector Extension (SVE) allows developers to write more efficient vectorized code. Alan Hayward, Staff Software Engineer, highlights how using SVE in C# improves performance and ease of use, enabling more optimized applications on Arm-based systems.  

Expanding Arm on Arm with the NVIDIA Grace CPU

The NVIDIA Grace CPU, built on Arm Neoverse V2 cores, enhances performance, reduces costs, and improves energy efficiency for high-performance computing (HPC) and data center workloads. Tim Thornton, Director of Arm on Arm, discusses how NVIDIA Grace CPU Superchip-based servers enable Arm to deploy high-performance Arm compute in its own data centers, providing access to the same Neoverse V2 cores used in AWS and Google Cloud.  

Accelerating ML development with Corstone-320 FVP: A guide for Arm Ethos-U85 and Cortex-M85

The new Arm Corstone-320 Fixed Virtual Platform (FVP) lets developers build and test ML applications without physical hardware. Zineb Labrut, Software Product Owner in Arm’s IoT Line of Business, highlights how this platform accelerates development, reduces costs, and mitigates risks associated with hardware dependencies, making it a valuable tool for developers in the embedded and IoT space.

Real-time Twitter/X sentiment analysis on the Arm Neoverse CPU: A KubeCon NA 2024 demo 

Pranay Bakre, a Principal Solutions Engineer at Arm, demonstrates the power of an Arm Neoverse-based CPU as it runs a real-time Twitter/X sentiment analysis program using a StanfordNLP model during KubeCon North America 2024. More information can also be found in this blog.

Learning all about the Fragment Prepass for Arm Immortalis and Mali GPUs

The latest Arm GPUs for consumer devices – the Immortalis-G925, Mali-G725 and Mali-G625 – all adopt a new feature called the Fragment Prepass. Tord Øygard, a Principal GPU Architect at Arm, provides more details about this Hidden Surface Removal technique in this Arm Community blog, explaining how it improves performance and power efficiency when processing geometry workloads for graphics and gaming.

Elsewhere with graphics and gaming at Arm, Ian Bolton, Staff Developer Relations Manager, summarized Arm’s involvement in the inaugural AI and Games Conference in this Arm Community blog.

Exploring .NET 9 and Arm’s SVE with Microsoft and VectorCamp’s simd.info

In the latest “Arm Innovation Coffee”, Kunal Pathak from Microsoft dives into .NET 9 and Arm’s SVE, while Konstantinos Margaritis and Georgios Mermigkis from VectorCamp showcase simd.info, a cutting-edge online reference tool for C intrinsics across major SIMD engines. 

Built on Arm partner stories: AI, automotive, cloud and Windows on Arm

Insyde Software CTO Tim Lewis highlights their groundbreaking AI BIOS product, which leverages AI to simplify firmware settings, and showcases their expertise in developing firmware for everything from laptops to servers. Lewis also explains how collaborating with Arm is driving innovation in power management and security. 

Tilo Schwarz, VP and Head of Autonomy at Nuro, discusses the innovative Nuro Driver, a state-of-the-art software and hardware solution powering autonomous driving technology, and explains how its partnership with Arm is shaping the future of modern transportation. 

Bruce Zhang, Computing Product Architect at Alibaba, discusses how Arm and Alibaba are accelerating AI workloads in the cloud and enabling robust applications that are transforming industries.

Francis Chow, VP and GM of the Edge Business at Red Hat, explains how Red Hat’s solutions, which run on Arm technologies, incorporate leading-edge performance, power efficiency and real-time data processing features as part of the industry-wide move to modern software-defined vehicles.

Aidan Fitzpatrick, CEO and Founder of Reincubate, a London-based software company, talks about the benefits of Windows on Arm to their flagship application Camo, which enables users to create high-quality video content. Main benefits include advanced AI-powered features and performance without sacrificing battery life.

Exploring the latest trends in the automotive and connected vehicle space

As part of a panel at Reuters Event Automotive USA, Dipti Vachani, SVP and GM of the Automotive Line of Business at Arm, highlights the latest trends and challenges in the automotive and connected vehicle space.  

The post Beyond the Newsroom: Exploring Tech Innovations from Arm in November 2024 appeared first on Arm Newsroom.

Arm Tech Symposia: AI Technology Transformation Requires Unprecedented Ecosystem Collaborations

22 November 2024 at 15:57

The Arm Tech Symposia 2024 events in China, Japan, South Korea and Taiwan were some of the biggest and best-attended events ever held by Arm in Asia. The scale of the events was matched by the significance of the moment facing the technology industry.

As Chris Bergey, SVP and GM of Arm’s Client Line of Business, said in the Tech Symposia keynote presentation in Taiwan: “This is the most important moment in the history of technology.”  

There are significant opportunities for AI to transform billions of lives around the world, but only if the ecosystem works together like never before.

Chris Bergey, SVP and GM of the Arm Client Line of Business, welcomes attendees to Arm Tech Symposia 2024

A re-thinking of silicon

At the heart of these ecosystem collaborations is a broad re-think of how the industry approaches the development and deployment of technologies. This is particularly applicable to the semiconductor industry, with silicon no longer a series of unrelated components but instead becoming “the new motherboard” to meet the demands of AI.

This means multiple components co-existing within the same package, providing better latency, increased bandwidth and more power efficiency.

Silicon technologies are already transforming the everyday lives of people worldwide, enabling innovative AI features on smartphones, like the real-time translation of languages and text summarization, to name a few.

As James McNiven, VP of Product Management for Arm’s Client Line of Business, stated in the South Korea Tech Symposia keynote: “AI is about making our future better. The potential impact of AI is transformative.”

The importance of the Arm Compute Platform

The Arm Compute Platform is playing a significant role in the growth of AI. It combines hardware and software to deliver best-in-class technology solutions for a wide range of markets, whether that’s AI smartphones, software-defined vehicles or data centers.

This is supported by the world’s largest software ecosystem, with more than 20 million software developers writing software for Arm, on Arm. In fact, all the Tech Symposia keynotes made the following statement: “We know that hardware is nothing without software.”

Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, outlines the software benefits of the Arm Compute Platform

How software “drives the technology flywheel”

Software has always been an integral part of the Arm Compute Platform, with Arm delivering the ideal platform for developers to “make their dreams (applications) a reality” in three key ways.

Firstly, Arm’s consistent compute platform touches 100 percent of the world’s connected population. This means developers can “write once and deploy everywhere.”

The foundation of the platform is the Arm architecture and its continuous evolution through the regular introduction of new features and instruction-sets that accelerate key workloads to benefit developers and the end-user.

SVE2 is one feature present across AI-enabled flagship smartphones built on the new MediaTek Dimensity 9400 chipset. Its vector instructions improve video and image processing capabilities, leading to better-quality photos and longer-lasting video.

The Arm investment into AI architectural features at Arm Tech Symposia Shanghai

Secondly, the platform provides acceleration capabilities that deliver optimized performance for developers’ applications. This is not just about high-end accelerator chips, but about having access to AI-enabled software to unlock performance.

One example of this is Arm Kleidi, which seamlessly integrates with leading frameworks to ensure AI workloads run best on the Arm CPU. Developers can then unlock this accelerated performance with no additional work required.

At the Arm Tech Symposia Japan event, Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, said: “We are committed to abstracting away the hardware from the developer, so they can focus on creating world changing applications without having to worry about any technical complexities around performance or integration.”

This means that when new versions of Meta’s Llama, Google AI Edge’s MediaPipe and Tencent’s Hunyuan come online, developers can be confident that no performance is being left on the table with the Arm CPU.

Kleidi integrations are set to accelerate billions of AI workloads on the Arm Compute Platform, with the recent PyTorch integration leading to 2.5x faster time-to-first token on Arm-based AWS Graviton processors when running the Llama 3 large language model (LLM).

James McNiven, VP of Product Management for Arm’s Client Line of Business, discusses Arm Kleidi

Finally, developers need a platform that is easy to access and use. Arm has made this a reality through significant software investments that ensure developing on the Arm Compute Platform is a simplified, seamless experience that “just works.”

As each Arm Tech Symposia keynote speaker summarized: “The power of Arm and our ecosystem is that we deliver what developers need to simplify the process, accelerate time-to-market, save costs and optimize performance.”

The role of the Arm ecosystem

The importance of the Arm ecosystem in making new technologies a reality was highlighted throughout the keynote presentations. This is especially true for new silicon designs that require a combination of core expertise across many different areas.

As Dermot O’Driscoll, VP, Product Management for Arm’s Infrastructure Line of Business, said at the Arm Tech Symposia event in Shanghai, China: “No one company will be able to cover every single level of design and integration alone.”

Dermot O’Driscoll, VP, Product Management for Arm’s Infrastructure Line of Business, speaks at the Arm Tech Symposia event in Shanghai, China

Empowering these powerful ecosystem collaborations is a core aim of Arm Total Design, which enables the ecosystem to accelerate the development and deployment of silicon solutions that are more effective, efficient and performant. The program is growing worldwide, with the number of members doubling since the program was launched in late 2023. Each Arm Total Design partner offers something unique that accelerates future silicon designs, particularly those that are built on Arm Neoverse Compute Subsystems (CSS).

One company that exemplifies the spirit and value of Arm Total Design is South Korea-based Rebellions. Recently, it announced the development of a new large-scale AI platform, the REBEL AI platform, to drive power efficiency for AI workloads. Built on Arm Neoverse V3 CSS, the platform uses a 2nm process node and packaging from Samsung Foundry and leverages design services from ADtechnology. This demonstrates true ecosystem collaboration, with different companies offering different types of highly valuable expertise.

Dermot O’Driscoll said: “The AI era requires custom silicon, and it’s only made possible because everyone in this ecosystem is working together, lifting each other up and making it possible to quickly and efficiently meet the rising demands of AI.”

Chris Bergey at the Arm Tech Symposia event in Taiwan talks about the new chiplet ecosystem being enabled on Arm

Arm Total Design is also helping to enable a new thriving chiplet ecosystem that already involves over 50 leading technology partners who are working with Arm on the Chiplet System Architecture (CSA). This is creating the framework for standards that will enable a thriving chiplet market, which is key to meeting ongoing silicon design and compute challenges in the age of AI.

The journey to 100 billion Arm-based devices running AI

All the keynote speakers closed their Arm Tech Symposia keynotes by reinforcing the commitment that Arm CEO Rene Haas made at COMPUTEX in June 2024: 100 billion Arm-based devices running AI by the end of 2025.

James McNiven closes the Arm Tech Symposia keynote in Shenzhen

However, this goal is only possible if ecosystem partners from every corner of the technology industry work together like never before. Fortunately, as explained in all the keynotes, there are already many examples of this work in action.

The Arm Compute Platform sits at the center of these ecosystem collaborations, providing the technology foundation for AI that will help to transform billions of lives around the world.

The post Arm Tech Symposia: AI Technology Transformation Requires Unprecedented Ecosystem Collaborations appeared first on Arm Newsroom.

Unlocking New Possibilities in Cloud Deployment with Arm at KubeCon NA 2024

21 November 2024 at 23:00

As developers and platform engineers seek greater performance, efficiency, and scalability for their workloads, Arm-based cloud services provide a powerful and trusted solution. At KubeCon NA 2024, we had the pleasure of meeting many of these developers face-to-face to showcase Arm solutions as they migrate to Arm.   

Today, all major hyperscalers, including Amazon Web Services (AWS), Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure (OCI), offer Arm-based servers optimized for modern cloud-native applications. This shift offers a significant opportunity for organizations to improve price-performance ratios, deliver a lower total cost of ownership (TCO), and meet sustainability goals, while gaining access to a robust ecosystem of tools and support.  

At KubeCon NA, it was amazing to hear those in the Arm software ecosystem share their migration stories and the new possibilities they’ve unlocked. 

Arm from cloud to edge at KubeCon

Building on Arm unlocks a wide range of options from cloud to edge. It enables developers to run their applications seamlessly in the cloud, while tapping into the entire Arm software and embedded ecosystem and respective workflows. 

Arm-based servers are now integrated across leading cloud providers, making them a preferred choice for many organizations looking to enhance their infrastructure. At KubeCon NA 2024, attendees learned about the latest custom Arm compute offerings available from major cloud service providers including: 

  • AWS Graviton series for enhanced performance and energy efficiency; 
  • Microsoft Azure Arm-based VMs for scalable, cost-effective solutions; 
  • Google Cloud’s Tau T2A instances for price-performance optimization; and 
  • OCI Ampere A1 Compute for flexible and powerful cloud-native services.
Developer kit based on the Ampere Altra SoC512

Ampere showcased their Arm-based hardware in multiple form factors across different partner booths at the show to demonstrate how the Arm compute platform is enabling server workloads both in the cloud and on premises.

System76’s Thelio Astra, an Arm64 developer desktop featuring Ampere Altra processors, was also prominently displayed in booths across the KubeCon NA show floor. The workstation is streamlining developer workflows for Linux development and deployment across various markets, including automotive and IoT.

System76’s Thelio Astra

During the show, the Thelio Astra showcased its IoT capabilities by aggregating and processing audio sensor data from Arduino devices to assess booth traffic. This demonstrated cloud-connected IoT workloads in action. 

Cloud-to-edge workloads with Arm-based compute, from Arduino endpoints to Ampere servers running lightweight Fermyon Kubernetes and WASM

Migrating to Arm has never been easier

Migrating workloads to Arm-based servers is more straightforward than ever. Today, 95% of graduated CNCF (Cloud Native Computing Foundation) projects are optimized for Arm, ensuring seamless, efficient, and high-performance execution.  

Companies of all sizes visited the Arm booth at KubeCon NA to tell us about their migration journey and learn how to take advantage of the latest developer technologies. They included leading financial institutions, global telecommunications providers and large retail brands.  

For developers ready to add multi-architecture support to their deployments, we demonstrated a new tool – kubearchinspect – that can be deployed on a Kubernetes cluster and scan container images to check for Arm compatibility. Check out our GitHub repo to get started and learn how to validate Arm support for your container images. 
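
The idea behind such a check is simple: a multi-architecture image publishes a manifest list in which each entry declares its platform. The sketch below is a simplified stand-in for that kind of check, not kubearchinspect itself; it shells out to "docker manifest inspect" and looks for an arm64 entry.

```python
# Simplified stand-in for an Arm-compatibility check on container images
# (not kubearchinspect itself): inspect the manifest list and look for arm64.
import json
import subprocess
import sys

def supports_arm64(image: str) -> bool:
    result = subprocess.run(
        ["docker", "manifest", "inspect", image],
        capture_output=True, text=True, check=True,
    )
    manifest = json.loads(result.stdout)
    entries = manifest.get("manifests", [])  # absent for single-arch images
    return any(e.get("platform", {}).get("architecture") == "arm64" for e in entries)

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "nginx:latest"
    print(f"{image}: arm64 supported = {supports_arm64(image)}")
```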

Hundreds of independent software vendors (ISVs) are enabling their applications and services on Arm, with developers easily monitoring application performance and managing their workloads via the Arm Software Dashboard.  

For developers, the integration of GitHub Actions, GitHub Runners, and the soon-to-be-available Arm extension for GitHub Copilot means a seamless cloud-native CI/CD workflow is now fully supported on Arm. Graduated projects can scale using cost-effective Arm runners, while incubating projects benefit from lower pricing and improved support from open-source Arm runners. 

Extensive Arm ecosystem and Kubernetes support 

As Kubernetes continues to grow, with 5.6 million developers worldwide, expanding the contributor base is essential to sustaining the cloud-native community and supporting its adoption in technology stacks. Whether developers are using AWS EKS, Azure AKS, or OCI’s Kubernetes service, Arm is integrated to provide native support. This enables the smooth deployment and management of containerized applications. 
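
In mixed clusters, each node advertises its CPU architecture through the standard kubernetes.io/arch label, so targeting Arm nodes is just a label selector away. The sketch below, which assumes the kubernetes Python package and a configured kubeconfig, simply lists each node's architecture.

```python
# Sketch: list each node's CPU architecture via the standard kubernetes.io/arch
# label. Assumes `pip install kubernetes` and a kubeconfig for the cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    arch = node.metadata.labels.get("kubernetes.io/arch", "unknown")
    print(f"{node.metadata.name}: {arch}")
```

Scheduling a workload onto those nodes then needs nothing more than a matching nodeSelector in the pod spec.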

Scaling AI workloads and optimizing complex inference pipelines can be challenging across different architectures. Developers can deploy their AI models across distributed infrastructure, seamlessly integrating with the latest AI frameworks to enhance processing efficiency.  

Through a demonstration at the Arm booth, Pranay Bakre, a Principal Solutions Engineer at Arm, showcased AI over Kubernetes. This brought together the Kubernetes, Prometheus and Grafana open-source projects into a power-efficient, real-time, scalable sentiment analysis application. More information about how to enable real-time sentiment analysis on Arm Neoverse-based Kubernetes clusters can be found in this Arm Community blog.

Pranay Bakre explains the real-time sentiment analysis demo at KubeCon NA 2024
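
The scoring step at the heart of such a pipeline can be very small. The snippet below uses a generic Hugging Face sentiment pipeline as a stand-in for the StanfordNLP model used in the demo; the surrounding Kubernetes, Prometheus and Grafana plumbing is omitted.

```python
# Stand-in for the sentiment-scoring step of the demo pipeline. The demo used
# a StanfordNLP model; a generic Hugging Face pipeline illustrates the idea.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model

posts = [
    "Loving the battery life on this new laptop!",
    "The checkout page keeps timing out, very frustrating.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```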

Pranay also showcased the ability to run AKS on the very latest Arm Neoverse-powered Microsoft Azure Cobalt 100 processors. To jumpstart running Kubernetes via AKS on Microsoft Azure Cobalt 100, check out this learning path and the corresponding GitHub repo.

Additionally, at KubeCon NA 2024, we launched a pilot expansion of our “Works on Arm” program into the CNCF community. This offers comprehensive resources to help scale and optimize cloud-native projects on the Arm architecture. Developers can click here to take a short survey and request to be included in this new initiative. 

Switch to Arm for smarter deployment and scalable performance 

As demonstrated at KubeCon 2024, Arm is transforming cloud-native deployment and accelerating the developer migration to Arm. 

In fact, now is the perfect time to harness Arm-based cloud services for better performance, lower costs, and scalable flexibility. Developers can start building or migrating today to deploy smarter, optimized cloud-native applications on Arm, for Arm. 

Developers are welcome to join us at KubeCon Europe in April 2025 to learn more about our latest advancements in platform engineering and cloud-native technologies.

The post Unlocking New Possibilities in Cloud Deployment with Arm at KubeCon NA 2024 appeared first on Arm Newsroom.

Igniting a New Era of Cloud Computing for AI

20 November 2024 at 23:54

We’re living in a generation of compute that is being defined by AI – a transformation that is happening at a pace unlike anything we’ve seen before. Arm remains on the critical path to enabling this AI-accelerated future in a sustainable and scalable way, providing new engineering innovation and developments to make it happen. It’s clear to me that this vision is shared across our ecosystem, including at this week’s Microsoft Ignite event.

Across the many AI advancements announced by Microsoft, it’s evident they are on the path to building a sustainable, scalable, and secure platform for AI and that they’re dedicated to changing the way developers build, deploy, and scale their applications in the cloud. Arm’s collaboration with Microsoft on Azure Cobalt 100 has already shifted the landscape of cloud data centers and the services offered by Microsoft in just one year since its launch in 2023. By leveraging the flexibility and power-efficiency of Arm Neoverse Compute Subsystems (CSS), Microsoft is pushing the boundaries of compute with Cobalt 100, establishing a capable and flexible infrastructure supporting a wide variety of mission critical modern applications — from media servers and open-source databases to CI/CD pipelines. 

AI has not only opened the world’s eyes to the power challenge in the datacenter, but it has unlocked a greater emphasis on the need for more specialized silicon. Every watt counts, and for change-makers like Microsoft, this means taking greater control over the entire infrastructure stack from silicon to cloud service deployment with sustainability in focus.  

As mentioned in the Microsoft keynote, 100% of Microsoft Teams’ media processing capabilities now run on Cobalt 100, which is a testament to purpose-built compute delivering the required performance as efficiently as possible. This is the mission that Neoverse CSS was built for. Through tailored solutions like Cobalt 100, Microsoft is setting the stage for a future-ready cloud, capable of handling the growing demands of AI-enabled workloads without pushing energy consumption to unsustainable levels. To dig in on the impressive performance gains delivered by Cobalt 100-powered VMs to date, I encourage you to check out this week’s Arm Viewpoints podcast with Arpita Chatterjee, Senior Product Manager for Azure Platforms. And if you happen to tune into the Microsoft Ignite digital event, check out Arm’s virtual booth.

In addition to the impressive Cobalt 100 momentum to date, Microsoft announced they will be the first cloud vendor to make instances based on NVIDIA’s Grace Blackwell platform available. Consisting of 72 Arm Neoverse V2 cores connected through a high-bandwidth coherent link to NVIDIA’s latest Blackwell accelerator, Grace Blackwell is a great example of the kind of specialized silicon the Arm platform enables our partners to build, in this case targeting the most demanding AI training and inference workloads. 

The groundwork for an AI-powered future

Arm’s longstanding partnership with Microsoft has been instrumental in our mission to enable a modern AI-enabled data center with specialized silicon, but silicon is not the limit of our work together. We’re partnering to make it as easy as possible for developers to transition their workloads to optimized, Arm-based platforms. With tools like the Arm Software Ecosystem Dashboard and a robust library of Azure-specific tutorials and resources, developers are getting access to a comprehensive view of software packages supported on Arm and hands-on instructions to seamlessly migrate and run their applications on Arm-based Microsoft Azure instances. One example I’m particularly excited about is the new Arm extensions for GitHub Copilot, which will offer specialized tools for AI and standard code development, such as code migration, containerization, CI/CD workflows, and performance optimization. We’ll be releasing it in the GitHub Marketplace this year, so watch this space for more updates on availability! 

Cobalt 100 is only one example of a movement toward Arm-based purpose-built computing solutions that is happening across the broader data center landscape. The Arm architecture is becoming the foundation for specialized silicon needed to achieve the performance and efficiency required to succeed in the AI era. Alongside decades of investment in a robust software ecosystem to help developers bring their AI innovations to life, this is the groundwork for an AI-powered future that brings innovative advances in sciences, commerce, productivity and more. 

The post Igniting a New Era of Cloud Computing for AI appeared first on Arm Newsroom.

Building the Future of AI on Arm at AI Expo Africa 2024

19 November 2024 at 21:00

At AI Expo Africa 2024, Arm brought together AI developers, enthusiasts, and industry leaders through immersive workshops, insightful talks, exclusive networking opportunities, and an engaging booth experience. The event is Africa’s largest AI conference and trade show, with over 2,000 delegates from all over the African continent.

Arm has been attending AI Expo Africa for the past three years, and this year we noted a significant uptick in AI applications running on Arm and a definite thirst for knowledge about how best to deploy and accelerate AI on Arm. Held at the Sandton Convention Centre in Johannesburg, South Africa, the event saw Arm leave a strong impact on the AI developer ecosystem, fostering connections and sparking innovation through expert insights from Arm tech leaders and Ambassadors from the Arm Developer Program.

Arm Ambassadors are a group of experts and community leaders developing on Arm who support and help lead the Developer Program through a host of Arm-endorsed activities like the various talks, workshops and engagements at AI Expo Africa. At the event, there were Arm Ambassadors from Ghana, Kenya, Switzerland and, of course, South Africa in attendance.

Day 1: Workshops and live demos

Arm kicked off with a high-energy workshop that saw an incredible turnout. Shola Akinrolie, Senior Manager for the Arm Developer Program, opened the session with a keynote introduction, setting the stage for a deep dive into Arm’s AI technology and its community-driven initiatives.

Distinguished Arm Ambassador Peter Ing then took the spotlight, showing how to run AI models at the edge on the Arm Compute Platform. He demonstrated the Llama 3.2 1B model running on a Samsung mobile device, showcasing real-time AI inference capabilities and illustrating how Arm is creating new opportunities for running small language models on the edge. The live demo left the audience captivated by the performance and efficiency of the Arm Compute Platform.

Arm’s keynote introduction at the AI Expo Africa 2024

Another standout session was led by Distinguished Arm Ambassador Dominica Abena Oforiwaa Amanfo, who shared her expertise on the Grove Vision AI V2 microcontroller (MCU), which is powered by a dual-core Arm Cortex-M55 CPU and an Ethos-U55 NPU. Dominica highlighted its TinyML capabilities, as well as its compatibility with PyTorch and ExecuTorch. This showcased the reach and versatility of low-power, high-impact AI innovations that are powered by Arm.

Developer session led by Distinguished Arm Ambassador Dominica Abena Oforiwaa Amanfo

The Arm booth: A hub of innovation

At AI Expo Africa, the Arm booth was bustling with energy, drawing hundreds of developers eager to experience Arm’s technology first-hand. The team engaged with visitors in discussions and hands-on demos. The booth was packed with excitement, from insightful tech exchanges to exclusive SWAG giveaways, including a highly sought-after Raspberry Pi MCU!

To end the day, Arm hosted an exclusive Arm Developer Networking Dinner. The evening was filled with lively discussions led by Arm’s Director of Software Technologies Rod Crawford and Arm Developer Program Ambassadors, as they shared their insights on AI’s future and the impact of edge computing across various industries.

The Arm booth at AI Expo Africa

Day 2: Inspiring talks and networking

On day two of the event, Arm’s Rod Crawford captivated the audience with a powerful talk on “Empowering AI from Cloud to Edge.” Rod shared how Arm supports developers in harnessing the full potential of AI, from efficient cloud computing to high-performance, edge-based AI solutions. This means developers can create more powerful applications that work better and faster.

The talk demonstrated how both generative AI and classic AI workloads can run across the entire spectrum of computing on Arm, from powerful cloud services to mobile and IoT devices. Through Arm Kleidi, Arm is engaging with leading AI frameworks, like MediaPipe, ExecuTorch and PyTorch, to ensure developers can seamlessly take advantage of AI acceleration on Arm CPUs without any changes to their code. Rod’s insights were met with enthusiasm as developers learned how Arm’s technologies accelerate AI deployment, even for the most demanding applications.

The final day wrapped up with a high-spirited “Innovation Coffee” session, offering attendees a relaxed environment to connect and reflect on Arm’s advancements. Stay tuned for highlights of this session on the Arm Software Developers YouTube channel.

A heartfelt thanks

Arm extends its deepest gratitude to everyone who contributed to and joined us at AI Expo Africa. Special thanks to the Arm team—Rod Crawford, Gemma Platt, and Stephen Ozoigbo—as well as the incredible Arm Developer Program Ambassadors Peter Ing, Dominica Amanfo, Derrick Sosoo, Brenda Mboya, and Tshega Mampshika for their hard work and passion. We also appreciate Marvin Rotermund, Nomalungelo Maphanga, Stephania Obaa Yaa Bempomaa, and Mia Muylaert for their energy and support at the booth.

Here are what some of the Arm Developer Program Ambassadors had to say about the event:

Brenda Mboya: “One of my favorite moments at the event was seeing the lightbulb go off for attendees who visited the Arm booth and realized how integral Arm has been in their lives. It was an honor to engage with young people interested in utilizing Arm-based technology in their school initiatives and I am glad that I was able to direct them to sign-up to be part of the Arm Developer Program.”

Derrick Sosoo: “Arm’s presence at AI Expo Africa 2024 marked a significant shift towards building strong connections with developers through immersive experiences. Our engaging workshops, insightful talks, Arm Developer meetup, and interactive booth showcase left an indelible mark on attendees.”

Dominica Amanfo: “We witnessed overwhelming interest from visitors eager to learn about AI on Arm and our Developer Program. I’m particularly grateful for the opportunity to collaborate with fellow Arm Ambassadors alongside our dedicated support team at the booth, which included students from the DUT Arm (E³)NGAGE Student Club.”

The future of AI is built on Arm

By uniting innovators, developers, and enthusiasts, Arm is leading the charge in shaping the future of AI. Together, we’re building a community that will drive the future of AI on Arm, empowering developers worldwide to innovate and bring cutting-edge technology to life.

Learn more about Arm’s developer initiatives and join the journey at Arm Developer Program.

The post Building the Future of AI on Arm at AI Expo Africa 2024 appeared first on Arm Newsroom.

Arm Ethos-U85 NPU: Unlocking Generative AI at the Edge with Small Language Models

13 November 2024 at 16:30

As artificial intelligence evolves, there is increasing excitement about executing AI workloads on embedded devices using small language models (SLM).  
 
Arm’s recent demo, inspired by Microsoft’s “Tiny Stories” paper and Andrej Karpathy’s TinyLlama2 project, uses a small language model trained on 21 million stories to generate text, showcasing endpoint AI’s potential for IoT and edge computing. In the demo, a user inputs a sentence, and the system generates an extended children’s story based on it. 
 
Our demo featured Arm’s Ethos-U85 NPU (Neural Processing Unit) running a small language model on embedded hardware. While large language models (LLMs) are more widely known, there is growing interest in small language models due to their ability to deliver solid performance with significantly fewer resources and lower costs, making them easier and cheaper to train.  

Implementing a Transformer-based Small Language Model on Embedded Hardware

Our demo showcased the Arm Ethos-U85 as a small, low-power platform capable of running generative AI, highlighting that small language models can perform well within narrow domains. Although TinyLlama2 models are simpler than the larger models from companies like Meta, they are ideal for showcasing the Ethos-U85’s AI capabilities. This makes them a great fit for endpoint AI workloads. 

Developing the demo involved significant modeling efforts, including the creation of a fully integer int8 (and int8x16) Tiny Llama2 model, which was converted to a fixed-shape TensorFlow Lite format suitable for the Ethos-U85’s constraints.  
 
Our quantization approach has shown that fully integer language models can maintain strong accuracy and output quality. By quantizing activations, normalization functions, and matrix multiplications, we eliminated the need for floating-point computations, which are more costly in terms of silicon area and energy, both key concerns for constrained embedded devices.  
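
As a rough sketch of what full-integer conversion looks like, the example below uses the TensorFlow Lite converter's int8 path with a representative dataset. The Keras model and data here are placeholders rather than the Tiny Llama2 model, and the int8x16 variant mentioned above uses the converter's 16-bit-activation mode instead of the plain int8 settings shown.

```python
# Rough sketch of full-integer (int8) TFLite conversion of the kind used to fit
# a model within an NPU's constraints. The model and data are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32),
])

def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # fully integer in and out
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```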
 
The Ethos-U85 ran a language model on an FPGA platform at only 32 MHz, achieving text generation speeds of 7.5 to 8 tokens per second—matching human reading speed—while using just a quarter of its compute capacity. In a real system-on-chip (SoC), performance could be up to ten times faster, significantly enhancing speed and energy efficiency for AI processing at the edge. 

The children’s story-generation feature used an open-source version of Llama2, running the demo on TFLite Micro with an Ethos-NPU back-end. Most of the inference logic was written in C++ at the application level. Adjusting the context window enhanced narrative coherence, ensuring smooth, AI-driven storytelling.  
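
Conceptually, that application-level loop is straightforward: feed the model the most recent tokens, pick the next one, append it, and slide the window as it fills. The Python sketch below mirrors the idea only; the next_token_logits function is a placeholder for the TFLite Micro / Ethos-U inference call, and the window size is an assumed value.

```python
# Conceptual sketch of a story-generation loop with a sliding context window.
# `next_token_logits` stands in for the TFLite Micro / Ethos-U inference call.
import numpy as np

CONTEXT_LEN = 256   # assumed window size, not the demo's exact value
EOS_TOKEN = 2

def next_token_logits(tokens):
    # Placeholder: the real demo runs the quantized model on the NPU here.
    rng = np.random.default_rng(sum(tokens))
    return rng.standard_normal(32000)

def generate(prompt_tokens, max_new=200):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        window = tokens[-CONTEXT_LEN:]        # keep only the recent context
        logits = next_token_logits(window)
        next_tok = int(np.argmax(logits))     # greedy sampling for brevity
        tokens.append(next_tok)
        if next_tok == EOS_TOKEN:
            break
    return tokens

print(len(generate([1, 5, 42])), "tokens generated")
```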
 
The team’s adaptation of the Llama2 model to run efficiently on the Ethos-U85 NPU required careful consideration of performance and accuracy due to the hardware limitations. Using mixed int8 and int16 quantization demonstrates the potential of fully integer models, encouraging the AI community to optimize generative models for edge devices and expand neural network accessibility on power-efficient platforms like the Ethos-U85. 

Showcasing the Power of the Arm Ethos-U85

Scalable from 128 to 2048 MAC units (multiply-accumulate units), the Ethos-U85 achieves a 20% power efficiency improvement over its predecessor, the Ethos-U65. A standout feature of the Ethos-U85 is its native support for transformer networks, which earlier versions could not run.  
 
The Ethos-U85 enables seamless migration for partners using previous Ethos-U NPUs, allowing them to capitalize on existing investments in Arm-based machine learning tools. Developers are increasingly adopting the Ethos-U85 for its power efficiency and high performance.

The Ethos-U85 can reach 4 TOPS (trillions of operations per second) with a 2048 MAC configuration in silicon. In the demo, however, a smaller configuration of 512 MACs on an FPGA was used to run the Tiny Llama2 small language model with 15 million parameters at just 32 MHz.   
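
The headline figures follow from simple arithmetic: each MAC unit performs two operations (a multiply and an add) per cycle, so peak throughput is roughly 2 x MAC count x clock frequency. The 1 GHz silicon clock below is an assumption used only to show the shape of the calculation; the 512-MAC, 32 MHz figures match the FPGA demo configuration described above.

```python
# Back-of-envelope NPU throughput: ops/s = 2 ops per MAC x MAC count x clock.
def ops_per_second(macs, clock_hz):
    return 2 * macs * clock_hz  # one multiply + one add per MAC per cycle

print(f"2048 MACs @ 1 GHz (assumed clock): {ops_per_second(2048, 1e9) / 1e12:.2f} TOPS")
print(f" 512 MACs @ 32 MHz (FPGA demo)   : {ops_per_second(512, 32e6) / 1e9:.1f} GOPS")
```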
 
This capability highlights the potential for embedding AI directly into devices. The Ethos-U85 effectively handles such workloads even with limited memory (320 KB of SRAM for caching and 32 MB for storage), paving the way for small language models and other AI applications to thrive in deeply embedded systems. 

Bringing Generative AI to Embedded Devices

Developers need better tools to navigate the complexities of AI at the edge, and Arm is addressing this with the Ethos-U85 and its support for transformer-based models. As edge AI becomes more prominent in embedded applications, the Ethos-U85 is enabling new use cases, from language models to advanced vision tasks.  
 
The Ethos-U85 NPU delivers the performance and power efficiency required for innovative, cutting-edge solutions. Like the “Tiny Stories” paper, our demo represents a significant advancement in bringing generative AI to embedded devices, demonstrating the ease of deploying small language models on the Arm platform.  
 
Arm is opening new possibilities for Edge AI across a wide range of applications, positioning the Ethos-U85 to power the next generation of intelligent, low-power devices.  

Read how Arm is accelerating real-time processing for edge AI applications in IoT with ExecuTorch.

The post Arm Ethos-U85 NPU: Unlocking Generative AI at the Edge with Small Language Models appeared first on Arm Newsroom.

Equal1’s Quantum Computing Breakthrough with Arm Technology

13 November 2024 at 15:00

When you’re driving hard to disrupt quantum computing paradigms, sometimes it’s smart to chill out. 

That’s Equal1’s philosophy. The Ireland-based company has notched another milestone on its journey deeper into the rapidly evolving field of quantum computing. Building on its success as winners of the “Silicon Startups Contest” in 2023, Equal1 has successfully tested the first chip incorporating an Arm Cortex processor at an astonishing temperature of 3.3 Kelvin (-269.85°C). That’s just a few degrees warmer than absolute zero, the theoretical lowest possible temperature where atomic motion nearly stops.

Equal1’s achievement is a crucial step in integrating classical computing components within the extremely power-constrained environment of a quantum cryo chamber. This brings the world closer to practical, scalable quantum computing systems. Cold temperatures reduce thermal noise that can cause errors in quantum computations and preserve quantum “coherence” – the ability of qubits to exist in multiple states simultaneously.

The Importance of Cryogenic Temperatures in Quantum Computing

What sets Equal1 apart in the quantum computing landscape is its pragmatic approach to quantum integration. Rather than creating entirely new infrastructure, Equal1’s vision was to build upon the foundation of the well-established semiconductor industry. This strategy became viable with the emergence of fully depleted silicon-on-insulator (FDSOI) processes, which the company’s founders recognized as having the potential to support quantum operations.

“Our thesis is that rather than tear up everything we’ve done and start anew, let’s try to build on top of what we’ve already built,” said Jason Lynch, CEO of Equal1. This philosophy has led to partnerships with industry leaders like Arm and NVIDIA, leveraging existing semiconductor expertise while pushing into quantum territory.

Cryo-Temperature Breakthrough

What makes this accomplishment particularly remarkable is the extensive engineering required to make it possible. 

“There is no such thing as a Spice Kit that works, that predicts what silicon is going to do at 3 Kelvin,” said Brendan Barry, Equal1’s CTO. “In fact, there’s no such thing as a methodology, no libraries you can get to make it happen.” 

Over five years, Equal1, which is part of the Arm Flexible Access program, developed its own internal Process Design Kit (PDK) and methodologies to predict and optimize logic behavior at cryogenic temperatures.

Equal1’s approach uses electrons or holes (the absence of electrons) as qubits, making their technology uniquely compatible with standard CMOS manufacturing processes. This choice wasn’t accidental; it’s fundamental to the company’s vision of creating practical, manufacturable quantum computers.

Arm silicon startup spotlight: Equal1

Working with commercial CMOS Fabs, Equal1 uses a standard process with proprietary design techniques developed over six years of research. These techniques enable operation at cryogenic temperatures while maintaining manufacturability. 

“We’re not changing anything in the process itself, but we are certainly pushing the limits of what the process can do,” Barry said.

Integrating the Arm Cortex-A55 Processor

Building on this success, Equal1 is now setting its sights even higher. The company plans to incorporate the more powerful Arm Cortex-A55 processor into its next-generation Quantum System-on-Chip (QSoC). This ambitious project aims to have silicon available by mid-2025, the company said.

The integration of Arm technology is crucial not just for processing power, but for power efficiency. At cryogenic temperatures, power management becomes critical as any heat generated can affect the quantum states. Arm’s advanced power-management features make it an ideal choice for this challenging environment.

Equal1’s technology targets three primary application areas:

  • Chemistry and drug discovery, potentially reducing the current 15-year, $1.3 billion average cost of bringing new drugs to market.
  • Optimization problems in finance, logistics, and other fields requiring complex variable management.
  • Quantum AI applications, where quantum computing could dramatically improve efficiency.

Perhaps most revolutionary is Equal1’s approach to deployment. Unlike traditional quantum computers that require specialized facilities, Equal1 envisions rack-mounted quantum computers that can be installed in standard data centers at a fraction of the cost of current solutions. 

“They just rack in like any other standard high-performance compute,” said Patrick McNally, Equal1’s marketing lead.

The Road Ahead for Quantum Computing and Equal1

Equal1’s progress brings the world closer to the reality of compact, powerful quantum computers that can be deployed in standard high-performance computing environments. The company’s integration of Arm technology at cryogenic temperatures opens new possibilities for quantum-classical hybrid systems, potentially creating increased demand for Arm adoption across the quantum computing industry.

As quantum computing continues to evolve, Equal1’s practical approach to integration with existing semiconductor technology and infrastructure could prove to be a game-changer. With applications ranging from drug discovery to financial modeling and beyond, the future of quantum computing is looking increasingly accessible and practical.

And that’s pretty cool.

The post Equal1’s Quantum Computing Breakthrough with Arm Technology appeared first on Arm Newsroom.

A New Game-Changer for Arm Linux Development in Automotive Applications

12 November 2024 at 23:30

The rising adoption of advanced driver-assistance systems (ADAS), autonomous driving (AD) features, and software capabilities in software-defined vehicles (SDVs) is leading to growing computing complexities, particularly for software and developers. This has created a demand for more efficient, reliable, and powerful tools that streamline and strengthen the automotive development experience.  

System76 and Ampere have responded to this need with Thelio Astra, an Arm64 developer desktop designed to revolutionize the Arm Linux development process for automotive applications. This innovative desktop offers developers the performance, compatibility, and reliability to push the boundaries of new and advancing automotive technologies. 

Unlocking the potential of automotive software with Thelio Astra 

Designed to meet the rigorous demands of ADAS, AD, and SDVs, the Thelio Astra uses the same architecture as Arm-based automotive electronic control units (ECUs). This architectural consistency ensures that software developed for automotive applications runs efficiently on Arm-based systems without additional modifications.  

This native-development environment provides faster, more cost-effective, and more power-efficient software testing, promoting safer roads with smarter prototypes. Moreover, by using the same architecture in both build and deployment environments, developers avoid cross-compilation, which simplifies the build, test, and deployment process.  
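
A small illustration of what avoiding cross-compilation means in practice: a build or test script can simply assert that it is running natively on the same architecture as the target ECU before doing any work. The target architecture name below is an assumption for illustration.

```python
# Sketch: assert that builds and tests run natively on the target architecture,
# so no cross-compilation or emulation sits in the development loop.
import platform
import sys

TARGET_ARCH = "aarch64"  # assumed: same architecture as the Arm-based ECU

host_arch = platform.machine()
if host_arch != TARGET_ARCH:
    sys.exit(f"Expected a native {TARGET_ARCH} build host, got {host_arch}")

print(f"Building and testing natively on {host_arch}; no emulation required.")
```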

Key benefits of Thelio Astra

  • Access to native performance: Developers can execute build and test cycles directly on Arm Neoverse processors, eliminating the performance overhead and complexities associated with instruction emulation and cross-compilation. 
  • Improved virtualization: Familiar virtualization and container tools on Arm simplify the development and test process. 
  • Better cost-effectiveness: Developers benefit from the ease of use and cost savings of having a local computer with a high core count, large memory, and plenty of storage. 
  • Enhanced compatibility: Out-of-the-box support for Arm64 and NVIDIA GPUs eliminates the need for Arm emulation, which simplifies the developer process and overall experience. 
  • Built for power efficiency: The system is engineered to prevent thermal throttling, ensuring reliable, sustained performance during the most intensive workloads, like AI-based AD and ADAS. 
  • Advanced AI: Developers can build AI-based applications using frameworks, such as PyTorch on Arm, enabling powerful AI capabilities for automotive. 
  • Optimized developer process: The development process can be optimized by enabling developers to run large software stacks on their local machine, making it easier to fix issues and improve performance. 
  • Unrivaled ecosystem support: The robust and dynamic Arm software ecosystem for automotive offers a comprehensive range of tools, libraries, and frameworks to support the development of high-performance, secure, and reliable automotive software.  
  • Accelerated time-to-market: Developers can create advanced software solutions without waiting for physical silicon, accelerating innovation and reducing development cycles. 

Cutting-edge configuration for efficient automotive workloads 

Thelio Astra is designed to handle intensive workloads. This is achieved through an advanced configuration with up to a 128-core Ampere® Altra® processor (3.0 GHz), 512GB of 8-channel DDR4 ECC memory (3200 MHz), an NVIDIA RTX 6000 Ada GPU, 8TB of PCIe 4.0 M.2 NVMe storage, and dual 25 Gigabit Ethernet SFP28. This setup guarantees that developers can tackle the most demanding tasks with ease, providing the performance and reliability that are essential for cutting-edge automotive development. 

Driving Innovation with SOAFEE and Arm Neoverse V3AE 

Thelio Astra will play a crucial role in the Scalable Open Architecture for Embedded Edge (SOAFEE) initiative, which aims to standardize automotive software development. By providing a native Arm64 development environment, Thelio Astra supports the SOAFEE reference stack, EWAOL, alongside other automotive software frameworks, helping to accelerate innovation and shorten development cycles. 

Thelio Astra also capitalizes on the momentum from the introduction of the Arm Neoverse V3AE, the first server-class CPU designed for the automotive market. The Neoverse V3AE delivers robust performance and reliability, making it essential for AI-accelerated AD and ADAS workloads.  

Pioneering the future of automotive software development 

Thelio Astra represents a significant leap forward in Arm Linux development for the automotive industry. By addressing the growing complexities of ADAS, AD, and SDVs, System76 and Ampere have created an indispensable tool with Thelio Astra. This will provide the compatibility needed for automotive target hardware, while delivering the performance developers expect from a developer desktop. 

As the automotive landscape continues to evolve, tools like Thelio Astra will be essential in ensuring that developers have the resources they need to create the next generation of automotive applications and software. 

Access the new learning path

Looking for more information? Here’s an introductory learning path for automotive developers interested in local development using the System76 Thelio Astra Linux desktop computer.

The post A New Game-Changer for Arm Linux Development in Automotive Applications appeared first on Arm Newsroom.

Arm Founding CEO Inducted into City of London Engineering Hall of Fame

8 November 2024 at 22:00

Sir Robin Saxby, the founding CEO and former chairman of Arm, has been inducted into the City of London Engineering Hall of Fame. The ceremony, which took place on the High Walkway of Tower Bridge in London on October 31, 2024, announced the induction of seven iconic engineers who are from or connected to the City of London.

As Professor Gordon Masterton, Past Master Engineer, said: “The City of London Engineering Hall of Fame was launched in 2020 and now has 14 inductees whose lives tell the story of almost 500 years of world-beating engineering innovations that have created huge improvements in the quality of life and economy of the City of London, the United Kingdom and the world. Our mission is to celebrate these role models of exciting and inspirational engineering careers.”

(Left to right) Sir Robin Saxby, The Lord Mayor of London Michael Mainelli, Professor Gordon Masterton

Saxby joined Arm full-time as its first CEO in February 1991 and led the transformation of the company from a 12-person startup to one of the most valuable tech companies in the UK, with a market capitalization of over $10 billion.

As CEO, Saxby was the visionary behind Arm’s highly successful business model, which has been adopted by many other companies across the tech industry. Through this innovative business model, the Arm processor can be licensed to many different companies for an upfront license fee, with Arm receiving royalties based on the amount of silicon produced.

This paved the way for Arm to become the industry’s highest-performing and most power-efficient compute platform, with unmatched scale today touching 100 percent of the connected global population.

Under Saxby’s tenure at Arm, power-efficient technology became the foundation of the world’s first GSM mobile phones that achieved enormous commercial success during the 1990s, including the Arm-powered Nokia 6110. Today, more than 99 percent of the world’s smartphones are based on Arm technology. The success in the mobile market gave the company the platform to expand into other technology markets that require leading power-efficient technology from Arm, including IoT, automotive and datacenter.

Saxby stepped down as CEO in 2001 and as chairman of Arm in 2006. He was knighted in the 2002 New Year Honors List. Saxby is a visiting Professor at the University of Liverpool, a fellow of the Royal Academy of Engineering and an honorary fellow of the Royal Society.

Thanks to Saxby’s work, Arm has grown to be a global leader in technology, with nearly 8,000 employees worldwide today. Just as Saxby and the 12 founding members had originally envisioned, Arm remains committed to developing technology that will power the future of computing.

The post Arm Founding CEO Inducted into City of London Engineering Hall of Fame appeared first on Arm Newsroom.

Pioneering People-centric Leadership at Arm

7 November 2024 at 22:59

The dynamic world of technology has experienced incredible change in the last decade alone, and in this new era of AI it is moving faster than ever. The pace of innovation is unprecedented, driven by companies like Arm, whose nearly 8,000 people are doing inspiring, innovative and important work to deliver the foundational Arm compute platform. In the two decades I’ve been at Arm, including eight years as Chief People Officer (CPO), I’ve had the opportunity to navigate the complexities of a rapidly evolving industry, a dynamic geopolitical landscape, widening inequity across the globe, a deepening climate crisis and shifting expectations around DEI, ESG and the fundamentals of what work means, all while championing a people-centric approach to ensure everyone at Arm can do their best work.

Now, it is time to step away and pursue the next phase of my life. I am delighted to be passing the baton to Charlotte Eaton, who is returning to Arm as the next CPO.

A passion for people and purpose

I’ve always believed that people do their best work when driven by a sense of purpose, shared values, a sense of community, and challenging, important problems to solve. I’ve had the opportunity to help people navigate some of the most pivotal moments in Arm’s history and to define a culture built around high engagement and high performance, embedded in how we work, our workspaces and how we operate as a responsible and sustainable business.

I’ve also had the privilege of building a world-leading People Group focused on ensuring, across our business, that people are seen as more than resources, and our culture, organization, processes, technology, workspaces and approach to sustainability reflect this. Transparency and authenticity are at the core of how our company communicates and engages, and I firmly believe that putting people at the center of our decisions is not only the right way to treat people, but drives a stronger and more valuable business outcome. People do extraordinary things when we create the environment for them to do so.  

Transformational leadership

I have served as CPO through some of Arm’s biggest changes: taking the company public to private and then public again, significant political and global challenges, the changing perception of what work means, a change in leadership, and a step change in culture to enable a transformed business strategy. These transitions have filled two decades with new and interesting opportunities, and with the chance to deliver progressive people and workplace practices that keep our people engaged, motivated and aligned with the company’s evolving goals and ambitions.

Together with my team, we developed a shared culture and way of working, navigated periods of high growth, delivered thoughtful organizational changes, defined compelling reward propositions and implemented progressive policies around well-being, time off and critical life events, so that people can perform at the highest level while also navigating life’s important moments.

And the success of these people strategies shows across the organization. We currently have a growth rate of 15% per annum¹ and attrition at an annual rate of under 5%¹, with employee engagement at 84%², 95%² of people proud to work for Arm, and 93%² feeling their work is valued, has an impact and is aligned with the business strategy.

Putting people first and creating a culture that is inclusive, supportive, challenging and that cares about the world we inhabit has propelled Arm forward. There is an incredible opportunity ahead for the company and I know Charlotte will bring her passion, expertise and commitment as an extraordinary business and people leader to ensure our teams are ready and able to deliver.

I love Arm, and it has been a truly extraordinary place to work. I will remain Arm’s biggest supporter and look forward to seeing what more extraordinary things are to come as our people build the future of computing on Arm.

¹ Data from Oct. 1, 2023-Sept. 30, 2024.

² Data from the Life at Arm Survey completed in October 2024.

The post Pioneering People-centric Leadership at Arm appeared first on Arm Newsroom.

What are the Latest Tech Innovations from Arm in October 2024?

1 November 2024 at 19:20

As we move further into the era of advanced computing, Arm is continuing to lead the charge with groundbreaking tech innovations. October 2024 has been a month of significant strides in technology, particularly in AI, machine learning (ML), security, and system-on-chip (SoC) architecture.  

The Arm Editorial Team has highlighted the cutting-edge tech innovations that happened at Arm in October 2024 – all to shape the next generation of intelligent, secure, and high-performing compute systems. 

Enhancing AI, ML, and Security for Next-Gen SoCs with Armv9.6-A

Arm’s latest CPU architecture, Armv9.6-A, introduces key enhancements to meet evolving computing needs, focusing on AI, ML, security, and chiplet-based systems-on-chip (SoCs). Martin Weidmann, Director of Product Management, discusses the latest features in the Arm A-Profile architecture for 2024.

The 2024 updates enhance Scalable Matrix Extension (SME) with structured sparsity and quarter-tile operations for efficient matrix processing while improving memory management, resource partitioning, secure data handling, and multi-chip system support. 
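As a rough, Arm-agnostic illustration of what structured sparsity means in practice (the 2:4 pattern used here is a common convention in ML accelerators, not necessarily the exact scheme SME uses), the NumPy sketch below prunes a weight matrix so that at most two of every four consecutive weights survive, the kind of regular pattern matrix hardware can exploit by skipping zeroed lanes.

import numpy as np

def prune_2_to_4(weights: np.ndarray) -> np.ndarray:
    # Keep only the two largest-magnitude values in every group of four
    # weights along the last axis, producing a regular 2:4 sparse pattern.
    w = weights.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, :2]   # two smallest per group
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

rng = np.random.default_rng(0)
dense = rng.standard_normal((8, 16)).astype(np.float32)
sparse = prune_2_to_4(dense)
x = rng.standard_normal((16, 4)).astype(np.float32)

print("fraction of weights kept:", np.count_nonzero(sparse) / sparse.size)  # 0.5
print("max |dense@x - sparse@x|:", float(np.max(np.abs(dense @ x - sparse @ x))))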

Streamlining PyTorch Model Deployment on Edge Devices with ExecuTorch on Arm

Arm’s collaboration with Meta has led to the introduction of ExecuTorch, enhancing support for deploying PyTorch models on edge devices, particularly with the high-performing Arm Ethos-U85 NPU. Robert Elliott, Director of Applied ML, highlights how this collaboration enables developers to significantly reduce model deployment time and utilize advanced AI inference workloads with better scalability. 

With an integrated GitHub repository providing a fully supported development environment, ExecuTorch simplifies compiling and running models, allowing users to create intelligent IoT applications efficiently.  
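To make that workflow concrete, here is a minimal sketch, assuming a recent PyTorch and the executorch Python package are installed, of exporting a toy module to an ExecuTorch .pte file via the documented torch.export and to_edge flow. The model, shapes and file name are placeholders, and the extra partitioning step that delegates the graph to an Ethos-U NPU is deliberately omitted.

import torch
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# Capture the eager model as a graph, lower it to ExecuTorch's edge dialect,
# then serialize a .pte program that the on-device ExecuTorch runtime loads.
exported = torch.export.export(model, example_inputs)
executorch_program = to_edge(exported).to_executorch()

with open("tiny_model.pte", "wb") as f:
    f.write(executorch_program.buffer)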

Accelerating AI with Quantized Llama 3.2 Models on Arm CPUs

Arm and Meta have partnered to empower the AI developer ecosystem by enabling the deployment of quantized Llama 3.2 models on Arm CPUs with ExecuTorch and KleidiAI. Gian Marco Iodice, Principal Software Engineer, details how this integration allows quantized Llama 3.2 models to run up to 20% faster on Arm Cortex-A CPUs, while maintaining model quality and reducing memory usage. 

With the ExecuTorch beta release and support for lightweight quantized Llama 3.2 models, Arm is simplifying the development of AI applications for edge devices, resulting in notable performance gains in prefill and decode phases.  
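Prefill (processing the prompt up to the first generated token) and decode (each subsequent token) are usually timed separately in benchmarks like these. The sketch below shows one minimal way to take those two measurements; generate_step is a hypothetical stand-in for whatever runtime actually produces the next token.

import time
from typing import Callable, List

def measure_prefill_decode(generate_step: Callable[[List[int]], int],
                           prompt_tokens: List[int],
                           new_tokens: int = 64) -> None:
    # generate_step is a hypothetical callable: given the tokens so far,
    # it returns the next token id from the model runtime.
    tokens = list(prompt_tokens)

    start = time.perf_counter()
    tokens.append(generate_step(tokens))       # prefill ends at the first new token
    prefill_s = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(new_tokens - 1):            # steady-state decode
        tokens.append(generate_step(tokens))
    decode_s = time.perf_counter() - start

    print(f"prefill: {len(prompt_tokens) / prefill_s:.1f} prompt tokens/s")
    print(f"decode:  {(new_tokens - 1) / decode_s:.1f} generated tokens/s")

# Dummy runtime so the harness runs standalone; swap in a real model call.
measure_prefill_decode(lambda toks: (toks[-1] + 1) % 32000, list(range(128)))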

Optimizing Shader Performance with Arm Performance Studio 2024.4

Arm’s latest Frame Advisor enhancement helps mobile developers identify inefficient shaders, improving performance, memory usage, and power efficiency. Julie Gaskin, Staff Developer Evangelist, details the new features in Arm Performance Studio 2024.4, including support for new CPUs, improved Vulkan and OpenGL ES integration, and expanded RenderDoc debugging tools.

This update provides detailed shader metrics – like cycle costs, register usage, and arithmetic precision – enabling developers to optimize performance and lower costs.  

Boosting Performance and Security for Arm Architectures with LLVM 19.1.0 

LLVM 19.1.0, released in September 2024, introduces nearly 1,000 contributions from Arm, including new architecture support for Armv9.2-A cores and performance improvements for data-center CPUs like Neoverse-V3. Volodymyr Turanskyy, Principal Software Engineer, highlights the features of LLVM 19.1.0, which deliver better performance and enhanced security.   
  
The update optimizes shader performance and Fortran intrinsics, adds support for Guarded Control Stack (GCS), security mitigations for Cortex-M Security Extensions (CMSE), enhancements for OpenMP reduction, function multi-versioning, and new command-line options for improved code generation. 

Introducing System Monitoring Control Framework (SMCF) for Neoverse CSS

Arm’s System Monitoring Control Framework (SMCF) streamlines sensor and monitor management in complex SoCs with a standardized software interface. Marc Meunier, Director of Ecosystem Development, highlights how it supports seamless integration of third-party sensors, flexible data sampling, and efficient data collection through DMA, reducing processor overhead.
  
The SMCF enables distributed power management and improves system telemetry, offering insights for profiling, debugging, and remote management while ensuring secure, standards-compliant data handling.   

Achieving Human-Readable Speeds with Llama 3 70B on AWS Graviton4 CPUs  

AWS’s Graviton4 processors, built with Arm Neoverse V2 CPU cores, are designed to boost cloud performance for high-demand AI workloads. Na Li, ML Solutions Architect, explains how deploying the Llama 3 70B model on Graviton4 leverages quantization techniques to achieve token generation rates of 5-10 tokens per second.   

This innovation enhances cloud infrastructure, enabling more powerful AI applications and improving performance for tasks requiring advanced reasoning.   
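As one illustration of how such a measurement might be reproduced (not necessarily the exact setup described in the post), the sketch below uses the open-source llama-cpp-python bindings on an Arm-based instance; the GGUF file name, quantization level and thread count are all placeholders to adjust for your environment.

from llama_cpp import Llama

# Hypothetical path to a locally downloaded, quantized Llama 3 70B GGUF file.
llm = Llama(
    model_path="llama-3-70b-instruct-q4_0.gguf",
    n_ctx=4096,     # context window
    n_threads=64,   # spread decode work across the instance's vCPUs
)

out = llm(
    "Explain in two sentences why power efficiency matters in the cloud.",
    max_tokens=128,
)
print(out["choices"][0]["text"])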

Superior Performance on Arm CPUs with Pardiso Sparse Linear Solver

Panua Technologies optimized the Pardiso sparse linear solver for Arm CPUs, delivering significant performance gains over Intel’s MKL. David Lecomber, Senior Director, Infrastructure Tools, highlights how Pardiso on Arm Neoverse V1 processors outperforms MKL, demonstrating superior efficiency and scalability for large-scale scientific and engineering computations.

This breakthrough positions Pardiso as a top choice for industries like automotive manufacturing and semiconductor design, offering unmatched speed and performance.   

Built on Arm Partner Stories

Vince Hu, Corporate Vice President, MediaTek, talks about the Arm MediaTek partnership, which drives ongoing tech innovation and delivers transformative technologies to enhance everyday life. 

Eben Upton, CEO of Raspberry Pi, shares how the company has evolved from an educational tool to a key player in industrial and embedded applications, all powered by Arm technology. He highlights the development of new tools over the past decade and his personal journey with the BBC Microcomputer.  

Clay Nelson, Industry Solutions Strategy Lead at GitHub, discusses the partnership between GitHub and Arm, which combines GitHub Actions with Arm native hardware to revolutionize software development, leading to faster development times and reduced costs. 

Sy Choudhury from Meta Platforms Inc. explains how the collaboration with Arm is optimizing AI on the Arm Compute Platform, enhancing digital interactions through devices like AR smart glasses, and impacting everyday experiences with advanced AI applications.  

Highlights from PyTorch Conference 2024 

To accelerate the development of custom silicon solutions, Arm partners are tapping into the latest industry expertise and resources. Principal Software Engineer Gian Marco Iodice discusses this in “Democratizing AI: Powering the Future with Arm’s Global Compute Ecosystem,” a talk from PyTorch Conference 2024.

Iodice highlights KleidiAI-accelerated demos, key AI tech innovations from cloud to edge, and the latest Learning Paths for developers. 

The post What are the Latest Tech Innovations from Arm in October 2024? appeared first on Arm Newsroom.
