
Today — 13 November 2024 — FOSS

A New Game-Changer for Arm Linux Development in Automotive Applications

12 November 2024 at 23:30

The rising adoption of advanced driver-assistance systems (ADAS), autonomous driving (AD) features, and software capabilities in software-defined vehicles (SDVs) is driving growing computing complexity, particularly for software and the developers who build it. This has created demand for more efficient, reliable, and powerful tools that streamline and strengthen the automotive development experience.

System76 and Ampere have responded to this need with Thelio Astra, an Arm64 developer desktop designed to revolutionize the Arm Linux development process for automotive applications. This innovative desktop offers developers the performance, compatibility, and reliability needed to push the boundaries of new and advancing automotive technologies.

Unlocking the potential of automotive software with Thelio Astra 

Designed to meet the rigorous demands of ADAS, AD, and SDVs, the Thelio Astra uses the same architecture as Arm-based automotive electronic control units (ECUs). This architectural consistency ensures that software developed for automotive applications runs efficiently on Arm-based systems without additional modification.

This native development environment provides faster, more cost-effective, and more power-efficient software testing, promoting safer roads with smarter prototypes. Moreover, because the build and deployment environments share the same architecture, developers can avoid cross-compilation entirely, simplifying build, test, and deployment.
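
As a concrete illustration, a build script can check that it is running on a native Arm64 host before skipping the cross-toolchain. This is a minimal sketch of that idea in Python, under the assumption that the workflow targets aarch64 Linux; it is not System76 or Arm tooling.

    import platform

    def is_native_arm64_build_host() -> bool:
        # On an Arm64 Linux workstation such as the Thelio Astra,
        # platform.machine() reports "aarch64".
        return platform.system() == "Linux" and platform.machine() == "aarch64"

    if is_native_arm64_build_host():
        print("Native Arm64 host: build and test directly, no cross-compiler needed.")
    else:
        print("Non-Arm host: a cross-toolchain or emulation (e.g., QEMU) would be required.")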

Key benefits of Thelio Astra

  • Access to native performance: Developers can execute build and test cycles directly on Arm Neoverse processors, eliminating the performance overhead and complexities associated with instruction emulation and cross-compilation. 
  • Improved virtualization: Familiar virtualization and container tools on Arm simplify the development and test process. 
  • Better cost-effectiveness: Developers benefit from the ease of use and cost savings of having a local computer with a high core count, large memory, and plenty of storage. 
  • Enhanced compatibility: Out-of-the-box support for Arm64 and NVIDIA GPUs eliminates the need for Arm emulation, which simplifies the developer process and overall experience. 
  • Built for power efficiency: The system is engineered to prevent thermal throttling, ensuring reliable, sustained performance during the most intensive workloads, like AI-based AD and ADAS. 
  • Advanced AI: Developers can build AI-based applications using frameworks such as PyTorch on Arm, enabling powerful AI capabilities for automotive applications (a short example follows this list). 
  • Optimized developer process: Developers can run large software stacks on their local machine, making it easier to fix issues and improve performance. 
  • Unrivaled ecosystem support: The robust and dynamic Arm software ecosystem for automotive offers a comprehensive range of tools, libraries, and frameworks to support the development of high-performance, secure, and reliable automotive software.  
  • Accelerated time-to-market: Developers can create advanced software solutions without waiting for physical silicon, accelerating innovation and reducing development cycles. 
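
As an example of the PyTorch point above, the following minimal sketch runs a small model natively on the Arm64 host. It assumes PyTorch has already been installed (for example via pip install torch) and demonstrates only that a standard model executes; it is not an automotive workload.

    import torch
    import torch.nn as nn

    # A toy network standing in for a perception or planning model.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
    x = torch.randn(8, 64)  # dummy batch of feature vectors

    with torch.no_grad():
        out = model(x)

    print(out.shape)                # torch.Size([8, 4])
    print(torch.__config__.show())  # build details; confirms the native aarch64 build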

Cutting-edge configuration for efficient automotive workloads 

Thelio Astra is designed to handle intensive workloads. This is achieved through an advanced configuration with up to a 128-core Ampere® Altra® processor (3.0 GHz), 512GB of 8-channel DDR4 ECC memory (3200 MHz), an NVIDIA RTX 6000 Ada GPU, 8TB of PCIe 4.0 M.2 NVMe storage, and dual 25 Gigabit Ethernet SFP28. This setup ensures that developers can tackle the most demanding tasks with ease, providing the performance and reliability essential for cutting-edge automotive development. 

Driving Innovation with SOAFEE and Arm Neoverse V3AE 

Thelio Astra will play a crucial role in the Scalable Open Architecture for Embedded Edge (SOAFEE) initiative, which aims to standardize automotive software development. By providing a native Arm64 development environment, Thelio Astra supports the SOAFEE reference stack, EWAOL, alongside other automotive software frameworks, helping to accelerate innovation and shorten development cycles. 

Thelio Astra also capitalizes on the momentum from the introduction of the Arm Neoverse V3AE, the first server-class CPU designed for the automotive market. The Neoverse V3AE delivers robust performance and reliability, making it essential for AI-accelerated AD and ADAS workloads.  

Pioneering the future of automotive software development 

Thelio Astra represents a significant leap forward in Arm Linux development for the automotive industry. By addressing the growing complexities of ADAS, AD, and SDVs, System76 and Ampere have created an indispensable tool: it provides the compatibility needed with automotive target hardware while delivering the performance developers expect from a developer desktop. 

As the automotive landscape continues to evolve, tools like Thelio Astra will be essential in ensuring that developers have the resources they need to create the next generation of automotive applications and software. 

Access the new learning path

Looking for more information? Here’s an introductory learning path for automotive developers interested in local development using the System76 Thelio Astra Linux desktop computer.

The post A New Game-Changer for Arm Linux Development in Automotive Applications appeared first on Arm Newsroom.

Early Benchmarks: AMD EPYC 9005 Performance & Power Efficiency To Lead Further With Linux 6.13

12 November 2024 at 23:10
One of the many changes to look forward to with the upcoming Linux 6.13 kernel cycle is the AMD P-State driver being used by default with the new EPYC 9005 series processors. While AMD Ryzen CPUs have for a while now defaulted to the modern AMD P-State driver, which makes use of ACPI CPPC platform support to allow better power efficiency, AMD EPYC CPUs have kept to the generic ACPI CPUFreq frequency scaling driver. But now AMD engineers have deemed amd_pstate ready for use with EPYC 9005 "Turin" CPUs, and it will be the default choice moving forward. Here is more information on this shift, along with power/performance benchmarks using EPYC 9755 processors.
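
For readers who want to confirm which frequency scaling driver their own system is using, the active driver is exposed through the standard Linux sysfs interface. A small Python sketch (the path is a stock kernel interface, not Phoronix tooling):

    from pathlib import Path

    def scaling_driver(cpu: int = 0) -> str:
        # The kernel reports the active cpufreq driver per CPU.
        path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_driver")
        return path.read_text().strip()

    # Expected output is "acpi-cpufreq" under the old default, or an
    # amd_pstate variant (e.g., "amd-pstate-epp") once the new default lands.
    print(scaling_driver())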

Cost-Effective DART-MX91 SoM with 1.4GHz NXP i.MX 91 Processor Available from $35

12 November 2024 at 22:36
Variscite has introduced the DART-MX91 System on Module, a compact and cost-effective solution within the DART Pin2Pin family, targeting edge devices for IoT, smart cities, industrial applications, and more. Powered by a 1.4 GHz Cortex-A55 NXP i.MX 91 processor, the DART-MX91 supports up to 2GB of LPDDR4 memory and offers eMMC storage options ranging from […]

Red Hat OpenShift Incident Detection uses analytics to help you quickly detect issues

12 November 2024 at 07:00
While Red Hat OpenShift was built with observability in mind, the number of alerts you receive can be overwhelming: a small number of issues can generate a burst of many alerts. Your Red Hat OpenShift subscription now includes access to an Incident Detection capability that uses analytics to group alerts into incidents, helping you quickly and easily understand the underlying issue and how to address it. To learn more about the OpenShift Incident Detection capability and how to use it, read the full blog. To learn more about Insights visit
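
To make the grouping idea concrete, here is a purely illustrative Python sketch that clusters alerts into incidents when they share a component and fire close together in time. It is a toy heuristic, not Red Hat's actual analytics.

    from datetime import datetime, timedelta

    def group_alerts(alerts, window=timedelta(minutes=15)):
        """Each alert is a dict with 'startsAt' (datetime) and 'labels' (dict)."""
        incidents = []  # each incident is a list of related alerts
        for alert in sorted(alerts, key=lambda a: a["startsAt"]):
            for incident in incidents:
                last = incident[-1]
                same_component = (last["labels"].get("namespace")
                                  == alert["labels"].get("namespace"))
                close_in_time = alert["startsAt"] - last["startsAt"] <= window
                if same_component and close_in_time:
                    incident.append(alert)
                    break
            else:
                incidents.append([alert])
        return incidents

A burst of alerts from one failing namespace then collapses into a single incident instead of many separate notifications.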
Yesterday — 12 November 2024 — FOSS

Better Together: Understanding the Difference Between Sign-In Enforcement and SSO

12 November 2024 at 21:57

Docker Desktop’s single sign-on (SSO) and sign-in enforcement (also called login enforcement) features work together to enhance security and ease of use. SSO allows users to log in with corporate credentials, whereas login enforcement ensures every user is authenticated, giving IT tighter control over compliance. In this post, we’ll define each of these features, explain their unique benefits, and show how using them together streamlines management and improves your Docker Desktop experience.


Before diving into the benefits of login alongside SSO, let’s clarify three related terms: login, single sign-on (SSO), and enforced login.

  • Login: Logging in connects users to Docker’s suite of tools, enabling access to personalized settings, team resources, and features like Docker Scout and Docker Build Cloud. By default, members of an organization can use Docker Desktop without signing in. Logging in can be done through SSO or by using Docker-specific credentials.
  • Single sign-on (SSO): SSO allows users to access Docker using their organization’s central authentication system, letting teams streamline access across multiple platforms with one set of credentials. SSO standardizes and secures login and supports automation around provisioning but does not automatically log in users unless enforced.
  • Enforced login: This policy, configured by administrators, ensures users are logged in by requiring login credentials before accessing Docker Desktop and associated tools. With enforced login, teams gain consistent access to Docker’s productivity and security features, minimizing gaps in visibility and control.

With these definitions in mind, here’s why being logged in matters, how SSO simplifies login, and how login enforcement ensures your team gets the full benefit of Docker’s powerful development tools.

Why logging in matters for admins and compliance teams

Enforcing sign-in with corporate credentials ensures that all users accessing Docker Desktop are verified and get the full benefits of your Docker Business subscription, while adding a layer of security to safeguard your software supply chain. This policy strengthens your organization’s security posture and enables Docker to provide detailed usage insights, helping compliance teams track engagement and adoption. 

Enforced login will support cloud-based control over settings, allowing admins to manage application configurations across the organization more effectively. By requiring login, your organization benefits from greater transparency, control, and alignment with compliance standards. 

When everyone in your organization signs in with proper credentials:

  • Access controls for shared resources become more reliable, allowing administrators to enforce policies and permissions consistently.
  • Developers stay connected to their workspaces and resources, minimizing disruptions.
  • The Desktop Insights Dashboard provides admins with actionable insights into usage, from feature adoption to image usage trends and login activity, helping them optimize team performance and security.
  • Teams gain full visibility and access to Docker Scout’s security insights, which only function with logged-in accounts.

Read more about the benefits of login on our blog post, Maximizing Docker Desktop: How Signing In Unlocks Advanced Features.

Options for enforcing sign-in

Docker provides three options to help administrators enforce sign-in:

  • Registry key method (Windows Only): Integrates seamlessly with Windows, letting IT enforce login policies within familiar registry settings, saving time on configuration. 
  • Plist or config profiles method (Mac): Provides an easy way for IT to manage access on macOS, ensuring policy consistency across Apple devices without additional tools. 
  • Registry.json method (all platforms): Works across Windows, macOS, and Linux, allowing IT to enforce login on all platforms with a single, flexible configuration file, streamlining policy management for diverse environments (a sketch follows the next paragraph).

Each method helps IT secure access, restrict Docker Desktop to authorized users, and maintain compliance across all systems. You can enforce login without setting up SSO. Read the documentation to learn more about Docker’s sign-in enforcement methods.  
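
As an illustration of the registry.json method, the sketch below writes the enforcement file for the current platform. The file locations and the "allowedOrgs" schema follow Docker's sign-in enforcement documentation and should be verified there before rollout; the organization name is a placeholder, and writing to these paths requires administrator or root privileges.

    import json
    import platform
    from pathlib import Path

    ORG = "my-docker-org"  # placeholder: your Docker organization name

    # Per-platform locations as described in Docker's documentation (verify there).
    LOCATIONS = {
        "Windows": Path("C:/ProgramData/DockerDesktop/registry.json"),
        "Darwin": Path("/Library/Application Support/com.docker.docker/registry.json"),
        "Linux": Path("/usr/share/docker-desktop/registry/registry.json"),
    }

    path = LOCATIONS[platform.system()]
    path.parent.mkdir(parents=True, exist_ok=True)  # needs elevated privileges
    path.write_text(json.dumps({"allowedOrgs": [ORG]}))
    print(f"Wrote sign-in enforcement config to {path}")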

Single sign-on (SSO) 

Docker Desktop’s SSO capabilities allow organizations to streamline access by integrating with corporate identity providers, ensuring that only authorized team members can access Docker resources using their work credentials. This integration enhances security by eliminating the need for separate Docker-specific passwords, reducing the risk of unauthorized access to critical development tools. With SSO, admins can enforce consistent login policies across teams, simplify user management, and gain greater control over who accesses Docker Desktop. Additionally, SSO enables compliance teams to track access and usage better, aligning with organizational security standards and improving overall security posture.

Docker Desktop supports SSO integrations with a variety of identity providers (IdPs), including Okta, OneLogin, Auth0, and Microsoft Entra ID. By integrating with these IdPs, organizations can streamline user authentication, enhance security, and maintain centralized access control across their Docker environments.

Differences between SSO enforcement and SSO enablement

SSO and SCIM give your company more control over how users log in and attach themselves to your organization and Docker subscription, but they do not require users to sign in to your organization when using Docker Desktop. Without sign-in enforcement, users can continue to use Docker Desktop without logging in, or while signed in with their personal Docker IDs or subscriptions, which prevents Docker from providing you with insights into their usage and control over the application. 

SSO enforcement usually applies to identity management across multiple applications, enforcing a single, centralized login for a suite of apps or services. However, a registry key or other local login enforcement mechanism typically applies only to that specific application (e.g., Docker Desktop) and doesn’t control access across different services.

Better together: Sign-in enforcement and SSO 

While SSO enables seamless access to Docker for those who choose to log in, enforcing login ensures that users fully benefit from Docker’s productivity and security features.

Docker’s SSO integration is designed to simplify enterprise user management, allowing teams to access Docker with their organization’s centralized credentials. This streamlines onboarding and minimizes password management overhead, enhancing security across the board. However, SSO alone doesn’t require users to log in — it simply makes it more convenient and secure. Without enforced login, users might bypass the sign-in process, missing out on Docker’s full benefits, particularly in areas of security and control.

By coupling SSO with login enforcement, organizations strengthen their Registry Access Management (RAM), ensuring access is restricted to approved registries, boosting image compliance, and centralizing control. Encouraging login alongside SSO ensures teams enjoy a seamless experience while unlocking Docker’s complete suite of features.

Learn more

Celeron Powered Fanless Mini PC with Dual 4K Display Support and M.2 2280 Slot

12 November 2024 at 21:56
The EMP-100 is a compact, fanless mini PC built for versatile applications, including digital signage, industrial control, and retail environments. Designed with an aluminum chassis and VESA mounting support, it provides silent performance, dual 4K display support, and storage expansion via an M.2 2280 slot. The EMP-100 series offers a choice between Intel Celeron processors, […]

Accelerating AI Development with the Docker AI Catalog

12 November 2024 at 21:38

Developers are increasingly expected to integrate AI capabilities into their applications, but they face many challenges: a steep learning curve, coupled with an overwhelming array of tools and frameworks, makes the process tedious. Docker aims to bridge this gap with the Docker AI Catalog, a curated experience designed to simplify AI development and empower both developers and publishers.


Why Docker for AI?

Docker and container technology have been key tools for developers at the forefront of AI applications for the past few years. Now, Docker is doubling down on that effort with our AI Catalog. Developers using Docker’s suite of products are often responsible for building, deploying, and managing complex applications — and, now, they must also navigate generative AI (GenAI) technologies, such as large language models (LLMs), vector databases, and GPU support.

For developers, the AI Catalog simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. This approach removes the hassle of evaluating numerous tools and configurations, allowing developers to focus on building innovative AI applications.

Key benefits for development teams

The Docker AI Catalog is tailored to help users overcome common hurdles in the evolving AI application development landscape, such as:

  • Decision overload: The GenAI ecosystem is crowded with new tools and frameworks. The Docker AI Catalog simplifies the decision-making process by offering a curated list of trusted content and container images, so developers don’t have to wade through endless options.
  • Steep learning curve: With the rise of new technologies like LLMs and retrieval-augmented generation (RAG), the learning curve can be overwhelming. Docker provides an all-in-one resource to help developers quickly get up to speed.
  • Complex configurations preventing production readiness: Running AI applications often requires specialized hardware configurations, especially with GPUs. Docker’s AI stacks make this process more accessible, ensuring that developers can harness the full power of these resources without extensive setup.

The result? Shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Empowering publishers

For Docker verified publishers, the AI Catalog provides a platform to differentiate themselves in a crowded market. Independent software vendors (ISVs) and open source contributors can promote their content, gain insights into adoption, and improve visibility to a growing community of AI developers.

Key features for publishers include:

  • Increased discoverability: Publishers can highlight their AI content within a trusted ecosystem used by millions of developers worldwide.
  • Metrics and insights: Verified publishers gain valuable insights into the performance of their content, helping them optimize strategies and drive engagement.

Unified experience for AI application development

The AI Catalog is more than just a repository of AI tools. It’s a unified ecosystem designed to foster collaboration between developers and publishers, creating a path forward for more innovative approaches to building applications supported by AI capabilities. Developers get easy access to essential AI tools and content, while publishers gain the visibility and feedback they need to thrive in a competitive marketplace.

With Docker’s trusted platform, development teams can build AI applications confidently, knowing they have access to the most relevant and reliable tools available.

The road ahead: What’s next?

Docker will launch the AI Catalog in preview on November 12, 2024, alongside a joint webinar with MongoDB. This initiative will further Docker’s role as a leader in AI application development, ensuring that developers and publishers alike can take full advantage of the opportunities presented by AI tools.

Stay tuned for more updates and prepare to dive into a world of possibilities with the Docker AI Catalog. Whether you’re an AI developer seeking to streamline your workflows or a publisher looking to grow your audience, Docker has the tools and support you need to succeed.

Ready to simplify your AI development process? Explore the AI Catalog and get access to trusted content that will accelerate your development journey. Start building smarter, faster, and more efficiently.

For publishers, now is the perfect time to join the AI Catalog and gain visibility for your content. Become a trusted source in the AI development space and connect with millions of developers looking for the right tools to power their next breakthrough.

Learn more

Red Hat OpenShift 4.17: What you need to know

12 November 2024 at 07:00
Red Hat OpenShift 4.17 is now generally available. Based on Kubernetes 1.30 and CRI-O 1.30, OpenShift 4.17 features expanded control plane options, increased flexibility for virtualization and networking, new capabilities to leverage generative AI, and continued investment in Red Hat OpenShift Platform Plus. These additions further accelerate innovation with OpenShift without compromising on security. OpenShift provides a trusted, comprehensive, and consistent application platform enabling enterprises to innovate faster across the hybrid cloud. Available in self-managed or fully managed cloud

Red Hat Device Edge for Industrial Applications: A Journey from Datacenter to Plant Floor

12 November 2024 at 07:00
For the last 30 years, Red Hat Enterprise Linux (RHEL) has been the solid foundation of datacenter and cloud infrastructures, powering everything from websites and databases to applications. While there’s been huge adoption in these areas, Red Hat asked a question about two years ago: could the same rock-solid operating system also be used for industrial applications, and achieve the same dependability and performance demanded by the market? This blog is a look through the progress made over the last few years, the advancements and improvements, and a glimpse into the roadmap of Red Hat Device

Red Hat and Deloitte Collaborate to Modernize the Developer Experience

12 November 2024 at 07:00
Modern application development can be complex - fraught with disparate development systems and environments as well as distributed teams. As organizations increasingly turn their focus to data-driven intelligent applications, this complexity grows. AI-enabled applications require added data shaping capabilities and the flexibility to run centrally or at the edge. Enhancing the AI developer experience is paramount to delivering better and safer user experiences. To create a unified experience for developers across teams and infrastructure, Deloitte and Red Hat are announcing an expanded collabor

How to make generative AI more consumable

12 November 2024 at 07:00
Think about some of the past trends in technology, and you’ll start to see some patterns emerge. For example, with cloud computing there’s no one-size-fits-all approach. Combinations of different approaches, such as on-premises infrastructure and different cloud providers, have led organizations to take advantage of hybrid infrastructure benefits when deploying their enterprise applications. When we think about the future, a similar structure will be essential for the consumption of artificial intelligence (AI) across diverse applications and business environments. Flexibility will be crucial as no single

Guide to Red Hat observability with OpenShift 4.17

12 November 2024 at 07:00
With Red Hat OpenShift 4.17, we continue to enhance the OpenShift observability offering. Observability plays a key role in monitoring, troubleshooting, and optimizing OpenShift clusters. This article guides you through the latest features and integrations that help you improve the observability of your OpenShift environment. Single pane of glass for cluster observability: September 2024 saw the release of Cluster Observability Operator 0.4.0, which enables the installation of specific observability components, such as a less-opinionated monitoring stack and UI plugins. This includes platform, vi

Creating cost effective specialized AI solutions with LoRA adapters on Red Hat OpenShift AI

12 November 2024 at 07:00
Picture this: you have a powerful language model in production, but it struggles to provide satisfying answers for specific, niche questions. You try retrieval-augmented generation (RAG) and carefully crafted prompt engineering, but the responses still fall short. The next step might seem to be full model fine tuning—updating every layer of the model to handle your specialized cases—but that demands significant time and compute resources. What is LoRA? Low-rank adaptation (LoRA) is a faster and less resource-intensive fine tuning technique that can shape how a large language model re
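
To make the LoRA idea concrete, the following minimal sketch applies the low-rank update to a frozen linear layer: the output becomes W x + (alpha / r) * B A x, where A and B are small trainable matrices of rank r. It is illustrative only; a production fine tune would typically use a library such as Hugging Face PEFT.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # the pretrained weights stay frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

    layer = LoRALinear(nn.Linear(512, 512), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 8192 trainable parameters vs. 262,656 in the full layer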

An Introduction to TrustyAI

12 November 2024 at 07:00
TrustyAI is an open source community dedicated to providing a diverse toolkit for responsible artificial intelligence (AI) development and deployment. TrustyAI was founded in 2019 as part of Kogito, an open source business automation community, as a response to growing demand from users in highly regulated industries such as financial services and healthcare. With increasing global regulation of AI technologies, toolkits for responsible AI are an invaluable and necessary asset to any MLOps platform. Since 2021, TrustyAI has been independent of Kogito, and has grown in size and scope amidst the