Today — 21 February 2025

Femtofox Pro v1 LoRa and Meshtastic development board runs Linux-based Foxbuntu OS on Rockchip RV1103 SoC

21 February 2025 at 02:00
Femtofox Pro v1 kit LoRa and Meshtastic development board

The Femtofox Pro v1 kit is a compact, low-power LoRa development board running Linux, designed specifically for Meshtastic networks. Built around the Luckfox Pico Mini (Rockchip RV1103) SBC, the platform supports USB host/device functionality, Ethernet, WiFi over USB, GPIO interfaces, I2C, UART, and a real-time clock (RTC). Its standout feature is very low power consumption (0.27-0.4W), making it ideal for solar-powered applications. Additionally, Femtofox supports native Meshtastic client control, USB mass storage, and network reconfiguration via a USB flash drive. It also includes user-configurable buttons for WiFi toggling and system reboot, enhancing its usability. These features make Femtofox particularly useful for applications such as emergency response and off-grid messaging. Femtofox Pro v1 kit specifications: Mainboard – Luckfox Pico Mini A; SoC – Rockchip RV1103; CPU – Arm Cortex-A7 processor @ 1.2GHz + RISC-V core; Memory – 64MB DDR2 [...]

The post Femtofox Pro v1 LoRa and Meshtastic development board runs Linux-based Foxbuntu OS on Rockchip RV1103 SoC appeared first on CNX Software - Embedded Systems News.

Announcing Rust 1.85.0 and Rust 2024

20 February 2025 at 07:00

The Rust team is happy to announce a new version of Rust, 1.85.0. This stabilizes the 2024 edition as well. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.85.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.85.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.85.0 stable

Rust 2024

We are excited to announce that the Rust 2024 Edition is now stable! Editions are a mechanism for opt-in changes that may otherwise pose a backwards compatibility risk. See the edition guide for details on how this is achieved, and detailed instructions on how to migrate.

This is the largest edition we have released. The edition guide contains detailed information about each change, along with a summary of all of them.

Migrating to 2024

The guide includes migration instructions for all new features, and in general transitioning an existing project to a new edition. In many cases cargo fix can automate the necessary changes. You may even find that no changes in your code are needed at all for 2024!

Note that automatic fixes via cargo fix are very conservative to avoid ever changing the semantics of your code. In many cases you may wish to keep your code the same and use the new semantics of Rust 2024; for instance, continuing to use the expr macro matcher, and ignoring the conversions of conditionals because you want the new 2024 drop order semantics. The result of cargo fix should not be considered a recommendation, just a conservative conversion that preserves behavior.

Many people came together to create this edition. We'd like to thank them all for their hard work!

async closures

Rust now supports asynchronous closures like async || {} which return futures when called. This works like an async fn which can also capture values from the local environment, just like the difference between regular closures and functions. This also comes with three analogous traits in the standard library prelude: AsyncFn, AsyncFnMut, and AsyncFnOnce.

In some cases, you could already approximate this with a regular closure and an asynchronous block, like || async {}. However, the future returned by such an inner block is not able to borrow from the closure captures, but this does work with async closures:

let mut vec: Vec<String> = vec![];

let closure = async || {
    vec.push(ready(String::from("")).await);
};

It also has not been possible to properly express higher-ranked function signatures with the Fn traits returning a Future, but you can write this with the AsyncFn traits:

use core::future::Future;
async fn f<Fut>(_: impl for<'a> Fn(&'a u8) -> Fut)
where
    Fut: Future<Output = ()>,
{ todo!() }

async fn f2(_: impl for<'a> AsyncFn(&'a u8))
{ todo!() }

async fn main() {
    async fn g(_: &u8) { todo!() }
    f(g).await;
    //~^ ERROR mismatched types
    //~| ERROR one type is more general than the other

    f2(g).await; // ok!
}

So async closures provide first-class solutions to both of these problems! See RFC 3668 and the stabilization report for more details.

Hiding trait implementations from diagnostics

The new #[diagnostic::do_not_recommend] attribute is a hint to the compiler to not show the annotated trait implementation as part of a diagnostic message. For library authors, this is a way to keep the compiler from making suggestions that may be unhelpful or misleading. For example:

pub trait Foo {}
pub trait Bar {}

impl<T: Foo> Bar for T {}

struct MyType;

fn main() {
    let _object: &dyn Bar = &MyType;
}
error[E0277]: the trait bound `MyType: Bar` is not satisfied
 --> src/main.rs:9:29
  |
9 |     let _object: &dyn Bar = &MyType;
  |                             ^^^^ the trait `Foo` is not implemented for `MyType`
  |
note: required for `MyType` to implement `Bar`
 --> src/main.rs:4:14
  |
4 | impl<T: Foo> Bar for T {}
  |         ---  ^^^     ^
  |         |
  |         unsatisfied trait bound introduced here
  = note: required for the cast from `&MyType` to `&dyn Bar`

For some APIs, it might make good sense for you to implement Foo, and get Bar indirectly by that blanket implementation. For others, it might be expected that most users should implement Bar directly, so that the Foo suggestion is a red herring. In that case, adding the diagnostic hint will change the error message like so:

#[diagnostic::do_not_recommend]
impl<T: Foo> Bar for T {}
error[E0277]: the trait bound `MyType: Bar` is not satisfied
  --> src/main.rs:10:29
   |
10 |     let _object: &dyn Bar = &MyType;
   |                             ^^^^ the trait `Bar` is not implemented for `MyType`
   |
   = note: required for the cast from `&MyType` to `&dyn Bar`

See RFC 2397 for the original motivation, and the current reference for more details.

FromIterator and Extend for tuples

Earlier versions of Rust implemented convenience traits for iterators of (T, U) tuple pairs to behave like Iterator::unzip, with Extend in 1.56 and FromIterator in 1.79. These have now been extended to more tuple lengths, from singleton (T,) through to 12 items long, (T1, T2, .., T11, T12). For example, you can now use collect() to fan out into multiple collections at once:

use std::collections::{LinkedList, VecDeque};
fn main() {
    let (squares, cubes, tesseracts): (Vec<_>, VecDeque<_>, LinkedList<_>) =
        (0i32..10).map(|i| (i * i, i.pow(3), i.pow(4))).collect();
    println!("{squares:?}");
    println!("{cubes:?}");
    println!("{tesseracts:?}");
}
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
[0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
[0, 1, 16, 81, 256, 625, 1296, 2401, 4096, 6561]

Updates to std::env::home_dir()

std::env::home_dir() has been deprecated for years, because it can give surprising results in some Windows configurations if the HOME environment variable is set (which is not the normal configuration on Windows). We had previously avoided changing its behavior, out of concern for compatibility with code depending on this non-standard configuration. Given how long this function has been deprecated, we're now updating its behavior as a bug fix, and a subsequent release will remove the deprecation for this function.

Stabilized APIs

These APIs are now stable in const contexts

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.85.0

Many people came together to create Rust 1.85.0. We couldn't have done it without all of you. Thanks!

Yesterday — 20 February 2025

Linux Finally Introducing A Standardized Way Of Informing User-Space Over Hung GPUs

20 February 2025 at 21:30
The upcoming Linux 6.15 kernel is set to finally introduce a standardized way of informing user-space of GPUs becoming hung or otherwise unresponsive. This is initially wired up for AMD and Intel graphics drivers on Linux so the user can be properly notified of problems and/or user-space software taking steps to address the hung/unresponsive graphics processor...

We're at a crossroads

20 February 2025 at 21:30

After a successful 2024 with a lot to be proud of, and a Matrix Conference that brought our community together to celebrate 10 years of Matrix, we step into 2025 with a light budget and a mighty team poised to make the most of it!

Our priorities remain to make Matrix a safer network, keep growing the ecosystem, make the most of our Governing Board, and drive a fruitful and friendly collaboration across all actors.

However, whether we will manage to get there is not fully a given.

🔗The Foundation is key to the success of Matrix

The Matrix.org Foundation has gone from depending entirely on Element, the company set up by the creators of Matrix, to having half of its budget covered by its 11 funding members, which is a great success on the road to financial independence! However, half of the budget being covered means half of it isn’t. Or in other words: the Foundation is not yet sustainable, despite running on the strictest possible budget, and is burning through its (relatively small) reserves. And we are at the point where the end of the road is in sight.

Why does it matter?

The Foundation has a clear mission:

The Matrix.org Foundation exists to act as a neutral custodian for Matrix and to nurture it as efficiently as possible as a single unfragmented standard, for the greater benefit of the whole ecosystem, not benefiting or privileging any single player or subset of players.

Without the Foundation and its programs, the Matrix protocol itself faces existential threats:

  • Without Trust & Safety efforts, bad actors and communities would proliferate on the network and make it unlivable for the rest.
  • Without a canonical specification, the shared infrastructure and a Spec Core Team to maintain it, the protocol would become fragmented, losing its effective interoperability – increasing the costs on all downstream users.
  • Without a neutral entity as the custodian of the specification, the ecosystem would first shatter and then consolidate around the biggest (likely for-profit) actor.
  • Without advocacy, conferences, documentation and tutorials, Matrix would become a niche protocol used by a few enthusiasts for side projects, whilst big proprietary and siloed networks continue to hold the world’s communications.

🔗Implementing the vision

But there is light at the end of the tunnel! Concretely, the Foundation delivers most of its value by fostering a healthy, fair and fertile ecosystem around Matrix. It needs to strike the right balance between:

  • Making Matrix accessible & visible.
    • For the general public it means maintaining an easy default onboarding server (Matrix.org).
    • For server administrators it means providing the right tooling to keep their users (and themselves!) safe.
    • For developers it means making it easy to develop products using Matrix, via documentation, tutorials, and in-person events.
  • Making Matrix compelling to build on.
    • This means maintaining the Matrix Specification as a canonical, unencumbered, patent free and royalty free specification.
    • Being responsive and vendor-neutral when an organisation or individual contributes.
    • Promoting the good players within the ecosystem.
    • Ensuring the network grows and attracts more users.
  • Making Matrix a product that benefits the greater good.
    • This means ensuring that the general public can easily build safe & easy to use communities on Matrix.
    • Ensuring that bad actors are proactively chased and discouraged from using Matrix.

🔗Doing less to do better

Matrix has been here for 10 years, and will hopefully be here for many more! But to continue to grow and thrive, it needs the Foundation to be around and healthy, which means carefully allocating its budget in order to continue to exist and fulfill its mission. This is why it needs to focus on critical programs and shut down some of its activities.

We view the following programs as critical to the Foundation’s mission:

  • Maintaining the canonical, backwards compatible, stable Matrix Spec
  • Developing protocol enhancements and Trust and Safety tooling, making the tools available to the ecosystem and moderating the servers under its control (typically Matrix.org) - see our recent blog post
  • Running the Matrix.org homeserver as an initial home for newcomers
  • Promoting the Matrix protocol via online content, conferences and meet-ups and other marketing strategies

We might fine tune our approach, but we can't cease any of those programs without severe consequences for the ecosystem.

Meanwhile, bridges have been at the heart of Matrix for a long time. Public bridges hosted by the Matrix.org Foundation have been a very good resource to show the power of interoperability, connect communities together, and onboard many people into their Matrix journey.

However, these bridges require regular maintenance as the bridged platforms evolve their APIs, and significant engineering and moderation support to run. Luckily, the Matrix ecosystem is now more mature than it was at the time we spun up those public Slack, XMPP and IRC bridge instances. There are now commercial players like Beeper providing a user-friendly offering for people who want to get all their conversations in a single app, or IndieHosters and Fairkom offering hosting for Matrix server and bridge instances (and much more).

So unless the Foundation manages to raise $100,000 of funding by the end of March 2025, we will have to focus our resources on the critical lines of work, and consequently we will have to shut down all the remaining bridges hosted by the Matrix.org Foundation. This includes bridges to Slack, XMPP, OFTC (IRC), and Snoonet (IRC). We will also mark the software behind those bridges as archived, as we don't have the resources to accept new contributions.

In practice, the Foundation needs an additional $610K in revenue to break even, but this $100K would extend our runway by one month while we work on landing grants and new members. To put this in context, we nearly doubled our revenue in 2024, reaching $561K, but it was also the first year in which we carried the full cost of our operations: $1.2M. To make ends meet, we liquidated $283K worth of cryptocurrency donations and ended the year with a $356K deficit. We are currently on target for $587K in revenue in 2025, with a modest increase in expenses.

🔗Growing the ecosystem and the network

Choosing to shut the bridges down is a difficult decision to make, but will allow us to focus on the critical projects which will keep the ecosystem growing. The success of Matrix depends on how widely it is used by the general public and by organisations – preferably natively rather than via bridges.

The more people and organisations rely on Matrix, the more attractive it becomes for organisations to build products and services on top of it, the more funding the Foundation gets, and the more the Foundation can in turn reinvest into the ecosystem and run initiatives that benefit all stakeholders for the growth of the network.

Once the Foundation is cashflow positive, it will be able to accelerate and eventually get on with the multiple projects the team and Governing Board have in mind to make Matrix fun, exciting, reliable, safe, easy to use, and above all useful. And we hope to get there by the end of the year.

Most importantly, as explained in our blog post, the Trust and Safety team is the Foundation’s biggest expense and yet still under-resourced: they are understaffed and under a lot of pressure to deliver protocol improvements, better tooling for server admins, and ensure Matrix.org is a good citizen of the open federation. T&S will be the first area to see increased funding.

Separately, the Foundation wants to continue executing on its mission! Among other things, we want to better connect the doers in the ecosystem with the people and organisations who need their energy, and to share the successes and learnings from the community: the Matrix Conference was an incredible success and we want to see more of that.

We’ve also seen a clear change in how many users and organisations were adopting Matrix in the last few months: the world needs a decentralised end-to-end encrypted network to communicate more than ever, and it shows! We want to uplift the good players which are driving this growth.

The Foundation would also love to support more public policy efforts, which give an opportunity to shape the world by educating regulators, as with the Digital Markets Act, or a stronger involvement in standardisation: due to lack of resources, we had no choice but to reduce the effort spent participating in MIMI, the IETF working group for instant messaging interoperability.

There is so much more that we could do to make Matrix better and realise its full potential.

🔗So what now?

Right now, the Foundation urgently needs your financial help. For the sake of a safe network, our primary focus today, but also to be able to deliver on the reason we all want Matrix to succeed.

Because we believe that:

  • People should have full control over their own communication.
  • People should not be locked into centralised communication silos, but instead be free to pick who hosts their communication without limiting who they can reach.
  • The ability to converse securely and privately is a basic human right.
  • Communication should be available to everyone as a free and open, unencumbered, standard and global network.

In short:

If you are an organisation building on top of Matrix, you can help by becoming a member, which also gives you the opportunity to be eligible to participate in the Governing Board, and other perks.

If you are an organisation buying Matrix services or products, you can help by ensuring that your vendor is financially contributing back to the project or becoming a member yourself.

If you are an individual using Matrix, you can help by making a donation or becoming a member.

If you are a philanthropist or other funder, you can help by getting in touch with us at funding@matrix.org to discuss funding options.

It isn’t the first time we’ve rung the alarm bell, and it is no fun to beg for help. We are at a crossroads, where the vibrancy of the ecosystem and enthusiasm around Matrix is not reflected in the support the Foundation gets, and we are at risk of losing this common resource and all it offers.

But all in all, we are optimists – we wouldn’t have begun this journey if we weren’t – and we believe that there are people out there who realise that sovereign and secure communication ranks as high on the list of today’s essential technologies as ensuring AI is safe, if not higher. So let’s spread the word, and let’s continue working on a safer and more sovereign world!

STMicro expands the STM32C0 Cortex-M0+ MCU family with STM32C051, STM32C091, and STM32C092 (with CAN FD)

20 February 2025 at 20:56
STMicro Nucleo-64 board with STM32C092RC MCU

STMicro first introduced the STM32C0 32-bit Arm Cortex-M0+ MCU family as an 8-bit MCU killer in 2023, followed by the STM32C071 adding USB FS and designed for appliances with graphical user interfaces (GUIs). The company has now added three new parts with the STM32C051, STM32C091, and STM32C092. The STM32C051 is similar to the original STM32C031 but adds more storage (64KB vs 32KB) and is offered in packages with up to 48 pins, while the STM32C09x parts offer flash densities up to 256KB in packages with up to 64 pins, and the STM32C092 also gains a CAN FD interface. The STM32C09x parts can be seen as an update to the STM32C071 where more flash memory is needed. That’s 30 new SKUs, bringing the total to 55 when different packages and flash/RAM size options are taken into account. The STM32C051 offers the same maximum amount of SRAM as the STM32C031 [...]

The post STMicro expands the STM32C0 Cortex-M0+ MCU family with STM32C051, STM32C091, and STM32C092 (with CAN FD) appeared first on CNX Software - Embedded Systems News.

Meet the Raspberry Pi team

20 February 2025 at 20:39

Over the next few months, members of the Raspberry Pi team will be popping up all over the place to talk to you about Raspberry Pi. They’ll be giving demos from across the full range of our products, from our single-board computers (including Raspberry Pi 500) and some RP2350-based solutions to our AI products, our cameras, and our latest industrial device: Raspberry Pi Compute Module 5.

At these events, you’ll be able to see how companies around the world use Raspberry Pi to support their industrial applications and discover how Raspberry Pi can help you with your own solutions. You’ll also find out about our Approved Design Partners, who can help take your product to market, and hear about the technical, product, and compliance support services we offer to industrial customers. Plus, you’ll get to meet us, which is arguably the best part.

Meet the team

Embedded World

We had a great time at Embedded World last year — just look at all those smiles

This is not the first time Raspberry Pi has been to Embedded World, and we’re very excited to be returning this year. Come meet us at stand 138 in Hall 3A at Messezentrum Nuremberg, Germany, from 11–13 March.

You can follow this link for more information, or to arrange a meeting with us.

Embedded World registration is required.

Gitex Africa

We will also be returning, for the second year in a row, to Gitex Africa in Marrakech, Morocco. Gitex is Africa’s biggest tech and startup show, so it’s fair to say we’re very much looking forward to this one too.

Come and see us at stand 3A-3 at Place Bab Jdid, Boulevard Al Yarmouk, Marrakech, from 14–16 April. You can click here for more information, or to arrange a meeting with the team.

Gitex Africa registration is required.

Hardware Pioneers

Now this one is new. For the very first time, Raspberry Pi will be attending Hardware Pioneers at the Business Design Centre in London, from 23–24 April. You’ll be able to find us at stand K7.

Click here for more information, or to arrange a meeting.

Hardware Pioneers registration is required.

Automate

We’re also headed to America to set up a stand at Automate in Detroit, Michigan, from 12–15 May. We’ll be at lucky stand number 9132 in Hall E at Huntington Place.

You know the drill: you can click here for more information or to arrange a meeting.

Automate 2025 registration is required (you should also know this by now).

Meet your (fellow) makers

Our wonderful community of makers and enthusiasts is always hosting events of their own. These are great places to meet fellow makers and learn more about Raspberry Pi — especially how our technology is being used in everyday life.

If, somehow, you can’t find the kind of event you’re looking for, you could even run one of your own! Just sign up here to learn how to submit an event and to hear about all of the support that’s available to you.

The post Meet the Raspberry Pi team appeared first on Raspberry Pi.

Event-driven architecture for modern applications

20 February 2025 at 07:00
There's a lot to manage as part of a software deployment and its maintenance. This includes virtual machine (VM) provisioning, patching, authentication and authorisation, web load-balancing, storage integration and redundancy, networking, security, backup and recovery, and auditing. On top of all that, there's managing compliance and smooth governance of all those activities. Ideally, a modern architecture can make this simple for you. It can be difficult to integrate all of the tools needed to provide these functions, and to ensure that they all seamlessly and efficiently interoperate with each other [...]

Top Python Libraries: A Comprehensive Guide

By: sandeep
20 February 2025 at 19:02

Starting with Python? Exploring its essential libraries will make your learning process much smoother. Python’s versatility comes from its wide range of libraries that help with everything from simple tasks to complex algorithms. These libraries save time, reduce complexity, and help you focus on the task rather than reinventing the wheel.

What Are Python Libraries?

A library in Python is like a toolbox filled with pre-written code that helps you complete tasks efficiently without starting from scratch. Just as a toolbox contains specialized tools for different jobs, Python libraries provide ready-made functions and methods to save you time and effort.

For instance, instead of writing your own functions for data manipulation or complex calculations, you can use NumPy or Pandas to handle these tasks instantly. It's like building a house: you wouldn't make your own hammer or screwdriver; you'd pick the right tool from your toolbox. Similarly, in Python you choose the correct library for the task, whether you're analyzing data, training a machine learning model, or developing a web application. These libraries act as pre-made solutions, allowing you to focus on solving problems and building projects efficiently.

In this blog, we’ll explore the top Python libraries every beginner should know, categorized by data science, deep learning, web development, and computer vision. Whether you’re working with data, building machine learning models, or developing web apps, this list will help you choose the right tools for your projects.

New to Python? Get structured guidance with our Free Python Course! Learn the fundamentals, best practices, and real-world applications to kickstart your programming journey. Sign up today and start coding!

Let’s dive in!

1. Data Science and Analysis Libraries

NumPy: Numerical Computing and Array Manipulation

  • What It Does: NumPy is a core library in Python for numerical computing, widely used for scientific and mathematical tasks. It introduces an object called ndarray, which stands for “n-dimensional array.” An ndarray is a special type of list in Python, but unlike regular lists, it can store data in multiple dimensions (e.g., 1D, 2D, 3D, etc.), making it incredibly powerful for working with large datasets.
  • Why It’s Important: When you’re working with large datasets or complex mathematical tasks, NumPy allows you to manipulate arrays, matrices, and tensors efficiently. It also integrates well with other libraries like Pandas and Matplotlib for data analysis and visualization.
  • Real-Life Example: If you’re building a recommendation system that needs to perform matrix operations to calculate the similarity between products, NumPy makes these operations simple and fast.

Pandas: Data Manipulation and Analysis

  • What It Does: Pandas makes working with structured data simple and efficient. It provides two key data structures:
    • DataFrame (tables with rows and columns)
    • Series (a single column of data)

With these, you can clean, analyze, and manipulate data easily—whether it’s filtering rows, modifying values, or performing calculations.

  • Why It’s Important: Pandas is great for working with tabular data like CSV files, SQL tables, and spreadsheets. It’s the go-to library when you’re dealing with data cleaning, transforming, and analyzing datasets.
  • Real-Life Example: You have a spreadsheet with sales data and want to calculate the average sales per month, filter by region, or find trends in sales over time. Pandas allows you to load the data and perform these tasks quickly and without complex code.
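The sales example above might look like this in practice (the sales figures are invented stand-ins for a real spreadsheet or CSV):

```python
import pandas as pd

# Illustrative sales data, standing in for a loaded spreadsheet.
sales = pd.DataFrame({
    "month":  ["Jan", "Jan", "Feb", "Feb"],
    "region": ["North", "South", "North", "South"],
    "amount": [100, 150, 120, 90],
})

# Average sales per month.
avg_per_month = sales.groupby("month")["amount"].mean()

# Filter to one region.
north = sales[sales["region"] == "North"]

print(avg_per_month)
print(north)
```

With a real file you would start from `pd.read_csv("sales.csv")` instead of building the DataFrame by hand.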

Matplotlib: Data Visualization

  • What It Does: Matplotlib is the go-to library for generating plots, charts, and visualizations. From bar charts to line graphs and scatter plots, Matplotlib allows you to plot data and get visual insights.
  • Why It’s Important: In data science and analytics, it’s crucial to visualize data patterns to understand the trends, outliers, and correlations better. Matplotlib helps make your data understandable and accessible through visual means.
  • Real-Life Example: If you’re tracking website traffic over time, Matplotlib helps you generate a line graph to visualize changes, identify peak traffic days, and assess trends.

Seaborn: Statistical Data Visualization

  • What It Does: Seaborn is built on top of Matplotlib and makes it easier to create beautiful, informative statistical visualizations. It integrates closely with Pandas, making it easy to visualize datasets.
  • Why It’s Important: Seaborn helps you create more advanced plots like heatmaps, pair plots, and violin plots without needing to write a lot of extra code. It’s ideal for analyzing statistical relationships in data.
  • Real-Life Example: Seaborn is perfect for visualizing correlations in a dataset, like income vs education level. Its heatmap function makes it easy to see which variables are most correlated.
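Here is a small sketch of that correlation heatmap (the education/income/commute numbers are fabricated for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Illustrative dataset: years of education vs income (in $1000s).
df = pd.DataFrame({
    "education_years": [10, 12, 14, 16, 18, 20],
    "income":          [25, 30, 42, 55, 70, 85],
    "commute_minutes": [40, 35, 45, 30, 50, 25],
})

# Correlation matrix, then a heatmap with the values annotated in place.
corr = df.corr()
ax = sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.gcf().savefig("correlations.png")
```

One glance at the heatmap shows the strong education/income correlation, while the noisy commute column stays near zero.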

Scikit-learn: Machine Learning

  • What It Does: Scikit-learn is a Python library that provides tools for data mining and machine learning. It includes a variety of algorithms for classification, regression, clustering, and more.
  • Why It’s Important: If you’re working with data-driven projects, Scikit-learn provides pre-built algorithms to analyze and build predictive models with just a few lines of code.
  • Real-Life Example: If you want to create a model to predict customer churn based on historical data, Scikit-learn has the tools to help you implement classification models and evaluate their performance.
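A toy version of that churn classifier, using synthetic data in place of real customer history (the "churn rule" here is invented so the example is self-contained):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical customer features
# (e.g. scaled monthly spend and support-ticket counts).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)  # toy churn label

# The usual workflow: split, fit, evaluate.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.2f}")
```

Swapping `LogisticRegression` for another estimator (a random forest, say) changes one line; the split/fit/evaluate pattern stays identical, which is Scikit-learn's main convenience.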

2. Deep Learning Libraries

TensorFlow: Deep Learning Framework

  • What It Does: TensorFlow is a powerful open-source library developed by Google that facilitates building and deploying machine learning and deep learning models. It provides an extensive set of tools for everything from simple linear regression models to complex neural networks.
  • Why It’s Important: TensorFlow is scalable and used for both small and large projects. It’s excellent for AI development, including building models for image recognition, natural language processing, and more.
  • Real-Life Example: TensorFlow is commonly used for creating AI models that can recognize faces in images or recommend products based on browsing behavior.
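Real image-recognition models are large, but the core TensorFlow workflow (define a model, compile, fit, predict) fits in a few lines. This sketch fits a one-neuron model to toy data for y = 2x + 1:

```python
import tensorflow as tf

# Toy training data for y = 2x + 1, a stand-in for a real dataset.
x = tf.constant([[0.0], [1.0], [2.0], [3.0]])
y = 2.0 * x + 1.0

# A single Dense unit is just a learnable line: w * x + b.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss="mse")
model.fit(x, y, epochs=200, verbose=0)

pred = float(model.predict(tf.constant([[4.0]]), verbose=0)[0, 0])
print(f"f(4) = {pred:.2f}")  # approaches 9.0 as the fit converges
```

The same Sequential/compile/fit pattern scales up to the face-recognition and recommendation models mentioned above; only the layers and the data change.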

Want to build deep learning models with TensorFlow? Get started with our Free TensorFlow Course, covering everything from Neural Networks to real-world AI applications. No prior experience is needed! Sign Up Now

PyTorch: Deep Learning Framework

  • What It Does: PyTorch is a deep learning library developed by Facebook that offers dynamic computation graphs, which makes it easier for developers to experiment with different model architectures.
  • Why It’s Important: PyTorch is more flexible and intuitive than TensorFlow, making it ideal for researchers who want to prototype quickly. It’s increasingly popular in both academia and industry.
  • Real-Life Example: PyTorch is widely used for tasks like speech recognition, language modeling, and image segmentation. It powers a lot of state-of-the-art research and is excellent for building cutting-edge AI models.
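The flexibility mentioned above comes from writing the training loop yourself. A minimal sketch, fitting a linear layer to toy data for y = 3x with plain gradient descent:

```python
import torch
import torch.nn as nn

# Toy training data for y = 3x.
x = torch.tensor([[0.0], [1.0], [2.0], [3.0]])
y = 3.0 * x

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# The explicit training loop: forward pass, loss, backward pass, update.
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    pred = model(torch.tensor([[4.0]])).item()
print(f"f(4) = {pred:.2f}")  # approaches 12.0 as training converges
```

Because the loop is ordinary Python, you can drop in prints, conditionals, or entirely different architectures mid-experiment, which is why researchers like it for prototyping.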

Learn PyTorch the right way—hands-on, practical, and beginner-friendly! In our Free PyTorch Course, you’ll build Neural Networks, Image Recognition models, and more from scratch. Sign up now and start coding! Enroll for Free

Keras: High-Level Deep Learning API

  • What It Does: Keras is a high-level deep learning API that runs on top of TensorFlow. It simplifies the process of building neural networks by abstracting away many complex operations and providing an easy interface.
  • Why It’s Important: Keras is perfect for beginners who want to get started with deep learning and quickly build prototypes without diving into the complexities of TensorFlow.
  • Real-Life Example: If you want to build a neural network for classifying images from the Fashion MNIST dataset, Keras makes it easier to do that with a simple API.
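A Fashion-MNIST-style classifier in Keras can be this short. Random arrays stand in for the real dataset so the sketch stays self-contained; swap in `keras.datasets.fashion_mnist.load_data()` for actual training:

```python
# Keras sketch of a Fashion-MNIST-style classifier.
# Fake "images" (28x28 grayscale, 10 clothing classes) keep it runnable.
import numpy as np
from tensorflow import keras

X = np.random.rand(32, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(32,))

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)

probs = model.predict(X, verbose=0)  # one probability per class, per sample
print(probs.shape)
```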

3. Web Development Libraries

Flask: Micro Web Framework

  • What It Does: Flask is a lightweight framework for building web applications in Python. It’s simple to learn, and you can use it to build basic applications and APIs.
  • Why It’s Important: Flask is great for beginners because it provides a lot of freedom and flexibility. It doesn’t force you into using predefined structures, making it easy to learn.
  • Real-Life Example: If you want to build a simple web app, like a to-do list app, Flask lets you set up routes, handle HTTP requests, and render templates with minimal code.
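A to-do app like the one mentioned above might start as a tiny Flask API. The route paths and in-memory list are illustrative choices, not a required layout:

```python
# Minimal Flask sketch: a to-do API with one route to list items and
# one to add them.
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = []  # in-memory store; a real app would use a database

@app.route("/todos", methods=["GET"])
def list_todos():
    return jsonify(todos)

@app.route("/todos", methods=["POST"])
def add_todo():
    item = request.get_json()
    todos.append(item)
    return jsonify(item), 201

# To serve locally: app.run(debug=True), then open
# http://127.0.0.1:5000/todos in a browser or API client.
```

That freedom — two decorated functions and you have a working API — is what makes Flask such a friendly first framework.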

Django: Full-Stack Web Framework

  • What It Does: Django is a high-level Python web framework designed for building large-scale web applications. It comes with everything you need out of the box, including authentication, URL routing, and database management.
  • Why It’s Important: If you’re building complex web applications, Django offers a complete solution with features like an admin panel, security tools, and database management.
  • Real-Life Example: Django is perfect for building web applications like e-commerce websites or content management systems (CMS).

4. Web Scraping

BeautifulSoup: Web Scraping and HTML Parsing

  • What It Does: BeautifulSoup is a library that makes it easy to extract data from HTML and XML files, commonly used in web scraping.
  • Why It’s Important: If you need to collect data from web pages—such as product prices, news articles, or job listings—BeautifulSoup provides a simple way to parse and navigate HTML documents.
  • Real-Life Example: If you’re collecting real-time prices for products from different e-commerce websites, BeautifulSoup can help you extract and store this data.
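Price extraction like that can be sketched with BeautifulSoup in a few lines. The HTML snippet and CSS class names below are made up for illustration; a real scraper would fetch pages with a library such as `requests` and use selectors matched to the target site:

```python
# BeautifulSoup sketch: parse an HTML snippet and pull out product
# names and prices.
from bs4 import BeautifulSoup

html = """
<div class="product"><span class="name">Keyboard</span><span class="price">$49</span></div>
<div class="product"><span class="name">Mouse</span><span class="price">$19</span></div>
"""

soup = BeautifulSoup(html, "html.parser")
products = [
    (item.select_one(".name").text, item.select_one(".price").text)
    for item in soup.select(".product")
]
print(products)
```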

5. Computer Vision Libraries

OpenCV: Computer Vision and Image Processing

  • What It Does: OpenCV (Open Source Computer Vision Library) is an open-source computer vision library that provides tools for real-time image processing, video analysis, and face recognition.
  • Why It’s Important: OpenCV is one of the most popular libraries for computer vision tasks. It’s efficient, fast, and supports a wide variety of image formats and operations.
  • Real-Life Example: If you’re creating an app that needs to detect faces in photos or videos, OpenCV will allow you to process the images, detect faces, and track them in real time.

Want a career in AI & Computer Vision? Get started with OpenCV, the world’s most widely used vision library. Our Free Course will teach you everything from image processing to real-world AI applications. Sign up now!

Summary of Libraries

| Library       | Application Domain                 | Primary Use                                                                |
| ------------- | ---------------------------------- | -------------------------------------------------------------------------- |
| NumPy         | Data Science, Scientific Computing | Numerical operations and array manipulation                                 |
| Pandas        | Data Science, Data Analysis        | Data manipulation and analysis with DataFrames                              |
| Matplotlib    | Data Science, Visualization        | Plotting graphs, charts, and visualizing data trends                        |
| Seaborn       | Data Science, Visualization        | Creating aesthetically pleasing statistical visualizations                  |
| Scikit-learn  | Machine Learning                   | Machine learning algorithms for classification, regression, and clustering  |
| TensorFlow    | Deep Learning, AI                  | Building and training deep learning models                                  |
| PyTorch       | Deep Learning, AI                  | Dynamic computation graphs for deep learning models                         |
| Keras         | Deep Learning, AI                  | High-level API for building neural networks quickly                         |
| Flask         | Web Development                    | Lightweight web framework for small to medium web apps                      |
| Django        | Web Development                    | Full-stack web framework for building large-scale applications              |
| OpenCV        | Computer Vision, Image Processing  | Image and video processing, facial recognition, object detection            |
| BeautifulSoup | Web Scraping                       | Extracting and parsing data from HTML and XML documents                     |

Conclusion

As you start your Python journey, libraries are your best friends. They provide powerful, pre-written functions that save you from having to solve common problems from scratch. Whether you’re diving into data science, deep learning, web development, or computer vision, these libraries will significantly speed up your projects and help you create sophisticated solutions.

With the libraries mentioned above, you can easily tackle data analysis, build AI-powered models, create web apps, and process images and videos with minimal effort. Learning to use these libraries is an essential step towards becoming a proficient Python developer, and they’ll open up countless possibilities for your future projects.

As you dive deeper into Python, keep experimenting with these libraries, and soon, you’ll be able to build robust and powerful applications. Happy coding!

FAQs

1. What is the difference between a Python library and a Python module?

  • A module is a single Python file that contains functions and classes.
  • A library is a collection of multiple modules that provides a broader set of functionality.

For example, NumPy is a library that contains multiple modules for numerical computing.

2. Which Python libraries are best for computer vision?

  • OpenCV – Computer vision & image processing
  • Pillow – Image manipulation and enhancement
  • TensorFlow & PyTorch – AI-based vision models
  • Tesseract-OCR – Optical character recognition (OCR)

3. Can I use multiple Python libraries together?

Yes! Most Python libraries are designed to work together. For example:

  • Pandas + Matplotlib – For analyzing and visualizing data
  • TensorFlow + OpenCV – For deep learning-based image processing

4. Are Python libraries free to use?

Yes! Most Python libraries are open-source and free to use. They are maintained by the Python community, research institutions, and tech companies. However, some enterprise versions of these libraries offer premium features.

5. What is the difference between TensorFlow and PyTorch?

  • TensorFlow is a Google-backed deep learning framework known for production-grade deployment.
  • PyTorch is an open-source framework by Meta that is popular for research and experimentation due to its dynamic computation graph.

The post Top Python Libraries: A Comprehensive Guide appeared first on OpenCV.

Windows to Linux, Set Up Full Disk Encryption on openSUSE

20 February 2025 at 19:00

Data breaches and cyber threats are becoming increasingly common, and securing your personal and professional information has never been more critical.

Users transitioning from Windows to Linux through the Upgrade to Freedom campaign can use openSUSE's tools, including full disk encryption (FDE), to protect sensitive data.

Enabling full disk encryption during installation ensures maximum security. It safeguards all data on your hard drive by encrypting it, making it unreadable without a decryption key. This level of protection is vital for preventing unauthorized access if your laptop or desktop is lost or stolen.

FDE with openSUSE is both user-friendly and powerful, and setting it up with advanced security features is straightforward.

For users seeking feature parity with Windows BitLocker, openSUSE offers Full Disk Encryption (FDE) secured by a TPM2 chip or a FIDO2 key. This advanced setup enhances security by storing encryption keys within the TPM, which ensures that only a trusted system configuration can unlock the disk. For a step-by-step guide on enabling this feature, read the Quickstart in Full Disk Encryption with TPM and YaST2 article.

Here’s a step-by-step guide to set up FDE on your system:

Step 1: Download and Boot openSUSE

  • Visit get.opensuse.org to download the latest version of openSUSE Leap or Tumbleweed.
  • Create a bootable USB drive using tools like balenaEtcher or another image writer.
  • Restart your computer and boot from the USB drive to begin the installation process.

Step 2: Configure Encryption During Installation

  • Once the installer starts, select your preferred language and keyboard layout.
  • In the partitioning setup, choose Guided Setup with Encrypted LVM.
  • Set a strong passphrase for encryption. This passphrase will be required every time the system boots.
  • Use a mix of upper and lower case letters, numbers, and special characters for optimal security.
  • Proceed with the installation as directed by the installer.

Step 3: Verify Encryption Settings

After installation is complete and the system restarts, you'll be prompted to enter your encryption passphrase. Once entered, the system decrypts the disk and boots normally. To confirm encryption is active:

  • Open a terminal or console.
  • Run the command lsblk -f to verify that your disk is listed with the encryption type (e.g., crypto_LUKS).

The output might look something similar to the following:

NAME      FSTYPE      FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1    ext4        1.0         4a83v1e1-e8d2-4e38-815d-fd79j194f5       25G    30% /
└─sda2    swap        1           d2e18c23-9w4b-4d26-p1s2-cm2sd64tx9de
sdb
└─sdb1    crypto_LUKS 1           10bb2vca-81r4-418b-a2c4-e0f6585f2c7a
  └─luks  ext4        1.0         8a9wka1b-7e9c-1a1f-a9f7-3c82x1e4e87f    150G    10% /mnt/data

Step 4: Regular Backups

While FDE protects your data, it does not prevent data loss from hardware failure or accidental deletion. Regularly back up your data to an encrypted external drive or a secure cloud service to ensure its safety.

Post-Installation Encryption

If you want to encrypt an existing partition after installation, visit the openSUSE wiki page about encryption.

Enhanced Security for Modern Challenges

Setting up full disk encryption on openSUSE not only protects your data but also aligns with the Upgrade to Freedom campaign’s mission of empowering users to maintain control over their hardware and privacy. By combining open-source software with good security practices, openSUSE ensures that users can confidently embrace a more secure digital future.

For additional guidance and community support, visit the openSUSE forums or join discussions at your local Linux user group.

Please be aware that some hardware configurations may require additional drivers or BIOS settings adjustments for full disk encryption to function properly. Check your device’s compatibility and update your firmware before proceeding.

Solar-powered LLM over Meshtastic solution may provide life-saving instructions during disasters and emergencies

20 February 2025 at 16:36
Solar LLM over Meshtastic

People are trying to run LLMs on all sorts of low-end hardware, often with limited usefulness, and when I saw a solar LLM over Meshtastic demo on X, I first laughed. I did not see the reason for it, as LoRa hardware is usually really low-end, with the open-source Meshtastic firmware typically used for off-grid messaging and GPS location sharing. But after thinking more about it, it could prove useful for receiving information on mobile devices during disasters, where power and internet connectivity cannot be taken for granted. Let’s check Colonel Panic’s solution first. The short post only mentions it’s a solar LLM over Meshtastic using M5Stack hardware. On the left, we must have a power bank charged over USB (through a USB solar panel?) with two USB outputs powering a controller and a board on the right. The main controller with a small display and enclosure is an ESP32-powered [...]

The post Solar-powered LLM over Meshtastic solution may provide life-saving instructions during disasters and emergencies appeared first on CNX Software - Embedded Systems News.

Switching to Curated Room Directories

As of yesterday, Matrix.org is using a curated room directory. We’re paring down the rooms that are visible to a collection of moderated spaces and rooms. This is an intervention against abuse on the network, and a continuation of work that we started in May 2024.

In early 2024 we noticed an uptick in users creating rooms to share harmful content. After a few iterations to identify these rooms and shut them down, we realised we needed to change tack. We landed on first reducing the discoverability and reach of these rooms - after all, no other encrypted messaging platform provides a room directory service, and unfortunately it can clearly serve as a mechanism to amplify abuse. So, in May 2024 we froze the room directory. Matrix.org users were no longer permitted to publish their rooms to the room directory. We also did some manual intervention to reduce the size of the directory as a whole, and remove harmful rooms ahead of blocking them.

This intervention aimed at three targets:

  • Lowering the risk of users discovering harmful rooms
  • Stopping the amplification of abuse via an under-moderated room directory
  • Reducing the risk for Matrix client developers for app store reviews

In truth, the way room discovery works needs some care and attention. Room directories pre-date Spaces, and some of the assumptions don't hold up to real world use. From the freeze, and the months since, we've learned a few things. First, the criteria for appearing in a server's room directory in the first place are way too broad. Also, abuse doesn't happen in a vacuum. Some rooms that were fine at the time of the freeze are not now. There are a few different causes for that, including room moderators losing interest. We looked for criteria to give us confidence in removing the freeze, and we hit all the edge cases that make safety work so challenging.

Those lessons led to a realization. One of the values of the Foundation is pragmatism, rather than perfection. We weren't living up to that value, so we decided to change. The plan became simpler: move to a curated list of rooms, with a rough first pass of criteria for inclusion. In parallel, we asked the Governing Board to come up with a process for adding rooms in the future, and to refine the criteria. We've completed the first part of the plan today.

🔗What comes next

There's plenty of scope for refinement here, and we've identified a few places where we can get started:

  • The Governing Board will publish criteria for inclusion in the Matrix.org room directory. They'll also tell you how you can suggest rooms and spaces for the directory.
  • We're going to recommend safer defaults. Servers should not let users publish rooms unless there are appropriate filtering and moderation tools in place, and people to wield them. For instance, Element have made this change to Synapse in PR18175.
  • We're exploring discovery as a topic, including removing the room directory API. One promising idea is to use Spaces: servers could publish a default space, with rooms curated by the server admin. Our recent post includes some other projects we have in this area: https://matrix.org/blog/2025/02/building-a-safer-matrix/

🔗FAQs

What criteria did you use for this first pass?
We used a rough rubric: Is the room already in the room directory, and does the Foundation already protect the room with the Matrix.org Mjolnir? From there, we extended to well-moderated rooms and spaces that fit one of the following:

  • Matrix client and server implementations (e.g. FluffyChat, Dendrite)
  • Matrix community projects (e.g. t2bot.io)
  • Matrix homeserver spaces with a solid safety record (e.g. tchncs.de, envs.net)

Why isn't the Office of the Foundation in the directory?
It didn't exist before May 2024, so the Office has never been in the directory. We're going to add it in the next few days, with a couple of other examples that fit our rough rubric.

How do I add my room/space to the list?
At the moment, you can't. The Governing Board will publish the criteria and the flow for getting on the list.

What do I do if I find a harmful room in the current directory?
You shouldn't, but if a room does have harmful content, check out How you can help

Raspberry Pi Pico SDK 2.1.1 release adds 200MHz clock option for RP2040, various Waveshare boards, new code samples

20 February 2025 at 11:21
Raspberry Pi RP2040 200 MHz

The Raspberry Pi Pico SDK 2.1.1 has just been released with official 200 MHz clock support for the Raspberry Pi RP2040 MCU, several new boards mostly from Waveshare, but also one from Sparkfun, as well as new code samples, and other small changes. Raspberry Pi RP2040 gets official 200 MHz clock support When the Raspberry Pi RP2040 was first released along with Raspberry Pi Pico in 2021, we were told the default frequency was 48 MHz, but the microcontroller could also run up to 133 MHz. Eventually, I think the Cortex-M0+ cores were clocked at 125 MHz by default, although some projects (e.g. PicoDVI) would boost the frequency up to 252 MHz. Frequencies higher than 133 MHz were not officially supported so far, but the Pico SDK 2.1.1 changes that since the Raspberry Pi RP2040 has now been certified to run at a system clock of 200 MHz when using a [...]
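As a rough sketch of what opting in looks like in firmware (it only builds inside a Pico SDK project, and the SDK also exposes build-time clock configuration such as `SYS_CLK_KHZ` as an alternative to a runtime call):

```c
// Sketch: requesting the newly certified 200 MHz system clock at runtime
// via the Pico SDK's set_sys_clock_khz() helper.
#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/clocks.h"

int main(void) {
    // Ask for 200 MHz; the second argument makes the call halt on
    // failure instead of returning false.
    set_sys_clock_khz(200000, true);

    stdio_init_all();
    while (true) {
        // Report the actual system clock frequency over stdio
        printf("sys clock: %lu Hz\n", (unsigned long)clock_get_hz(clk_sys));
        sleep_ms(1000);
    }
}
```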

The post Raspberry Pi Pico SDK 2.1.1 release adds 200MHz clock option for RP2040, various Waveshare boards, new code samples appeared first on CNX Software - Embedded Systems News.
