Today — 22 November 2024 (FOSS)

The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects

22 November 2024 at 18:02

We are enormously proud to reveal The Official Raspberry Pi Camera Module Guide (2nd edition), which is out now. David Plowman, a Raspberry Pi engineer specialising in camera software, algorithms, and image-processing hardware, authored this official guide.

This detailed book walks you through all the different types of Camera Module hardware, including Raspberry Pi Camera Module 3, the High Quality Camera, the Global Shutter Camera, and older models, and shows you how to attach them to a Raspberry Pi and integrate vision technology into your projects. This edition also covers new code libraries, including the latest Picamera2 Python library and the rpicam command-line applications, as well as integration with the new Raspberry Pi AI Kit.

Camera Guide - Getting Started page preview

Save time with our starter guide

Our starter guide has clear diagrams explaining how to connect various Camera Modules to the new Raspberry Pi boards. It also explains how to fit custom lenses to HQ and GS Camera Modules using C-CS adaptors. Everything is outlined in step-by-step tutorials with diagrams and photographs, making it quick and easy to get your camera up and running.

Camera Guide - connecting Raspberry Pi pages

Test your camera properly

You’ll discover how to connect your camera to a Raspberry Pi and test it using the new rpicam command-line applications, which replace the older libcamera applications. The guide also covers the new Picamera2 Python library, for integrating Camera Module technology with your software.
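For a taste of what testing with the rpicam applications looks like, here is an illustrative first session on Raspberry Pi OS. These are the standard rpicam-apps tools, but they require a connected camera, so treat the session as a sketch rather than verified output:

```shell
# Show a short preview window to confirm the camera is detected
rpicam-hello

# Capture a still image to a JPEG file
rpicam-still -o test.jpg

# Record 10 seconds (10000 ms) of H.264 video
rpicam-vid -t 10000 -o test.h264
```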

Camera Guide - Raw images and Camera Tuning pages

Get more from your images

Discover detailed information about how the Camera Module works, and how to get the most from your images. You’ll learn how to use RAW formats and tuning files; HDR modes and preview windows; custom resolutions, encoders, and file formats; target exposure and autofocus; and shutter speed and gain, enabling you to get the very best out of your imaging hardware.

Camera Guide - Get started with Raspberry Pi AI kit pages

Build smarter projects with AI Kit integration

A new chapter covers the integration of the AI Kit with Raspberry Pi Camera Modules to create smart imaging applications. This adds neural processing to your projects, enabling fast inference of objects captured by the camera.

Camera Guide - Time-lapse capture pages

Boost your skills with pre-built projects

The Official Raspberry Pi Camera Module Guide is packed with projects. Take selfies and stop-motion videos, experiment with high-speed and time-lapse photography, set up a security camera and smart door, build a bird box and wildlife camera trap, take your camera underwater, and much more! All of the code is tested and updated for the latest Raspberry Pi OS, and is available on GitHub for inspection.

Click here to pick up your copy of The Official Raspberry Pi Camera Module Guide (2nd edition).

The post The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects appeared first on Raspberry Pi.

Arm Tech Symposia: AI Technology Transformation Requires Unprecedented Ecosystem Collaborations

22 November 2024 at 15:57

The Arm Tech Symposia 2024 events in China, Japan, South Korea and Taiwan were some of the biggest and best-attended events Arm has ever held in Asia. The scale of the events was matched by the significance of the moment facing the technology industry.

As Chris Bergey, SVP and GM of Arm’s Client Line of Business, said in the Tech Symposia keynote presentation in Taiwan: “This is the most important moment in the history of technology.”  

There are significant opportunities for AI to transform billions of lives around the world, but only if the ecosystem works together like never before.

Chris Bergey, SVP and GM of the Arm Client Line of Business, welcomes attendees to Arm Tech Symposia 2024

A re-thinking of silicon

At the heart of these ecosystem collaborations is a broad re-think of how the industry approaches the development and deployment of technologies. This is particularly applicable to the semiconductor industry, with silicon no longer a series of unrelated components but instead becoming “the new motherboard” to meet the demands of AI.

This means multiple components co-existing within the same package, providing better latency, increased bandwidth and more power efficiency.

Silicon technologies are already transforming the everyday lives of people worldwide, enabling innovative AI features on smartphones, such as real-time language translation and text summarization, to name a few.

As James McNiven, VP of Product Management for Arm’s Client Line of Business, stated in the South Korea Tech Symposia keynote: “AI is about making our future better. The potential impact of AI is transformative.”

The importance of the Arm Compute Platform

The Arm Compute Platform is playing a significant role in the growth of AI. It combines hardware and software for best-in-class technology solutions across a wide range of markets, whether that’s AI smartphones, software-defined vehicles or data centers.

This is supported by the world’s largest software ecosystem, with more than 20 million software developers writing software for Arm, on Arm. In fact, all the Tech Symposia keynotes made the following statement: “We know that hardware is nothing without software.”

Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, outlines the software benefits of the Arm Compute Platform

How software “drives the technology flywheel”

Software has always been an integral part of the Arm Compute Platform, with Arm delivering the ideal platform for developers to “make their dreams (applications) a reality” through three key ways.

Firstly, Arm’s consistent compute platform touches 100 percent of the world’s connected population. This means developers can “write once and deploy everywhere.”

The foundation of the platform is the Arm architecture and its continuous evolution through the regular introduction of new features and instruction-sets that accelerate key workloads to benefit developers and the end-user.

SVE2 is one feature that is present across AI-enabled flagship smartphones built on the new MediaTek Dimensity 9400 chipset. It incorporates vector instructions to improve video and image processing capabilities, leading to better quality photos and longer-lasting video.

The Arm investment into AI architectural features at Arm Tech Symposia Shanghai

Secondly, the platform provides acceleration capabilities that deliver optimized performance for developers’ applications. This is not just about high-end accelerator chips, but about having access to AI-enabled software to unlock performance.

One example of this is Arm Kleidi, which seamlessly integrates with leading frameworks to ensure AI workloads run best on the Arm CPU. Developers can then unlock this accelerated performance with no additional work required.

At the Arm Tech Symposia Japan event, Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, said: “We are committed to abstracting away the hardware from the developer, so they can focus on creating world changing applications without having to worry about any technical complexities around performance or integration.”

This means that when new versions of Meta’s Llama, Google AI Edge’s MediaPipe and Tencent’s Hunyuan come online, developers can be confident that no performance is being left on the table on the Arm CPU.

Kleidi integrations are set to accelerate billions of AI workloads on the Arm Compute Platform, with the recent PyTorch integration leading to 2.5x faster time-to-first token on Arm-based AWS Graviton processors when running the Llama 3 large language model (LLM).

James McNiven, VP of Product Management for Arm’s Client Line of Business, discusses Arm Kleidi

Finally, developers need a platform that is easy to access and use. Arm has made this a reality through significant software investments that ensure developing on the Arm Compute Platform is a simplified, seamless experience that “just works.”

As each Arm Tech Symposia keynote speaker summarized: “The power of Arm and our ecosystem is that we deliver what developers need to simplify the process, accelerate time-to-market, save costs and optimize performance.”

The role of the Arm ecosystem

The importance of the Arm ecosystem in making new technologies a reality was highlighted throughout the keynote presentations. This is especially true for new silicon designs that require a combination of core expertise across many different areas.

As Dermot O’Driscoll, VP, Product Management for Arm’s Infrastructure Line of Business, said at the Arm Tech Symposia event in Shanghai, China: “No one company will be able to cover every single level of design and integration alone.”

Dermot O’Driscoll, VP, Product Management for Arm’s Infrastructure Line of Business, speaks at the Arm Tech Symposia event in Shanghai, China

Enabling these ecosystem collaborations is a core aim of Arm Total Design, which helps the ecosystem accelerate the development and deployment of silicon solutions that are more effective, efficient and performant. The program is growing worldwide, with the number of members doubling since its launch in late 2023. Each Arm Total Design partner offers something unique that accelerates future silicon designs, particularly those built on Arm Neoverse Compute Subsystems (CSS).

One company that exemplifies the spirit and value of Arm Total Design is South Korea-based Rebellions. Recently, it announced the development of a new large-scale AI platform, the REBEL AI platform, to drive power efficiency for AI workloads. Built on Arm Neoverse V3 CSS, the platform uses a 2nm process node and packaging from Samsung Foundry and leverages design services from ADtechnology. This demonstrates true ecosystem collaboration, with different companies offering different types of highly valuable expertise.

Dermot O’Driscoll said: “The AI era requires custom silicon, and it’s only made possible because everyone in this ecosystem is working together, lifting each other up and making it possible to quickly and efficiently meet the rising demands of AI.”

Chris Bergey at the Arm Tech Symposia event in Taiwan talks about the new chiplet ecosystem being enabled on Arm

Arm Total Design is also helping to enable a new thriving chiplet ecosystem that already involves over 50 leading technology partners who are working with Arm on the Chiplet System Architecture (CSA). This is creating the framework for standards that will enable a thriving chiplet market, which is key to meeting ongoing silicon design and compute challenges in the age of AI.

The journey to 100 billion Arm-based devices running AI

All the keynote speakers closed their Arm Tech Symposia keynotes by reinforcing the commitment that Arm CEO Rene Haas made at COMPUTEX in June 2024: 100 billion Arm-based devices running AI by the end of 2025.

James McNiven closes the Arm Tech Symposia keynote in Shenzhen

However, this goal is only possible if ecosystem partners from every corner of the technology industry work together like never before. Fortunately, as explained in all the keynotes, there are already many examples of this work in action.

The Arm Compute Platform sits at the center of these ecosystem collaborations, providing the technology foundation for AI that will help to transform billions of lives around the world.

The post Arm Tech Symposia: AI Technology Transformation Requires Unprecedented Ecosystem Collaborations appeared first on Arm Newsroom.

Mercury X1 wheeled humanoid robot combines NVIDIA Jetson Xavier NX AI controller and ESP32 motor control boards

22 November 2024 at 14:34
Mercury X1 wheeled humanoid robot

Elephant Robotics Mercury X1 is a 1.2-meter-high wheeled humanoid robot with two robotic arms that uses an NVIDIA Jetson Xavier NX as its main controller and ESP32 microcontrollers for motor control. It is suitable for research, education, service, entertainment, and remote operation. The robot offers 19 degrees of freedom, can lift payloads of up to 1 kg, works for up to 8 hours on a charge, and travels at up to 1.2 m/s, or about 4.3 km/h. It is based on the company’s Mercury B1 dual-arm robot combined with a high-performance mobile base.

Mercury X1 specifications:

  • Main controller – NVIDIA Jetson Xavier NX
  • CPU – 6-core NVIDIA Carmel ARM v8.2 64-bit CPU with 6MB L2 + 4MB L3 caches
  • GPU – 384-core NVIDIA Volta GPU with 48 Tensor Cores
  • AI accelerators – 2x NVDLA deep learning accelerators delivering up to 21 TOPS at 15 Watts
  • System Memory – 8 GB 128-bit LPDDR4x @ 51.2GB/s
  • Storage – 16 [...]

The post Mercury X1 wheeled humanoid robot combines NVIDIA Jetson Xavier NX AI controller and ESP32 motor control boards appeared first on CNX Software - Embedded Systems News.

(D241122) godns webhook coturn and dynamic IP address on FreeBSD

22 November 2024 at 13:52

This is for FreeBSD. It may not work on Linux, because sh and sed on FreeBSD are not the same as on Linux (bash and GNU sed).

We're running a STUN/TURN server. It works well, but we don't have a static IP address. Every time we get a new IP address, we need to change the coturn config and restart the service.

One way to handle this is a cron job that runs a shell script every 5 minutes to check the public IP, and restarts the service if the public IP address has changed.

#!/bin/bash

# Linux only, doesn't work on FreeBSD (GNU sed, systemd)
current_external_ip_config=$(grep "^external-ip" /etc/turnserver.conf | cut -d'=' -f2)
current_external_ip=$(dig +short <MY_DOMAIN>)

if [[ -n "$current_external_ip" ]] && [[ "$current_external_ip_config" != "$current_external_ip" ]]; then
        sed -i "/^external-ip=/ c external-ip=$current_external_ip" /etc/turnserver.conf
        systemctl restart coturn
fi

ref: set up with dynamic ip address
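Registered via crontab, the 5-minute check described above might look like this (the script path is an assumption for illustration):

```shell
# m h dom mon dow  command -- run the public-IP check every 5 minutes
*/5 * * * * /usr/local/bin/check-external-ip.sh
```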

Since we're already running a godns daemon to update our IP with Cloudflare's DNS servers, we also want godns to send a webhook to the coturn server whenever it updates the IP at Cloudflare. That should be more efficient.

So this is what we have:

godns

$ cat /etc/godns/config.json 
{
  "provider": "Cloudflare",
  "login_token": "YOUR_TOKEN",
  "domains": [
    {
      "domain_name": "yourdomain.com",
      "sub_domains": [
        "@"
      ]
    }
  ],
  "ip_urls": [
    "https://api.ipify.org"
  ],
  "ip_type": "IPv4",
  "interval": 300,
  "resolver": "8.8.8.8",
  "webhook": {
    "enabled": true,
    "url": "http://your.coturn.webhook.endpoint:9000/hooks/godns",
    "request_body": "{ \"domain\": \"{{.Domain}}\", \"ip\": \"{{.CurrentIP}}\", \"ip_type\": \"{{.IPType}}\" }"
  }
}

webhook

$ cat /usr/local/etc/webhook.yaml
---
# See https://github.com/adnanh/webhook/wiki for further information on this
# file and its options.  Instead of YAML, you can also define your
# configuration as JSON.  We've picked YAML for these examples because it
# supports comments, whereas JSON does not.
#
# In the default configuration, webhook runs as user nobody.  Depending on
# the actions you want your webhooks to take, you might want to run it as
# user root.  Set the rc.conf(5) variable webhook_user to the desired user,
# and restart webhook.

# An example for a simple webhook you can call from a browser or with
# wget(1) or curl(1):
#   curl -v 'localhost:9000/hooks/samplewebhook?secret=geheim'
- id: godns
  execute-command: "/usr/local/etc/godns.sh"
  command-working-directory: "/usr/local/etc"
  pass-arguments-to-command:
  - source: payload
    name: domain
  - source: payload
    name: ip
  - source: payload
    name: ip_type
  trigger-rule:
    and:
      - match:
          type: value
          value: "your.domain.com"
          parameter:
            source: payload
            name: domain

shell script

$ cat /usr/local/etc/godns.sh
#!/bin/sh

# write ip log to a file
now="$(date +'%y%m%d%H%M%S%N')"
echo "$now" "$1" "$2" "$3" >> godns.txt

# restart coturn when the ip has changed
turnserver_config="/usr/local/etc/turnserver.conf"
current_external_ip_config=$(grep "^external-ip" "$turnserver_config" | cut -d'=' -f2)
current_external_ip_webhook="$2"

if [ -n "$current_external_ip_webhook" ] && [ "$current_external_ip_config" != "$current_external_ip_webhook" ]; then
        sed -i .old -e "s/external-ip=$current_external_ip_config/external-ip=$current_external_ip_webhook/g" "$turnserver_config"
        service turnserver restart
fi
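The compare-then-substitute logic at the heart of both scripts can be tried safely on a throwaway file before pointing it at a real turnserver.conf. This sketch uses portable POSIX sh and illustrative documentation addresses:

```shell
#!/bin/sh
# Build a fake config with an old external IP
tmpconf=$(mktemp)
echo "external-ip=203.0.113.1" > "$tmpconf"

old_ip=$(grep "^external-ip" "$tmpconf" | cut -d'=' -f2)
new_ip="203.0.113.2"

# Only rewrite when the new IP is non-empty and actually different
if [ -n "$new_ip" ] && [ "$old_ip" != "$new_ip" ]; then
    sed "s/external-ip=$old_ip/external-ip=$new_ip/" "$tmpconf" > "$tmpconf.new" &&
        mv "$tmpconf.new" "$tmpconf"
fi

result=$(cat "$tmpconf")
echo "$result"    # external-ip=203.0.113.2
rm -f "$tmpconf"
```

Redirecting sed output to a new file and renaming it sidesteps the GNU/BSD `sed -i` incompatibility entirely.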

This setup may not be robust enough: if something happens and godns cannot deliver the webhook to the coturn server, the config won't be updated. But for now we stick with it.

ASUSTOR Flashstor Gen2 NAS features AMD Ryzen Embedded V3C14, 10GbE networking, up to 12x NVMe SSD sockets

22 November 2024 at 10:25
AMD Ryzen V3C14 NAS

ASUSTOR Flashstor 6 Gen2 and Flashstor 12 Pro Gen2 are NAS systems based on the AMD Ryzen Embedded V3C14 quad-core processor, with up to two 10GbE RJ45 ports and room for up to 6 or 12 M.2 NVMe SSDs respectively. The Flashstor Gen2 models are updates to the ASUSTOR Flashstor NAS family launched last year with similar specifications, including 10GbE and up to 12 M.2 SSDs, but based on a relatively low-end Intel Celeron N5105 quad-core Jasper Lake processor. The new Gen2 NAS family features a more powerful AMD Ryzen Embedded V3C14 SoC, support for up to 64GB of ECC RAM, and USB4 ports. The downside is that it lacks video output, so it can’t be used for 4K/8K video consumption like its predecessor.

Flashstor Gen2 NAS specifications:

  • SoC – AMD Ryzen Embedded V3C14 quad-core/8-thread processor @ 2.3/3.8GHz; TDP: 15W
  • System Memory
    • Flashstor 6 Gen2 (FS6806X) – 8 GB DDR5-4800
    • Flashstor 12 Pro [...]

The post ASUSTOR Flashstor Gen2 NAS features AMD Ryzen Embedded V3C14, 10GbE networking, up to 12x NVMe SSD sockets appeared first on CNX Software - Embedded Systems News.

Important Announcement: Free Shipping Policy Adjustment and Service Enhancements

By: Rachel
22 November 2024 at 09:48

Thank you for your continued support! To further enhance your shopping experience, we are rolling out a series of new features designed to provide more efficient and localized services. Additionally, we will be updating our current free shipping policy. Please take note of the following important updates:


Recent Logistics Enhancements

Between June and October 2024, we implemented several key logistics upgrades to enhance service quality and lay the groundwork for upcoming policy adjustments:

1. Expanded Shipping Options

  • US Warehouse: Added UPS 2-Day, FedEx, and UPS Ground for faster shipping choices.
  • CN Warehouse: Introduced Airtransport Direct Line small package service, reducing delivery times from 20-35 days to just 7-10 days.

2. Optimized Small Parcel Shipping and Cost Control

  • Adjusted packaging specifications for CN warehouse shipments, significantly lowering costs while improving shipping efficiency.

3. Accelerated Overall Delivery Times

  • Streamlined export customs clearance from Shenzhen and synchronized handoffs with European and American couriers, reducing delivery times to just 3.5 days.

Enhanced Local Services for a Better Shopping Experience

To meet the diverse needs of our global users, we’ve implemented several improvements in local purchasing, logistics, and tax services:

1. Local Warehouse Pre-Order

  • Launch Date: Already Live
  • Highlights: Pre-order popular products from our US Warehouse and DE Warehouse. If immediate stock is needed, you can switch to the CN Warehouse for purchase.

2. Enhanced VAT Services for EU Customers

  • Launch Date: Already Live
  • Highlights: New VAT ID verification feature allows EU customers to shop tax-free with a valid VAT ID.

3. US Warehouse Sales Tax Implementation

  • Launch Date: January 1, 2025
  • Highlights: Automatic calculation of sales tax to comply with US local tax regulations.

Free Shipping Policy Adjustment

Starting December 31, 2024, our current free shipping policy (CN warehouse orders over $150, US & DE warehouse orders over $100) will no longer apply.

We understand that this change may cause some inconvenience in the short term. However, our aim is to offer more flexible and efficient shipping options, ensuring your shopping experience is more personalized and seamless.


Listening to Your Suggestions for Continuous Improvement

We understand that excellent logistics service stems from listening to every customer’s needs. During the optimization process, we received valuable suggestions, such as:

  • Adding a local warehouse in Australia to provide faster delivery for customers in that region.
  • Improving packaging designs to enhance protection during transit.
  • Supporting flexible delivery schedules, allowing customers to choose delivery times that work best for them.

We welcome your continued input! Starting today, submit your feedback via our Feedback Form, and receive coupon rewards for all adopted suggestions.


Important Reminder: Free Shipping Policy End Date

  • Current free shipping policy will officially end on December 31, 2024.
  • Plan your purchases in advance to enjoy the remaining free shipping benefits!

We are also working on future logistics enhancements and may introduce region-specific free shipping or special holiday promotions, so stay tuned!


Thank You for Your Support

Your trust and support inspire Seeed Studio to keep innovating. We remain focused on improving localized services, listening to your needs, and delivering a more convenient and efficient shopping experience.

If you have any questions, please don’t hesitate to contact our customer support team. Together, let’s move towards a smarter, more efficient future!

The post Important Announcement: Free Shipping Policy Adjustment and Service Enhancements appeared first on Latest Open Tech From Seeed.

Nice File Performance Optimizations Coming With Linux 6.13

22 November 2024 at 09:02
In addition to the pull requests managed by Microsoft engineer Christian Brauner for VFS untorn writes for atomic writes with XFS and EXT4, Tmpfs case insensitive file/folder support, new Rust file abstractions, and the renewed multi-grain timestamps work, another interesting Linux 6.13 pull submitted by Brauner revolves around VFS file enhancements...

Friday Five — November 22, 2024

22 November 2024 at 07:00
  • Red Hat Enterprise Linux AI Brings Greater Generative AI Choice to Microsoft Azure: RHEL AI expands the ability of organizations to streamline AI model development and deployment on Microsoft Azure to fast-track AI innovation in the cloud. Learn more
  • Technically Speaking | How open source can help with AI transparency: Explore the challenges of transparency in AI and how open source development processes can help create a more open and accessible future for AI. Learn more
  • ZDNet - Red Hat's new OpenShift delivers AI, edge and security enhancements: Red Hat introduces new capabilities for Red Hat O [...]

NVIDIA JetPack 6.1 Boosts Performance and Security through Camera Stack Optimizations and Introduction of Firmware TPM

22 November 2024 at 05:01
Connected icons show the workflow.

NVIDIA JetPack has continuously evolved to offer cutting-edge software tailored to the growing needs of edge AI and robotic developers. With each release, JetPack has enhanced its performance, introduced new features, and optimized existing tools to deliver increased value to its users. This means that your existing Jetson Orin-based products experience performance optimizations by upgrading to…

Source

Mesa 24.3 Released With Many Open-Source Vulkan Driver Improvements

22 November 2024 at 00:20
Mesa 24.3 has managed to make it out today, one week ahead of the previous release plans due to the lack of any major blocker bugs appearing. Mesa 24.3 has a lot of new feature work on the contained open-source Vulkan drivers as well as evolutionary improvements to their OpenGL drivers and other user-space 3D driver code...

Using Python with virtual environments | The MagPi #148

22 November 2024 at 00:17

Raspberry Pi OS comes with Python pre-installed, and you need to use its virtual environments to install packages. The latest issue of The MagPi, out today, features this handy tutorial, penned by our documentation lead Nate Contino, to get you started.

Raspberry Pi OS comes with Python 3 pre-installed. Interfering with the system Python installation can cause problems for your operating system. When you install third-party Python libraries, always use the correct package-management tools.

On Linux, you can install Python dependencies in two ways:

  • use apt to install pre-configured system packages
  • use pip to install libraries using Python’s dependency manager in a virtual environment

It is possible to create virtual environments inside Thonny as well as from the command line

Install Python packages using apt

Packages installed via apt are packaged specifically for Raspberry Pi OS. These packages usually come pre-compiled, so they install faster. Because apt manages dependencies for all packages, installing with this method includes all of the sub-dependencies needed to run the package. And apt ensures that you don’t break other packages if you uninstall.

For instance, to install the Python 3 library that supports the Raspberry Pi Build HAT, run the following command:

$ sudo apt install python3-build-hat

To find Python packages distributed with apt, use apt search. In most cases, Python packages use the prefix python- or python3-: for instance, you can find the numpy package under the name python3-numpy.

Install Python libraries using pip

In older versions of Raspberry Pi OS, you could install libraries directly into the system version of Python using pip. Since Raspberry Pi OS Bookworm, users cannot install libraries directly into the system version of Python.

Attempting to install packages with pip causes an error in Raspberry Pi OS Bookworm

Instead, install libraries into a virtual environment (venv). To install a library at the system level for all users, install it with apt.

Attempting to install a Python package system-wide outputs an error similar to the following:

$ pip install buildhat
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    For more information visit http://rptl.io/venv

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Python users have long dealt with conflicts between OS package managers like apt and Python-specific package management tools like pip. These conflicts include both Python-level API incompatibilities and conflicts over file ownership.

Starting in Raspberry Pi OS Bookworm, packages installed via pip must be installed into a Python virtual environment (venv). A virtual environment is a container where you can safely install third-party modules so they won’t interfere with your system Python.
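The `python -m venv` command is a thin wrapper around the standard-library `venv` module, so an environment can also be created programmatically. A minimal sketch, using a throwaway temporary directory so nothing is left behind:

```python
import os
import tempfile
import venv

# Create a disposable virtual environment (equivalent to `python -m venv <dir>`;
# with_pip=False skips bootstrapping pip, which keeps the example fast).
with tempfile.TemporaryDirectory() as tmp:
    env_dir = os.path.join(tmp, "env")
    venv.create(env_dir, with_pip=False)
    # Every venv gets a pyvenv.cfg marker file at its root
    created = os.path.exists(os.path.join(env_dir, "pyvenv.cfg"))
    print(created)  # → True
```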

Use pip with virtual environments

To use a virtual environment, create a container to store the environment. There are several ways you can do this depending on how you want to work with Python:

per-project environments

Create a virtual environment in a project folder to install packages local to that project

Many users create separate virtual environments for each Python project. Locate the virtual environment in the root folder of each project, typically with a shared name like env. Run the following command from the root folder of each project to create a virtual environment configuration folder:

$ python -m venv env

Before you work on a project, run the following command from the root of the project to start using the virtual environment:

$ source env/bin/activate

You should then see a prompt similar to the following:

(env) $

When you finish working on a project, run the following command from any directory to leave the virtual environment:

$ deactivate

per-user environments

Instead of creating a virtual environment for each of your Python projects, you can create a single virtual environment for your user account. Activate that virtual environment before running any of your Python code. This approach can be more convenient for workflows that share many libraries across projects.

When creating a virtual environment for multiple projects across an entire user account, consider locating the virtual environment configuration files in your home directory. Store your configuration in a folder whose name begins with a period to hide the folder by default, preventing it from cluttering your home folder.

Add a virtual environment to your home directory to use it in multiple projects and share the packages

Use the following command to create a virtual environment in a hidden folder in the current user’s home directory:

$ python -m venv ~/.env

Run the following command from any directory to start using the virtual environment:

$ source ~/.env/bin/activate

You should then see a prompt similar to the following:

(.env) $

To leave the virtual environment, run the following command from any directory:

$ deactivate

Create a virtual environment

Run the following command to create a virtual environment configuration folder, replacing <env-name> with the name you would like to use for the virtual environment (e.g. env):

$ python -m venv <env-name>

Enter a virtual environment

Then, execute the bin/activate script in the virtual environment configuration folder to enter the virtual environment:

$ source <env-name>/bin/activate

You should then see a prompt similar to the following:

(<env-name>) $

The (<env-name>) command prompt prefix indicates that the current terminal session is in a virtual environment named <env-name>.

To check that you’re in a virtual environment, use pip list to view the list of installed packages:

(<env-name>) $ pip list
Package    Version
---------- -------
pip        23.0.1
setuptools 66.1.1

The list should be much shorter than the list of packages installed in your system Python. You can now safely install packages with pip. Any packages you install with pip while in a virtual environment only install to that virtual environment. In a virtual environment, the python or python3 commands automatically use the virtual environment’s version of Python and installed packages instead of the system Python.
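You can also check from inside Python itself whether the interpreter is running in a virtual environment; this small sketch relies only on the standard `sys` module:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the base installation.
    # Outside a venv, the two are equal.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```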

Top Tip
Pass the --system-site-packages flag before the folder name to make all of the packages currently installed in your system Python visible inside the virtual environment.
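For example, creating such an environment and checking the marker that venv writes into its pyvenv.cfg file (the /tmp/demo-env path is just for illustration):

```shell
python3 -m venv --system-site-packages /tmp/demo-env
grep "include-system-site-packages" /tmp/demo-env/pyvenv.cfg   # include-system-site-packages = true
rm -rf /tmp/demo-env
```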

Exit a virtual environment

To leave a virtual environment, run the following command:

(<env-name>) $ deactivate

Use the Thonny editor

We recommend Thonny for editing Python code on the Raspberry Pi.

By default, Thonny uses the system Python. However, you can switch to using a Python virtual environment by clicking on the interpreter menu in the bottom right of the Thonny window. Select a configured environment or configure a new virtual environment with Configure interpreter.

The MagPi #148 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Using Python with virtual environments | The MagPi #148 appeared first on Raspberry Pi.

Meet the CapibaraZero, a multifunctional security and hacking tool based on the Nano ESP32

22 November 2024 at 00:13

In recent years, tools such as the FlipperZero have become quite popular amongst hobbyists and security professionals alike for their small size and wide array of hacking tools. Inspired by the functionality of the FlipperZero, Project Hub user ‘andreockx’ created a similar multi-radio tool named the CapibaraZero, which has the same core abilities and even a little more.

The project uses an Arduino Nano ESP32 as its processor and as a way to provide Wi-Fi, Bluetooth Low Energy, and human interface features. The chipset can scan for nearby Wi-Fi networks, present fake captive portals, prevent other devices from receiving IP addresses through DHCP starvation, and even carry out ARP poisoning attacks. Andre’s inclusion of a LoRa radio module further differentiates his creation by letting it transmit information in the sub-GHz spectrum over long distances. And lastly, the PN532 RFID module can read encrypted MIFARE NFC tags and crack them through brute force.

The Nano ESP32, the wireless radios, and a LiPo battery with its charging module are all attached to a custom PCB mainboard, while five additional buttons connect via a secondary daughterboard. The entire assembly sits inside a 3D-printed case.

For more details about the CapibaraZero, you can read Andre’s write-up here on the Project Hub.

The post Meet the CapibaraZero, a multifunctional security and hacking tool based on the Nano ESP32 appeared first on Arduino Blog.

SoundSlide capacitive touch USB-C adapter aims to ease volume control on laptops

22 November 2024 at 00:01
USB-C capacitive touch volume control

SoundSlide is an open-source hardware USB-C adapter that adds a capacitive touch interface to your laptop or PC, letting you control the volume without reaching for volume keys that may require Alt or Fn presses. SoundSlide is meant to be more intuitive than pressing keys and works without drivers on macOS, Windows, and Linux. At just 20.9 x 6.9 x 3.5 mm in size excluding the USB Type-C port, you can leave it connected to your laptop when you move around or put the laptop in your backpack. The SoundSlide relies on the touch interface of the Microchip SAM D11 Arm Cortex-M0+ microcontroller, and the company behind the project, Drake Labs, has made the firmware, schematics (PDF/WebP), and a command-line interface written in Go available on GitHub. You can check out how it works on a laptop in the [...]

The post SoundSlide capacitive touch USB-C adapter aims to ease volume control on laptops appeared first on CNX Software - Embedded Systems News.


Unlocking New Possibilities in Cloud Deployment with Arm at KubeCon NA 2024

21 November 2024 at 23:00

As developers and platform engineers seek greater performance, efficiency, and scalability for their workloads, Arm-based cloud services provide a powerful and trusted solution. At KubeCon NA 2024, we had the pleasure of meeting many of these developers face-to-face to showcase Arm solutions as they migrate to Arm.   

Today, all major hyperscalers, including Amazon Web Services (AWS), Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure (OCI), offer Arm-based servers optimized for modern cloud-native applications. This shift offers a significant opportunity for organizations to improve price-performance ratios, deliver a lower total cost of ownership (TCO), and meet sustainability goals, while gaining access to a robust ecosystem of tools and support.  

At KubeCon NA, it was amazing to hear those in the Arm software ecosystem share their migration stories and the new possibilities they’ve unlocked.

Arm from cloud to edge at KubeCon

Building on Arm unlocks a wide range of options from cloud to edge. It enables developers to run their applications seamlessly in the cloud, while tapping into the entire Arm software and embedded ecosystem and respective workflows. 

Arm-based servers are now integrated across leading cloud providers, making them a preferred choice for many organizations looking to enhance their infrastructure. At KubeCon NA 2024, attendees learned about the latest custom Arm compute offerings available from major cloud service providers including: 

  • AWS Graviton series for enhanced performance and energy efficiency; 
  • Microsoft Azure Arm-based VMs for scalable, cost-effective solutions; 
  • Google Cloud’s Tau T2A instances for price-performance optimization; and 
  • OCI Ampere A1 Compute for flexible and powerful cloud-native services.
Developer kit based on the Ampere Altra SoC

Ampere showcased their Arm-based hardware in multiple form factors across different partner booths at the show to demonstrate how the Arm compute platform is enabling server workloads both in the cloud and on premises.

System76’s Thelio Astra, an Arm64 developer desktop featuring Ampere Altra processors, was also prominently displayed in booths across the KubeCon NA show floor. The workstation streamlines developer workflows for Linux development and deployment across various markets, including automotive and IoT.

System76’s Thelio Astra

During the show, the Thelio Astra showcased its IoT capabilities by aggregating and processing audio sensor data from Arduino devices to assess booth traffic. This demonstrated cloud-connected IoT workloads in action. 

Arm Cloud-to-edge workloads with Arm-based compute from Arduino endpoints to Ampere servers running lightweight Fermyon Kubernetes and WASM

Migrating to Arm has never been easier

Migrating workloads to Arm-based servers is more straightforward than ever. Today, 95% of graduated CNCF (Cloud Native Computing Foundation) projects are optimized for Arm, ensuring seamless, efficient, and high-performance execution.  

Companies of all sizes visited the Arm booth at KubeCon NA to tell us about their migration journey and learn how to take advantage of the latest developer technologies. They included leading financial institutions, global telecommunications providers and large retail brands.  

For developers ready to add multi-architecture support to their deployments, we demonstrated a new tool – kubearchinspect – that can be deployed on a Kubernetes cluster and scan container images to check for Arm compatibility. Check out our GitHub repo to get started and learn how to validate Arm support for your container images.

Hundreds of independent software vendors (ISVs) are enabling their applications and services on Arm, with developers easily monitoring application performance and managing their workloads via the Arm Software Dashboard.  

For developers, the integration of GitHub Actions, GitHub Runners, and the soon-to-be-available Arm extension for GitHub Copilot means a seamless cloud-native CI/CD workflow is now fully supported on Arm. Graduated projects can scale using cost-effective Arm runners, while incubating projects benefit from lower pricing and improved support from open-source Arm runners.

Extensive Arm ecosystem and Kubernetes support 

As Kubernetes continues to grow, with 5.6 million developers worldwide, expanding the contributor base is essential to sustaining the cloud-native community and supporting its adoption in technology stacks. Whether developers are using AWS EKS, Azure AKS, or OCI’s Kubernetes service, Arm is integrated to provide native support. This enables the smooth deployment and management of containerized applications. 

Scaling AI workloads and optimizing complex inference pipelines can be challenging across different architectures. Developers can deploy their AI models across distributed infrastructure, seamlessly integrating with the latest AI frameworks to enhance processing efficiency.  

Through a demonstration at the Arm booth, Pranay Bakre, a Principal Solutions Engineer at Arm, showcased AI over Kubernetes, bringing together the Kubernetes, Prometheus, and Grafana open-source projects in a power-efficient, scalable, real-time sentiment analysis application. More information about how to enable real-time sentiment analysis on Arm Neoverse-based Kubernetes clusters can be found in this Arm Community blog.

Pranay Bakre explains the real-time sentiment analysis demo at KubeCon NA 2024

Pranay also showcased the ability to run AKS on the very latest Arm Neoverse-powered Microsoft Azure Cobalt 100 processors. To jumpstart running Kubernetes via AKS on Microsoft Azure Cobalt 100, check out this learning path and the corresponding GitHub repo.

Additionally, at KubeCon NA 2024, we launched a pilot expansion of our “Works on Arm” program into the CNCF community. This offers comprehensive resources to help scale and optimize cloud-native projects on the Arm architecture. Developers can click here to take a short survey and request to be included in this new initiative.

Switch to Arm for smarter deployment and scalable performance 

As demonstrated at KubeCon 2024, Arm is transforming cloud-native deployment and accelerating the developer migration to Arm. 

In fact, now is the perfect time to harness Arm-based cloud services for better performance, lower costs, and scalable flexibility. Developers can start building or migrating today to deploy smarter, optimized cloud-native applications on Arm, for Arm. 

Developers are welcome to join us at KubeCon Europe in April 2025 to learn more about our latest advancements in platform engineering and cloud-native technologies.

The post Unlocking New Possibilities in Cloud Deployment with Arm at KubeCon NA 2024 appeared first on Arm Newsroom.
