
Doing more with less: LLM quantization (part 2)

What if you could get similar results from your large language model (LLM) with 75% less GPU memory? In my previous article, we discussed the benefits of smaller LLMs and some of the techniques for shrinking them. In this article, we’ll put this to the test by comparing the results of the smaller and larger versions of the same LLM. As you’ll recall, quantization is one of the techniques for reducing the size of an LLM. Quantization achieves this by representing the LLM parameters (e.g. weights) in lower-precision formats: from 32-bit floating point (FP32) to 8-bit integer (INT8) or INT4. [...]
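
As a rough illustration of what that precision drop looks like in practice, here is a minimal, hypothetical sketch of symmetric INT8 quantization in Python (an illustration of the general idea, not the article’s exact method):

import numpy as np

# Symmetric per-tensor INT8 quantization: map the largest weight magnitude to 127
def quantize_int8(weights):
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Approximate reconstruction of the original FP32 weights
def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # stand-in for one weight matrix
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())

Storing the INT8 tensor plus a single FP32 scale per tensor is what shrinks each weight from 32 bits to roughly 8, i.e. about a 75% memory saving.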

The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects

We are enormously proud to reveal The Official Raspberry Pi Camera Module Guide (2nd edition), which is out now. David Plowman, a Raspberry Pi engineer specialising in camera software, algorithms, and image-processing hardware, authored this official guide.

This detailed book walks you through all the different types of Camera Module hardware, including Raspberry Pi Camera Module 3, the High Quality Camera, the Global Shutter Camera, and older models, and shows you how to attach them to your Raspberry Pi and integrate vision technology into your projects. This edition also covers new code libraries, including the latest Picamera2 Python library and the rpicam command-line applications, as well as integration with the new Raspberry Pi AI Kit.

Camera Guide - Getting Started page preview

Save time with our starter guide

Our starter guide has clear diagrams explaining how to connect various Camera Modules to the new Raspberry Pi boards. It also explains how to fit custom lenses to HQ and GS Camera Modules using C-CS adaptors. Everything is outlined in step-by-step tutorials with diagrams and photographs, making it quick and easy to get your camera up and running.

Camera Guide - connecting Raspberry Pi pages

Test your camera properly

You’ll discover how to connect your camera to a Raspberry Pi and test it using the new rpicam command-line applications, which replace the older libcamera-apps. The guide also covers the new Picamera2 Python library for integrating Camera Module technology with your software.

Camera Guide - Raw images and Camera Tuning pages

Get more from your images

Discover detailed information about how the Camera Module works and how to get the most from your images. You’ll learn how to use RAW formats and tuning files, HDR modes and preview windows, custom resolutions, encoders and file formats, target exposure and autofocus, and shutter speed and gain, enabling you to get the very best out of your imaging hardware.

Camera Guide - Get started with Raspberry Pi AI kit pages

Build smarter projects with AI Kit integration

A new chapter covers the integration of the AI Kit with Raspberry Pi Camera Modules to create smart imaging applications. This adds neural processing to your projects, enabling fast inference of objects captured by the camera.

Camera Guide - Time-lapse capture pages

Boost your skills with pre-built projects

The Official Raspberry Pi Camera Module Guide is packed with projects. Take selfies and stop-motion videos, experiment with high-speed and time-lapse photography, set up a security camera and smart door, build a bird box and wildlife camera trap, take your camera underwater, and much more! All of the code is tested and updated for the latest Raspberry Pi OS, and is available on GitHub for inspection.

Pick up your copy of The Official Raspberry Pi Camera Module Guide (2nd edition) today.

The post The Official Raspberry Pi Camera Module Guide out now: build amazing vision-based projects appeared first on Raspberry Pi.

Arm Tech Symposia: AI Technology Transformation Requires Unprecedented Ecosystem Collaborations

The Arm Tech Symposia 2024 events in China, Japan, South Korea and Taiwan were some of the biggest and best-attended events Arm has ever held in Asia. The scale of the events matched the magnitude of the moment facing the technology industry.

As Chris Bergey, SVP and GM of Arm’s Client Line of Business, said in the Tech Symposia keynote presentation in Taiwan: “This is the most important moment in the history of technology.”  

There are significant opportunities for AI to transform billions of lives around the world, but only if the ecosystem works together like never before.

Chris Bergey, SVP and GM of the Arm Client Line of Business, welcomes attendees to Arm Tech Symposia 2024

A re-thinking of silicon

At the heart of these ecosystem collaborations is a broad re-think of how the industry approaches the development and deployment of technologies. This is particularly applicable to the semiconductor industry, with silicon no longer a series of unrelated components but instead becoming “the new motherboard” to meet the demands of AI.

This means multiple components co-existing within the same package, providing better latency, increased bandwidth and more power efficiency.

Silicon technologies are already transforming the everyday lives of people worldwide, enabling innovative AI features on smartphones such as real-time language translation and text summarization.

As James McNiven, VP of Product Management for Arm’s Client Line of Business, stated in the South Korea Tech Symposia keynote: “AI is about making our future better. The potential impact of AI is transformative.”

The importance of the Arm Compute Platform

The Arm Compute Platform is playing a significant role in the growth of AI. It combines hardware and software to deliver best-in-class technology solutions for a wide range of markets, whether that’s AI smartphones, software-defined vehicles or data centers.

This is supported by the world’s largest software ecosystem, with more than 20 million software developers writing software for Arm, on Arm. In fact, all the Tech Symposia keynotes made the following statement: “We know that hardware is nothing without software.”

Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, outlines the software benefits of the Arm Compute Platform

How software “drives the technology flywheel”

Software has always been an integral part of the Arm Compute Platform, with Arm delivering the ideal platform for developers to “make their dreams (applications) a reality” in three key ways.

Firstly, Arm’s consistent compute platform touches 100 percent of the world’s connected population. This means developers can “write once and deploy everywhere.”

The foundation of the platform is the Arm architecture and its continuous evolution through the regular introduction of new features and instruction-sets that accelerate key workloads to benefit developers and the end-user.

SVE2 is one feature that is present across AI-enabled flagship smartphones built on the new MediaTek Dimensity 9400 chipset. It incorporates vector instructions to improve video and image processing capabilities, leading to better quality photos and longer-lasting video.

The Arm investment into AI architectural features at Arm Tech Symposia Shanghai

Secondly, the platform provides acceleration capabilities that deliver optimized performance for developers’ applications. This is not just about high-end accelerator chips, but about having access to AI-enabled software to unlock performance.

One example of this is Arm Kleidi, which seamlessly integrates with leading frameworks to ensure AI workloads run best on the Arm CPU. Developers can then unlock this accelerated performance with no additional work required.

At the Arm Tech Symposia Japan event, Dipti Vachani, SVP and GM of Arm’s Automotive Line of Business, said: “We are committed to abstracting away the hardware from the developer, so they can focus on creating world changing applications without having to worry about any technical complexities around performance or integration.”

This means that when new versions of Meta’s Llama, Google AI Edge’s MediaPipe and Tencent’s Hunyuan come online, developers can be confident that no performance is being left on the table with the Arm CPU.

Kleidi integrations are set to accelerate billions of AI workloads on the Arm Compute Platform, with the recent PyTorch integration leading to 2.5x faster time-to-first token on Arm-based AWS Graviton processors when running the Llama 3 large language model (LLM).

James McNiven, VP of Product Management for Arm’s Client Line of Business, discusses Arm Kleidi

Finally, developers need a platform that is easy to access and use. Arm has made this a reality through significant software investments that ensure developing on the Arm Compute Platform is a simplified, seamless experience that “just works.”

As each Arm Tech Symposia keynote speaker summarized: “The power of Arm and our ecosystem is that we deliver what developers need to simplify the process, accelerate time-to-market, save costs and optimize performance.”

The role of the Arm ecosystem

The importance of the Arm ecosystem in making new technologies a reality was highlighted throughout the keynote presentations. This is especially true for new silicon designs that require a combination of core expertise across many different areas.

As Dermot O’Driscoll, VP, Product Management for Arm’s Infrastructure Line of Business, said at the Arm Tech Symposia event in Shanghai, China: “No one company will be able to cover every single level of design and integration alone.”

Dermot O’Driscoll, VP, Product Management for Arm’s Infrastructure Line of Business, speaks at the Arm Tech Symposia event in Shanghai, China

Empowering these powerful ecosystem collaborations is a core aim of Arm Total Design, which enables the ecosystem to accelerate the development and deployment of silicon solutions that are more effective, efficient and performant. The program is growing worldwide, with the number of members doubling since the program was launched in late 2023. Each Arm Total Design partner offers something unique that accelerates future silicon designs, particularly those that are built on Arm Neoverse Compute Subsystems (CSS).

One company that exemplifies the spirit and value of Arm Total Design is South Korea-based Rebellions. Recently, it announced the development of a new large-scale AI platform, the REBEL AI platform, to drive power efficiency for AI workloads. Built on Arm Neoverse V3 CSS, the platform uses a 2nm process node and packaging from Samsung Foundry and leverages design services from ADtechnology. This demonstrates true ecosystem collaboration, with different companies offering different types of highly valuable expertise.

Dermot O’Driscoll said: “The AI era requires custom silicon, and it’s only made possible because everyone in this ecosystem is working together, lifting each other up and making it possible to quickly and efficiently meet the rising demands of AI.”

Chris Bergey at the Arm Tech Symposia event in Taiwan talks about the new chiplet ecosystem being enabled on Arm

Arm Total Design is also helping to enable a new thriving chiplet ecosystem that already involves over 50 leading technology partners who are working with Arm on the Chiplet System Architecture (CSA). This is creating the framework for standards that will enable a thriving chiplet market, which is key to meeting ongoing silicon design and compute challenges in the age of AI.

The journey to 100 billion Arm-based devices running AI

All the keynote speakers closed their Arm Tech Symposia keynotes by reinforcing the commitment that Arm CEO Rene Haas made at COMPUTEX in June 2024: 100 billion Arm-based devices running AI by the end of 2025.

James McNiven closes the Arm Tech Symposia keynote in Shenzhen

However, this goal is only possible if ecosystem partners from every corner of the technology industry work together like never before. Fortunately, as explained in all the keynotes, there are already many examples of this work in action.

The Arm Compute Platform sits at the center of these ecosystem collaborations, providing the technology foundation for AI that will help to transform billions of lives around the world.

The post Arm Tech Symposia: AI Technology Transformation Requires Unprecedented Ecosystem Collaborations appeared first on Arm Newsroom.

Mercury X1 wheeled humanoid robot combines NVIDIA Jetson Xavier NX AI controller and ESP32 motor control boards

Mercury X1 wheeled humanoid robot

Elephant Robotics Mercury X1 is a 1.2-meter-high wheeled humanoid robot with two robotic arms that uses an NVIDIA Jetson Xavier NX as its main controller and ESP32 microcontrollers for motor control, making it suitable for research, education, service, entertainment, and remote operation. The robot offers 19 degrees of freedom, can lift payloads of up to 1kg, work up to 8 hours on a charge, and travel at up to 1.2m/s, or about 4.3km/h. It’s based on the company’s Mercury B1 dual-arm robot and a high-performance mobile base.

Mercury X1 specifications:

  • Main controller – NVIDIA Jetson Xavier NX
    • CPU – 6-core NVIDIA Carmel ARM v8.2 64-bit CPU with 6MB L2 + 4MB L3 caches
    • GPU – 384-core NVIDIA Volta GPU with 48 Tensor Cores
    • AI accelerators – 2x NVDLA deep learning accelerators delivering up to 21 TOPS at 15 Watts
    • System Memory – 8 GB 128-bit LPDDR4x @ 51.2GB/s
    • Storage – 16 [...]

The post Mercury X1 wheeled humanoid robot combines NVIDIA Jetson Xavier NX AI controller and ESP32 motor control boards appeared first on CNX Software - Embedded Systems News.

(D241122) godns webhook coturn and dynamic IP address on FreeBSD

This is for FreeBSD. It may not work on Linux, because sh and sed on FreeBSD are not the same as on Linux (bash and GNU sed).

We're running a STUN/TURN server. It works well, but we don't have a static IP address. Every time we get a new IP address, we need to change the coturn config and restart the service.

One way to make this work is a cron job that runs a shell script every 5 minutes to check the public IP and restart the service if the public IP address has changed.

#!/bin/bash

# Linux only, doesn't work on FreeBSD (GNU sed -i and systemctl)
# external-ip currently set in the coturn config
current_external_ip_config=$(grep "^external-ip" /etc/turnserver.conf | cut -d'=' -f2)
# public IP currently published in DNS
current_external_ip=$(dig +short <MY_DOMAIN>)

if [[ -n "$current_external_ip" ]] && [[ "$current_external_ip_config" != "$current_external_ip" ]]; then
        sed -i "/^external-ip=/ c external-ip=$current_external_ip" /etc/turnserver.conf
        systemctl restart coturn
fi
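
A crontab entry to run the check every 5 minutes might look like this (the script path here is hypothetical):

*/5 * * * * /usr/local/bin/check_turn_ip.sh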

ref: set up with dynamic ip address

Since we're running a godns daemon to update our IP on the Cloudflare DNS server, we also want godns to send a webhook to the coturn server whenever it updates the IP on Cloudflare. That should be more efficient.

So this is what we have:

godns

$ cat /etc/godns/config.json 
{
  "provider": "Cloudflare",
  "login_token": "YOUR_TOKEN",
  "domains": [
    {
      "domain_name": "yourdomain.com",
      "sub_domains": [
        "@"
      ]
    }
  ],
  "ip_urls": [
    "https://api.ipify.org"
  ],
  "ip_type": "IPv4",
  "interval": 300,
  "resolver": "8.8.8.8",
  "webhook": {
    "enabled": true,
    "url": "http://your.coturn.webhook.endpoint:9000/hooks/godns",
    "request_body": "{ \"domain\": \"{{.Domain}}\", \"ip\": \"{{.CurrentIP}}\", \"ip_type\": \"{{.IPType}}\" }"
  }
}

webhook

$ cat /usr/local/etc/webhook.yaml
---
# See https://github.com/adnanh/webhook/wiki for further information on this
# file and its options.  Instead of YAML, you can also define your
# configuration as JSON.  We've picked YAML for these examples because it
# supports comments, whereas JSON does not.
#
# In the default configuration, webhook runs as user nobody.  Depending on
# the actions you want your webhooks to take, you might want to run it as
# user root.  Set the rc.conf(5) variable webhook_user to the desired user,
# and restart webhook.

# An example for a simple webhook you can call from a browser or with
# wget(1) or curl(1):
#   curl -v 'localhost:9000/hooks/samplewebhook?secret=geheim'
- id: godns
  execute-command: "/usr/local/etc/godns.sh"
  command-working-directory: "/usr/local/etc"
  pass-arguments-to-command:
  - source: payload
    name: domain
  - source: payload
    name: ip
  - source: payload
    name: ip_type
  trigger-rule:
    and:
      - match:
          type: value
          value: "your.domain.com"
          parameter:
            source: payload
            name: domain
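
Before wiring this up to godns, you can check that the hook fires by posting a sample payload by hand (assuming webhook is listening on port 9000, as in the godns config above):

$ curl -H 'Content-Type: application/json' -d '{"domain": "your.domain.com", "ip": "1.2.3.4", "ip_type": "IPv4"}' http://localhost:9000/hooks/godns

If the trigger rule matches the domain, webhook runs the script below with the domain, IP, and IP type as arguments.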

shell script

$ cat /usr/local/etc/godns.sh
#!/bin/sh

# write ip log to a file
now="$(date +'%y%m%d%H%M%S%N')"
echo $now $1 $2 $3 >> godns.txt

# restart coturn when ip changed
turnserver_config="/usr/local/etc/turnserver.conf"
current_external_ip_config=$(grep "^external-ip" "$turnserver_config" | cut -d'=' -f2)
current_external_ip_webhook=$2

if [ -n "$current_external_ip_webhook" ] && [ "$current_external_ip_config" != "$current_external_ip_webhook" ]; then
        # FreeBSD sed takes a backup suffix after -i (here .old), unlike GNU sed
        sed -i .old -e "s/external-ip=$current_external_ip_config/external-ip=$current_external_ip_webhook/g" "$turnserver_config"
        service turnserver restart
fi

This may not be reliable enough: if something happens and godns cannot send the webhook to the coturn server, the config will be left stale. But for now we'll stick with it.

ASUSTOR Flashstor Gen2 NAS features AMD Ryzen Embedded V3C14, 10GbE networking, up to 12x NVMe SSD sockets

AMD Ryzen V3C14 NAS

ASUSTOR Flashstor 6 Gen2 and Flashstor 12 Pro Gen2 are NAS systems based on the AMD Ryzen Embedded V3C14 quad-core processor with up to two 10GbE RJ45 ports, taking up to 6 or 12 M.2 NVMe SSDs respectively. The Flashstor Gen2 models are updates to the ASUSTOR Flashstor NAS launched last year, which offered similar specifications including 10GbE and up to 12 M.2 SSDs, but was based on a relatively low-end Intel Celeron N5105 quad-core Jasper Lake processor. The new Gen2 NAS family features a more powerful AMD Ryzen Embedded V3C14 SoC, support for up to 64GB of ECC RAM, and USB4 ports. The downside is that it lacks video output, so it can’t be used for 4K/8K video consumption like its predecessor.

Flashstor Gen2 NAS specifications:

  • SoC – AMD Ryzen Embedded V3C14 quad-core/8-thread processor @ 2.3/3.8GHz; TDP: 15W
  • System Memory
    • Flashstor 6 Gen2 (FS6806X) – 8 GB DDR5-4800
    • Flashstor 12 Pro [...]

The post ASUSTOR Flashstor Gen2 NAS features AMD Ryzen Embedded V3C14, 10GbE networking, up to 12x NVMe SSD sockets appeared first on CNX Software - Embedded Systems News.

Important Announcement: Free Shipping Policy Adjustment and Service Enhancements

By: Rachel

Thank you for your continued support! To further enhance your shopping experience, we are rolling out a series of new features designed to provide more efficient and localized services. Additionally, we will be updating our current free shipping policy. Please take note of the following important updates:


Recent Logistics Enhancements

Between June and October 2024, we implemented several key logistics upgrades to enhance service quality and lay the groundwork for upcoming policy adjustments:

1. Expanded Shipping Options

  • US Warehouse: Added UPS 2-Day, FedEx, and UPS Ground for more fast-shipping choices.
  • CN Warehouse: Introduced Airtransport Direct Line small package service, reducing delivery times from 20-35 days to just 7-10 days.

2. Optimized Small Parcel Shipping and Cost Control

  • Adjusted packaging specifications for CN warehouse shipments, significantly lowering costs while improving shipping efficiency.

3. Accelerated Overall Delivery Times

  • Streamlined export customs clearance from Shenzhen and synchronized handoffs with European and American couriers, reducing delivery times to just 3.5 days.

Enhanced Local Services for a Better Shopping Experience

To meet the diverse needs of our global users, we’ve implemented several improvements in local purchasing, logistics, and tax services:

1. Local Warehouse Pre-Order

  • Launch Date: Already Live
  • Highlights: Pre-order popular products from our US Warehouse and DE Warehouse. If immediate stock is needed, you can switch to the CN Warehouse for purchase.

2. Enhanced VAT Services for EU Customers

  • Launch Date: Already Live
  • Highlights: New VAT ID verification feature allows EU customers to shop tax-free with a valid VAT ID.

3. US Warehouse Sales Tax Implementation

  • Launch Date: January 1, 2025
  • Highlights: Automatic calculation of sales tax to comply with US local tax regulations.

Free Shipping Policy Adjustment

Starting December 31, 2024, our current free shipping policy (CN warehouse orders over $150, US & DE warehouse orders over $100) will no longer apply.

We understand that this change may cause some inconvenience in the short term. However, our aim is to offer more flexible and efficient shipping options, ensuring your shopping experience is more personalized and seamless.


Listening to Your Suggestions for Continuous Improvement

We understand that excellent logistics service stems from listening to every customer’s needs. During the optimization process, we received valuable suggestions, such as:

  • Adding a local warehouse in Australia to provide faster delivery for customers in that region.
  • Improving packaging designs to enhance protection during transit.
  • Supporting flexible delivery schedules, allowing customers to choose delivery times that work best for them.

We welcome your continued input! Starting today, submit your feedback via our Feedback Form, and receive coupon rewards for all adopted suggestions.


Important Reminder: Free Shipping Policy End Date

  • Current free shipping policy will officially end on December 31, 2024.
  • Plan your purchases in advance to enjoy the remaining free shipping benefits!

We are also working on future logistics enhancements and may introduce region-specific free shipping or special holiday promotions, so stay tuned!


Thank You for Your Support

Your trust and support inspire Seeed Studio to keep innovating. We remain focused on improving localized services, listening to your needs, and delivering a more convenient and efficient shopping experience.

If you have any questions, please don’t hesitate to contact our customer support team. Together, let’s move towards a smarter, more efficient future!

The post Important Announcement: Free Shipping Policy Adjustment and Service Enhancements appeared first on Latest Open Tech From Seeed.

Nice File Performance Optimizations Coming With Linux 6.13

In addition to the pull requests managed by Microsoft engineer Christian Brauner for VFS untorn writes for atomic writes with XFS and EXT4, Tmpfs case insensitive file/folder support, new Rust file abstractions, and the renewed multi-grain timestamps work, another interesting Linux 6.13 pull submitted by Brauner revolves around VFS file enhancements...

Friday Five — November 22, 2024

  • Red Hat Enterprise Linux AI Brings Greater Generative AI Choice to Microsoft Azure – RHEL AI expands the ability of organizations to streamline AI model development and deployment on Microsoft Azure to fast-track AI innovation in the cloud.
  • Technically Speaking | How open source can help with AI transparency – Explore the challenges of transparency in AI and how open source development processes can help create a more open and accessible future for AI.
  • ZDNet – Red Hat's new OpenShift delivers AI, edge and security enhancements – Red Hat introduces new capabilities for Red Hat O [...]

NVIDIA JetPack 6.1 Boosts Performance and Security through Camera Stack Optimizations and Introduction of Firmware TPM

Connected icons show the workflow.

NVIDIA JetPack has continuously evolved to offer cutting-edge software tailored to the growing needs of edge AI and robotic developers. With each release, JetPack has enhanced its performance, introduced new features, and optimized existing tools to deliver increased value to its users. This means that your existing Jetson Orin-based products experience performance optimizations by upgrading to…


Using Python with virtual environments | The MagPi #148

Raspberry Pi OS comes with Python pre-installed, and you need to use its virtual environments to install packages. The latest issue of The MagPi, out today, features this handy tutorial, penned by our documentation lead Nate Contino, to get you started.

Raspberry Pi OS comes with Python 3 pre-installed. Interfering with the system Python installation can cause problems for your operating system. When you install third-party Python libraries, always use the correct package-management tools.

On Linux, you can install Python dependencies in two ways:

  • use apt to install pre-configured system packages
  • use pip to install libraries using Python’s dependency manager in a virtual environment
It is possible to create virtual environments inside Thonny as well as from the command line

Install Python packages using apt

Packages installed via apt are packaged specifically for Raspberry Pi OS. These packages usually come pre-compiled, so they install faster. Because apt manages dependencies for all packages, installing with this method includes all of the sub-dependencies needed to run the package. And apt ensures that you don’t break other packages if you uninstall.

For instance, to install the Python 3 library that supports the Raspberry Pi Build HAT, run the following command:

$ sudo apt install python3-build-hat

To find Python packages distributed with apt, use apt search. In most cases, Python packages use the prefix python- or python3-: for instance, you can find the numpy package under the name python3-numpy.
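
For example, to find the numpy package mentioned above:

$ apt search numpy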

Install Python libraries using pip

In older versions of Raspberry Pi OS, you could install libraries directly into the system version of Python using pip. Since Raspberry Pi OS Bookworm, users cannot install libraries directly into the system version of Python.

Attempting to install packages with pip causes an error in Raspberry Pi OS Bookworm

Instead, install libraries into a virtual environment (venv). To install a library at the system level for all users, install it with apt.

Attempting to install a Python package system-wide outputs an error similar to the following:

$ pip install buildhat
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    For more information visit http://rptl.io/venv

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Python users have long dealt with conflicts between OS package managers like apt and Python-specific package management tools like pip. These conflicts include both Python-level API incompatibilities and conflicts over file ownership.

Starting in Raspberry Pi OS Bookworm, packages installed via pip must be installed into a Python virtual environment (venv). A virtual environment is a container where you can safely install third-party modules so they won’t interfere with your system Python.

Use pip with virtual environments

To use a virtual environment, create a container to store the environment. There are several ways you can do this depending on how you want to work with Python:

per-project environments

Create a virtual environment in a project folder to install packages local to that project

Many users create separate virtual environments for each Python project. Locate the virtual environment in the root folder of each project, typically with a shared name like env. Run the following command from the root folder of each project to create a virtual environment configuration folder:

$ python -m venv env

Before you work on a project, run the following command from the root of the project to start using the virtual environment:

$ source env/bin/activate

You should then see a prompt similar to the following:

(env) $

When you finish working on a project, run the following command from any directory to leave the virtual environment:

$ deactivate

per-user environments

Instead of creating a virtual environment for each of your Python projects, you can create a single virtual environment for your user account. Activate that virtual environment before running any of your Python code. This approach can be more convenient for workflows that share many libraries across projects.

When creating a virtual environment for multiple projects across an entire user account, consider locating the virtual environment configuration files in your home directory. Store your configuration in a folder whose name begins with a period to hide the folder by default, preventing it from cluttering your home folder.

Add a virtual environment to your home directory to use it in multiple projects and share the packages

Use the following command to create a virtual environment in a hidden folder in the current user’s home directory:

$ python -m venv ~/.env

Run the following command from any directory to start using the virtual environment:

$ source ~/.env/bin/activate

You should then see a prompt similar to the following:

(.env) $

To leave the virtual environment, run the following command from any directory:

$ deactivate

Create a virtual environment

Run the following command to create a virtual environment configuration folder, replacing <env-name> with the name you would like to use for the virtual environment (e.g. env):

$ python -m venv <env-name>

Enter a virtual environment

Then, execute the bin/activate script in the virtual environment configuration folder to enter the virtual environment:

$ source <env-name>/bin/activate

You should then see a prompt similar to the following:

$ (<env-name>) $

The (<env-name>) command prompt prefix indicates that the current terminal session is in a virtual environment named <env-name>.

To check that you’re in a virtual environment, use pip list to view the list of installed packages:

(<env-name>) $ pip list
Package    Version
---------- -------
pip        23.0.1
setuptools 66.1.1

The list should be much shorter than the list of packages installed in your system Python. You can now safely install packages with pip. Any packages you install with pip while in a virtual environment only install to that virtual environment. In a virtual environment, the python or python3 commands automatically use the virtual environment’s version of Python and installed packages instead of the system Python.
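
For example, you could install the Build HAT library mentioned earlier into the active virtual environment, then check that it imports:

(<env-name>) $ pip install buildhat
(<env-name>) $ python -c "import buildhat"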

Top Tip
Pass the --system-site-packages flag before the folder name to preload all of the currently installed packages in your system Python installation into the virtual environment.

Exit a virtual environment

To leave a virtual environment, run the following command:

(<env-name>) $ deactivate

Use the Thonny editor

We recommend Thonny for editing Python code on the Raspberry Pi.

By default, Thonny uses the system Python. However, you can switch to using a Python virtual environment by clicking on the interpreter menu in the bottom right of the Thonny window. Select a configured environment or configure a new virtual environment with Configure interpreter.

The MagPi #148 out NOW!

You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.

You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!

The post Using Python with virtual environments | The MagPi #148 appeared first on Raspberry Pi.

Meet the CapibaraZero, a multifunctional security and hacking tool based on the Nano ESP32

In recent years, tools such as the FlipperZero have become quite popular amongst hobbyists and security professionals alike for their small size and wide array of hacking tools. Inspired by the functionality of the FlipperZero, Project Hub user ‘andreockx’ created a similar multi-radio tool named the CapibaraZero, which has the same core abilities and even a little more.

The project uses an Arduino Nano ESP32 as its processor and as a way to provide Wi-Fi, Bluetooth Low-Energy, and human interface features. The chipset can scan for nearby Wi-Fi networks, present fake captive portals, prevent other devices from receiving IP addresses through DHCP starvation, and even carry out ARP poisoning attacks. Andre’s inclusion of a LoRa radio module further differentiates his creation by letting it transmit information in the sub-GHz spectrum over long distances. And lastly, the PN532 RFID module can read encrypted MiFare NFC tags and crack them through brute force.

The Nano ESP32, wireless radios, and a LiPo battery plus charging module were all attached to a custom PCB mainboard, while five additional buttons were connected via a secondary daughterboard, before the entire assembly was placed into a 3D-printed case.

For more details about the CapibaraZero, you can read Andre’s write-up on the Project Hub.

The post Meet the CapibaraZero, a multifunctional security and hacking tool based on the Nano ESP32 appeared first on Arduino Blog.
