
2024 reCamera reCap – AI Camera Growing on the Way

Dear All,

It’s been four months since our tiny AI superstar, reCamera, first stepped into the spotlight in our developer community. From its humble beginnings to the milestones we’ve achieved together, reCamera has become a trusted companion, continually inspiring creativity and innovation in edge AI and robotics. Now, as we look back at the journey so far, it’s the perfect moment to revisit how it all started, celebrate the progress we’ve made, and explore the exciting possibilities that await in the next chapter!

Have you heard about reCamera? What makes it unique?

reCamera integrates its own processor and camera sensor. It’s the first open-source tiny AI camera that is both programmable and customizable, powered by a RISC-V SoC delivering 1 TOPS of AI performance with 5MP@30fps video encoding. The modular hardware design gives you the freedom to swap camera sensors and baseboards with diverse interfaces as required, offering a versatile platform for developers building vision AI systems.

Hardware checklist

  • Core board: CPU, RAM, 8GB/64GB eMMC, and a wireless module with a customizable on-board antenna
  • Sensor board: currently compatible with OV5647/IMX335/SC130GS camera sensors, with more continuously being added, along with other components: mic, speaker, actuator, LED.
  • Base board: determines the communication ports on reCamera’s bottom; USB 2.0, Ethernet, and a serial port are provided by default, and it is open to customization as you want – PoE/CAN/RS485/Display/Gyro/Type-C/vertical Type-C, etc.
  • Enclosure: the core board is covered by a metal mainframe, with a rubber ring seated in the grooves for waterproofing, and the housing keeps the temperature below 50℃.

OS & Dashboard

Well, the very first and most important impression you should get from reCamera is that it’s already a computer. We set up a lightweight, customized Buildroot Linux system with multi-threading, running all tasks without conflicts. It supports Python and Node.js directly from the console, and you can also easily deploy compiled executables built from C or Rust – very programming-friendly.

Just in case you’d prefer to forget about programming scripts and complex configurations 🙂 we’ve pre-built Node-RED integration for you to build your whole workflow with NO CODE. Getting started is simple: choose the customized nodes for the reCamera pipeline, link them together to call the camera API, and use the NPU to load AI models directly onto the device. Finally, a web UI or mobile dashboard shows up seamlessly and helps verify results effortlessly.

So far, what we’ve done

Ultralytics & reCamera

Besides the standard reCamera, a standalone device with the hardware combination and Node-RED integration, we also provide another option that makes building your vision AI project even more seamless: reCamera pre-installed with the Ultralytics YOLO11 model (YOLO11n)! It comes with native support and licensing for Ultralytics YOLO, offering real-world solutions such as object counting, heatmaps, blurring, and security systems, enhancing efficiency and accuracy across diverse industries.

Application demos

1. reCamera voice control gimbal: we deployed Llama3 and LLaVA on a reComputer Jetson Orin. The whole setup is fully local: the Jetson reads live video streamed from reCamera to perceive the current situation, then acts as the “brain” that reasons and delivers instructions. Now you can ask reCamera to turn left, check how many people are there, and describe what it sees!

2. reCamera with Wi-Fi HaLow: We’ve tested reCamera with Wi-Fi HaLow long-range connectivity, achieving a stable link at line-of-sight distances of up to 1 km!

3. Live-check reCamera detection results through any browser: With pre-built Node-RED for on-device workflow configuration, you can quickly build and modify applications on the device and check out video streams from various platforms.

4. reCamera with LeRobot arm: We used reCamera to scan ArUco markers to identify specific objects and utilized the ROS architecture to control the robotic arm.

Milestones – ready to see you in the real world!

July: Prototype of the reCamera gimbal

August: First public appearance in the Seeed “Making Next Gadget” livestream

September: Introducing reCamera to the world

October: Unboxing

November: First batch shipping on the way

December: “Gimbal” bells coming – reCamera gimbal alpha test

Where our reCamera has traveled

  • California, US – K-Hacks0.2 humanoid robotics hackathon 12/14-15
  • Shanghai, China – ROSCON China 2024 12/08
  • Barcelona, Spain – Smart City Expo World Congress 2024 11/05-07
  • California, US – Maker Faire Bay Area 10/18-20
  • Madrid, Spain – YOLO Vision 2024 09/27

Wiki & GitHub resources

Seeed Wiki:

Seeed GitHub:

Appreciate community contribution on reCamera resources from our Discord group:

Listening to users: Alpha test reviews

Based on insights from the Alpha test and feedback during the official product launch, we’ve gained invaluable input from our developer community. Many of you have voiced specific requests, such as waterproofing the entire device and integrating near-infrared (NIR), thermal imaging, and night vision camera sensors with reCamera. These features are crucial, and we’re committed to diving deeper into their development to bring them to life. Your enthusiasm and support mean the world to us—thank you for being part of this journey.

Some adorable moments~

In addition, we’ve selected four Alpha testers to explore fresh ideas and contribute to hardware iterations by trying out the reCamera gimbal. Stay tuned for our updates, and continue to join us as we embark on this exciting path of growth and innovation!

Warm Regards,

AI Robotics Team @ Seeed Studio

The post 2024 reCamera reCap – AI Camera Growing on the Way appeared first on Latest Open Tech From Seeed.

Different Types of Career Goals for Computer Vision Engineers

Computer vision is one of the most exciting fields in AI, offering a range of career paths from technical mastery to leadership and innovation. This article outlines the key types of career goals for computer vision engineers—technical skills, research, project management, niche expertise, networking, and entrepreneurship—and provides actionable tips to help you achieve them. 

Diverse Career Goals for Aspiring Computer Vision Engineers

Setting goals is about identifying where you want to go and creating a roadmap. Each of these paths can take your career in exciting directions.

Here are the key types of career goals for computer vision engineers:

  1. Technical Proficiency Goals: Building expertise in the tools and technologies needed to solve real-world problems.
  2. Research and Development Goals: Innovating and contributing to cutting-edge advancements in the field.
  3. Project and Product Management Goals: Taking the lead in driving projects and delivering impactful solutions.
  4. Industry Specialization Goals: Developing niche expertise in areas like healthcare, autonomous systems, or robotics.
  5. Networking and Community Engagement Goals: Growing your professional network and staying connected with industry trends.
  6. Entrepreneurial and Business Goals: Combining technical skills with business insights to start your own ventures.

Technical Proficiency Goals: Mastering Essential Skills

Learn Programming Languages Essential for Computer Vision

  • Python: The most popular programming language for AI and computer vision, thanks to its simplicity and a rich ecosystem of libraries and frameworks like OpenCV, TensorFlow, and PyTorch.
  • C++: Widely used in performance-critical applications such as robotics and real-time systems. Many industry-grade computer vision libraries, including OpenCV, are written in C++.

Tip: Combine Python’s ease of use with C++ for optimized and efficient solutions. For instance, you can prototype in Python and deploy optimized models in C++ for edge devices.

Master Frameworks for Machine Learning and Deep Learning

  • PyTorch: Known for its flexibility and Pythonic style, PyTorch is ideal for experimenting with custom neural networks.

A notable example is OpenAI’s ChatGPT, which uses PyTorch as its preferred framework, along with additional optimizations.

  • TensorFlow: Preferred for production environments, TensorFlow excels in scalability and deployment, especially on cloud-based platforms like AWS or Google Cloud.
  • Hugging Face Transformers: For those exploring multimodal or natural language-driven computer vision tasks like CLIP, Hugging Face offers state-of-the-art model checkpoints and easy-to-use implementations.

Example: Learning PyTorch enables you to implement advanced architectures like YOLOv8 for object detection or Vision Transformers (ViT) for image classification. TensorFlow can then help you scale and deploy these models in production.

Tip: Practice by recreating famous models like ResNet or UNet using PyTorch or TensorFlow.
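Before recreating a full ResNet, it helps to understand its core building block: the residual (skip) connection. The sketch below is an illustrative toy in plain Python, not the real architecture; actual ResNet blocks use learned convolution weights, batch normalization, and tensor libraries like PyTorch, and the identity weight matrices here are chosen only to make the arithmetic easy to follow.

```python
def matvec(W, x):
    """Multiply matrix W by vector x (plain lists, no framework)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def residual_block(x, W1, W2):
    """Compute relu(F(x) + x) where F(x) = W2 · relu(W1 · x).

    The "+ x" skip connection is what lets very deep networks train:
    even if F(x) is near zero, the block still passes x through.
    """
    h = relu(matvec(W1, x))
    f = matvec(W2, h)
    return relu([fi + xi for fi, xi in zip(f, x)])

# With identity weights, F(x) = relu(x), so the block returns relu(relu(x) + x).
I = [[1.0, 0.0], [0.0, 1.0]]
print(residual_block([1.0, -2.0], I, I))  # -> [2.0, 0.0]
```

Once the skip-connection idea is clear, the PyTorch version is a direct translation: replace `matvec` with `nn.Conv2d` layers and the list addition with tensor addition inside `forward`.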


Gain Expertise in Image Processing Techniques

Understanding image processing is critical for tasks like data pre-processing, feature extraction, and real-time applications. Focus on:

  • Feature Detection and Description: Study techniques like SIFT, ORB, and HOG for tasks such as image matching or object tracking.
  • Edge Detection: Techniques like Canny or Sobel filters are foundational for many computer vision workflows.
  • Fourier Transforms: Learn how frequency-domain analysis can be applied to denoise images or detect repeating patterns.

Example: Combining OpenCV’s edge detection with PyTorch’s neural networks allows you to design hybrid systems that are both efficient and robust, like automating inspection tasks in manufacturing.

Tip: Start with OpenCV, a comprehensive library for image processing. It’s an excellent tool to build a strong foundation before moving on to deep learning-based methods.
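To make the Sobel technique concrete, here is a dependency-free sketch of gradient-magnitude edge detection. In practice you would call `cv2.Sobel` or `cv2.Canny`, but spelling out the convolution once builds intuition for what those functions do; the 4×4 test image below is invented for illustration.

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve_at(img, y, x, kernel):
    """Apply a 3x3 kernel centred on pixel (y, x)."""
    return sum(
        kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
        for ky in range(3) for kx in range(3)
    )

def sobel_magnitude(img):
    """Gradient magnitude for the interior pixels of a grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = convolve_at(img, y, x, SOBEL_X)
            gy = convolve_at(img, y, x, SOBEL_Y)
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: a dark half next to a bright half.
image = [[0, 0, 255, 255]] * 4
edges = sobel_magnitude(image)
print(edges[1])  # strongest response sits at the dark/bright boundary
```

The OpenCV equivalent is a one-liner (`cv2.Sobel(img, cv2.CV_64F, 1, 0)`), which also handles borders, larger kernels, and performance for full-resolution images.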

Get Familiar with Emerging Technologies and Trends

Staying updated is critical for growth in computer vision. Keep an eye on:

  • Transformers for Vision Tasks: Vision Transformers (ViT) and hybrid approaches like Swin Transformers are changing image classification and segmentation.
  • Self-Supervised Learning: Frameworks like DINO and MAE are redefining how models learn meaningful representations without extensive labeled data.
  • 3D Vision and Depth Sensing: Techniques like 3D Gaussian Splatting and NeRF (Neural Radiance Fields), along with LiDAR-based solutions, are becoming essential in fields like AR/VR and autonomous systems.
    Example: While NeRF models have demonstrated incredible potential in generating photorealistic 3D scenes from 2D images, 3D Gaussian Splatting has emerged as a faster and more efficient alternative, making it the current standard for many applications. Specializing in 3D vision can open doors to industries like gaming, AR/VR, and metaverse development.

Tip: Participate in Kaggle competitions to practice applying these emerging technologies to real-world problems.

Develop Cross-Disciplinary Knowledge

Computer vision often overlaps with other domains like natural language processing (NLP) and robotics. Developing cross-disciplinary knowledge can make you more versatile:

  • CLIP: A model that combines vision and language, enabling tasks like zero-shot classification or image-text retrieval.
  • SLAM (Simultaneous Localization and Mapping): Essential for robotics and AR/VR, where understanding the spatial environment is key.

Example: Using CLIP for a multimodal project like creating a visual search engine for retail can showcase your ability to work across disciplines.
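As a toy illustration of the zero-shot idea: once a CLIP-style model embeds images and text prompts in a shared space, classification reduces to picking the prompt with the highest cosine similarity. The embeddings below are invented for illustration; a real pipeline would obtain them from a model such as CLIP via its image and text encoders.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_classify(image_emb, text_embs):
    """Return the prompt whose embedding is closest to the image embedding."""
    return max(text_embs, key=lambda label: cosine_similarity(image_emb, text_embs[label]))

# Hypothetical 3-D embeddings (a real CLIP space has 512+ dimensions).
image_embedding = [0.9, 0.1, 0.2]
prompts = {
    "a photo of a cat": [0.8, 0.2, 0.1],
    "a photo of a dog": [0.1, 0.9, 0.3],
}
print(zero_shot_classify(image_embedding, prompts))  # -> "a photo of a cat"
```

Because no classifier is trained on fixed classes, adding a new category is as simple as adding a new text prompt, which is what makes the zero-shot setting so useful for retail search.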

Research and Development Goals: Innovating Through Research

In computer vision, research drives breakthroughs that power new applications and technologies. Whether you’re working in academia or industry, contributing to cutting-edge advancements can shape your career and position you as an innovator. Here’s how to set impactful research and development goals:

Contribute to Groundbreaking Studies

  • Collaborate with academic or industrial research teams to explore new approaches in areas like image segmentation, object detection, or 3D scene reconstruction.
  • Stay informed about trending topics like self-supervised learning, generative AI (e.g., diffusion models), or 3D vision using NeRF.

Example: Researchers at Google AI introduced NeRF (Neural Radiance Fields), which has revolutionized 3D rendering from 2D images. Participating in similar projects can set you apart as a thought leader.

Tip: Follow top conferences like CVPR, ICCV, SIGGRAPH, and NeurIPS to stay updated on the latest developments and identify research areas where you can contribute.


Publish Papers in Reputable Journals and Conferences

  • Focus on publishing high-quality research that offers novel solutions or insights. Start with workshops or smaller conferences before targeting top-tier journals.
  • Collaborate with peers to ensure diverse perspectives and rigorous methodologies in your research.

Example: Publishing a paper on improving image segmentation for medical imaging can highlight your ability to create solutions that directly impact lives.

Tip: Use platforms like arXiv to share your work as preprints with the broader community and gain visibility before formal publication.

Develop and Implement New Algorithms

  • Aim to design algorithms that solve specific real-world problems, such as reducing computation time for object detection or improving the accuracy of facial recognition systems.
  • Benchmark your work against existing methods to demonstrate its value.

Example: YOLO (You Only Look Once) became a widely used object detection algorithm due to its speed and accuracy. A similar contribution can position you as an industry expert.

Tip: Use open-source datasets like COCO or ImageNet to train and test your models, and make your code available on platforms like GitHub to build credibility.
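Benchmarking a detector against existing methods usually starts with Intersection-over-Union (IoU) between predicted and ground-truth boxes, the same overlap measure that underlies COCO's mAP metric. A minimal sketch, assuming axis-aligned boxes given as `(x1, y1, x2, y2)` corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; max(0, ...) handles disjoint boxes.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

ground_truth = (0, 0, 10, 10)
prediction = (5, 5, 15, 15)
print(iou(ground_truth, prediction))  # 25 / 175 ≈ 0.143
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5, so this small function is the building block for precision/recall curves and mAP.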

Leverage Tools and Resources

  • Use frameworks like PyTorch or TensorFlow to experiment with state-of-the-art models and techniques.
  • OpenCV University courses can help you build the foundational and advanced knowledge required to excel in research.

Actionable Tip: Start small by recreating experiments from existing papers, then move on to creating your own enhancements.

Project and Product Management Goals: Leading Projects and Teams

Taking the lead on projects or managing products is a natural next step for many computer vision engineers. These roles allow you to combine your technical expertise with leadership skills, creating opportunities to drive impactful outcomes.


Lead High-Impact Projects

  • Take on leadership roles in projects that integrate computer vision into real-world applications. This could range from automating quality control in manufacturing to enabling AI-powered healthcare diagnostics.

Example: Managing a project to deploy a computer vision-based quality inspection system in a factory can showcase your ability to lead end-to-end solutions.

Tip: Start with smaller projects to build confidence, then gradually expand your scope to larger, multi-functional initiatives.

Deliver Products from Concept to Market

  • Gain experience in the entire lifecycle of product development:
    • Ideation: Identifying the problem and brainstorming solutions.
    • Prototyping: Building and testing proof-of-concept solutions.
    • Deployment: Scaling and optimizing solutions for real-world use.

Collaborate Across Teams

  • Work with data scientists, software engineers, and business teams to align technical goals with business objectives.
  • Improve your communication skills to effectively translate complex concepts into actionable tasks.

Example: Collaborating with software engineers to integrate a computer vision solution into a retail application can demonstrate your ability to connect technical and non-technical domains.

Tip: Practice explaining your projects to non-technical audiences—it’s a skill that will serve you well in leadership roles.

Learn Project Management Frameworks

  • Familiarize yourself with methodologies like Agile or Scrum to manage tasks and timelines effectively.
  • Consider certifications like PMP or training in tools like Kanban for added credibility.

Tip: Balancing technical responsibilities with management duties can be challenging. Start with hybrid roles, such as technical lead, before fully transitioning into management.

Leading projects or managing products not only enhances your impact but also helps you develop skills that are valuable across industries. Whether you’re building the next big AI product or managing a team of researchers, these goals can take your career to the next level.

Industry Specialization Goals: Becoming a Niche Expert

Specializing in a particular industry can set you apart as an expert in your field. Computer vision is a versatile discipline with applications across sectors like healthcare, autonomous systems, and retail. Focusing on a niche allows you to deepen your knowledge and create a strong personal brand.


Identify High-Demand Industries

  • Autonomous Vehicles: Dive into technologies like 3D vision, object detection, and path planning. Companies like Tesla and Waymo are actively seeking experts in this field.
  • Healthcare: Work on applications like medical image segmentation or diagnostic tools for diseases such as cancer or diabetic retinopathy.
  • Robotics: Explore vision-guided robots for warehouse automation, agriculture, or advanced manufacturing.

Example: Specializing in autonomous vehicles can open opportunities to work with industry giants or startups shaping the future of mobility.

Tip: Follow industry trends to identify growing demand. For instance, the rise of generative AI in 3D modeling is creating opportunities in AR/VR and gaming.

Develop Domain-Specific Knowledge

  • Learn industry-specific challenges and regulations. For example, healthcare solutions must comply with standards like HIPAA.
  • Focus on tools and techniques relevant to your chosen niche. For robotics, understanding SLAM (Simultaneous Localization and Mapping) is crucial.

Example: A deep understanding of SLAM can make you indispensable in robotics or AR/VR development.

Tip: Participate in sector-focused hackathons or competitions to build hands-on experience and grow your portfolio.

Collaborate with Industry Leaders

  • Attend conferences and workshops tailored to your chosen field. Build a professional network by connecting with industry experts on platforms like LinkedIn or during industry events.

Example: Networking at conferences can help you find mentors and collaborators in your specialization area.

Networking and Community Engagement Goals: Building Professional Connections

Building a strong network is vital for career growth. Engaging with the computer vision community can help you stay updated on trends, discover job opportunities, and find project collaborators.


Attend Conferences and Workshops

  • Participate in renowned events like CVPR, ECCV, or NeurIPS. These gatherings provide opportunities to learn from industry leaders and showcase your work.
  • Explore local meetups or hackathons to connect with peers and exchange ideas.

Example: Presenting your project at a CVPR workshop can establish your credibility and expand your professional circle.

Tip: If you can’t attend in person, look for virtual events and webinars that offer similar benefits.

Contribute to the Community

  • Share your knowledge by writing blogs, creating tutorials, or contributing to open-source projects. Platforms like Medium, GitHub, and LinkedIn are excellent places to start.
  • Mentor aspiring computer vision engineers or participate in community forums to give back and establish your presence.

Example: Writing a step-by-step guide on implementing YOLOv8 can help you gain recognition in the community.

Tip: Regularly contribute to open-source libraries like OpenCV to demonstrate your skills and commitment to the community.

Build Meaningful Connections

  • Connect with thought leaders and peers on LinkedIn or Twitter. Engage with their work by sharing insights or asking thoughtful questions.
  • Join online communities like Kaggle, Stack Overflow, or GitHub Discussions to collaborate with like-minded professionals.

Example: Participating in Kaggle competitions improves your skills and helps you connect with top-tier talent in the field.

Tip: Networking isn’t just about meeting people—it’s about maintaining relationships. Follow up after conferences or collaborations to keep the connection alive.

Grow Through Collaborative Learning

  • Join study groups or enroll in courses that promote interaction with other learners. Sharing experiences can deepen your understanding of complex topics.

Tip: Actively participate in course discussions or forums to maximize your engagement and learning.

Entrepreneurial and Business Development Goals: Innovating and Leading

For entrepreneurially minded engineers, computer vision offers exciting opportunities to combine technical expertise with business skills. By starting your own venture or contributing to business development, you can shape innovative solutions and address market needs directly.


Start Your Own Tech Venture

  • Identify real-world problems that computer vision can solve. Common areas include:
    • Retail: Inventory tracking or virtual try-ons.
    • Healthcare: AI-powered diagnostic tools.
    • Agriculture: Crop monitoring and pest detection.
  • Develop a minimum viable product (MVP) to demonstrate your solution’s potential.

Example: Launching a startup offering computer vision solutions for retail, such as automated shelf analysis, can attract investors and clients.

Tip: Use frameworks like OpenCV for prototyping and streamline development with cloud services like AWS or Google Cloud.

Secure Funding and Partnerships

  • Pitch your ideas to venture capitalists or apply for grants to secure funding.
  • Collaborate with established companies to accelerate development and market entry.

Example: Many successful startups, such as Scale AI, began by identifying niche challenges and partnering with industry leaders.

Tip: Prepare a solid business plan with a clear value proposition to stand out to investors.

Develop Proprietary Applications

  • Build solutions that cater to specific industries. Proprietary applications can give you a competitive edge and generate revenue through licensing or subscriptions.

Example: Creating a vision-based inspection tool for manufacturing could streamline quality control and open doors to recurring business.

Tip: Stay user-focused. Build intuitive interfaces and prioritize features that address client pain points.

Combine Technical and Business Skills

  • Expand your knowledge beyond technical expertise. Learn about market analysis, customer acquisition, and scaling strategies.
  • Consider taking business courses or certifications to enhance your entrepreneurial skillset.

Tip: Use your technical background to solve problems that non-technical founders might overlook, giving your business a unique edge.

Achieve Your Career Goals with OpenCV University

Whether you aim to master technical skills, lead groundbreaking projects, or launch your own startup, OpenCV University has the resources to support you. Our curated courses are designed to help you at every career stage.

Free Courses

  • Get started with the fundamentals of computer vision and machine learning.
  • Explore foundational courses like the OpenCV Bootcamp or TensorFlow Bootcamp to build your skills without breaking the bank.

Premium Courses

  • For a more in-depth learning experience, our premium ‘Computer Vision Deep Learning Master Bundle’ course is tailored to help you excel in advanced topics like deep learning, computer vision applications, and AI-driven solutions.
  • Learn directly from industry experts and gain practical skills that are immediately applicable in professional settings.

The post Different Types of Career Goals for Computer Vision Engineers appeared first on OpenCV.

STMicro releases STM32N6 Cortex-M55 MCU series with in-house NPU and dedicated computer vision pipeline

STM32N6 AI Demo

STMicro has announced the availability of the STM32N6 microcontroller series based on the 800MHz ARM Cortex-M55 and the 600 GOPS-capable Neural-ART Accelerator. The STM32N6 is the company’s “newest and most powerful STM32 series,” bringing MPU-level performance to MCUs. It is the first STM32 to feature the Arm Cortex-M55 and offer up to 4.2MB of embedded RAM. Additionally, the chip includes ST’s NeoChrom GPU and an H.264 hardware encoder. According to Remi El-Ouazzane, MDRF (Microcontrollers, Digital ICs, and RF Products) President at STMicro, the STM32N6 “marks the beginning of a long journey of AI hardware-accelerated STM32, which will enable innovations in applications and products in ways not possible with any other embedded processing solution.” STMicro offers two versions of the STM32N6 MCU: the STM32N6x7 AI line featuring the Neural-ART accelerator and the STM32N6x5 GP (general-purpose) product line without an NPU. The microcontroller series is primarily targeted at computer vision and audio [...]

The post STMicro releases STM32N6 Cortex-M55 MCU series with in-house NPU and dedicated computer vision pipeline appeared first on CNX Software - Embedded Systems News.

Lattice unveils Nexus 2 small FPGA platform, Lattice Avant 30 and Avant 50 mid-range devices, updated Lattice design software tools

Lattice Nexus 2

Lattice Semiconductor announced several new FPGAs and software tools at the Lattice Developers Conference 2024, which took place on December 10-11. First, the company unveiled the Nexus 2 small FPGA platform, starting with the Certus-N2 general-purpose FPGAs offering significant efficiency and performance improvements in this category of devices. The Lattice Avant 30 and Avant 50 were also introduced as mid-range FPGA devices with new capacity options to enable edge-optimized and advanced connectivity applications. Finally, the company released new versions of Lattice design software tools and application-specific solution stacks to help accelerate customer time-to-market for edge AI, embedded vision, factory automation, and automotive designs with Lattice Drive. Let’s have a look at the highlights of each announcement. Lattice Nexus 2 small FPGA platform and Certus-N2 FPGA Highlights and benefits of the Lattice Nexus 2 small FPGA platform: Power Efficiency against similar class competitive devices Up to 3x lower power Up [...]

The post Lattice unveils Nexus 2 small FPGA platform, Lattice Avant 30 and Avant 50 mid-range devices, updated Lattice design software tools appeared first on CNX Software - Embedded Systems News.

Sipeed’s MaixCAM-Pro AI camera devkit adds 2.4-inch LCD, 1W speaker, PMOD interface on top of WiFi 6 and BLE 5.4

Sipeed MaixCAM Pro AI camera devkit

Sipeed has recently released the MaixCAM-Pro AI camera devkit built around the SOPHGO SG2002 RISC-V (and Arm, and 8051) SoC which also features a 1 TOPS NPU for AI tasks. The module includes a 2.4-inch color touchscreen and supports up to a 5MP camera module. Other features include WiFi 6, BLE 5.4, optional Ethernet, built-in audio capabilities, a PMOD interface, GPIOs, and more. Additionally, it features an IMU, RTC chip, and AXP2101 power management for enhanced performance. The module is designed for AI vision, IoT, multimedia, and real-time processing applications. Just a few months back, Sipeed introduced the MaixCAM AI camera devkit, which is also built around the SOPHGO SG2002 RISC-V SoC. The new module improves on the MaixCAM with a redesigned PCB, upgraded casing, and various new features including a 2.4-inch IPS touchscreen (640×480), a 1W speaker, expanded IO interfaces, a power button, and an illumination LED. It also [...]

The post Sipeed’s MaixCAM-Pro AI camera devkit adds 2.4-inch LCD, 1W speaker, PMOD interface on top of WiFi 6 and BLE 5.4 appeared first on CNX Software - Embedded Systems News.

SONOFF CAM Slim Gen2 Review – A tiny indoor security camera tested with eWeLink and Home Assistant

SonoffCAMSlim2 Cover

We have received the latest tiny indoor security camera from SONOFF: the second generation of the CAM Slim series, known as the CAM Slim Gen2 (or CAM S2 for short). Some of you might remember the first-generation CAM Slim model reviewed by Jean-Luc about two years ago. The Gen2 version keeps the same 1080p resolution but comes with several upgraded features, including AI algorithms to distinguish living beings, customizable detection zones, customizable privacy zones, sleep mode, enhanced low-light image quality, and flexible storage management. Although it’s packed with several enhancements, its price is lower than the Gen1’s. Let’s delve into the details! SONOFF CAM Slim Gen2 unboxing Inside the box, you’ll find a compact manual, a USB-C cable, a mounting kit, and a sticker template acting as a drilling guide. The camera is smaller than your palm and comes mounted on a versatile, rotatable base, making installation in various positions [...]

The post SONOFF CAM Slim Gen2 Review – A tiny indoor security camera tested with eWeLink and Home Assistant appeared first on CNX Software - Embedded Systems News.

Getting Started with Raspberry Pi AI HAT+ (26 TOPS) and Raspberry Pi AI camera

Raspberry Pi AI HAT+ and AI camera review

Raspberry Pi recently launched several AI products including the Raspberry Pi AI HAT+ for the Pi 5 with 13 TOPS or 26 TOPS of performance and the less powerful Raspberry Pi AI camera suitable for all Raspberry Pi SBC with a MIPI CSI connector. The company sent me samples of the AI HAT+ (26 TOPS) and the AI camera for review, as well as other accessories such as the Raspberry Pi Touch Display 2 and Raspberry Pi Bumper, so I’ll report my experience getting started mostly following the documentation for the AI HAT+ and AI camera. Hardware used for testing In this tutorial/review, I’ll use a Raspberry Pi 5 with the AI HAT+ and a Raspberry Pi Camera Module 3, while I’ll connect the AI camera to a Raspberry Pi 4. I also plan to use one of the boards with the new Touch Display 2. Let’s go through a [...]

The post Getting Started with Raspberry Pi AI HAT+ (26 TOPS) and Raspberry Pi AI camera appeared first on CNX Software - Embedded Systems News.

Mercury X1 wheeled humanoid robot combines NVIDIA Jetson Xavier NX AI controller and ESP32 motor control boards

Mercury X1 wheeled humanoid robot

Elephant Robotics Mercury X1 is a 1.2-meter high wheeled humanoid robot with two robotic arms, using an NVIDIA Jetson Xavier NX as its main controller and ESP32 microcontrollers for motor control. It is suitable for research, education, service, entertainment, and remote operation. The robot offers 19 degrees of freedom, can lift payloads of up to 1kg, work up to 8 hours on a charge, and travel at up to 1.2m/s or about 4.3km/h. It’s based on the company’s Mercury B1 dual-arm robot and a high-performance mobile base. Mercury X1 specifications: Main controller – NVIDIA Jetson Xavier NX CPU – 6-core NVIDIA Carmel ARM v8.2 64-bit CPU with 6MB L2 + 4MB L3 caches GPU – 384-core NVIDIA Volta GPU with 48 Tensor Cores AI accelerators – 2x NVDLA deep learning accelerators delivering up to 21 TOPS at 15 Watts System Memory – 8 GB 128-bit LPDDR4x @ 51.2GB/s Storage – 16 [...]

Portwell PJAI-100-ON rugged Edge AI embedded system features NVIDIA Jetson Orin Nano for industrial quality control

Portwell PJAI-100-ON

Portwell has launched the PJAI-100-ON, a rugged and compact embedded system powered by the NVIDIA Jetson Orin Nano SOM and designed for demanding industrial environments. It operates reliably across a wide temperature range of -20°C to 60°C with a fanless, quiet design that requires minimal maintenance. Equipped with dual Gigabit Ethernet LAN ports and M.2 slots for wireless connectivity, the PJAI-100-ON is tailored for Edge AI applications. This system enhances quality control in manufacturing by using advanced optical inspection technology to detect defects, improving quality assurance and reducing production errors. Previously, we covered other Jetson Orin Nano embedded systems, including Aetina's AIE-KO21, AIE-KO31, AIE-KN31, and AIE-KN41, and the DFI X6-MTH-ORN fanless Edge AI Box computer, as well as Portwell's WEBS-21J0-ASL, another rugged embedded system built around an Intel Atom x7000RE Amston Lake Nano-ITX motherboard and equipped with a Hailo-8 AI accelerator. Portwell PJAI-100-ON specifications: SoM – NVIDIA Jetson Orin Nano CPU [...]

Giveaway Week 2024 – RT-Thread Vision board with Renesas RA8D1 Arm Cortex-M85 MCU

RT-Thread Vision board

It's already Friday, and the fifth prize of CNX Software's Giveaway Week 2024 will be the RT-Thread Vision board equipped with a Renesas RA8D1 Arm Cortex-M85 microcontroller, a camera, an optional LCD display, and a 40-pin GPIO header. The board is used as an evaluation platform for the Renesas RA8D1 MCU and the RT-Thread real-time operating system. As its name implies, it's mainly designed for computer vision applications leveraging the Helium MVE (M-Profile Vector Extension) for digital signal processing (DSP) and machine learning (ML) applications. I haven't reviewed it myself; instead, I received two samples from RT-Thread, who sent them to me by mistake, so I'll give them away here and on the Thai website. But it was reviewed by Supachai, who tested the RT-Thread Vision board with OpenMV and ran a few benchmarks last June. The Helium MVE did not seem to be utilized in OpenMV at that time (June [...]

M5Stack releases AX630C-powered offline “Module LLM” for local smart home and AI applications

M5Stack Module LLM

The M5Stack Module LLM is yet another box-shaped device from the company that provides artificially intelligent control without internet access. It is described as an “integrated offline Large Language Model (LLM) inference module” which can be used to implement local LLM-based solutions in smart homes, voice assistants, and industrial control. Module LLM is powered by the AX630C SoC, equipped with 4GB LPDDR4 memory, 32GB storage, and a 3.2 TOPS (INT8) or 12.8 TOPS (INT4) NPU. M5Stack says the main chip has an average runtime power consumption of 1.5W, making it suitable for long-term operation. It has a built-in microphone, speaker, microSD card slot, and USB OTG. The USB port can connect peripherals such as cameras and debuggers, and the microSD card slot supports cold and hot firmware updates. The M5Stack Module LLM joins the list of other offline, on-device LLM-based solutions, such as the SenseCAP Watcher, Useful Sensors’ AI in [...]

UP Squared Pro 710H SBC combines Alder Lake-N CPU with Hailo-8 AI accelerator for the machine vision market

UP Squared Pro 710H

AAEON UP Squared Pro 710H is a 4×4-inch SBC and developer board designed for machine vision applications with an Intel Processor N97 or Core i3-N305 Alder Lake-N CPU and a 26 TOPS Hailo-8 Edge AI processor. The single board computer supports up to two MIPI CSI cameras via a 61-pin connector and offers expansion capabilities through a 40-pin Raspberry Pi-compatible header and three M.2 sockets for 5G, Wi-Fi, and NVMe storage support. While it's only designed to operate in the 0 to 60°C temperature range, it does have industrial features such as 12V to 36V DC input and two RS-232/422/485 connectors. UP Squared Pro 710H specifications:

- Alder Lake N-series SoC (one or the other)
  - Intel Processor N97 quad-core processor up to 3.6 GHz with 6MB cache, 24EU Intel UHD Graphics Gen 12 @ 1.2 GHz; TDP: 12W
  - Intel Core i3-N305 octa-core processor up to 3.8 GHz with 6MB cache, 32EU Intel [...]

OpenUC2 10x is an ESP32-S3 portable microscope with AI-powered real-time image analysis

Seeed Studio OpenUC2 10x AI Microscope

Seeed Studio has recently launched the OpenUC2 10x AI portable microscope built around the XIAO ESP32-S3 Sense module. Designed for education, environmental research, health monitoring, and prototyping applications, this microscope features an OV2640 camera with 10x magnification, precise motorized focusing, high-resolution imaging, and real-time TinyML processing for image handling. The microscope is modular and open-source, making it easy to customize and expand its features using 3D-printed parts, motorized stages, and additional sensors. It supports Wi-Fi connectivity, has a durable body, uses USB-C for power, and its swappable objectives make it usable in various applications. Previously, we have written about similar portable microscopes like the ioLight microscope and the KoPa W5 Wi-Fi Microscope, and Jean-Luc also tested a cheap USB microscope to read part numbers of components. Feel free to check those out if you are looking for a cheap microscope. OpenUC2 10x specifications: Wireless MCU – Espressif Systems ESP32-S3 CPU [...]

Microchip PIC64HX1000 64-bit RISC-V AI MPU delivers post-quantum security for aerospace, defense, and automotive applications

Microchip PIC64HX1000 64 bit AI MPU

Microchip recently unveiled the PIC64HX1000 64-bit RISC-V AI microprocessor (MPU) family designed for mission-critical intelligent edge applications in the aerospace, defense, industrial, and medical sectors thanks to a quantum-resistant design. These new MPUs feature eight SiFive Intelligence X280 cores, each clocked at 1 GHz. The MPUs are engineered with a decoupled vector pipeline supporting 512-bit operations, enabling the PIC64HX1000 to achieve up to 2 TOPS for AI/ML processing, and come equipped with integrated Time-Sensitive Networking (TSN) Ethernet connectivity. This new microprocessor includes a dedicated system controller for runtime monitoring and fault management, WorldGuard architecture for workload isolation, and post-quantum defense-grade cryptography, which includes the NIST-standardized FIPS 203 and FIPS 204 cryptographic algorithms, ensuring protection against future quantum computing threats. PIC64HX1000 64-bit AI MPU specifications:

- MPU variants
  - PIC64HX1000 – IN (Industrial)
  - PIC64HX1000 – AV (Aviation)
  - PIC64HX1000 – MI (Military)
- CPU – 8x 64-bit RISC-V cores (SiFive X280), up to 1 GHz, [...]

Raspberry Pi AI HAT+ features Hailo-8L or Hailo-8 AI accelerator with up to 26 TOPS of performance

Raspberry Pi AI HAT+ 26 TOPS

The Raspberry Pi AI HAT+ is a PCIe expansion board for the Raspberry Pi 5 with either a 13 TOPS Hailo-8L or a 26 TOPS Hailo-8 AI accelerator. You may remember the Raspberry Pi AI Kit was introduced last June with an official M.2 Key M HAT+ and a 13 TOPS Hailo-8L M.2 AI accelerator module. The new Raspberry Pi AI HAT+ is quite similar, except the chip is soldered on the expansion board and offered with either the Hailo-8L or the more powerful Hailo-8 variant. Raspberry Pi AI HAT+ specifications:

- Supported SBC – Raspberry Pi 5
- AI accelerator
  - Hailo-8L AI accelerator with up to 13 TOPS of performance
  - Hailo-8 AI accelerator with up to 26 TOPS of performance
- Host Interface – PCIe Gen3 interface
- 16mm stacking GPIO header
- PCIe FPC cable
- Spacers and screws enabling fitting on Raspberry Pi 5 with Raspberry Pi Active Cooler
- Dimensions – 65 x 56.5 mm [...]

Press Release: PyCharm Becomes Official IDE of OpenCV, JetBrains Joins as Silver Member

PALO ALTO, CA– JetBrains, the creators of PyCharm, the popular Python IDE for data science and web development, has formed a new partnership with OpenCV, the world’s largest library for computer vision. As part of the collaboration, JetBrains has joined OpenCV as a Silver Member, making PyCharm the official Python IDE for OpenCV.

Actively developed since June 2000, OpenCV is essential for developers and researchers working in fields like artificial intelligence (AI), machine learning, and robotics, providing powerful, open-source tools that accelerate innovation. JetBrains’ financial contribution as a Silver Member will help sustain OpenCV.org, ensuring that this invaluable resource remains free for both commercial and non-commercial projects. This is especially important as more and more of the tech industry becomes closed off to the open source community.

JetBrains, known for its suite of world-class development tools, including PyCharm, has a long-standing reputation for delivering innovative software solutions. PyCharm, in particular, is a favorite among developers due to its smart code completion, deep code analysis, support for web development frameworks, and interactive Jupyter notebooks. In addition, PyCharm is powered by an AI Assistant and provides superior database support, Python script editing, as well as support for Hugging Face, Databricks, Conda, dbt-Core, and much more. Its slogan, “Focus on code and data. PyCharm will take care of the rest,” reflects the platform’s mission to let developers focus on their core tasks while PyCharm automates routine processes. This is especially beneficial for developers working with OpenCV, as it ensures that AI and data science projects are developed faster, more efficiently, and with fewer errors.

Dr. Satya Mallick, CEO of OpenCV, expressed enthusiasm for the partnership, saying, “High-quality development tools like PyCharm are essential for driving innovation in AI and computer vision. JetBrains’ support as a Silver Member ensures that OpenCV continues to be freely available for developers around the world. PyCharm’s powerful features will undoubtedly enhance productivity and spark the imagination of OpenCV community members everywhere.”

A JetBrains executive commented, “At JetBrains, giving back to the community is a core part of our mission. By partnering with OpenCV, we’re supporting a global ecosystem of developers working in AI and computer vision, ensuring they have the best tools and open-source resources available. Our collaboration with OpenCV reflects our commitment to advancing technology and empowering developers to focus on what matters: creating impactful code.”

JetBrains’ involvement in OpenCV will also be highlighted on OpenCV Live, a popular weekly show which airs Thursday at 9am Pacific. PyCharm will be featured in episodes that showcase how its features can enhance the development process for computer vision and AI applications, beginning with an appearance on November 7th. Registration for the stream is available at http://opencv.live

As an industry leader, JetBrains has long been committed to supporting the open-source community. Trusted by developers worldwide, including those at companies like Google, Microsoft, and Meta, JetBrains provides tools that improve productivity and foster innovation. The company’s decision to become an OpenCV Silver Member reinforces its dedication to the advancement of AI and computer vision, two fields that are rapidly transforming industries around the world.

For organizations interested in joining JetBrains in supporting open-source computer vision and AI, OpenCV offers a variety of membership opportunities. Becoming a member allows companies to contribute directly to the sustainability of OpenCV, ensuring that these powerful tools remain accessible to all.

More information on how to support OpenCV’s mission can be found at opencv.org/membership

About JetBrains
JetBrains is a software development company renowned for creating powerful, intelligent tools designed to enhance developer productivity. Founded in 2000, JetBrains offers a wide range of integrated development environments (IDEs) and tools tailored to various programming languages and platforms. Among its flagship products are PyCharm, a leading Python IDE that provides robust features for coding, debugging, and testing, and CLion, an advanced IDE for C and C++ development. JetBrains’ tools are trusted by developers worldwide to streamline workflows, improve code quality, and foster efficient development across multiple programming environments.

About OpenCV
OpenCV is the largest and most widely used open-source library for computer vision and artificial intelligence. The library is downloaded over 20 million times per month, and used in an estimated 80% of embedded vision systems. OpenCV code powered Stanley, the first DARPA Grand Challenge winner, and was used in NASA’s 2020 Mars Helicopter project. Operated by the non-profit Open Source Vision Foundation, OpenCV is dedicated to advancing the field through open collaboration and democratizing access to transformative technologies. OpenCV’s official website is https://opencv.org

The post Press Release: PyCharm Becomes Official IDE of OpenCV, JetBrains Joins as Silver Member appeared first on OpenCV.
