
SeamPose Powered by XIAO nRF52840, Repurposing Seams for Upper-Body Pose Tracking with Smart Clothing

29 October 2024 at 17:31

Imagine turning your everyday clothes into smart motion-tracking tools. That’s exactly what a team of researchers at Cornell University has achieved with SeamPose. This innovative project, led by Catherine Tianhong Yu and Professor Cheng Zhang, uses conductive threads sewn over the seams of a shirt, together with the XIAO nRF52840, to transform it into an upper-body pose-tracking device. Unlike traditional sensor-laden garments that change the clothing’s appearance and comfort, SeamPose blends seamlessly into everyday wear without compromising aesthetics or fit.

This project offers exciting potential applications in areas like health monitoring, sports analytics, AR/VR, and human-robot interaction by making wearable tracking more accessible and comfortable.

Source: Cornell University Team

Hardware

To build SeamPose, the following key hardware elements were used:

    • Long-sleeve T-shirt with machine-sewn conductive thread along the seams
    • Customized Sensing Board:
      • XIAO nRF52840
      • A 36 × 31 mm board with two FDC2214 capacitance-to-digital converters
      • 3.7 V, 290 mAh LiPo battery
Source: Cornell University Team

How SeamPose Works

SeamPose operates by transforming ordinary seams in a long-sleeve shirt into capacitive sensors, enabling real-time tracking of upper-body movements. Conductive threads, specifically insulated silver-plated nylon, are machine-sewn along key seams—such as the shoulders and sleeves—without altering the garment’s appearance or comfort. As the wearer moves, the seams stretch and shift, causing variations in their capacitance. These signals are captured by a customized sensing board integrated with two FDC2214 capacitance-to-digital converters and the XIAO nRF52840 microcontroller. The XIAO nRF52840 transmits the data wirelessly via Bluetooth Low Energy (BLE) to a nearby computer for processing.
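The paper doesn’t publish the firmware, but the data path is simple enough to sketch. The Arduino-style snippet below (untested, purely illustrative) polls eight capacitance channels from the two FDC2214 converters over I2C and streams them over BLE. The ArduinoBLE library, the UUIDs, and the 50 Hz rate are our assumptions, and the FDC2214 configuration step (clock dividers, drive current, channel MUX) is omitted for brevity.

```cpp
// Minimal sensing-loop sketch: 2 FDC2214 chips x 4 channels = 8 seam sensors,
// streamed as raw 28-bit counts over a BLE notify characteristic.
#include <Wire.h>
#include <ArduinoBLE.h>

const uint8_t FDC_ADDR[2]      = {0x2A, 0x2B}; // two FDC2214s, ADDR pin low/high
const uint8_t REG_DATA_CH0     = 0x00;         // upper 12 bits of 28-bit result
const uint8_t REG_DATA_LSB_CH0 = 0x01;         // lower 16 bits

// Placeholder UUIDs chosen for this example
BLEService        seamService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLECharacteristic seamData("19B10001-E8F2-537E-4F6C-D104768A1214",
                           BLENotify, 8 * sizeof(uint32_t));

uint16_t readReg16(uint8_t addr, uint8_t reg) {
  Wire.beginTransmission(addr);
  Wire.write(reg);
  Wire.endTransmission(false);                 // repeated start
  Wire.requestFrom(addr, (uint8_t)2);
  return ((uint16_t)Wire.read() << 8) | Wire.read();
}

uint32_t readChannel(uint8_t addr, uint8_t ch) {
  // Each channel's 28-bit conversion spans two consecutive 16-bit registers
  uint16_t msb = readReg16(addr, REG_DATA_CH0 + 2 * ch);
  uint16_t lsb = readReg16(addr, REG_DATA_LSB_CH0 + 2 * ch);
  return (((uint32_t)(msb & 0x0FFF)) << 16) | lsb;
}

void setup() {
  Wire.begin();
  BLE.begin();
  BLE.setLocalName("SeamPose");
  BLE.setAdvertisedService(seamService);
  seamService.addCharacteristic(seamData);
  BLE.addService(seamService);
  BLE.advertise();
}

void loop() {
  BLE.poll();
  uint32_t samples[8];
  for (uint8_t chip = 0; chip < 2; chip++)
    for (uint8_t ch = 0; ch < 4; ch++)
      samples[4 * chip + ch] = readChannel(FDC_ADDR[chip], ch);
  seamData.writeValue((uint8_t*)samples, sizeof(samples));
  delay(20);                                   // ~50 Hz streaming
}
```

On the host side, a BLE client subscribes to the characteristic and feeds the eight raw counts into the pose model described next.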

The transmitted signals are fed into a deep learning model that maps the seam data to 3D joint positions relative to the pelvis. This allows the system to interpret complex upper-body movements using only eight sensors distributed symmetrically along the shirt. During testing, SeamPose achieved a mean per joint position error (MPJPE) of 6.0 cm, comparable to more invasive tracking systems. The XIAO nRF52840 ensures seamless real-time data transmission, making SeamPose a breakthrough in wearable technology by combining comfort with precise motion tracking.
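For reference, MPJPE is simply the mean Euclidean distance between predicted and ground-truth 3D joint positions once both are expressed relative to the pelvis. A minimal C++ sketch of the metric (the joint ordering, with the pelvis at index 0, is an assumption):

```cpp
// MPJPE: mean per-joint position error, pelvis-relative.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double norm(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// pred and gt hold the same joints in the same order; index 0 is the pelvis.
double mpjpe(const std::vector<Vec3>& pred, const std::vector<Vec3>& gt) {
  double total = 0.0;
  for (size_t j = 1; j < pred.size(); j++) {
    Vec3 p = sub(pred[j], pred[0]);  // pelvis-relative prediction
    Vec3 g = sub(gt[j],   gt[0]);    // pelvis-relative ground truth
    total += norm(sub(p, g));
  }
  return total / (pred.size() - 1); // mean over the non-root joints
}
```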

Signals

[Figure: capacitance signals from the instrumented seams. Source: Cornell University Team]

What’s Next for SeamPose?

SeamPose offers a glimpse into the future of smart clothing, where everyday garments become powerful tools for motion tracking without sacrificing comfort or design. However, challenges such as real-world deployment and manufacturing smart clothing at scale must be addressed before broader adoption. The research team plans to explore improved seam placement and enhanced sensor calibration for even more accurate tracking.

If you’re excited about SeamPose and want to dive deeper, check out their paper on the ACM Digital Library.

End Note

Hey community, we’re curating a monthly newsletter centered on the beloved Seeed Studio XIAO. If you want to stay up-to-date with:

🤖 Cool Projects from the Community for inspiration and tutorials
📰 Product Updates: firmware updates, new product spoilers
📖 Wiki Updates: new wikis + wiki contributions
📣 News: events, contests, and other community stuff

Please click the image below👇 to subscribe now!


WatchThis: A Wearable Point-and-Ask Interface Powered by Vision-Language Models and XIAO ESP32S3 Sense

21 October 2024 at 11:58

MIT Media Lab researchers Cathy Mengying Fang, Patrick Chwalek, Quincy Kuang, and Pattie Maes have developed WatchThis, a groundbreaking wearable device that enables natural language interactions with real-world objects through simple pointing gestures. Cathy conceived the idea for WatchThis during a one-day hackathon in Shenzhen, part of MIT Media Lab’s “Research at Scale” initiative. Organized by Cedric Honnet and hosted by Southern University of Science and Technology and Seeed Studio, the hackathon provided the perfect setting to prototype the device around the Seeed Studio XIAO ESP32S3 Sense. By integrating Vision-Language Models (VLMs) with a compact wrist-worn device, WatchThis allows users to ask questions about their surroundings in real time, making contextual queries as intuitive as pointing and asking.

Credit: Cathy Fang

Hardware

The WatchThis project is built around the Seeed Studio XIAO ESP32S3 Sense, along with the hardware components shown below:

Credit: Cathy Fang

How the Project Works

WatchThis is designed to seamlessly integrate natural, gesture-based interaction into daily life. The wearable device consists of a watch with a rotating, flip-up camera attached to the back of a display. When the user points at an object of interest, the camera captures the area, and the device processes contextual queries based on the user’s gesture.

The interaction begins when the user flips up the watch body to reveal the camera, which then captures the area the finger points at. The watch’s display shows a live feed from the camera, allowing precise aiming. When the user touches the screen, the device captures the image and pauses the camera feed. The captured RGB image is then compressed into JPEG format and converted to base64, after which an API request is made to query the image.
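On the XIAO ESP32S3 Sense, this capture step maps naturally onto the esp32-camera driver and mbedTLS’s base64 helper. A minimal sketch, assuming the camera has already been initialized in PIXFORMAT_JPEG mode (board-specific pin configuration is omitted):

```cpp
// Grab one JPEG frame and base64-encode it for the API request.
#include "esp_camera.h"
#include "mbedtls/base64.h"

String captureAsBase64() {
  camera_fb_t* fb = esp_camera_fb_get();      // frame is already JPEG-compressed
  if (!fb) return String();

  // base64 output: 4/3 of the input, rounded up, plus a NUL terminator
  size_t cap = 4 * ((fb->len + 2) / 3) + 1;
  unsigned char* out = (unsigned char*)malloc(cap);
  size_t outLen = 0;
  String encoded;
  if (out && mbedtls_base64_encode(out, cap, &outLen, fb->buf, fb->len) == 0) {
    out[outLen] = '\0';
    encoded = String((char*)out);
  }
  free(out);
  esp_camera_fb_return(fb);                   // hand the frame buffer back
  return encoded;
}
```

For larger frames the encoding would be streamed rather than held in RAM, but this shows the shape of the step.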

The device uses these API calls to interact with OpenAI’s GPT-4o model, which accepts both text and image inputs. This allows the user to ask questions such as “What is this?” or “Translate this,” and receive immediate responses. The text response is displayed on the screen, overlaid on the captured image. After the response is shown for 3 seconds, the screen returns to streaming the camera feed, ready for the next command.
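The request itself is a standard chat-completions call carrying one user message with both the text prompt and the base64 image as a data URL. A hedged sketch using the ESP32 HTTPClient; the model name, max_tokens value, and JSON layout follow OpenAI’s public API rather than the WatchThis source, and certificate validation is skipped for brevity:

```cpp
// Send a text prompt plus a captured image to OpenAI's chat-completions API.
#include <WiFiClientSecure.h>
#include <HTTPClient.h>

String askAboutImage(const String& b64Jpeg, const String& prompt, const char* apiKey) {
  WiFiClientSecure client;
  client.setInsecure();  // skip TLS certificate validation in this example
  HTTPClient http;
  http.begin(client, "https://api.openai.com/v1/chat/completions");
  http.addHeader("Content-Type", "application/json");
  http.addHeader("Authorization", String("Bearer ") + apiKey);

  // One user message carrying both the text prompt and the captured image
  String body =
    String("{\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":[") +
    "{\"type\":\"text\",\"text\":\"" + prompt + "\"}," +
    "{\"type\":\"image_url\",\"image_url\":{\"url\":" +
    "\"data:image/jpeg;base64," + b64Jpeg + "\"}}]}],\"max_tokens\":100}";

  int status = http.POST(body);
  String response = (status == 200) ? http.getString() : String();
  http.end();
  return response;  // JSON; the answer text sits in choices[0].message.content
}
```

A real implementation would also JSON-escape the prompt and parse the answer text out of the response before drawing it on the display.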

The software driving WatchThis is written in Arduino-compatible C++ and runs directly on the device. It is optimized for quick and efficient performance, with an end-to-end response time of around 3 seconds. Instead of relying on voice recognition or text-to-speech—which can be error-prone and resource-intensive—the system uses direct text input for queries. Users can further personalize their interactions by modifying the default query prompt through an accompanying WebApp served on the device, allowing tailored actions such as identifying objects, translating text, or requesting instructions.
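The paper doesn’t detail the WebApp, but on the ESP32 a prompt editor can be as small as a single HTTP route. A hypothetical handler (not from the WatchThis source) using the ESP32 core’s WebServer library:

```cpp
// Hypothetical prompt-editing endpoint:
// POST /prompt?text=... replaces the default query sent with each image.
#include <WebServer.h>

WebServer server(80);                    // plain HTTP served on the device
String queryPrompt = "What is this?";    // default query

void setupPromptServer() {
  server.on("/prompt", HTTP_POST, []() {
    if (server.hasArg("text")) queryPrompt = server.arg("text");
    server.send(200, "text/plain", queryPrompt);  // echo the active prompt
  });
  server.begin();
}

// In loop(): server.handleClient();
```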

Credit: Cathy Fang

Applications

Imagine strolling through a city and pointing at a building to learn its history, or identifying an exotic plant in a botanical garden with a mere gesture.

The device goes beyond simple identification, offering practical applications like real-time translation of menu items, a game-changer for travelers and language learners alike.

The research team has discussed even more exciting potential applications:

    • A “Remember this” function could serve as a visual reminder system, potentially aiding those who need to take medication regularly.
    • For urban explorers, a “How do I get there” feature could provide intuitive, spatially-aware navigation by allowing users to point at distant landmarks.
    • A “Zoom in on that” capability could offer a closer look at far-off objects without disrupting the user’s activities.
    • Perhaps most intriguingly, a “Turn that off” function could allow users to control smart home devices with a combination of voice commands and gestures, seamlessly integrating with IoT ecosystems.

While some of these features are still in conceptual stages, they paint a picture of a future where our interactions with the world around us are more intuitive, informative, and effortless than ever before.

Credit: Cathy Fang

Build Your Own WatchThis

Interested in building your own WatchThis wearable? Explore the open-source hardware and software components on GitHub and start creating today! Check out their paper for full details.


