If you want to add a display to your Arduino project, the easiest solution is likely an LCD or OLED screen. But while those are affordable and work well, they may not provide the vibe you’re looking for. If you want a more vintage look, Vaclav Krejci has a great tutorial that walks you through using old-school LED bubble displays with your Arduino.
Krejci’s video demonstrates how to use HPDL-1414 displays, which are what most people call “bubble” displays because they have clear bubble-like lenses over each character’s array of LEDs. They were fairly popular in the late ’70s and ’80s on certain devices, like calculators. These specific bubble displays can show the full range of alphanumeric characters (uppercase only), plus a handful of punctuation marks and special symbols.
The HPDL-1414 displays Krejci used come on driver boards that set the characters based on serial input. In the video, Krejci first connects those directly to a PC via a serial-to-USB adapter board. That helps to illustrate the control method through manual byte transmission.
Then Krejci gets to the good stuff: connecting the HPDL-1414 bubble displays to an Arduino. He used an Arduino UNO Rev3, but the same setup should work with any Arduino board. As you may have guessed from the PC demonstration, the Arduino controls the display via serial commands. Handily, the hex code for each character matches the standard ASCII table, so you can Serial.write() those hex codes or even Serial.print() the characters themselves.
Don’t worry if that sounds a little intimidating, because Krejci has sample code that will let you easily turn any arbitrary array of characters into the serial output you need. Now you can use those awesome bubble displays in your own projects!
A brand new issue of The MagPi is out in the wild, and one of our favourite projects we read about involved rebuilding an old PDP-9 computer with a Raspberry Pi-based device that tests hundreds of components.
Anders Sandahl loves collecting old computers: “I really like to restore them and get them going again.” For this project, he wanted to build a kind of component tester for old DEC (Digital Equipment Corporation) Flip-Chip boards before he embarked on the lengthy task of restoring his 1966 PDP-9 computer — a two-foot-tall machine with six- to seven-hundred Flip-Chip boards inside — back to working order.
His Raspberry Pi-controlled DEC Flip-Chip tester checks the power output of these boards using relay modules and signal clips, giving accurate information about each one’s power draw and output. Once he’s confident each component is working properly, Anders can begin to reassemble the historic DEC PDP-9 computer, which, according to Wikipedia, is one of only 445 ever produced.
Logical approach
“Flip-Chip boards from this era implement simple logical functions, comparable to one 7400-series logic circuit,” Anders explains. “The tester uses Raspberry Pi and an ADC (analogue-to-digital converter) to measure and control analogue signals sent to the Flip-Chip, and digital signals used to control the tester’s circuits. PDP-7, PDP-8 (both 8/S and Straight-8), PDP-9, and PDP-10 (with the original KA processor) all use this generation of Flip-Chips. A testing device for one will work for all of them, which is pretty useful if you’re in the business of restoring old computers.”
Rhode Island Computer Museum (RICM) is where The MagPi publisher Brian Jepson and friend Mike Thompson both volunteer. Mike is part of a twelve-year project to rebuild RICM’s own DEC PDP-9 and, after working on a different Flip-Chip tester there, he got in touch with Anders about his Raspberry Pi-based version. Mike is now busily helping write the user manual for the tester unit.
Mike explains: “Testing early transistor-only Flip-Chips is incredibly complicated because the voltages are all negative, and the Flip-Chips must be tested with varying input voltages and different loads on the outputs.” There are no integrated circuits, just discrete transistors. Getting such an old computer running again is “quite a task” because of the sheer number of broken components on each PCB, and Flip-Chip boards hold lots of transistors and diodes, “all of which are subject to failure after 55+ years”.
Obstacles, of course
The Flip-Chip tester features 15 level-shifter boards. These step down the voltage so components with different power outputs and draws can operate alongside each other safely, without anything getting frazzled. Anders points out the disparity between the Flip-Chips’ 0 and -3V logic levels and the +10 and -15V supply voltages. A great deal of effort went into making this level conversion reliable and failsafe. Anders wrote the testing software himself, and built the hardware “from scratch” using parts from Mouser and custom-designed circuit boards. The project took around two years and cost around $500, with the relays accounting for a major part of that.
Anders favours Raspberry Pi because “it offers a complete OS, file system, and networking in a neat and well-packaged way”, and says it is “a very good software platform that you really just have to do minor tweaks on to get right”. He’s run the tester on Raspberry Pi 3B, 4, and 5, and says it should also run on Raspberry Pi Zero, “but having Ethernet and the extra CPU power makes life easier”.
Although this is a fairly niche project for committed computer restorers, Anders believes his Flip-Chip tester can be built by anyone who can solder fairly small SMD components. Documenting the project so others can build it was quite a task, so it was helpful when Mike got in touch and was able to assist with the write-up. As a fellow computer restorer, Mike says the tester means getting RICM’s PDP-9 working again “won’t be such an overwhelming task. With the tester we can test and repair each of the boards instead of trying to diagnose a very broken computer as a whole.”
The MagPi #147 out NOW!
You can grab the new issue right now from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available at our online store, which ships around the world. You can also get it via our app on Android or iOS.
You can also subscribe to the print version of The MagPi. Not only do we deliver it globally, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico W!
Since we released the Wio Tracker 1110 Dev Kit and the SenseCAP Card Tracker T1000-E, we’ve received a lot of feedback from the community asking for a smaller, more affordable Meshtastic-powered product. Our answer is the XIAO ESP32S3 for Meshtastic & LoRa. As small as your thumb and priced at only $9.90, it’s the smallest dev kit for prototyping your first Meshtastic and LoRa/LoRaWAN projects.
XIAO ESP32S3 + Semtech SX1262 LoRa
Featuring the powerful thumb-sized XIAO ESP32S3 dev board and a Wio-SX1262 extension board, the XIAO ESP32S3 Dev Kit for Meshtastic & LoRa is a super-mini LoRa dev kit supporting LoRa (862–930 MHz) with a 5 km LoRa range. It also supports Wi-Fi and BLE wireless communication with a range of 100 m+. Built on the XIAO ESP32S3, the kit comes with a built-in power management system and can be expanded via I2C, UART, and other GPIO interfaces. Compatible with the Arduino IDE, it’s available for pre-order now at $9.90, with estimated shipping in November 2024.
Key Features of XIAO ESP32S3 for Meshtastic and LoRa Dev Kit
Meshtastic works out of the box: Pre-flashed with Meshtastic firmware, it is ready to go as soon as it’s powered on.
Outstanding RF performance: Supports LoRa (862–930 MHz) alongside 2.4 GHz Wi-Fi and BLE 5.0, with a 2–5 km (LoRa) and 100 m+ (Wi-Fi/BLE) communication range when connected to a U.FL antenna.
Thumb-sized Compact Design: At 21 x 18 mm, it adopts the classic Seeed Studio XIAO form factor, suitable for space-limited projects.
Powerful MCU Board: Incorporates the ESP32-S3’s 32-bit dual-core Xtensa processor running at up to 240 MHz, exposes multiple development interfaces, and supports Arduino and MicroPython.
Elaborate Power Design: Includes a Type-C USB interface and lithium battery charge management.
Same Hardware for Multiple Applications: Can be developed as a Meshtastic node or router, a single-channel LoRaWAN gateway, or a LoRa & LoRaWAN sensor.
Applications of XIAO ESP32S3 for Meshtastic & LoRa
1. Tap into Meshtastic for Off-Grid Communication
Meshtastic is an open-source, off-grid, decentralized LoRa mesh network designed for affordable, low-power devices. This XIAO ESP32S3 Dev Kit offers a flexible and cost-effective solution for Meshtastic developers. You can build a Meshtastic device at an affordable price and use it as an emergency communication tool, similar to the SenseCAP T1000-E but simpler in function. Moreover, thanks to the Seeed Studio XIAO product ecosystem, it’s compatible with the XIAO Expansion Board and the Grove Expansion Board for adding screens, sensors, and more than 300 Grove modules, allowing you to customize your own Meshtastic devices. You can even design your own casing.
2. Configure It as a LoRa/LoRaWAN Sensor Node
It can be set up as a LoRa node, enabling the XIAO ESP32S3 Dev Kit to connect with various sensors and transmit data using LoRaWAN. This offers flexibility for different applications, like home automation. Essentially, the XIAO ESP32S3 Dev Kit can serve as a data collection and transmission node, allowing for communication and control among smart home devices.
3. The Most Cost-Effective Single-Channel LoRa/LoRaWAN Gateway
Thanks to the powerful Semtech SX1262 LoRa chip, you can turn it into a single-channel gateway, making it one of the most cost-effective single-channel LoRaWAN gateways available. It receives LoRa packets on a specific frequency setting and shares them with the LoRaWAN network. Using the XIAO ESP32S3 Dev Kit, you can set it up to connect to The Things Network or ChirpStack. This kit is great for anyone who wants to learn about LoRa technology and connect to a LoRa Network Server (LNS).
Available for pre-order now at just $9.90, the XIAO ESP32S3 Dev Kit is part of our effort to give the community more affordable and flexible products for building LoRa-based IoT projects. Stay tuned for more updates on our Meshtastic-powered and LoRaWAN products. If you have any ideas or suggestions about our Meshtastic-compatible products, feel free to share them in the comments!
MIT Media Lab researchers Cathy Mengying Fang, Patrick Chwalek, Quincy Kuang, and Pattie Maes have developed WatchThis, a groundbreaking wearable device that enables natural language interactions with real-world objects through simple pointing gestures. Cathy conceived the idea for WatchThis during a one-day hackathon in Shenzhen, organized as part of MIT Media Lab’s “Research at Scale” initiative. Organized by Cedric Honnet and hosted by Southern University of Science and Technology and Seeed Studio, the hackathon provided the perfect setting to prototype this innovative device using components from the Seeed Studio XIAO ESP32S3 suite. By integrating Vision-Language Models (VLMs) with a compact wrist-worn device, WatchThis allows users to ask questions about their surroundings in real-time, making contextual queries as intuitive as pointing and asking.
Credit: Cathy Fang
Hardware
The WatchThis project is built around components from the Seeed Studio XIAO ESP32S3 suite.
WatchThis is designed to seamlessly integrate natural, gesture-based interaction into daily life. The wearable device consists of a watch with a rotating, flip-up camera attached to the back of a display. When the user points at an object of interest, the camera captures the area, and the device processes contextual queries based on the user’s gesture.
The interaction begins when the user flips up the watch body to reveal the camera, which then captures the area the finger points at. The watch’s display shows a live feed from the camera, allowing precise aiming. When the user touches the screen, the device captures the image and pauses the camera feed. The captured RGB image is then compressed into JPG format and converted to base64, after which an API request is made to query the image.
The device uses these API calls to interact with OpenAI’s GPT-4o model, which accepts both text and image inputs. This allows the user to ask questions such as “What is this?” or “Translate this,” and receive immediate responses. The text response is displayed on the screen, overlaid on the captured image. After the response is shown for 3 seconds, the screen returns to streaming the camera feed, ready for the next command.
The software driving WatchThis is written in Arduino-compatible C++ and runs directly on the device. It is optimized for quick and efficient performance, with an end-to-end response time of around 3 seconds. Instead of relying on voice recognition or text-to-speech—which can be error-prone and resource-intensive—the system uses direct text input for queries. Users can further personalize their interactions by modifying the default query prompt through an accompanying WebApp served on the device, allowing tailored actions such as identifying objects, translating text, or requesting instructions.
Applications
Imagine strolling through a city and pointing at a building to learn its history, or identifying an exotic plant in a botanical garden with a mere gesture.
The device goes beyond simple identification, offering practical applications like real-time translation of, for example, menu items, which is a game-changer for travelers and language learners alike.
The research team has discussed even more exciting potential applications:
A “Remember this” function could serve as a visual reminder system, potentially aiding those who need to take medication regularly.
For urban explorers, a “How do I get there” feature could provide intuitive, spatially-aware navigation by allowing users to point at distant landmarks.
A “Zoom in on that” capability could offer a closer look at far-off objects without disrupting the user’s activities.
Perhaps most intriguingly, a “Turn that off” function could allow users to control smart home devices with a combination of voice commands and gestures, seamlessly integrating with IoT ecosystems.
While some of these features are still in conceptual stages, they paint a picture of a future where our interactions with the world around us are more intuitive, informative, and effortless than ever before.
Build Your Own WatchThis
Interested in building your own WatchThis wearable? Explore the open-source hardware and software components on GitHub and start creating today! Check out their paper below for full details.
Hey community, we’re curating a monthly newsletter centering around the beloved Seeed Studio XIAO. If you want to stay up-to-date with:
Cool Projects from the Community: inspiration and tutorials
Product Updates: firmware updates, new product spoilers
Wiki Updates: new wikis + wiki contributions
News: events, contests, and other community stuff
Explore the best smart audio devices and microphones designed for seamless integration with Home Assistant, and partial compatibility with Amazon Alexa and Google Assistant. Whether you’re building a voice-activated smart home or a DIY audio project, these products offer high-performance sound capture and voice control.
Why You Need a Smart Speaker for Home Assistant
In a smart home setup, audio devices such as microphones and speakers play a critical role in enhancing voice control and automation. These products integrate seamlessly with Home Assistant, while some may also be compatible with Amazon Alexa or Google Assistant through additional setup. Here, we’ll explore the top ReSpeaker and Grove products that enable robust voice interaction and sound management for DIY smart home systems.
The ReSpeaker 2-Mics Pi HAT is designed for Raspberry Pi projects and features two analog microphones with high-definition voice capture capabilities. It is equipped with Voice Activity Detection and Direction of Arrival algorithms, making it perfect for local voice recognition tasks with Home Assistant and other DIY voice interaction applications.
Key Features:
Dual analog microphones and WM8960 Audio Codec for high-quality sound
Voice Activity Detection, Direction of Arrival, and Keyword Spotting capabilities
RGB LED and programmable button for customized control
This high-performance speaker is ideal for audio output in various smart home projects. It integrates easily with devices like the ReSpeaker Lite to enhance audio quality for voice commands, notifications, or media playback in Home Assistant setups.
Key Features:
Compact, enclosed design for enhanced sound quality
Suitable for smart home audio systems, robot voice output, and DIY audio projects
The ReSpeaker USB Mic Array is perfect for far-field voice recognition in larger spaces. It includes four microphones and advanced speech algorithms such as Beamforming, Noise Suppression, and Acoustic Echo Cancellation, making it ideal for voice-controlled smart home environments.
An upgrade to the original ReSpeaker Mic Array, the ReSpeaker Mic Array v2.0 is equipped with XMOS’s XVF-3000 chip and supports far-field voice capture. This version improves voice recognition accuracy, making it a robust solution for adding a voice interface to your existing or future smart home products.
Key Features:
Far-field voice recognition with four microphones
USB Audio Class 1.0 compatibility
Includes advanced algorithms like Voice Activity Detection, Direction of Arrival, and Noise Suppression
Powered by the XMOS XU316 AI Sound chip, this development board excels in audio processing and speech recognition. With its dual microphone array and support for external power supplies, it is a perfect solution for DIY smart home audio systems or voice-controlled projects.
Key Features:
Dual microphone array for far-field voice capture
Onboard AI algorithms for noise suppression and echo cancellation
Supports I2S and USB connections
Compatible with XIAO ESP32S3, Adafruit QT Py, Raspberry Pi, and PC
The Grove – Speaker module is perfect for DIY projects, providing sound amplification and voice output. Its on-board potentiometer allows for loudness control, and it is compatible with Arduino platforms for building custom sound systems or music boxes.
Key Features:
Simple module for voice output and sound generation
Things to Consider
Platform Compatibility: Ensure the product supports the platform you’re using, whether it’s Home Assistant, Amazon Alexa, or Google Assistant. Most of these devices are compatible with ESPHome for Home Assistant integration.
Customizability: Products like the ReSpeaker Lite offer flexibility for developers, with open-source compatibility and support for multiple programming platforms.
Use Case Ideas for Home Assistant Speakers
Home Automation with Voice Control: Integrate the ReSpeaker Lite with XIAO ESP32S3 with Home Assistant to control smart devices like lights, thermostats, or security systems via voice commands.
Audio Alerts and Notifications: Use the Mono Enclosed Speaker or Grove – Speaker to generate audio alerts for events like door openings or motion detection in your smart home.
Far-Field Voice Recognition: With the ReSpeaker USB Mic Array, capture voice commands from across the room, ideal for large, open living spaces.
Conclusion
These ReSpeaker and Grove products offer robust solutions for building voice-activated smart homes or custom audio systems. While they seamlessly integrate with Home Assistant, some products may also work with Amazon Alexa or Google Assistant, though additional setup or configurations might be required. These devices provide the flexibility and performance needed to bring your DIY smart home projects to life.
Explore our range of Home Assistant-compatible speakers and take your home automation to the next level!
By leveraging LLMs, Watcher can capture the precise events you specify, which means going beyond simple target detection to fully analyzing behaviors and states. Think “dog + playing with the ball”, “0 people + in the room”, or “The person + in a DHL uniform has arrived”, and so on. Watcher truly understands the scene under its watch!
Constantly calling LLMs like ChatGPT is expensive. We therefore use an on-device AI + LLM architecture. The on-device AI model in Watcher detects the target, which is then analyzed by the LLM to generate precise, actionable insights.
Given the task “tell me if the dog is tearing up paper”, when the on-device model detects a dog, that key scene is sent to the LLM for deeper analysis: is it tearing up paper? The LLM is not triggered by people, cats, squirrels, or anything else. This drastically reduces LLM costs.
Responds by voice
After detecting the specified event, Watcher can interact with its targets through voice. For example, you could configure it like this:
Specified event: someone picks up the pink keyboard;
Two specified reactions:
1. a voice response saying: “Good choice! This is our latest version, inspired by the Pink Royal typewriter”;
2. a text message to the sales staff: “Someone is interested in the latest keyboard! Please head to the shelf as soon as possible”.
Placing Watcher in different scenarios enables versatile responses. When it sees a dog tearing up tissues, it can say “Stop, Cooper!”, or if it sees a person smoking where smoking isn’t allowed, it can say “No smoking!”
Want to upgrade your systems? All you need is a simple add-on.
Once the target event is detected, Watcher can trigger different actions, for example sending push messages through the SenseCraft app or blinking Watcher’s LED lights. Of course, we understand you may want to go beyond the built-in app and device features. No problem! There are endless possibilities to explore!
Add Watcher to Home Assistant.
Connected to IoT platforms such as Home Assistant, your Watcher acts as a behavior sensor, triggering custom actions in different contexts. For example, when it detects people, Watcher analyzes the situation and automatically triggers actions: lights on for reading, lights off for sleeping.
More than a vision sensor, Watcher can be extended with other sensors to enable multimodal sensing. On the back of Watcher, an I2C port is reserved to accommodate more than 100 Grove sensors. This flexibility lets Watcher integrate a wide range of data sources for thorough analysis and appropriate actions. You could build applications such as sensing the room temperature and adjusting the air conditioning accordingly: setting it to 22°C when you’re wearing a suit, or 26°C when you’re in shorts.
Add Watcher to your Node-RED flow
Just three nodes away, you can stream your Watcher’s detection results anywhere on the internet through Node-RED, as easy as a click!
Add Watcher to your Arduino | ESP32 and other hardware systems.
Want to add an AI agent to popular MCUs and SBCs such as Arduino/ESP32/Raspberry Pi? Simply connect them to Watcher via UART, HTTP, or USB, and you can explore the new frontier of intelligence.
Fun fact: here at Seeed, Watcher goes by the nickname “Nobody”, symbolizing a robot without a body. That carries a powerful message: give Watcher a body! Watcher can be a cool head for your robot!
For XIAO fans, we have a special bundle! Back SenseCAP Watcher and you can add a XIAO ESP32C6 for just $1.
XIAO ESP32C6 is a compact, Matter-native MCU for smart homes. Based on Espressif’s ESP32-C6, it supports a range of wireless connectivity: 2.4 GHz Wi-Fi 6, BLE 5.0, Zigbee, and Thread.
For example, you can build an emotion sensor by pairing Watcher with XIAO. When it detects your bad mood, it blossoms to cheer you up.
Give your Watcher unique emotions
SenseCAP Watcher adapts to your commands with unique “modes”. First it LISTENS to your instructions, then it switches to WATCH mode to guard the designated space, and so on. But the fun doesn’t end there! You can upload your own emotion designs to give your Watcher a unique personality. Prepare a PNG file and upload it to Watcher in seconds! It’s up to you!
Watcher’s default emojis are inspired by C-3PO, in homage to one of our greatest helpers.
You can easily upload your own in .PNG format.
You can even design a unique user interface (UI) for your SenseCAP Watcher. Our current interface is built with Squareline.
Open source and on-premise deployment.
As an AI device, respecting privacy is fundamental, so we adopted and implemented two key strategies in this ingenious device.
1. SenseCAP Watcher is open source, giving you full access to understand exactly how it works.
2. It supports on-premise deployment. The backbone of SenseCAP Watcher is the SenseCraft software suite, which is used to configure Watcher, interpret your commands, and access the LLMs. Everything is stored and runs privately, guaranteeing that your models and data are never sent to any public LLM or cloud. SenseCraft runs on Windows/macOS/Linux. Since SenseCraft contains LLMs, the host computer must meet certain performance criteria, as detailed below.
You can deploy SenseCraft on your spare computers. However, if you’re after applications that require commercial-grade reliability and low power consumption, the combination of SenseCAP Watcher + NVIDIA® Jetson AGX Orin is perfect for you.
Compared with consumer graphics cards (such as the RTX 4090) for AI tasks, the NVIDIA® Jetson AGX Orin stands out with key advantages:
1. Industrial-grade reliability: Designed for industrial and commercial applications, the Jetson AGX series has a longer MTBF (mean time between failures). Built to operate around the clock, it offers superior reliability for always-on applications compared with consumer graphics cards.
2. Compact and low-power: Designed with embedded and edge computing in mind, the Jetson AGX series comes in a smaller form factor and consumes more than 3x less power than consumer graphics cards. It fits tight spaces, generates less heat, and helps reduce operating costs, which is crucial for embedded systems.
Subscription? You decide!
SenseCraft AI Services are designed around your needs.
Start with Basic, completely FREE for low-frequency use: one image-analysis request every 15 minutes and 200 LLM chats per month.
Need more? Just $6.90 for the Pro version: pay only for what you use, with NO recurring-fee commitment! Plus, every NEW device includes a FREE $6.90 Pro package! Top up as needed.
Want to avoid fees entirely? Choose on-premise. Simply deploy and use at no extra cost.
Two models
Support for various mounting methods.
It can sit on a desktop or be mounted on a wall as a smart extension. It can even become the head of a robot.
How others have used it
Customization: the fastest path to building AI hardware.
The next device should be AI-powered. Hardware with built-in AI connected to LLMs is the trend!
With SenseCAP Watcher, we’ve developed a revolutionary framework that brings powerful LLMs to the edge of the physical world. This framework accelerates AI hardware prototyping. With our customization services, you can bring your AI hardware to life faster than ever!
It’s time to boost the intelligence of your space!
We have tested SenseCAP Watcher extensively, and we are fully committed to delivering it to our backers on time, with everything running smoothly. Nevertheless, as with any new product, there are potential risks associated with the launch.
However, as an established company with a proven track record of developing hundreds of products, we know how to overcome these challenges. Rest assured that, should production issues arise, we will be fully transparent with our backers and keep them informed throughout the process.
Use of AI
I plan to use AI-generated content in my project.
Which parts of your project will use AI-generated content? Please answer as precisely as possible.
We will use AI-generated content from MidJourney to create images that highlight our product’s features and show how it can be used in real-world scenarios. We will also use ChatGPT to copyedit our campaign.
Do you have the consent of the owners of the works that were (or will be) used to produce the AI-generated parts of your project? Please explain.
For the images we need, we will first search existing libraries on Google. If we find images that can be used directly, we will check their licenses to determine the appropriate next steps. If the images cannot be
Millimeter Wave (mmWave) radar technology, recognized for its high accuracy, privacy-centric capabilities, adaptability, and flexibility, is increasingly becoming a critical tool in privacy-oriented sensing applications. These applications range from presence and fall detection to sensitive monitoring of breathing and heartbeat.
To empower developers worldwide to adapt mmWave radar for more responsive automation, we are now excited to introduce two new mmWave Sensor Kits designed to meet diverse needs:
Priced at $24.90 each, both kits share the same hardware platform but come with specialized pre-set algorithms tailored for distinct detection tasks: fall detection and breathing & heartbeat monitoring, respectively.
Key Features of the New mmWave Sensor Kits
These kits leverage 60GHz mmWave technology to offer reliable detection capabilities:
Presence and Fall Detection: Detects subtle motions, accurately identifying human activities such as standing, walking, or falling.
Presence, Breathing and Heartbeat Monitoring: Captures minute displacements caused by heartbeat and chest movements, providing invaluable data for health monitoring.
Each kit features a mmWave Sensor Module with:
Light Level Sensing
Customizable WS2812B RGB LED
Support for Extended Grove Sensors/Actuators
Powered by the XIAO ESP32C6, these kits come with pre-flashed ESPHome firmware and support multiple wireless protocols, including Wi-Fi, Bluetooth Low Energy (BLE), Zigbee, and Thread. They are designed for easy no-code integration with Home Assistant via ESPHome, allowing users to customize detection zones and analytics. P.S. A 3D-printable enclosure file is available for free download as a reference design to fit your application.
Why Choose mmWave Sensors for Your Projects?
High Resolution and Accuracy: Essential for applications requiring precise detection and differentiation of human movements.
Non-contact and Non-invasive: Ideal for continuous monitoring without disturbance, crucial for healthcare and home environments.
Operational in Various Conditions: Functions effectively regardless of lighting conditions or physical barriers.
Privacy Preservation: Does not capture identifiable images or videos, ensuring privacy while monitoring.
Real-Time Processing: Supports immediate data processing for quick responses to detected emergencies.
Flexible Integration and Wide Coverage: Easily integrated into existing systems, enhancing capabilities without significant modifications.
Expanding Your Toolkit
If you’re a long-term supporter of Seeed, you’re likely familiar with our mmWave Sensor Series. We have prepared a comparison guide to help you select the best mmWave sensors for your projects.
Reserve your mmWave Sensor Kit today and enhance your automation systems, whether for smart homes, healthcare, safety monitoring, elderly care, or security.
Hey community, we’re curating a monthly newsletter centering around the beloved Seeed Studio XIAO. If you want to stay up-to-date with:
Cool Projects from the community, for inspiration and tutorials
Product Updates: firmware updates and new product spoilers
Wiki Updates: new wikis and wiki contributions
News: events, contests, and other community happenings
With the introduction of the Home Assistant hub, setting up smart home automation has become more accessible than ever. Selecting the right sensors is essential for collecting accurate data and automating various home systems.
This guide explores sensor types compatible with the Home Assistant hub, offering automation ideas and real-world use cases. We’ll also cover how to integrate these sensors with the Home Assistant hub, ensuring seamless operation whether through native support or additional setup with ESPHome.
1. Temperature and Humidity Sensors
Monitoring temperature and humidity helps you automate climate control systems, ensuring your home stays comfortable and energy-efficient. Here are some top picks that work well with Home Assistant hub:
This sensor provides accurate readings of temperature and humidity, useful for automating HVAC systems or dehumidifiers. While not natively supported, it can be integrated with Home Assistant using ESPHome via I2C.
A more budget-friendly sensor for temperature and humidity. Although basic, it serves well for simple climate monitoring in home spaces and can be integrated via ESPHome.
Perfect for harsher environments such as basements or garages. Integration is similar to the DHT20 and requires using ESPHome or MQTT for Home Assistant compatibility.
A more precise option for environments that require long-term stability. A similar ESPHome setup is required for Home Assistant integration.
This sensor not only measures temperature and humidity but also tracks air quality metrics, providing a holistic view of your environment. It integrates with Home Assistant using ESPHome.
Use case example: Set up an automation to turn on a dehumidifier when the humidity level rises above 65%, helping to prevent mold growth in damp areas like basements or bathrooms. Alternatively, create an automation that adjusts your HVAC system to maintain optimal indoor temperature throughout the day.
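As a sketch of the dehumidifier automation described above, the decision logic can use hysteresis so the device doesn't rapidly toggle on and off around the 65% threshold. The 65% on-threshold is the article's example; the 60% off-threshold is an assumed value for illustration:

```python
# Sketch of the dehumidifier automation with hysteresis: turn on above
# 65% humidity (the article's example), off below 60% (an assumed
# value), and hold the previous state in between to avoid rapid cycling.

def dehumidifier_state(humidity, currently_on, on_above=65.0, off_below=60.0):
    """Decide whether the dehumidifier should be running."""
    if humidity > on_above:
        return True
    if humidity < off_below:
        return False
    return currently_on  # inside the band: keep the previous state

print(dehumidifier_state(70.0, currently_on=False))  # too humid -> on
print(dehumidifier_state(62.0, currently_on=True))   # in band -> stays on
print(dehumidifier_state(55.0, currently_on=True))   # dry enough -> off
```

In Home Assistant itself you would express the same idea with two automations (or a threshold helper), but the hysteresis band is the key design choice either way.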
2. Air Quality Sensors
Good indoor air quality is essential, especially in homes where someone has allergies or asthma. These sensors help you automate air purifiers or ventilation systems based on real-time data:
A great solution for general air quality monitoring in home spaces. It tracks VOCs and dust particles, providing vital information for automating air purifiers.
This sensor measures CO2, temperature, and humidity, making it ideal for offices or classrooms. You can set it up with ESPHome for Home Assistant automation.
Use case example: Automate your air purifier to activate when air quality falls below a specific threshold, such as when VOC levels exceed safe limits or PM2.5 dust particles are detected. This can help maintain clean air inside your home, especially during high-pollution periods.
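The purifier trigger described above can be sketched as a simple OR over the two pollutant readings. The threshold values below are illustrative placeholders, not official air-quality limits or sensor specifications:

```python
# Sketch of the air-purifier automation: run the purifier when either
# PM2.5 or the VOC index exceeds a trigger level. Both limits are
# assumed example values, not regulatory or sensor-defined thresholds.

PM25_LIMIT = 35.0   # ug/m3, illustrative trigger level
VOC_LIMIT = 220.0   # sensor index units, illustrative trigger level

def purifier_on(pm25, voc_index):
    """Return True when either pollutant reading calls for filtering."""
    return pm25 > PM25_LIMIT or voc_index > VOC_LIMIT

print(purifier_on(pm25=50.0, voc_index=100.0))  # dusty air -> purifier on
print(purifier_on(pm25=10.0, voc_index=120.0))  # clean air -> purifier off
```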
3. Motion and Presence Sensors
Detecting movement can enhance your home security and automate lights or other devices based on activity:
A highly accurate sensor for presence detection. Works seamlessly with Home Assistant using ESPHome for automating lights or appliances based on human presence.
Fully Zigbee compatible, this sensor works natively with Home Assistant, perfect for automating lights or cameras upon motion detection.
Use case example: Automate hallway or room lights to turn on when the motion sensor detects movement and turn off after a set period of inactivity. This saves energy while adding convenience. You can also use the sensors to trigger security cameras when unexpected motion is detected.
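The "turn off after a set period of inactivity" behaviour above boils down to comparing the time since the last motion event against a timeout. A minimal sketch, using plain seconds to stay self-contained (the 120-second timeout is an assumed value):

```python
# Sketch of the motion-light automation: keep the light on while motion
# was detected within the last `timeout` seconds. Timestamps are plain
# second counts here; a real integration would use the sensor's
# last-triggered time from Home Assistant.

def light_should_be_on(last_motion_time, now, timeout=120):
    """True if motion was seen within the last `timeout` seconds."""
    return (now - last_motion_time) <= timeout

print(light_should_be_on(last_motion_time=1000, now=1060))  # recent motion
print(light_should_be_on(last_motion_time=1000, now=1200))  # idle too long
```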
4. Water and Soil Sensors
These sensors help with garden care or detecting potential water leaks:
A more advanced soil moisture sensor, also ESPHome-compatible, to optimize garden or greenhouse watering.
Use case example: Automate your irrigation system to activate when the soil moisture drops below a certain threshold, ensuring your garden or indoor plants receive water at the right time. You can also set up notifications to remind you when it’s time to water your plants manually.
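Capacitive soil-moisture readings tend to be noisy, so the irrigation trigger above benefits from averaging a few recent samples before comparing against the threshold. A sketch with assumed values (30% threshold, five-sample window):

```python
# Sketch of the irrigation automation: smooth the noisy soil-moisture
# signal with a short moving average so one low reading doesn't start
# the pump. The 30% threshold and 5-sample window are assumed values.

def needs_watering(readings, threshold=30.0, window=5):
    """Average the most recent readings and compare to the threshold."""
    recent = readings[-window:]
    return sum(recent) / len(recent) < threshold

samples = [34.0, 33.0, 29.0, 28.0, 27.0]
print(needs_watering(samples))  # average is 30.2 -> not yet
```

ESPHome can do this smoothing on-device too (it offers moving-average style sensor filters), which keeps the Home Assistant automation itself trivial.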
5. Door and Window Sensors
Enhance home security with automation that triggers alerts or actions when doors or windows open:
Another Zigbee option that integrates seamlessly with Home Assistant for reliable door and window status monitoring.
Use case example: Create an automation where your porch light turns on when the front door is opened at night, or trigger an alarm if a window is opened when you’re away from home. You can also receive a notification on your phone if any door or window is left open for too long.
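The "left open for too long" notification above is a dwell-time check: record when the contact sensor opened, then alert once a grace period has elapsed. A minimal sketch (the five-minute grace period is an assumed value):

```python
# Sketch of the "door left open too long" reminder: track when the
# contact sensor reported open and raise an alert after a grace period.
# `opened_at` is None while the door is closed.

def door_alert(opened_at, now, grace=300):
    """True once the door has been open longer than `grace` seconds."""
    return opened_at is not None and (now - opened_at) > grace

print(door_alert(opened_at=100, now=500))   # open for 400 s -> alert
print(door_alert(opened_at=None, now=500))  # door closed -> no alert
```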
6. Light Sensors
Control indoor lighting based on ambient light conditions:
Similar to the Light Sensor v1.2, but it offers more detailed ambient light data for use in advanced smart lighting automations via ESPHome.
Use case example: Automate your indoor lighting to adjust based on the amount of natural light entering the room. For example, you can dim your living room lights as the sun sets, creating a comfortable atmosphere while reducing energy consumption.
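The dim-as-the-sun-sets behaviour above is a mapping from the lux reading to a lamp brightness percentage. A sketch with an assumed 0–500 lux working range (not a sensor specification):

```python
# Sketch of daylight-responsive dimming: map ambient lux to a lamp
# brightness percentage, full brightness in darkness and 0% in strong
# daylight. The 0-500 lux range is an assumed indoor span.

def lamp_brightness(lux, dark_lux=0.0, bright_lux=500.0):
    """Return a 0-100 brightness that fades out as ambient light rises."""
    fraction = (lux - dark_lux) / (bright_lux - dark_lux)
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to [0, 1]
    return round(100 * (1.0 - fraction))

print(lamp_brightness(0))    # dark room -> 100
print(lamp_brightness(250))  # half-lit -> 50
print(lamp_brightness(800))  # plenty of daylight -> 0
```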
7. Displays
These displays allow for real-time visualizations of environmental data. You can link them to Home Assistant through ESPHome for a detailed display of home sensor information.
Conclusion
Integrating the right sensors with the Home Assistant hub enhances your home’s automation potential.
Sonoff Zigbee sensors are natively compatible, while Grove and mmWave sensors require additional setup through ESPHome.
Whether you’re looking to monitor air quality, automate lighting, or manage soil moisture, these sensors provide powerful solutions for elevating your smart home experience.
Authored by Mengdu and published on Hackster, for sharing purposes only.
AI gadgets Rabbit R1 and SenseCAP Watcher compared: design, UI, and user experience, with hardware and interaction highlights but no application details.
Story
The world of AI gadgets is rapidly evolving, with companies racing to deliver intelligent home companions. Two such devices, the Rabbit R1 and the SenseCAP Watcher, recently caught my attention through very different means: brilliant marketing drew me to purchase the former, while the latter was a review unit touted as a “Physical AI Agent” by Seeed Studio.
Intrigued by the potential convergence between these products, I embarked on an immersive user experience testing them side-by-side. This review offers a candid assessment of their design, user interfaces, and core interactions. However, I’ll steer clear of Rabbit’s app ecosystem and third-party software integration capabilities, as Watcher lacks such functionality by design.
My goal is to unravel the unique propositions each gadget brings to the AI gadgets market and uncover any surprising distinctions or similarities. Join me as I separate gimmick from innovation in this emerging product category.
Packaging
Rabbit really went all out with the packaging for the R1. As soon as I got the box, I could tell this wasn’t your average gadget. Instead of cheap plastic, the R1 comes cocooned in a crystal-clear acrylic case. It looks and feels incredibly premium.
It allows you to fully admire the R1’s design and interactive components like the scroll wheel and speakers before even taking it out. Little etched icons map out exactly what each part does.
The acrylic case doesn’t just protect – it also doubles as a display stand for the R1. There’s a molded pedestal that cradles the plastic body, letting you showcase the device like a museum piece.
By the time I finally got the R1 out of its clear jewel case, I was already grinning like a kid on Christmas day. The whole unboxing makes you feel like you’re uncovering a precious gadget treasure.
While the Watcher is priced at nearly half the Rabbit R1’s cost, its eco-friendly cardboard packaging is anything but cheap. Extracting the Watcher unit itself is a simple matter of gently lifting it from the integrated enclosure.
At first glance, like me, you may puzzle over the purpose of the various cutouts, folds, and perforations. But a quick peek at their wiki reveals this unassuming exterior actually transforms into a multi-functional stand!
Echoing the form of a desktop calendar, a central cutout cradles the Watcher body, allowing it to be displayed front-and-center on your desk like a compact objet d’art. A clever, well-considered bit of innovation; kudos to the design team!
Interaction Logic
Despite being equipped with speakers, a microphone, a camera, a scroll wheel, and a touchscreen display, the R1 restricts touch input. The touchscreen remains unresponsive for general commands and controls, only accepting input through an on-screen virtual keyboard in specific scenarios like entering a WiFi password or using the terminal interface.
The primary interaction method is strictly voice-driven, which feels counterintuitive given the prominent touchscreen hardware. It’s puzzling why Rabbit’s design team limited core touch functionality on the included touchscreen display.
The overall operation logic also takes some getting used to. Take the side button dubbed the “PTT” – its function varies situationally.
This unintuitive behavior tripped me up when configuring WiFi. After tapping “connect”, I instinctively tried hitting PTT again to go back, only to accidentally cancel the connection instead. It wasn’t until later that I realized using the scroll wheel to navigate to the very top option, then pressing PTT is the correct “back” gesture.
While not necessarily a flaw, this interaction model defies typical user expectations. Most would assume a core navigation function like “back” to be clearly visible and accessible without obscure gestures. Having to precisely scroll to the top option every single time just to return to the previous menu is quite cumbersome, especially for nested settings trees.
This jarring lack of consistency in the control scheme is truly baffling. The operation logic appears haphazardly scattered across different button combinations and gestures depending on the context. Mastering the R1’s controls feels like an exercise in memorizing arbitrary rules rather than intuitive design principles.
In contrast to the Rabbit R1, the Watcher device seems to have a much simpler and more consistent interaction model. This could be attributed to the fact that the Watcher’s operations are inherently not overly complex, and it relies on a companion smartphone app for assistance in many scenarios.
Like the R1, the Watcher is equipped with a scroll wheel, camera, touchscreen, microphone, and speakers. Additionally, it has various pin interfaces for connecting external sensors, which may appeal to developers looking to tinker.
Commendably, the current version of the Watcher maintains a high degree of unity in its operational logic. Pressing the scroll wheel confirms a selection, scrolling up or down moves the cursor accordingly, and a long press initiates voice interaction with the device. This level of consistency is praiseworthy.
Moreover, the touchscreen is fully functional, allowing for a seamless experience where users can choose to navigate via either the scroll wheel or touch input, maintaining interactivity consistency while providing independent input methods. This versatility is a welcome design choice.
However, one minor drawback is that the interactions lack the “stickiness” found in smartphone interfaces. Both the scroll wheel and touch inputs exhibit a degree of frame drops and latency, which may be a common limitation of microcontroller-based device interactions.
When I mentioned that “it relies on a companion smartphone app for assistance in many scenarios,” I was referring to the inability to perform tasks like entering long texts, such as WiFi passwords, directly on the Watcher’s small screen. This reliance is somewhat unfortunate.
However, given the Watcher’s intended positioning as a device meant to be installed in a fixed location, perhaps mounted on a wall, it is understandable that users may not always need to operate it directly. The design team likely factored in the convenience of using a smartphone app for certain operations, as you wouldn’t necessarily be handling the Watcher itself at all times.
What can they do?
At its core, the Rabbit R1 leverages cloud-based large language models and computer vision AI to provide natural language processing, speech recognition, image identification and generation, and more. It has an array of sensors including cameras, microphones and environmental detection to take in multimodal inputs.
One of the Rabbit R1’s marquee features is voice search and question answering. Simply press the push-to-talk button and ask it anything, like “What were last night’s NBA scores?” or “What’s the latest on the TikTok ban?”. The AI will quickly find and recite relevant, up-to-date information drawn from the internet.
The SenseCAP Watcher, while also employing voice interaction and large language models, takes a slightly different approach. By long-pressing the scroll wheel on the top right of the Watcher, you can ask it profound existential questions like “Can you tell me why I was born into this world? What is my value to the universe?” It will patiently provide some insightful, if ambiguous, answers.
However, the key difference lies in contextual awareness: unlike the Rabbit R1, the Watcher can’t incorporate your current time and location into its responses. So while both devices might ponder the meaning of life with you, only the Rabbit R1 could tell you where to find the nearest open café to continue your existential crisis over a cup of coffee.
While both devices offer voice interaction capabilities, their approaches to visual processing showcase even more distinct differences.
Vision mode allows the Rabbit R1’s built-in camera to identify objects you point it towards. I found it was generally accurate at recognizing things like office supplies, food, and electronics – though it did mistake my iPhone 16 Pro Max for older models a couple times. This feature essentially turns the Rabbit R1 into a pocket-sized seeing-eye dog, ready to describe the world around you at a moment’s notice.
Unlike the Rabbit R1’s general-purpose object recognition, the Watcher’s visual capabilities appear to be tailored for a specific task. It’s not designed to be your all-seeing companion, identifying everything from your morning bagel to your office stapler.
Things are starting to get interesting. Seeed Studio calls the SenseCAP Watcher a “Physical AI Agent” – a term that initially puzzled me.
The term “Physical” refers to its tangible presence in the real world, acting as a bridge between our physical environment and large language models.
As the parent of a mischievous toddler, I know my little one has a habit of running off naked while I’m tidying up the bathroom, often catching a chill as a result. So I set up a simple task for the Watcher: “Alert me if my child leaves the bathroom without clothes on.” Now the device uses its AI to recognize my child, determine whether they’re dressed, and notify me immediately if they attempt a nude escape.
Unlike traditional cameras or smart devices, the Watcher doesn’t merely capture images or respond to voice commands. Its sophisticated AI allows it to analyze and interpret its surroundings, understanding not just what objects are present, but also the context and activities taking place.
I’ve experienced its autonomous capabilities firsthand as a working parent with a hectic schedule. After a long day at the office and tending to my kids, I usually collapse on the couch late at night for some much-needed TV time. However, I often doze off, leaving the TV and lights on all night, much to my wife’s annoyance the next morning.
Enter the Watcher. I’ve set it up to monitor my situation during late-night TV watching. Using its advanced AI, the Watcher can detect when I’ve fallen asleep on the couch. Once it recognizes that I’m no longer awake, it springs into action. Through its integration with my Home Assistant system, the Watcher triggers a series of automated actions: the TV switches off, the living room lights dim and then turn off, and the air conditioning adjusts to a comfortable sleeping temperature.
The “Agent” aspect of the Watcher emphasizes its role as an autonomous assistant. Users can assign tasks to the device, which then operates independently to achieve those goals. This might involve interacting with other smart devices, making decisions based on observed conditions, or providing insights without constant human input. It offers a new level of environmental awareness and task execution, potentially changing how we interact with AI in our daily lives.
You might think that devices like the Rabbit R1 could perform similar tasks. However, you’ll quickly realize that the Watcher’s capabilities are the result of Seeed Studio’s dedicated efforts to optimize large language models specifically for this purpose.
When it comes to analyzing behavior, the Rabbit R1 often provides ambiguous answers. For instance, it might suggest that a person “could be smoking” or “might be sleeping.” This ambiguity directly limits its ability to take decisive action. It’s probably a common problem with all current AI devices: too much waffling and indecision. We sometimes find them cumbersome precisely because they can’t be as decisive as humans.
I can now understand why Seeed Studio calls it a Physical AI Agent, and I can imagine using it in many scenarios of my own. It could detect if your kid has an accident and wets the bed, then alert you. If it sees your pet causing mischief, it can recognize the bad behavior and give you a heads up.
If a package arrives at your door, the Watcher can identify that it’s a delivery and let you know, rather than just sitting there unknowingly. It’s an always-vigilant smart camera that processes what it sees, almost like another set of eyes monitoring your home or office.
As for their distinct focus areas, the ambition on the Rabbit R1 side is to completely replace traditional smartphones by doing everything via voice control. Their wildest dream is that even if you metaphorically chopped off both your hands, you could just tell the R1 “I want to order food delivery” and it would magically handle the entire process from ordering to payment to confirming arrival – all without you having to lift a finger.
Instead of overcomplicating it with technical jargon about sensors and AI models, the key is that the Watcher has enough awareness to comprehend events unfolding in the physical world around it and keep you informed, no fiddling required on your end.
Perhaps this duality of being an intelligent aide with a tangible physical embodiment is the core reason why Seeed Studio dubs the Watcher a “Physical AI Agent.” Unlike disembodied virtual assistants residing in the cloud, the Watcher has a real-world presence – acting as an ever-present bridge that allows advanced AI language models to directly interface with and augment our lived physical experiences. It’s an attentive, thoughtful companion truly grounded in our reality.
Concluding
The Rabbit R1 and SenseCAP Watcher both utilize large language models combined with image analysis, representing innovative ways to bring advanced AI into physical devices. However, their application goals differ significantly.
The Watcher, as a Physical AI Agent, focuses on specific scenarios within our living spaces. It continuously observes and interprets its environment, making decisions and taking actions to assist users in their daily lives. By integrating with smart home systems, it can perform tasks autonomously, effectively replacing repetitive human labor in defined contexts.
Rabbit R1, on the other hand, aims to revolutionize mobile computing. Its goal is to replace traditional smartphones by offering a voice-driven interface that can interact with various digital services and apps. It seeks to simplify and streamline how we engage with technology on the go.
Both devices represent early steps towards a future where AI is more deeply integrated into our daily lives. The Watcher showcases how AI can actively participate in our physical spaces, while the R1 demonstrates AI’s potential to transform our digital interactions. As pioneering products, they offer glimpses into different facets of our AI-enhanced future, inviting us to imagine a world where artificial intelligence seamlessly blends with both our physical and digital realities.
There is no clear “winner” here.
Regardless of how successful these first iterations prove to be, Rabbit and Seeed Studio have staked unique perspectives on unleashing productivity gains from large language AI. Their distinct offerings are pioneering explorations that will undoubtedly hold a place in the historical arc of ambient AI development.
If given the opportunity to experience them first-hand, I wholeheartedly recommend picking up both devices. While imperfect, they provide an enthralling glimpse into the future – where artificial intelligence transcends virtual assistants confined to the cloud, and starts manifesting true cognition of our physical spaces and daily lives through thoughtful hardware/software synergies.