
Upcoming Webinar – 8 Business Solutions Built with LoRaWAN and Low-Code IoT Platform

Hey community, we’re excited to share that we’re speaking at a joint webinar, “8 Business Solutions Built with LoRaWAN and Low-Code IoT Platform,” hosted by Blynk and The Things Industries. Join us on Thursday, November 21st, at 10:00 AM EST / 4:00 PM CET for this insightful webinar, where you’ll explore how combining LoRaWAN-enabled hardware with Blynk’s low-code IoT platform can help you quickly launch and scale IoT solutions that are practical, powerful, and easy to manage.

Blynk TTI Webinar poster

Why You Should Attend

Building an IoT solution doesn’t have to be complex. This webinar offers a step-by-step look at using off-the-shelf LoRaWAN hardware, including our SenseCAP LoRaWAN devices (read on to explore), paired with Blynk’s intuitive platform to create impactful IoT solutions without heavy coding or lengthy development time. Whether you’re an IoT beginner or looking to expand your current deployments, this session has everything you need to set your IoT solutions up for long-term success.

What You’ll Learn

In just one hour, you’ll gain insights from industry experts on:

    • How to quickly deploy and manage IoT solutions using best-in-class LoRaWAN hardware.
    • How to seamlessly provision and manage devices with Blynk’s low-code platform for faster setup and efficient data handling.
    • How to visualize data through no-code web and mobile dashboards to easily monitor and control your devices.
    • 8 game-changing IoT solutions designed to solve specific business challenges, offering ready-to-deploy options that can scale as your business grows.

This session is designed to empower you to take your IoT deployments from prototyping to enterprise scale effortlessly.

Discover 8 Game-Changing IoT Solutions for Business

The hosts, Pavlo and Felix, together with 8 industry leaders from Thermokon Sensortechnik GmbH, Pepperl+Fuchs Group, Miromico, Seeed Studio, MOKO SMART, Milesight IoT, ATIM Radiocommunication, and Blues, will introduce eight proven IoT applications across various industries. Each solution is designed to solve a specific business challenge, offering ready-to-deploy options that can scale as your business grows. Here’s a sneak peek at what you’ll explore:

    • Smart heating for hotels
    • Warehouse & production site solutions
    • Building management
    • People counting
    • Personnel safety on construction sites
    • Water leak detection
    • Smart metering
    • Refrigerator fleet monitoring

During the webinar, our BD manager Violet Su will present our latest LoRaWAN solution with built-in Vision AI capabilities for efficient meter reading, person detection, people counting, object detection, and more. 

The solution is powered by the SenseCAP A1102 Vision AI Sensor and the SenseCAP S21000 LoRaWAN DTU:

    • Built-in AI Camera: Local AI models for high-accuracy detection
    • Battery-Saving and Long Range LoRaWAN Connectivity: Perfect for various settings with range up to 10km
    • Industrial RS485 Support: Connect to your legacy LoRaWAN DTUs via RS485
    • Actionable Data: Integrated with Blynk’s low-code platform, making data easy to analyze and act on

Register Now

Don’t miss this opportunity to learn from the experts and bring your IoT ideas to life with LoRaWAN and low-code technology!

Mark your calendar for November 21st and take the next step in your IoT journey. We look forward to seeing you there!


Seeed Studio and Blynk Expand Partnership to Enhance IoT Integration with LoRaWAN and MQTT

Seamless IoT with LoRaWAN and MQTT Support

With industries increasingly adopting LoRaWAN and MQTT protocols, Seeed Studio’s devices—like the SenseCAP LoRaWAN sensors and gateways—are now fully supported on Blynk’s platform. This expanded functionality simplifies IoT deployments by allowing users to monitor and control their devices through Blynk’s intuitive dashboard, without needing extensive coding or infrastructure setup.

“The expanded functionality available with Blynk’s platform is a game-changer for businesses using Seeed Studio SenseCAP LoRaWAN devices,” said Joey Jiang, VP of Industrial Application Group at Seeed Studio. “Our customers can now take advantage of Blynk’s advanced device management features, such as multi-level user management and granular access controls, to manage fleets of devices at scale with ease. This integration allows businesses to focus on their operations while Blynk and Seeed handle the heavy lifting on the backend, from device provisioning to user control.”

Seeed x Blynk

Empowering Developers and Businesses with No-Code IoT Solutions

Through Blynk’s intuitive no-code platform, users gain access to advanced device management tools such as data visualization, automation workflows, and real-time alerts. With native MQTT support and a recent integration with The Things Stack for LoRaWAN, Seeed Studio hardware can now unlock unprecedented possibilities for IoT deployments.
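To make the MQTT side concrete, here is a minimal sketch of pushing a sensor reading into a Blynk datastream over MQTT from Python. Treat it as an illustration rather than official integration code: the broker host and port, the “device” username convention, the auth-token password, and the ds/<datastream> topic layout are assumptions to verify against Blynk’s MQTT documentation.

```python
# Minimal sketch: publish one sensor reading to a Blynk datastream over MQTT.
# Assumptions to verify in Blynk's MQTT docs: broker "blynk.cloud" on port 1883,
# username "device", password = the device auth token, topic "ds/<datastream>".
# Requires paho-mqtt >= 2.0 (pip install paho-mqtt).
import paho.mqtt.client as mqtt

BLYNK_AUTH_TOKEN = "YOUR_DEVICE_AUTH_TOKEN"  # placeholder

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.username_pw_set("device", BLYNK_AUTH_TOKEN)
client.connect("blynk.cloud", 1883, keepalive=45)
client.loop_start()  # run the network loop in a background thread

# Publish a temperature reading to a datastream named "Temperature".
info = client.publish("ds/Temperature", payload="23.4", qos=1)
info.wait_for_publish(timeout=5)

client.loop_stop()
client.disconnect()
```

In a SenseCAP LoRaWAN deployment, the equivalent publish would typically happen after The Things Stack has decoded the uplink, rather than on the sensor itself.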

“Our mission has always been to simplify IoT for businesses of all sizes,” said Pavel Bayborodin, CEO of Blynk. “By expanding our platform to support industry-standard protocols like LoRaWAN and MQTT, and partnering with Seeed Studio’s robust hardware lineup, we are enabling companies to launch powerful IoT systems faster than ever.”

Unlocking Opportunities Across Smart Industries

The extended partnership offers tremendous opportunities across key industries like smart agriculture, industrial automation, and smart cities. With Seeed’s SenseCAP LoRaWAN devices integrated into Blynk, businesses can achieve remote monitoring, real-time data transfer, and centralized device management with minimal effort. Whether deploying sensors in the field or managing urban infrastructure, users benefit from both Blynk’s software capabilities and Seeed’s reliable hardware for cost-effective solutions.

Discover Seeed Studio Products with LoRaWAN and MQTT Support

Explore Seeed Studio’s wide range of IoT solutions designed to seamlessly integrate with Blynk’s platform. Whether you need SenseCAP LoRaWAN sensors for environmental monitoring, LoRaWAN gateways for reliable connectivity, or development kits for rapid prototyping, Seeed Studio has the tools to bring your IoT vision to life.

About Seeed Studio

Seeed Studio, founded in 2008, is a pioneer in Open Hardware and IoT innovation, offering a broad portfolio of industrial IoT solutions, including sensor modules, edge devices, and platforms like SenseCAP. By promoting collaborative development and open-source principles, Seeed Studio empowers developers and enterprises worldwide to design solutions that address local challenges and advance emerging technologies such as AI and IoT.

About Blynk

Blynk is a leading low-code IoT platform that enables businesses to build custom mobile IoT applications and manage millions of connected devices globally. Used by over 1 million developers and 5,000 businesses, Blynk simplifies IoT management through easy-to-use dashboards and advanced user management features, accelerating time to market and helping companies scale their connected products.


Akash Muthukumar: A 9th Grader Leading the Future of TinyML with XIAO ESP32S3

By: Akash

The world of technology is ever-changing, and young minds prove time and again that age really is just a number when it comes to mastery and innovation. One of the most interesting stories of youth combined with innovation belongs to a 9th-grade student named Akash Muthukumar, whose workshop on deploying TinyML using the XIAO ESP32S3 Sense made waves in both the machine learning and embedded systems worlds.

A Passion for Technology

Akash’s journey into the world of technology began early in his childhood. As a child, he was fascinated by gadgets and how they work. Middle school saw Akash deep-diving into programming, robotics, and machine learning while playing with platforms like Arduino and TensorFlow Lite. From there came his curiosity and drive to learn about TinyML, a nascent field focused on deploying machine learning models on microcontrollers and embedded systems.

Why TinyML?

TinyML, short for Tiny Machine Learning, is a revolution within the field of artificial intelligence: it extends the power of machine learning to the smallest and most power-efficient class of devices, microcontrollers. These devices are now used everywhere, especially in IoT, where there is a real need to perform intelligent tasks locally, such as speech recognition, anomaly detection, and gesture recognition, without depending on the cloud for end-to-end processing.

For Akash, the coolest thing about TinyML was its ability to take already ‘smart’ devices to the next level. Deploying machine learning models on tiny devices opened up a world of possibilities, enabling innovative projects such as real-time object detection and predictive maintenance systems.

Workshop on TinyML and XIAO ESP32S3 Sense

Akash’s workshop focused on TinyML deployment on the XIAO ESP32S3, a Seeed Studio microcontroller built for edge AI applications. The XIAO ESP32S3 is compact, powerful, and affordable, making it ideal for students, hobbyists, and developers interested in exploring TinyML.

Akash took participants through the whole process, from training a model to deploying it on the microcontroller. Here is a breakdown of what Akash covered:

1. Intro to TinyML
Akash introduced the concepts of TinyML – what it is, why it is needed, how it works, and how it differs from conventional machine learning. He noted that edge AI gets more relevant every day, and that TinyML fares especially well in resource-constrained applications.

2. Introduction to XIAO ESP32S3
Akash then presented the XIAO ESP32S3 board: its features, specifications, and why it is a great platform for TinyML. He also highlighted its onboard Wi-Fi and Bluetooth capabilities, low power consumption, and compatibility with various sensors.

3. Building a Machine Learning Model
Akash then walked participants through building a machine-learning model on Edge Impulse, one of the most popular platforms for TinyML model development, and training it on a simple dataset such as gesture or keyword recognition.

4. Deployment of Model on XIAO ESP32S3
The heart of the workshop was the deployment process. First, Akash showed how to convert a trained model into a format deployable on the XIAO ESP32S3 using TensorFlow Lite for Microcontrollers (see the sketch after this breakdown); then he uploaded the model onto the board and ran inferences directly on the device.

5. Real-Time Demonstration
The workshop concluded with a very exciting live demo: Akash showed how, in real time, the XIAO ESP32S3 was able to recognize hand gestures and detect certain sounds using the deployed TinyML model. This left the participants amazed and proved that even the smallest devices can perform complex tasks using TinyML.
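For readers who want to picture the conversion in step 4 as code, here is a minimal sketch (my illustration, not Akash’s workshop material) of the usual path: a trained TensorFlow model is quantized and exported as a .tflite file, which is then embedded into firmware built on TensorFlow Lite for Microcontrollers. The model path and the (1, 64) input shape are hypothetical placeholders.

```python
# Minimal sketch: convert a trained TensorFlow model into a fully quantized
# .tflite file for TensorFlow Lite for Microcontrollers (e.g. on XIAO ESP32S3).
# "gesture_model/" and the (1, 64) input shape are hypothetical placeholders.
import numpy as np
import tensorflow as tf

def representative_data():
    # Calibration samples drive full-integer quantization.
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("gesture_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("gesture_model.tflite", "wb") as f:
    f.write(converter.convert())

# Embed the model as a C array for the firmware, e.g.:
#   xxd -i gesture_model.tflite > model_data.h
```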

Empowering Next-Generation Innovators

Akash’s workshop was not just about teaching a particular technology; it was about inspiring others. As a 9th grader, he proved that age should keep no one from working in advanced fields like TinyML, and that anyone can contribute something meaningful. He made the workshop highly interactive, explaining each complex topic in an easy manner so that all participants, regardless of age or skill level, enjoyed it.

Akash has become a young leader in the tech community through his passion for teaching and deep knowledge of TinyML and embedded systems. This workshop reminded us that the future of technology rests in the hands of such young innovators who push beyond the edge of what’s possible.

Looking Ahead

And this is just where Akash Muthukumar gets started. With interests spanning TinyML, embedded systems, and machine learning, he is sure to keep making his presence known in the tech world. And as he goes deeper into it all, it’s clear that Akash is not only learning from the world but teaching it too.

Akash’s workshop on deploying TinyML using the XIAO ESP32S3 is another good example of how young minds have embraced technology and are showing the way. The world of TinyML is big, and with innovators like Akash at the helm, the future looks bright.

It is a story that inspires novice and experienced developers alike: with curiosity and passion in their hearts, committed people can achieve great things even at a tender age!

Akash Teaching The Workshop


Next-Gen AI Gadgets: Rabbit R1 vs SenseCAP Watcher

Authored by Mengdu and published on Hackster, for sharing purposes only.

AI gadgets Rabbit R1 & SenseCAP Watcher design, UI, user experience compared – hardware/interaction highlights, no application details.


Story

The world of AI gadgets is rapidly evolving, with companies racing to deliver intelligent home companions. Two such devices, the Rabbit R1 and the SenseCAP Watcher, recently caught my attention through very different means: brilliant marketing drew me to purchase the former, while the latter was a review unit touted as a “Physical AI Agent” by Seeed Studio.

Intrigued by the potential convergence between these products, I embarked on an immersive user experience, testing them side-by-side. This review offers a candid assessment of their design, user interfaces, and core interactions. However, I’ll steer clear of Rabbit’s app ecosystem and third-party software integration capabilities, as the Watcher lacks such functionality by design.

My goal is to unravel the unique propositions each gadget brings to the AI gadgets market and uncover any surprising distinctions or similarities. Join me as I separate gimmick from innovation in this emerging product category.

Packaging

Rabbit really went all out with the packaging for the R1. As soon as I got the box, I could tell this wasn’t your average gadget. Instead of cheap plastic, the R1 comes cocooned in a crystal-clear acrylic case. It looks and feels incredibly premium.

It allows you to fully admire the R1’s design and interactive components like the scroll wheel and speakers before even taking it out. Little etched icons map out exactly what each part does.

The acrylic case doesn’t just protect – it also doubles as a display stand for the R1. There’s a molded pedestal that cradles the plastic body, letting you showcase the device like a museum piece.

By the time I finally got the R1 out of its clear jewel case, I was already grinning like a kid on Christmas day. The whole unboxing makes you feel like you’re uncovering a precious gadget treasure.

While the Watcher is priced at nearly half that of the Rabbit R1, its eco-friendly cardboard packaging is anything but cheap. Extracting the Watcher unit itself is a simple matter of gently lifting it from the integrated enclosure.

At first glance, like me, you may puzzle over the purpose of the various cutouts, folds, and perforations. But a quick peek at their wiki reveals this unassuming exterior actually transforms into a multi-functional stand!

Echoing the form of a desktop calendar, a central cutout cradles the Watcher body, allowing it to be displayed front-and-center on your desk like a compact objet d’art. A clever and well-considered bit of innovation that deserves kudos for the design team!

Interaction Logic

Despite being equipped with speakers, microphone, camera, scroll wheel, and a touchscreen display – the R1 restricts touch input functionality. The touchscreen remains unresponsive to touch for general commands and controls, only allowing input through an on-screen virtual keyboard in specific scenarios like entering a WiFi password or using the terminal interface.

The primary interaction method is strictly voice-driven, which feels counterintuitive given the prominent touchscreen hardware. It’s puzzling why Rabbit’s design team limited core touch functionality on the included touchscreen display.

The overall operation logic also takes some getting used to. Take the side button dubbed the “PTT” – its function varies situationally.

This unintuitive behavior tripped me up when configuring WiFi. After tapping “connect”, I instinctively tried hitting PTT again to go back, only to accidentally cancel the connection instead. It wasn’t until later that I realized using the scroll wheel to navigate to the very top option, then pressing PTT is the correct “back” gesture.

While not necessarily a flaw, this interaction model defies typical user expectations. Most would assume a core navigation function like “back” to be clearly visible and accessible without obscure gestures. Having to precisely scroll to the top option every single time just to return to the previous menu is quite cumbersome, especially for nested settings trees.

This jarring lack of consistency in the control scheme is truly baffling. The operation logic appears haphazardly scattered across different button combinations and gestures depending on the context. Mastering the R1’s controls feels like an exercise in memorizing arbitrary rules rather than intuitive design principles.

In contrast to the Rabbit R1, the Watcher device seems to have a much simpler and more consistent interaction model. This could be attributed to the fact that the Watcher’s operations are inherently not overly complex, and it relies on a companion smartphone app for assistance in many scenarios.

Like the R1, the Watcher is equipped with a scroll wheel, camera, touchscreen, microphone, and speakers. Additionally, it has various pin interfaces for connecting external sensors, which may appeal to developers looking to tinker.

Commendably, the current version of the Watcher maintains a high degree of unity in its operational logic. Pressing the scroll wheel confirms a selection, scrolling up or down moves the cursor accordingly, and a long press initiates voice interaction with the device. This level of consistency is praiseworthy.

Moreover, the touchscreen is fully functional, allowing for a seamless experience where users can choose to navigate via either the scroll wheel or touch input, maintaining interactivity consistency while providing independent input methods. This versatility is a welcome design choice.

However, one minor drawback is that the interactions lack the “stickiness” found in smartphone interfaces. Both the scroll wheel and touch inputs exhibit a degree of frame drops and latency, which may be a common limitation of microcontroller-based device interactions.

When I mentioned that “it relies on a companion smartphone app for assistance in many scenarios,” I was referring to the inability to perform tasks like entering long texts, such as WiFi passwords, directly on the Watcher’s small screen. This reliance is somewhat unfortunate.

However, given the Watcher’s intended positioning as a device meant to be installed in a fixed location, perhaps mounted on a wall, it is understandable that users may not always need to operate it directly. The design team likely factored in the convenience of using a smartphone app for certain operations, as you wouldn’t necessarily be handling the Watcher itself at all times.

What can they do?

At its core, the Rabbit R1 leverages cloud-based large language models and computer vision AI to provide natural language processing, speech recognition, image identification and generation, and more. It has an array of sensors including cameras, microphones and environmental detection to take in multimodal inputs.

One of the Rabbit R1’s marquee features is voice search and question answering. Simply press the push-to-talk button and ask it anything, like “What were last night’s NBA scores?” or “What’s the latest on the TikTok ban?”. The AI will quickly find and recite relevant, up-to-date information drawn from the internet.

The SenseCAP Watcher, while also employing voice interaction and large language models, takes a slightly different approach. By long-pressing the scroll wheel on the top right of the Watcher, you can ask it profound existential questions like “Can you tell me why I was born into this world? What is my value to the universe?” It will patiently provide some insightful, if ambiguous, answers.

However, the key difference lies in contextual awareness: unlike the Rabbit R1, the Watcher can’t incorporate your current time and location into its responses. So while both devices might ponder the meaning of life with you, only the Rabbit R1 could tell you where to find the nearest open café to continue your existential crisis over a cup of coffee.

While both devices offer voice interaction capabilities, their approaches to visual processing showcase even more distinct differences.

Vision mode allows the Rabbit R1’s built-in camera to identify objects you point it towards. I found it was generally accurate at recognizing things like office supplies, food, and electronics – though it did mistake my iPhone 16 Pro Max for older models a couple times. This feature essentially turns the Rabbit R1 into a pocket-sized seeing-eye dog, ready to describe the world around you at a moment’s notice.

Unlike the Rabbit R1’s general-purpose object recognition, the Watcher’s visual capabilities appear to be tailored for a specific task. It’s not designed to be your all-seeing companion, identifying everything from your morning bagel to your office stapler.

Things are starting to get interesting. Seeed Studio calls the SenseCAP Watcher a “Physical AI Agent” – a term that initially puzzled me.

The term “Physical” refers to its tangible presence in the real world, acting as a bridge between our physical environment and large language models.

As a parent of a mischievous toddler, my little one has a habit of running off naked while I’m tidying up the bathroom, often resulting in them catching a chill. I set up a simple task for the Watcher: “Alert me if my child leaves the bathroom without clothes on.” Now, the device uses its AI to recognize my child, determine if they’re dressed, and notify me immediately if they attempt to make a nude escape.

Unlike traditional cameras or smart devices, the Watcher doesn’t merely capture images or respond to voice commands. Its sophisticated AI allows it to analyze and interpret its surroundings, understanding not just what objects are present, but also the context and activities taking place.

I’ve experienced its autonomous capabilities firsthand as a working parent with a hectic schedule. After a long day at the office and tending to my kids, I usually collapse on the couch late at night for some much-needed TV time. However, I often doze off, leaving the TV and lights on all night, much to my wife’s annoyance the next morning.

Enter the Watcher. I’ve set it up to monitor my situation during late-night TV watching. Using its advanced AI, the Watcher can detect when I’ve fallen asleep on the couch. Once it recognizes that I’m no longer awake, it springs into action. Through its integration with my Home Assistant system, the Watcher triggers a series of automated actions: the TV switches off, the living room lights dim and then turn off, and the air conditioning adjusts to a comfortable sleeping temperature.
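As an illustration of what such a hook-up can look like on the Home Assistant side, here is a minimal sketch of calling Home Assistant’s REST service API to switch devices off, the kind of action a Watcher alarm can trigger through an automation. This is my own sketch, not Seeed’s integration code, and the host, token, and entity IDs are hypothetical placeholders.

```python
# Minimal sketch: turn devices off through Home Assistant's REST API,
# the kind of action a Watcher "person asleep" alarm can trigger.
# HA_URL, TOKEN, and the entity IDs below are hypothetical placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def turn_off(entity_id: str) -> None:
    # Call the <domain>.turn_off service for the given entity (light, switch, ...).
    domain = entity_id.split(".")[0]
    resp = requests.post(
        f"{HA_URL}/api/services/{domain}/turn_off",
        headers=HEADERS,
        json={"entity_id": entity_id},
        timeout=10,
    )
    resp.raise_for_status()

# Watcher says I've dozed off -> wind the living room down.
for entity in ("media_player.living_room_tv", "light.living_room"):
    turn_off(entity)
```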

The “Agent” aspect of the Watcher emphasizes its role as an autonomous assistant. Users can assign tasks to the device, which then operates independently to achieve those goals. This might involve interacting with other smart devices, making decisions based on observed conditions, or providing insights without constant human input. It offers a new level of environmental awareness and task execution, potentially changing how we interact with AI in our daily lives.

You might think that devices like the Rabbit R1 could perform similar tasks. However, you’ll quickly realize that the Watcher’s capabilities are the result of Seeed Studio’s dedicated efforts to optimize large language models specifically for this purpose.

When it comes to analyzing object behaviors, the Rabbit R1 often provides ambiguous answers. For instance, it might suggest that a person “could be smoking” or “might be sleeping.” This ambiguity directly affects its ability to take decisive action. This is probably a common problem with all AI devices at the moment: too much waffle and indecision. We sometimes find them cumbersome precisely because they can’t be as decisive as humans.

I think I can now understand all the reasons why Seeed Studio calls it a Physical AI Agent. I can use it in many of my scenarios. It could detect if your kid has an accident and wets the bed, then alert you. If it sees your pet causing mischief, it can recognize the bad behavior and give you a heads-up.

If a package arrives at your door, the Watcher can identify it’s a delivery and let you know, rather than just sitting there unknowingly. It’s an always-vigilant smart camera that processes what it sees almost like having another set of eyes monitoring your home or office.

As for their distinct focus areas, the ambition on the Rabbit R1 side is to completely replace traditional smartphones by doing everything via voice control. Their wildest dream is that even if you metaphorically chopped off both your hands, you could just tell the R1 “I want to order food delivery” and it would magically handle the entire process from ordering to payment to confirming arrival – all without you having to lift a finger.

Instead of overcomplicating it with technical jargon about sensors and AI models, the key is that the Watcher has enough awareness to comprehend events unfolding in the physical world around it and keep you informed, no fiddling required on your end.

Perhaps this duality of being an intelligent aide with a tangible physical embodiment is the core reason why Seeed Studio dubs the Watcher a “Physical AI Agent.” Unlike disembodied virtual assistants residing in the cloud, the Watcher has a real-world presence – acting as an ever-present bridge that allows advanced AI language models to directly interface with and augment our lived physical experiences. It’s an attentive, thoughtful companion truly grounded in our reality.

Concluding

The Rabbit R1 and SenseCAP Watcher both utilize large language models combined with image analysis, representing innovative ways to bring advanced AI into physical devices. However, their application goals differ significantly.

The Watcher, as a Physical AI Agent, focuses on specific scenarios within our living spaces. It continuously observes and interprets its environment, making decisions and taking actions to assist users in their daily lives. By integrating with smart home systems, it can perform tasks autonomously, effectively replacing repetitive human labor in defined contexts.

Rabbit R1, on the other hand, aims to revolutionize mobile computing. Its goal is to replace traditional smartphones by offering a voice-driven interface that can interact with various digital services and apps. It seeks to simplify and streamline how we engage with technology on the go.

Both devices represent early steps towards a future where AI is more deeply integrated into our daily lives. The Watcher showcases how AI can actively participate in our physical spaces, while the R1 demonstrates AI’s potential to transform our digital interactions. As pioneering products, they offer glimpses into different facets of our AI-enhanced future, inviting us to imagine a world where artificial intelligence seamlessly blends with both our physical and digital realities.

There is no clear “winner” here.

Regardless of how successful these first iterations prove to be, Rabbit and Seeed Studio have staked out unique perspectives on unleashing productivity gains from large language AI. Their distinct offerings are pioneering explorations that will undoubtedly hold a place in the historical arc of ambient AI development.

If given the opportunity to experience them first-hand, I wholeheartedly recommend picking up both devices. While imperfect, they provide an enthralling glimpse into the future – where artificial intelligence transcends virtual assistants confined to the cloud, and starts manifesting true cognition of our physical spaces and daily lives through thoughtful hardware/software synergies.


AIoT Xperience Day in Düsseldorf: Unlock the Power of AI and IoT

Hey community! We’re excited to invite you to join us at the AIoT Xperience Day in Düsseldorf, an event co-organized by Techbros, Uplink, Heliotics, Infinetics, and Seeed Studio.

This is an event you should not miss. Whether you join the hands-on workshops, play with live demos, show off your projects, or talk with like-minded enthusiasts, innovators, and professionals in the fields of AI and IoT, it is sure to be a half-day full of excitement!

Hands-on Workshops

During the event, you’ll get the opportunity to explore two exciting workshops:

1. Meshtastic Workshop

Learn how to set up and utilize Meshtastic, a decentralized LoRa mesh network, with the SenseCAP T1000-E Tracker. Whether you’re a beginner or an experienced developer, this session will demonstrate how you can build a free, off-grid communication network to transmit messages without relying on traditional infrastructure.

2. No-Code AI with XIAO ESP32S3 Sense

Can AI be added to almost anything? Yes, and without any coding! In this hands-on workshop, you’ll discover how to easily build your own AI vision sensor using the XIAO ESP32S3 Sense and SenseCraft AI, our no-code platform for training and deploying AI models. With just a few steps, you’ll have your own custom AI vision sensor up and running. For this workshop, participants need to bring their own laptops.

Why Attend?

This event is more than just workshops. You’ll also get to play with live demos of cutting-edge AIoT technology and connect with others who are passionate about the future of AI and IoT.  Don’t miss the “show and tell” session, where participants will have the chance to share their own innovative projects. And you are invited to bring your projects to the spotlight as well!

Whether you’re interested in creating decentralized communication networks, or you want to see how AI can be seamlessly integrated into your projects without a single line of code, this is the event for you.

Event Details

Time: September 27, 2024, 2:00 PM – 6:00 PM

Venue: Techbros Office, Heerdter Lohweg 212, 40549 Düsseldorf, Germany

Food and drinks will be prepared according to sign-ups, and the spots are limited. Do sign up now to secure your spot! See you in Düsseldorf!


Seeed Studio at The Things Conference 2024: Join Us for the Future of IoT

We are thrilled to announce that Seeed Studio will be a Silver Sponsor at The Things Conference 2024 in Amsterdam! As a proud participant in this flagship event for LoRaWAN® and Low Power IoT, we invite you to visit us at Booth B03 and explore our cutting-edge technologies that are revolutionizing the IoT space. From September 25-26, join us and over 1,500 industry professionals for a deep dive into the latest innovations in LoRaWAN®.

At our booth, we will be showcasing several new and existing products, providing hands-on demos, and discussing how these solutions are making IoT applications smarter, scalable, and more efficient by unleashing the power of AI. You’ll get to experience our latest innovations, including:

New Products

    • SenseCAP A1102 LoRaWAN Vision AI Sensor: A state-of-the-art AI-powered sensor.
    • Wio-SX1262: The world’s smallest LoRa module.
    • SenseCAP Watcher: The ultimate solution for intelligent monitoring.

Existing Products

A Joint Live Demo with Blynk

Additionally, we are collaborating with Blynk to deliver a Joint Live Demo showcasing the seamless onboarding of the SenseCAP S2120 8-in-1 LoRaWAN Weather Station. Experience real-time climate data visualization on the Blynk platform, offering end-to-end IoT solutions for your projects.

Explore Innovation with Seeed’s Keynotes and Hands-on Workshops

In addition to our exhibits, we’re excited to participate in key discussions and workshops that highlight the synergy between LoRaWAN® and AI. Here are some key sessions you won’t want to miss:

Keynote Speech: Revolutionizing IoT: The Synergy of LoRaWAN and Artificial Intelligence

Speaker: Joey Jiang, VP of Seeed Studio
Time: 4:40 PM, September 25 (Wednesday)
Venue: The Things Theater

In today’s rapidly evolving digital landscape, the need for efficient data processing from the physical world is growing exponentially. Joey Jiang, VP of Seeed Studio, will present a thought-provoking keynote on how the synergy between LoRaWAN and Artificial Intelligence (AI) is revolutionizing IoT solutions.

LoRaWAN’s long-range, low-power capabilities make it ideal for IoT, but handling vast data streams can be challenging. AI enhances LoRaWAN by optimizing data at the sensor level, compressing large datasets into actionable insights for transmission, reducing latency, bandwidth usage, and costs. In this session, Joey Jiang will showcase Seeed Studio’s AI-enabled LoRaWAN sensor nodes, built for scalability and adaptability across industries. He will explore how this technology drives efficiency and digital transformation in IoT ecosystems. Expect to learn how AI improves data processing, real-world use cases, and innovations that boost cost-effectiveness.

Workshop: No-Code to Build Your First AI-Enabled LoRaWAN Vision Sensor

Host: Joey Jiang, VP of Seeed Studio
Time: 11:00 AM, September 26 (Thursday)
Venue: Workshop 1

Want to build your own AI-enabled LoRaWAN sensor node without coding? This hands-on workshop, led by Joey Jiang, is for you! You’ll learn to assemble a plug-and-play LoRaWAN sensor node using Seeed Studio’s kit, train and deploy an AI model with SenseCraft AI, and transmit detection outcomes via LoRaWAN to The Things Network—all without writing a single line of code. Ideal for both beginners and professionals, this workshop offers a practical introduction to AI and LoRaWAN integration. With only 25 seats available, don’t miss the chance for personalized guidance. Bring your laptop for this interactive session!

Roundtable Discussions

1. LoRaWAN in the Harsh Environment of Agriculture

Moderator: Joey Jiang 

Time: 3:00 PM, September 25

Venue: Roundtable Area

2. Do’s and Don’ts in Electronics Manufacturing for LoRaWAN Devices

Panelist: Violet Su

Time: 11:00 AM, September 25

Venue: The Things Podcast

Seeed Studio Partners and Onsite Campaigns

We are excited to have our partners alongside us too!

Join Us at The Things Conference 2024

The Things Conference is the premier event for LoRaWAN®, bringing together over 1,500 top IoT professionals, 70+ industry leaders, and expert speakers. Connect with the entire LoRaWAN® ecosystem, explore the latest IoT technologies, and gain practical insights through hands-on workshops.

Get your tickets (use our partner code JOIN-TTC2024 to enjoy a 20% discount), and join us for an unforgettable two days of innovation, networking, and hands-on learning.

See you in Amsterdam at Booth B03!


Hands-on Workshop in Amsterdam: Meshtastic and No-Code AI with XIAO

Hey community,

We have some exciting news to share. As we’re attending the upcoming The Things Conference 2024 in Amsterdam, we also have the pleasure of hosting a hands-on workshop on September 24, 2024 (Tuesday), thanks to the unwavering support of our local community partner Sensemakers. You’re welcome to join us if you are in or visiting Amsterdam next week!

Hands-on workshop photo

Time: 7:00 PM to 9:00 PM on September 24, 2024 (Tuesday) GMT+2

Place: Amsterdam Public Library (OBA)

Register: FREE, but registration is required; you can book your spot here.

Our VP Joey Jiang, together with the amazing Sensemakers team, will host the workshop. During this 2-hour workshop, you’ll explore Meshtastic, the open-source, off-grid, decentralized LoRa mesh network, using the SenseCAP T1000-E Tracker. Yes, we know the SenseCAP T1000-E Tracker for Meshtastic has been sold out and many people have been waiting; we are working on delivering more, faster! At the same time, we really want to offer the community a chance to experience how easy and cool it is to set up your own Meshtastic network.
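If you’d like a taste before the workshop, Meshtastic also exposes an official Python API. Below is a minimal sketch, assuming the meshtastic pip package and a Meshtastic node (such as the T1000-E) attached over USB; a Bluetooth interface is also available if your node isn’t wired up.

```python
# Minimal sketch: broadcast a text message into a Meshtastic mesh from a PC,
# using the official `meshtastic` package (pip install meshtastic).
# Assumes a Meshtastic node (e.g. a SenseCAP T1000-E) is attached over USB.
import meshtastic.serial_interface

iface = meshtastic.serial_interface.SerialInterface()  # auto-detects the serial port
iface.sendText("Hello from the Amsterdam workshop!")   # broadcast to the default channel
iface.close()
```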

Moreover, you can expect to discover AI at your fingertips with the thumb-sized XIAO ESP32S3 Sense and the no-code SenseCraft AI platform. You’ll be guided through training and deploying vision AI models without writing any code.

Everyone is welcome, and no experience or engineering background is required. Please do bring your laptop if you’d like to participate in the second session on XIAO with SenseCraft AI. Learn more details and reserve your spot here.

About Sensemakers:

SensemakersAMS is a volunteer-based community dedicated to connecting people and sharing knowledge, ideas, and hands-on experience related to new technology, mostly IoT and AI. They host community events and welcome everyone to join, whether technical, non-technical, or just interested. Learn more here: https://www.meetup.com/sensemakersams/

