
Control your Raspberry Pi GPIO with Arduino Cloud using Node.js | Part III

5 February 2025 at 20:56

As a Node.js developer, you’re probably eager to put your JavaScript skills to work beyond the browser or server, diving into the world of hardware control with Raspberry Pi GPIOs. If that’s the case, you’re in the right place!

This article is the third part of our series, following A guide to visualize your Raspberry Pi data on Arduino Cloud | Part I and the Python-focused Control your Raspberry Pi GPIO in Arduino Cloud using Python | Part II, which introduced GPIO management. Now, it’s time to explore how Node.js can be your gateway to controlling Raspberry Pi GPIOs, a foundational task in IoT development. Whether you’re toggling LEDs, reading sensors, or controlling relays, Node.js offers the tools and flexibility to make it happen seamlessly.

But IoT isn’t just about managing hardware locally. True IoT projects require remote dashboards that let you visualize real-time and historical data, and control devices from anywhere. With Arduino Cloud, you can do all of this with ease.

Let’s dive in and see how you can take your IoT skills to the next level with Node.js and the Arduino Cloud!

Raspberry Pi basic GPIO setup

In this article, we present a straightforward yet comprehensive example to demonstrate the power of Arduino Cloud. You’ll learn how to use an Arduino Cloud dashboard to remotely control and monitor your Raspberry Pi’s digital GPIOs. Specifically, we’ll cover how to:

  • Turn an LED connected to your Raspberry Pi on and off.
  • Detect when a push button connected to your Raspberry Pi is pressed.
  • Visualize the real-time and historical values of an integer variable.

To get started, let’s connect an LED and a push button to your Raspberry Pi as illustrated in the diagram below. (In the code later in this article, the LED sits on GPIO14, physical pin 8, and the push button on GPIO15, physical pin 10; since the button line is configured with the chip’s internal pull-up, the button should connect the pin to GND.)

It’s a very simple setup. Now that we have everything ready, let’s get started!

Create the Device and Thing in Arduino Cloud

To send your Raspberry Pi data to Arduino Cloud, you have to follow these simple steps:

1. Set up an Arduino Cloud account if you don’t already have one.
2. Create your device as a Manual device.

Note: Jot down your Device ID and Secret, as we will need them later.

3. Create your Thing and add your variables.

In the example shown in this blog post, we use the following three variables:

  • test_value: An integer variable used to display a value generated periodically by our Raspberry Pi application on the Arduino Cloud dashboard.
  • button: A boolean variable used to notify the Cloud when the push button is pressed.
  • led: A boolean variable used to switch the LED on and off from the Arduino Cloud dashboard.

Create an Arduino Cloud dashboard for data visualization:

  • Create a Switch widget (name: LED) and an LED widget (name: LED) and link them to the led variable.
  • Create a Chart widget (name: Value evolution) and a Value widget (name: Value) and link them to the test_value variable.
  • Create a Push button widget (name: Push Button) and a Status widget (name: Button) and link them to the button variable.

With the dashboard, you will be able to:

  • Switch the LED ON and OFF using the Switch widget
  • Visualize the status of the LED with the LED widget
  • Visualize the real-time value of the variable test_value with the Value widget
  • Visualize the evolution over time of the variable test_value with the Chart widget
  • See on the Push Button and Button widgets when the push button on the board has been pressed

Note: You can find more detailed information about the full process in our documentation guide.

Program your IoT device using Node.js

Now it’s time to develop your Node.js application.

const gpiod = require('node-libgpiod');
const { ArduinoIoTCloud } = require('arduino-iot-js');
const { DEVICE_ID, SECRET_KEY } = require('./credentials');


// Modify these lines according to your board setup
const GPIOCHIP = 'gpiochip4';
const LED = 14; // GPIO14, Pin 8
const BUTTON = 15; // GPIO15, Pin 10


// Make sure these variables are global. Otherwise, they will not
// work properly inside the timers
chip = new gpiod.Chip(GPIOCHIP);
ledLine = chip.getLine(LED);
buttonLine = chip.getLine(BUTTON);


ledLine.requestOutputMode("gpio-basic");
// To configure the pull-up bias: if gpiod.LineFlags.GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP
// is undefined in your version of node-libgpiod, use the literal value 32 instead
buttonLine.requestInputModeFlags("gpio-basic", gpiod.LineFlags.GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP);


let client;


// This function is executed every 1.0 seconds, polls the value
// of the button and sends the data to Arduino Cloud
function readButton(client) {
  let button = buttonLine.getValue() ? true : false;
  if (client)
     client.sendProperty("button", button);
  console.log("pollButton:", button);
}


// This function is executed every 10.0 seconds, gets a random
// number between 0 and 100 and sends the data to Arduino Cloud
function readValue(client) {
  let value = Math.floor(Math.random() * 101);
  if (client)
     client.sendProperty("test_value", value);
  console.log("pollValue", value);
}


// This function is executed each time the "led" variable changes
function onLedChanged(led) {
  ledLine.setValue(led ? 1 : 0);
  console.log("LED change! Status is: ", led);
}


// Create Arduino Cloud connection
(async () => {
  try {
     client = await ArduinoIoTCloud.connect({
        deviceId: DEVICE_ID,
        secretKey: SECRET_KEY,
        onDisconnect: (message) => console.error(message),
     });
     client.onPropertyValue("led", (led) => onLedChanged(led));
  }
  catch(e) {
     console.error("ArduinoIoTCloud connect ERROR", e);
  }
})();


// Poll Value every 10 seconds
const pollValue = setInterval(() => {
  readValue(client);
}, 10000);


// Poll Button every second
const pollButton = setInterval(() => {
  readButton(client);
}, 1000);
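
When you stop the application (for example, with Ctrl+C), you may also want to stop the timers and switch the LED off before exiting. Here is a minimal cleanup sketch using only the objects defined above:

process.on('SIGINT', () => {
  // Stop the polling timers and leave the LED switched off on exit
  clearInterval(pollValue);
  clearInterval(pollButton);
  ledLine.setValue(0);
  process.exit(0);
});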

Create a file called credentials.js with your Device ID and secret.

module.exports = {
  DEVICE_ID: '09d3a634-e1ad-4927-9da0-dde663f8e5c6',
  SECRET_KEY: 'IXD3U1S37QPJOJXLZMP5'
};
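
Hard-coding the credentials is fine for a quick test, but if you plan to commit the project to version control you may prefer reading them from environment variables instead. A minimal alternative credentials.js (the variable names ARDUINO_DEVICE_ID and ARDUINO_SECRET_KEY are our own choice, not an Arduino convention):

module.exports = {
  // Read the credentials from the environment so they are not committed
  DEVICE_ID: process.env.ARDUINO_DEVICE_ID,
  SECRET_KEY: process.env.ARDUINO_SECRET_KEY
};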

This code is compatible with all Raspberry Pi models and should also work on any Linux-based machine. Just make sure to specify the correct gpiochip and configure the appropriate GPIO lines in the code snippet below:

const GPIOCHIP = 'gpiochip4';
const LED = 14; // GPIO14, Pin 8
const BUTTON = 15; // GPIO15, Pin 10
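
If you are unsure which chip name applies to your board, you can list the available chips with the gpiodetect utility from libgpiod. Alternatively, here is a small sketch that probes a list of candidate names with the same gpiod.Chip constructor used above (this assumes the constructor throws when the chip does not exist):

// Try candidate chip names in order and return the first one that opens.
// gpiochip4 exposes the GPIO header on Raspberry Pi 5, while older
// models typically use gpiochip0.
function openFirstAvailableChip(candidates = ['gpiochip4', 'gpiochip0']) {
  for (const name of candidates) {
    try {
      return new gpiod.Chip(name);
    } catch (err) {
      // Chip not present on this system; try the next candidate
    }
  }
  throw new Error('No usable GPIO chip found');
}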

For more information about the project, check out the details on Project Hub. You can find the complete code and additional resources in the GitHub repository. Plus, don’t miss the comprehensive JavaScript + Arduino Cloud guide in the following article.

Start with Arduino Cloud for free

Getting your Raspberry Pi connected to Arduino Cloud with Node.js is incredibly easy. Simply create your free account, and you’re ready to get started. Arduino Cloud is free to use and comes with optional premium features for even greater flexibility and power.  

If you’re ready to simplify data visualization and remote control for your Raspberry Pi applications using Node.js, Python, or Node-RED, Arduino Cloud is the perfect platform to explore and elevate your projects.  

Get started with Arduino Cloud!

The post Control your Raspberry Pi GPIO with Arduino Cloud using Node.js | Part III appeared first on Arduino Blog.


SparkFun Digi X-ON LoRaWAN development kit combines Digi HX15 gateway with RP2350 IoT node and environmental sensors module

31 January 2025 at 16:19

SparkFun has recently released the Digi X-ON LoRaWAN development kit, an all-in-one IoT development kit designed to simplify the setup and deployment of LoRa-based IoT systems. It includes the Digi HX15 Gateway, the SparkFun IoT Node for LoRaWAN, and an ENS160/BME280 environmental sensor, enabling rapid prototyping and connectivity with the help of the Digi X-ON cloud platform.

The SparkFun IoT Node is built around the Raspberry Pi RP2350 microcontroller and features 16MB flash, 8MB PSRAM, multiple GPIOs, LiPo battery support, microSD storage, and USB-C connectivity. It also integrates the Digi XBee LR module for long-range LoRaWAN communication with pre-activated cloud connectivity. With an onboard Qwiic connector and Arduino support, this development kit is ideal for applications like industrial monitoring, environmental sensing, smart agriculture, remote data collection, and more.

Digi HX15 gateway specifications:

  • Microprocessor – STMicro STM32MP157C MPU with dual-core Cortex-A7 @ 650 MHz, Cortex-M4 @ 209 MHz with FPU/MPU, 3D [...]

The post SparkFun Digi X-ON LoRaWAN development kit combines Digi HX15 gateway with RP2350 IoT node and environmental sensors module appeared first on CNX Software - Embedded Systems News.

Accelerate Your Docker Builds Using AWS CodeBuild and Docker Build Cloud

18 December 2024 at 20:10

Containerized application development has revolutionized modern software delivery, but slow image builds in CI/CD pipelines can bring developer productivity to a halt. Even with AWS CodeBuild automating application testing and building, teams face challenges like resource constraints, inefficient caching, and complex multi-architecture builds that lead to delays, lower release frequency, and prolonged recovery times.

Enter Docker Build Cloud, a high-performance cloud service designed to streamline image builds, integrate seamlessly with AWS CodeBuild, and reduce build times dramatically. With Docker Build Cloud, you gain powerful cloud-based builders, shared caching, and native multi-architecture support — all while keeping your CI/CD pipelines efficient and your developers focused on delivering value faster.

In this post, we’ll explore how AWS CodeBuild combined with Docker Build Cloud tackles common bottlenecks, boosts build performance, and simplifies workflows, enabling teams to ship more quickly and reliably.


By using AWS CodeBuild, you can automate the build and testing of container applications, enabling the construction of efficient CI/CD workflows. AWS CodeBuild is also integrated with AWS Identity and Access Management (IAM), allowing detailed configuration of access permissions for build processes and control over AWS resources.

Container images built with AWS CodeBuild can be stored in Amazon Elastic Container Registry (Amazon ECR) and deployed to various AWS services, such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, or AWS Lambda (Figure 1). Additionally, these services can leverage AWS Graviton, which adopts Arm-based architectures, to improve price performance for compute workloads.

Figure 1: CI/CD pipeline for AWS ECS using AWS CodeBuild (ECS Workshop).

Challenges of container image builds with AWS CodeBuild

Regardless of the tool used, building container images in a CI pipeline often takes a significant amount of time. This can lead to the following issues:

  • Reduced development productivity
  • Lower release frequency
  • Longer recovery time in case of failures

The main reasons why build times can be extended include:

1. Machines for building

Building container images requires substantial resources (CPU, RAM). If the machine specifications used in the CI pipeline are inadequate, build times can increase.

For simple container image builds, the impact may be minimal, but in cases of multi-stage builds or builds with many dependencies, the effect can be significant.

AWS CodeBuild allows changing instance types to improve these situations. However, such changes can apply to parts of the pipeline beyond container image builds, and they also increase costs.

Developers need to balance cost and build speed to optimize the pipeline.

2. Container image cache

In local development environments, Docker’s build cache can shorten rebuild times significantly by reusing previously built layers, avoiding redundant processing for unchanged parts of the Dockerfile. However, in cloud-based CI services, clean environments are used by default, so cache cannot be utilized, resulting in longer build times.

Although there are ways to use storage or container registries to leverage caching, these often are not employed because they introduce complexity in configuration and overhead from uploading and downloading cache data.

3. Multi-architecture builds (AMD64, Arm64)

To use Arm-based architectures like AWS Graviton in Amazon EKS or Amazon ECS, Arm64-compatible container image builds are required.

With changes in local environments, such as Apple Silicon, cases requiring multi-architecture support for AMD64 and Arm64 have increased. However, building images for different architectures (for example, building x86 on Arm, or vice versa) often requires emulation, which can further increase build times (Figure 2).

Although AWS CodeBuild provides both AMD64 and Arm64 instances, running them as separate pipelines is necessary, leading to more complex configurations and operations.

Figure 2: Creating multi-architecture Docker images using AWS CodeBuild.

Accelerating container image builds with Docker Build Cloud

The Docker Build Cloud service executes the Docker image build process in the cloud, significantly reducing build time and improving developer productivity (Figure 3).

Figure 3: How Docker Build Cloud works.

Particularly in CI pipelines, Docker Build Cloud enables faster container image builds without the need for significant changes or migrations to existing pipelines.

Docker Build Cloud includes the following features:

  • High-performance cloud builders: Cloud builders equipped with 16 vCPUs and 32GB RAM are available. This allows for faster builds compared to local environments or resource-constrained CI services.
  • Shared cache utilization: Cloud builders come with 200 GiB of shared cache, significantly reducing build times for subsequent builds. This cache is available without additional configuration, and Docker Build Cloud handles the cache maintenance for you.
  • Multi-architecture support (AMD64, Arm64): Docker Build Cloud supports native builds for multi-architecture with a single command. By specifying --platform linux/amd64,linux/arm64 in the docker buildx build command or using Bake, images for both Arm64 and AMD64 can be built simultaneously. This approach eliminates the need to split the pipeline for different architectures.

Architecture of AWS CodeBuild + Docker Build Cloud

Figure 4 shows an example of how to use Docker Build Cloud to accelerate container image builds in AWS CodeBuild:

Figure 4: AWS CodeBuild + Docker Build Cloud architecture.
  1. The AWS CodeBuild pipeline is triggered from a commit to the source code repository (AWS CodeCommit, GitHub, GitLab).
  2. Preparations for running Docker Build Cloud are made in AWS CodeBuild (Buildx installation, specifying Docker Build Cloud builders).
  3. Container images are built on Docker Build Cloud’s AMD64 and Arm64 cloud builders.
  4. The built AMD64 and Arm64 container images are pushed to Amazon ECR.

Setting up Docker Build Cloud

First, set up Docker Build Cloud. (Note that new Docker subscriptions already include a free tier for Docker Build Cloud.)

Then, log in with your Docker account and visit the Docker Build Cloud Dashboard to create new cloud builders.

Once the builder is successfully created, a guide is displayed for using it in local environments (Docker Desktop, CLI) or CI/CD environments (Figure 5).

Figure 5: Setup instructions of Docker Build Cloud.

Additionally, to use Docker Build Cloud from AWS CodeBuild, a Docker personal access token (PAT) is required. Store this token in AWS Secrets Manager for secure access; the buildspec below expects a single secret whose JSON body contains both a DOCKER_USER and a DOCKER_PAT key.
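
As an illustration, the secret can be created with the AWS SDK for JavaScript v3 (the secret name docker-build-cloud and the region are our own placeholders; the aws secretsmanager create-secret CLI command achieves the same):

const { SecretsManagerClient, CreateSecretCommand } = require('@aws-sdk/client-secrets-manager');

const client = new SecretsManagerClient({ region: 'us-east-1' });

(async () => {
  // Store the Docker Hub username and personal access token as one JSON
  // secret, so the buildspec can reference its DOCKER_USER and DOCKER_PAT
  // keys individually via secrets-manager.
  await client.send(new CreateSecretCommand({
    Name: 'docker-build-cloud', // placeholder name
    SecretString: JSON.stringify({
      DOCKER_USER: 'your-docker-username',
      DOCKER_PAT: 'your-personal-access-token'
    })
  }));
})();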

Setting up the AWS CodeBuild pipeline

Next, set up the AWS CodeBuild pipeline. You should prepare an Amazon ECR repository to store the container images beforehand.

The following settings are used to create the AWS CodeBuild pipeline:

  • AMD64 instance with 3GB memory and 2 vCPUs.
  • Service role with permissions to push to Amazon ECR and access the Docker personal access token from AWS Secrets Manager.

The buildspec.yml file is configured as follows:

version: 0.2

env:
  variables:
    ARCH: amd64
    ECR_REGISTRY: [ECR Registry]
    ECR_REPOSITORY: [ECR Repository]
    DOCKER_ORG: [Docker Organization]
  secrets-manager:
    DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
    DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT

phases:
  install:
    commands:
      # Installing Buildx
      - BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
      - mkdir -vp ~/.docker/cli-plugins/
      - curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
      - chmod a+x ~/.docker/cli-plugins/docker-buildx

  pre_build:
    commands:
      # Logging in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Logging in to Docker (Build Cloud)
      - echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
      # Specifying the cloud builder
      - docker buildx create --use --driver cloud $DOCKER_ORG/demo

  build:
    commands:
      # Image tag
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
      # Build container image & push to Amazon ECR
      - docker buildx build --platform linux/amd64,linux/arm64 --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .

In the install phase, Buildx, which is necessary for using Docker Build Cloud, is installed.

Although Buildx may already be installed in AWS CodeBuild, it might be an unsupported version for Docker Build Cloud. Therefore, it is recommended to install the latest version.

In the pre_build phase, the following steps are performed:

  • Log in to Amazon ECR.
  • Log in to Docker (Build Cloud).
  • Specify the cloud builder.

In the build phase, the image tag is specified, and the container image is built and pushed to Amazon ECR.

Instead of separating the build and push commands, using --push to directly push the image to Amazon ECR helps avoid unnecessary file transfers, contributing to faster builds.

Results comparison

To make a comparison, we created an AWS CodeBuild pipeline without Docker Build Cloud. The same instance type (AMD64, 3GB memory, 2 vCPUs) is used, and the build is limited to AMD64 container images.

Additionally, Docker login is used to avoid the pull rate limit imposed by Docker Hub.

version: 0.2

env:
  variables:
    ECR_REGISTRY: [ECR Registry]
    ECR_REPOSITORY: [ECR Repository]
  secrets-manager:
    DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
    DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT

phases:
  pre_build:
    commands:
      # Logging in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Logging in to Docker
      - echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin

  build:
    commands:
      # Image tag
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
      # Build container image & push to Amazon ECR
      - docker build --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .

Figure 6 shows the result of the execution:

Figure 6: The result of the execution without Docker Build Cloud (5 minutes and 59 seconds).

Figure 7 shows the execution result of the AWS CodeBuild pipeline using Docker Build Cloud:

Figure 7: The result of the execution with Docker Build Cloud (1 minute and 4 seconds).

The results may vary depending on the container images being built and the state of the cache, but it was possible to build container images much faster and achieve multi-architecture builds (AMD64 and Arm64) within a single pipeline.

Conclusion

Integrating Docker Build Cloud into a CI/CD pipeline using AWS CodeBuild can dramatically reduce build times and improve release frequency. This allows developers to maximize productivity while delivering value to users more quickly.

As mentioned previously, the new Docker subscription already includes a free tier for Docker Build Cloud. Take advantage of this opportunity to test how much faster you can build container images for your current projects.


From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity

By: Yiwen Xu
13 December 2024 at 20:30

Modern application development has evolved dramatically. Gone are the days when a couple of developers, a few machines, and some pizza were enough to launch an app. As the industry grew, DevOps revolutionized collaboration, and Docker popularized containerization, simplifying workflows and accelerating delivery. 

Later, DevSecOps brought security into the mix. Fast forward to today, and the demand for software has never been greater, with more than 750 million cloud-native apps expected by 2025.

This explosion in demand has created a new challenge: complexity. Applications now span multiple programming languages, frameworks, and architectures, integrating both legacy and modern systems. Development workflows must navigate hybrid environments — local, cloud, and everything in between. This complexity makes it harder for companies to deliver innovation on time and stay competitive. 


To overcome these challenges, you need a development platform that’s as reliable and ubiquitous as electricity or Wi-Fi — a platform that works consistently across diverse applications, development tools, and environments. Whether you’re just starting to move toward microservices or fully embracing cloud-native development, Docker meets your team where they are, integrates seamlessly into existing workflows, and scales to meet the needs of individual developers, teams, and entire enterprises.

Docker: Simplifying the complex

The Docker suite of products provides the tools you need to accelerate development, modernize legacy applications, and empower your team to work efficiently and securely. With Docker, you can:

  • Modernize legacy applications: Docker makes it easy to containerize existing systems, bringing them closer to modern technology stacks without disrupting operations.
  • Boost productivity for cloud-native teams: Docker ensures consistent environments, integrates with CI/CD workflows, supports hybrid development environments, and enhances collaboration.

Consistent environments: Build once, run anywhere

Docker ensures consistency across development, testing, and production environments, eliminating the dreaded “works on my machine” problem. With Docker, your team can build applications in unified environments — whether on macOS, Windows, or Linux — for reliable code, better collaboration, and faster time to market.

With Docker Desktop, developers have a powerful GUI and CLI for managing containers locally. Integration with popular IDEs like Visual Studio Code allows developers to code, build, and debug within familiar tools. Built-in Kubernetes support enables teams to test and deploy applications on a local Kubernetes cluster, giving developers confidence that their code will perform in production as expected.

Integrated workflows for hybrid environments

Development today spans both local and cloud environments. Docker bridges the gap and provides flexibility with solutions like Docker Build Cloud, which speeds up build pipelines by up to 39x using cloud-based, multi-platform builders. This allows developers to focus more on coding and innovation, rather than waiting on builds.

Docker also integrates seamlessly with CI/CD tools like Jenkins, GitLab CI, and GitHub Actions. This automation reduces manual intervention, enabling consistent and reliable deployments. Whether you’re building in the cloud or locally, Docker ensures flexibility and productivity at every stage.

Team collaboration: Better together

Collaboration is central to Docker. With integrations like Docker Hub and other registries, teams can easily share container images and work together on builds. Docker Desktop features like Docker Debug and the Builds view dashboards empower developers to troubleshoot issues together, speeding up resolution and boosting team efficiency.

Docker Scout provides actionable security insights, helping teams identify and resolve vulnerabilities early in the development process. With these tools, Docker fosters a collaborative environment where teams can innovate faster and more securely.

Why Docker?

In today’s fast-paced development landscape, complexity can slow you down. Docker’s unified platform reduces complexity as it simplifies workflows, standardizes environments, and empowers teams to deliver software faster and more securely. Whether you’re modernizing legacy applications, bridging local and cloud environments, or building cutting-edge, cloud-native apps, Docker helps you achieve efficiency and scale at every stage of the development lifecycle.

Docker offers a unified platform that combines industry-leading tools — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud — into a seamless experience. Docker’s flexible plans ensure there’s a solution for every developer and every team, from individual contributors to large enterprises.

Get started today

Ready to simplify your development workflows? Start your Docker journey now and equip your team with the tools they need to innovate, collaborate, and deliver with confidence.

Looking for tips and tricks? Subscribe to Docker Navigator for the latest updates and insights delivered straight to your inbox.


Speed up your project’s compile time by up to 50% in Arduino Cloud!

13 December 2024 at 20:27

At Arduino, we know how precious your time is when you’re building your next big project or experimenting with new ideas. That’s why we’re thrilled to introduce a game-changing update to the Cloud Editor Builder — the engine behind compiling your sketches in Arduino Cloud.

This update is all about you: making your development faster, smoother, and more secure, so you can focus on what truly matters — creating.

Here’s what’s new:

Faster compilations: Up to 50% faster!

No more waiting around! With the new builder, sketch compilations are now up to 50% faster, enabling you to focus more on creating and testing your projects, and less on waiting. Two years ago, we significantly improved the Cloud Editor Builder, setting a new standard for performance.

And now, whether you’re working on a quick prototype or a complex IoT solution,  we provide you with faster compilation times, which means you can iterate and innovate more efficiently.

See compilation progress at a glance

One of the standout features of the new builder is the introduction of a dedicated compilation progress bar. Now, you can see exactly how far along the compilation process is, with clear visibility into its completeness percentage. No more guesswork — just a smoother and more transparent experience.

Your IoT projects, more secure

We’ve also made improvements under the hood, adding an extra layer of security and reliability to the Cloud Editor Builder. Your data and projects are safer than ever, giving you peace of mind while you create.

IDE vs. Cloud Editor: Which one fits your workflow?

We understand that every Arduino user has unique needs, which is why we offer both the Arduino IDE and the Cloud Editor. Wondering which option suits your workflow best? We’ve prepared a clear comparison table showcasing the key differences between the two tools. From compilation speeds to storage options, see how the Cloud Editor stacks up against the IDE.

Check out the full comparison table in this article.

Ready to experience the difference?

The new Cloud Editor Builder will be live in the coming days, and we can’t wait for you to try it! Stay tuned for updates, and get ready to enjoy faster compilations, improved usability, and enhanced security.

We’re excited to see how this update will elevate your projects. As always, we’d love to hear your feedback. Please share your thoughts, questions, and experiences with us on social media or Arduino Forum.

Let’s build something amazing together!

Ready to elevate your projects? Discover the full potential of the Arduino Cloud Editor and explore all its powerful features here. Need guidance? Dive into our comprehensive documentation.

The post Speed up your project’s compile time by up to 50% in Arduino Cloud! appeared first on Arduino Blog.

Simplifying IoT for smarter manufacturing: Join the chat with Arduino, AWS, and Atlas Machine

2 December 2024 at 15:50

We all know that the future of manufacturing lies in IoT — yet the path to adoption can sometimes feel daunting. But what if you could simplify the process and start seeing results quickly? That’s exactly what we’re going to explore in our upcoming Arduino Cloud Café webinar on December 10 at 5PM CET / 11AM EST.

→ Register now

This session is a unique opportunity to hear from experts at Arduino, AWS, and Atlas Machine as they dive into how industrial IoT is transforming manufacturing operations. Whether you’re just starting to explore IoT or looking for ways to optimize your existing systems, this webinar is for you.

What to expect

In this session, we’ll be sharing actionable tips and insights to help you easily integrate IoT into your operations:

  • Learn how to collect data quickly — without months of delays.
  • Understand how to retrofit your legacy equipment and get real-time visibility into your operations.
  • Discover how to integrate the data from Arduino devices with the rest of your business systems on AWS for smarter decision-making.

We’ll also be sharing real-world success stories, including how Atlas Machine & Supply leveraged Arduino (Opta and Arduino Cloud) and AWS solutions for predictive maintenance and remote monitoring across their global fleet of industrial equipment.

And don’t forget, we’ll have a live Q&A session at the end, where you can ask our experts anything. Feel free to submit your questions throughout the webinar, and we’ll do our best to address as many as possible.

Meet the speakers

We’re excited to be joined by a fantastic lineup of speakers who are experts in their fields:

  • Richie Gimmel, CEO at Atlas Machine & Supply
  • Danny Kent, IoT Development Director at Atlas Machine & Supply
  • Andrea Richetta, Principal Product Evangelist at Arduino
  • Gabriel Verreault, Senior Manufacturing Partner Solutions Architect at AWS

Why you should join

If you’ve been looking for a way to simplify IoT adoption in your manufacturing operations, this is your chance to learn from industry leaders who are making it happen. Whether you’re trying to modernize old equipment or integrate IoT into your larger business strategy, you’ll walk away with valuable insights and tips you can start using right away.

Save your spot today! Don’t miss out on this chance to hear from the experts and get your questions answered. We can’t wait to see you there!

The post Simplifying IoT for smarter manufacturing: Join the chat with Arduino, AWS, and Atlas Machine appeared first on Arduino Blog.

Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs

19 November 2024 at 23:00

Meta recently released its Llama 3.2 series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only small language model (SLM) variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and…


Receive an alert when your device goes offline in Arduino Cloud

15 November 2024 at 20:36

You’re managing a network of IoT sensors that monitor air quality across multiple locations. Suddenly, one of the sensors goes offline, but you don’t notice until hours later. The result? A gap in your data and a missed opportunity to take corrective action. This is a common challenge when working with IoT devices: staying informed about the real-time status of each device is crucial to ensure smooth operation and timely troubleshooting.

This is where Device Status Notifications, the latest feature in the Arduino Cloud, comes in. Whether you’re an individual maker or an enterprise, this feature empowers you to stay on top of your devices by sending real-time alerts when a device goes online or offline.

What is “Device Status Notifications?”

Device Status Notifications allow you to receive instant alerts whenever one of your devices changes its connectivity status, whether it’s going offline or coming back online. You can customize these alerts for individual devices or all devices under your account, with the flexibility to exclude specific devices from triggering notifications.

As we announced a while ago, Arduino Cloud already supports Triggers and Notifications, allowing you to create alerts based on specific conditions like sensor readings or thresholds. With the addition of Device Status Notifications, you can now monitor device connectivity itself. This means you can receive an alert the moment a device loses connection, providing a proactive way to manage your IoT ecosystem. For more details on the original feature, check out our Triggers and Notifications blog post.

Key benefits for users

  • Real-time monitoring: Get notified instantly when a device disconnects or reconnects, helping you take corrective actions promptly.
  • Customization: Configure your alerts to focus on specific devices or apply rules to all your devices, with the flexibility to add exceptions. You can also decide when the notification should be sent — either immediately upon a status change or after a set period of downtime.
  • Convenience: Choose to receive notifications via email or directly on your mobile device through the Arduino IoT Remote app, making it easy to stay informed wherever you are.

How to set up Device Status Notifications


1. Set up a Trigger

Go to the Triggers section and select “+ TRIGGER.”

2. Choose “Device Status” as your condition

Decide whether to monitor the status of:

  • A specific device (select “Single device”), or
  • Any device (select “Any device (existing and upcoming)”).

If you select “Single device,” you can choose the device that you want to be monitored.

If your selection is “Any device,” you can add exceptions for devices you don’t want to trigger the alert.

3. Configure what you are going to monitor

Choose whether to monitor when the device goes online, offline, or both. Then decide if the notification should be sent immediately or after a set period (options range from 10 minutes to 48 hours).

4. Customize the notification settings

Notifications are configured in the same way as any other Trigger. You can add the action of sending an email or a push notification to your phone via the Arduino IoT Remote app.

Ready to test Device Notifications?

Want to make sure your IoT devices stay connected and functioning? Start using the Device Status Notifications feature today. Simply log in to your Arduino IoT Cloud account, and configure your notifications to stay informed whenever your devices go online or offline. 

Make sure you’re on a Maker, Enterprise, or School plan to access this feature.

And don’t forget to download the Arduino IoT Remote app from the App Store or Google Play  to receive real-time alerts on the go and stay connected, wherever you are.

Black Friday is here – Save Big on Arduino Cloud!

Take your IoT projects to the next level this Black Friday!


For a limited time, enjoy 25% off the Arduino Cloud Maker Yearly plan with code BLACKFRIDAY. Don’t miss this opportunity to access premium features and elevate your creativity. Hurry—this offer is valid for new Maker Yearly plan subscriptions only and ends on December 1st, 2024.

The post Receive an alert when your device goes offline in Arduino Cloud appeared first on Arduino Blog.

Why Testcontainers Cloud Is a Game-Changer Compared to Docker-in-Docker for Testing Scenarios

14 November 2024 at 22:39

Navigating the complex world of containerized testing environments can be challenging, especially when dealing with Docker-in-Docker (DinD). As a senior DevOps engineer and Docker Captain, I’ve seen firsthand the hurdles that teams face with DinD, and here I’ll share why Testcontainers Cloud is a transformative alternative that’s reshaping the way we handle container-based testing.


Understanding Docker-in-Docker

Docker-in-Docker allows you to run Docker within a Docker container. It’s like Inception for containers — a Docker daemon running inside a Docker container, capable of building and running other containers.

How Docker-in-Docker works

  • Nested Docker daemons: In a typical Docker setup, the Docker daemon runs on the host machine, managing containers directly on the host’s operating system. With DinD, you start a Docker daemon inside a container. This inner Docker daemon operates independently, enabling the container to build and manage its own set of containers.
  • Privileged mode and access to host resources: To run Docker inside a Docker container, the container needs elevated privileges. This is achieved by running the container in privileged mode using the --privileged flag:
docker run --privileged -d docker:dind
  • The --privileged flag grants the container almost all the capabilities of the host machine, including access to device files and the ability to perform system administration tasks. Although this setup enables the inner Docker daemon to function, it poses significant security risks, as it can potentially allow the container to affect the host system adversely.
  • Filesystem considerations: The inner Docker daemon stores images and containers within the file system of the DinD container, typically under /var/lib/docker. Because Docker uses advanced file system features like copy-on-write layers, running an inner Docker daemon within a containerized file system (which may itself use such features) can lead to complex interactions and potential conflicts.
  • Cgroups and namespace isolation: Docker relies on Linux kernel features like cgroups and namespaces for resource isolation and management. When running Docker inside a container, these features must be correctly configured to allow nesting. This process can introduce additional complexity in ensuring that resource limits and isolation behave as expected.

Why teams use Docker-in-Docker

  • Isolated build environments: DinD allows each continuous integration (CI) job to run in a clean, isolated Docker environment, ensuring that builds and tests are not affected by residual state from previous jobs or other jobs running concurrently.
  • Consistency across environments: By encapsulating the Docker daemon within a container, teams can replicate the same Docker environment across different stages of the development pipeline, from local development to CI/CD systems.

Challenges with DinD

Although DinD provides certain benefits, it also introduces significant challenges, such as:

  • Security risks: Running containers in privileged mode can expose the host system to security vulnerabilities, as the container gains extensive access to host resources.
  • Stability issues: Nested containers can lead to storage driver conflicts and other instability issues, causing unpredictable build failures.
  • Complex debugging: Troubleshooting issues in a nested Docker environment can be complicated, as it involves multiple layers of abstraction and isolation.

Real-world challenges

Although Docker-in-Docker might sound appealing, it often introduces more problems than it solves. Before diving into those challenges, let’s briefly discuss Testcontainers and its role in modern testing practices.

What is Testcontainers?

Testcontainers is a popular open source library designed to support integration testing by providing lightweight, disposable instances of common databases, web browsers, or any service that can run in a Docker container. It allows developers to write tests that interact with real instances of external resources, rather than relying on mocks or stubs.

Key features of Testcontainers

  • Realistic testing environment: By using actual services in containers, tests are more reliable and closer to real-world scenarios.
  • Isolation: Each test session, or even each individual test, can run in a clean environment, reducing flakiness due to shared state.
  • Easy cleanup: Containers are ephemeral and are automatically cleaned up after tests, preventing resource leaks.
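
The examples later in this post use Java and Maven, but Testcontainers also ships libraries for other languages. As a minimal illustration of the idea in Node.js (assuming the testcontainers npm package and access to a Docker endpoint), a test can spin up a real Redis instance like this:

const { GenericContainer } = require('testcontainers');

(async () => {
  // Start a disposable Redis container for the duration of the test
  const redis = await new GenericContainer('redis:7-alpine')
    .withExposedPorts(6379)
    .start();

  // Point the code under test at the mapped host and port
  console.log(`Redis available at ${redis.getHost()}:${redis.getMappedPort(6379)}`);

  // Containers are ephemeral: stop and remove the container when done
  await redis.stop();
})();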

Dependency on the Docker daemon

A core component of Testcontainers’ functionality lies in its interaction with the Docker daemon. Testcontainers orchestrates Docker resources by starting and stopping containers as needed for tests. This tight integration means that access to a Docker environment is essential wherever the tests are run.

The DinD challenge with Testcontainers in CI

When teams try to include Testcontainers-based integration testing in their CI/CD pipelines, they often face the challenge of providing Docker access within the CI environment. Because Testcontainers requires communication with the Docker daemon, many teams resort to using Docker-in-Docker to emulate a Docker environment inside the CI job.

However, this approach introduces significant challenges, especially when trying to scale Testcontainers usage across the organization.

Case study: The CI pipeline nightmare

We had a Jenkins CI pipeline that utilized Testcontainers for integration tests. To provide the necessary Docker environment, we implemented DinD. Initially, it seemed to work fine, but soon we encountered:

  • Unstable builds: Random failures due to storage driver conflicts and issues with nested container layers. The nested Docker environment sometimes clashed with the host, causing unpredictable behavior.
  • Security concerns: Running containers in privileged mode raised red flags during security audits. Because DinD requires privileged mode to function correctly, it posed significant security risks, potentially allowing containers to access the host system.
  • Performance bottlenecks: Builds were slow, and resource consumption was high. The overhead of running Docker within Docker led to longer feedback loops, hindering developer productivity.
  • Complex debugging: Troubleshooting nested containers became time-consuming. Logs and errors were difficult to trace through the multiple layers of containers, making issue resolution challenging.

We spent countless hours trying to patch these issues, but it felt like playing a game of whack-a-mole.

Why Testcontainers Cloud is a better choice

Testcontainers Cloud is a cloud-based service designed to simplify and enhance your container-based testing. By offloading container execution to the cloud, it provides a secure, scalable, and efficient environment for your integration tests.

How Testcontainers Cloud addresses DinD’s shortcomings

Enhanced security

  • No more privileged mode: Eliminates the need for running containers in privileged mode, reducing the attack surface.
  • Isolation: Tests run in isolated cloud environments, minimizing risks to the host system.
  • Compliance-friendly: Easier to pass security audits without exposing the Docker socket or granting elevated permissions.

Improved performance

  • Scalability: Leverage cloud resources to run tests faster and handle higher loads.
  • Resource efficiency: Offloading execution frees up local and CI/CD resources.

Simplified configuration

  • Plug-and-play integration: Minimal changes are required to switch from local Docker to Testcontainers Cloud.
  • No nested complexity: Avoid the intricacies and pitfalls of nested Docker daemons.

Better observability and debugging

  • Detailed logs: Access comprehensive logs through the Testcontainers Cloud dashboard.
  • Real-time monitoring: Monitor containers and resources in real time with enhanced visibility.

Getting started with Testcontainers Cloud

Let’s dive into how you can get the most out of Testcontainers Cloud.

Switching to Testcontainers Cloud allows you to run tests without needing a local Docker daemon:

  • No local Docker required: Testcontainers Cloud handles container execution in the cloud.
  • Consistent environment: Ensures that your tests run in the same environment across different machines.

Additionally, you can easily integrate Testcontainers Cloud into your CI pipeline to run the same tests without scaling your CI infrastructure.

Using Testcontainers Cloud with GitHub Actions

Here’s how you can set up Testcontainers Cloud in your GitHub Actions workflow.

1. Create a new service account

  • Log in to Testcontainers Cloud dashboard.
  • Navigate to Service Accounts:
    • Create a new service account dedicated to your CI environment.
  • Generate an access token:
    • Copy the access token. Remember, you can only view it once, so store it securely.

2. Set the TC_CLOUD_TOKEN environment variable

  • In GitHub Actions:
    • Go to your repository’s Settings > Secrets and variables > Actions.
    • Add a new Repository Secret named TC_CLOUD_TOKEN and paste the access token.

3. Add Testcontainers Cloud to your workflow

Update your GitHub Actions workflow (.github/workflows/ci.yml) to include the Testcontainers Cloud setup.

Example workflow:

name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # ... other preparation steps (dependencies, compilation, etc.) ...

      - name: Set up Java
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Testcontainers Cloud Client
        uses: atomicjar/testcontainers-cloud-setup-action@v1
        with:
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      # ... steps to execute your tests ...
      - name: Run Tests
        run: ./mvnw test

Notes:

  • The atomicjar/testcontainers-cloud-setup-action GitHub Action automates the installation and authentication of the Testcontainers Cloud Agent in your CI environment.
  • Ensure that your TC_CLOUD_TOKEN is kept secure using GitHub’s encrypted secrets.

Clarifying the components: Testcontainers Cloud Agent and Testcontainers Cloud

To make everything clear:

  • Testcontainers Cloud Agent (CLI in CI environments): In CI environments like GitHub Actions, you use the Testcontainers Cloud Agent (installed via the GitHub Action or command line) to connect your CI jobs to Testcontainers Cloud.
  • Testcontainers Cloud: The cloud service that runs your containers, offloading execution from your CI environment.

In CI environments:

  • Use the Testcontainers Cloud Agent (CLI) within your CI jobs.
  • Authenticate using the TC_CLOUD_TOKEN.
  • Tests executed in the CI environment will use Testcontainers Cloud.

Monitoring and debugging

Take advantage of the Testcontainers Cloud dashboard:

  • Session logs: View logs for individual test sessions.
  • Container details: Inspect container statuses and resource usage.
  • Debugging: Access container logs and output for troubleshooting.

Why developers prefer Testcontainers Cloud over DinD

Real-world impact

After integrating Testcontainers Cloud, our team observed the following:

  • Faster build times: Tests ran significantly faster due to optimized resource utilization.
  • Reduced maintenance: Less time spent on debugging and fixing CI pipeline issues.
  • Enhanced security: Eliminated the need for privileged mode, satisfying security audits.
  • Better observability: Improved logging and monitoring capabilities.

Addressing common concerns

Security and compliance

  • Data isolation: Each test runs in an isolated environment.
  • Encrypted communication: Secure data transmission.
  • Compliance: Meets industry-standard security practices.

Cost considerations

  • Efficiency gains: Time saved on maintenance offsets the cost.
  • Resource optimization: Reduces the need for expensive CI infrastructure.

Compatibility

  • Multi-language support: Works with Java, Node.js, Python, Go, .NET, and more.
  • Seamless integration: Minimal changes required to existing test code.

Conclusion

Switching to Testcontainers Cloud, with the help of the Testcontainers Cloud Agent, has been a game-changer for our team and many others in the industry. It addresses the key pain points associated with Docker-in-Docker and offers a secure, efficient, and developer-friendly alternative.

Key takeaways

  • Security: Eliminates the need for privileged containers and Docker socket exposure.
  • Performance: Accelerates test execution with scalable cloud resources.
  • Simplicity: Simplifies configuration and reduces maintenance overhead.
  • Observability: Enhances debugging with detailed logs and monitoring tools.

As someone who has navigated these challenges, I recommend trying Testcontainers Cloud. It’s time to move beyond the complexities of DinD and adopt a solution designed for modern development workflows.


How to customize your Arduino Cloud IoT dashboards on the go

24 October 2024 at 20:26

The Arduino Cloud has long been a trusted platform for makers, engineers, and developers to manage their IoT projects with ease. From tracking sensor data to automating smart devices, the cloud enables seamless connectivity. Complementing this, the Arduino IoT Remote mobile app gives users the power to monitor and interact with their dashboards from anywhere. Now, we’re excited to announce a new feature that enhances your experience even further: the ability to change dashboard layouts directly through the mobile app!

Let’s dive into this exciting new update, along with some other minor features recently added to improve your experience.

Change your dashboard layouts from the IoT Remote App

Previously, modifying or rearranging the layout of your IoT dashboards was only possible through the browser on a PC. While this worked well for desktop users, it wasn’t convenient for those who needed to make changes on the go. With the latest update, you can now modify the “mobile view” of your dashboard directly through the Arduino IoT Remote app.

It’s important to note that Arduino Cloud dashboards have two distinct views: mobile and desktop. This new feature allows you to customize the layout specifically for your mobile devices, without affecting the desktop version. So whether you’re monitoring your projects on your phone or tablet, you can now optimize the layout for a mobile-friendly experience.

By customizing the mobile view, you gain more control over how your data is displayed and interacted with on your phone—perfect for users who need a quick overview and control of their IoT systems while away from their desktops.

How to use the new layout feature

Using this new feature is simple. Here’s how you can rearrange your dashboard layout in the IoT Remote mobile app:

  • Open the Arduino IoT Remote app and log into your account.
  • Navigate to the dashboard you want to modify.
  • On the Settings menu of the dashboard, tap the Rearrange button.
  • Select a widget by tapping on it, and move it to a new location on the dashboard or change its size.
  • Tap CANCEL to discard your changes or SAVE to keep them; your updated layout will then be visible across all your mobile devices.

What else is new on the IoT Remote app? 

In addition to the layout customization feature, we’ve introduced several minor updates over the past few months to make your app experience even smoother:

  • Sync dashboard cover image: Now, you can set a cover image for your dashboard, and it will automatically sync across all your devices. Whether for branding, personalization, or easy recognition, this feature ensures visual consistency on every device you use.
  • Disable trigger from Notification Detail: You can now enable or disable a trigger directly from the Notification Detail screen. This feature provides quick control over automated actions, helping you fine-tune your project with minimal hassle.
  • Clear notifications via the Activity Manage Panel: Keep your notifications organized by clearing them all from the new Activity Manage Panel. This helps you stay focused by removing unnecessary clutter from your feed.

Install the Arduino IoT Remote on your mobile phone

These new features make it easier than ever to stay on top of your IoT projects from anywhere with your mobile phone. Whether you’re monitoring, controlling, or tweaking your dashboard, the Arduino IoT Remote app is the perfect tool for the job, and it’s free!

Ready to experience these new updates? Download the Arduino IoT Remote app today from the App Store or Google Play and take full control of your IoT projects from the convenience of your mobile device.

The post How to customize your Arduino Cloud IoT dashboards on the go appeared first on Arduino Blog.
