
Finally, you can DIY your own espresso machine

17 September 2024 at 01:38

Caffeine lovers take their coffee very seriously, and that is most apparent when you dive into the world of espresso machines. To satisfy enthusiasts, an espresso machine needs to provide precise control over temperature, pressure, and flow to enable the perfect pull. But if you’re the type of true aficionado who isn’t content with any off-the-shelf consumer option, then you’ll be interested in the diyPresso One machine.

The diyPresso One kit costs €1,250.00 (about $1,390) and it isn’t meant to be a budget option. But it is more affordable than many of the high-end machines on the market. And, more importantly, its DIY and hackable nature means that you can tweak it and tinker with it until you have it exactly the way you like it. If you can’t make your perfect cup of espresso with the diyPresso One, then your perfect cup simply doesn’t exist.

That all starts with the open-source controller and software, which monitor the machine’s various sensors and oversee the brewing process. That controller is built around an Arduino MKR WiFi 1010 board, programmed with the Arduino IDE. The choice to use Arduino was critical, because it lets users easily modify the machine’s behavior through simple sketch changes.
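
The firmware isn’t reproduced here, but as a hedged illustration of the kind of sketch change this enables (every identifier below is hypothetical, not the actual diyPresso API), tweaking the brew temperature could be as small as editing a PID setpoint:

```cpp
// Hypothetical excerpt: NOT the actual diyPresso firmware, just a sketch of
// how an Arduino-based boiler controller is commonly structured and tuned.
#include <PID_v1.h>  // Brett Beauregard's widely used Arduino PID library

const int HEATER_PIN = 9;   // placeholder PWM pin driving the heater SSR

double setpointC = 94.0;    // edit this one line to change your brew temperature
double inputC, outputPwm;

// Conservative starting gains; a tinkerer could retune these for a faster ramp
PID boilerPid(&inputC, &outputPwm, &setpointC, 2.0, 0.5, 1.0, DIRECT);

double readBoilerTemperatureC() {
  // Stub standing in for the machine's real temperature sensor
  return analogRead(A0) * 0.25;
}

void setup() {
  pinMode(HEATER_PIN, OUTPUT);
  boilerPid.SetMode(AUTOMATIC);
}

void loop() {
  inputC = readBoilerTemperatureC();
  boilerPid.Compute();                      // result lands in outputPwm (0-255)
  analogWrite(HEATER_PIN, (int)outputPwm);
}
```

Because the controller is a stock MKR WiFi 1010, a change like this compiles and uploads from the Arduino IDE like any other sketch.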

The rest of the parts are of equally high quality. The enclosure is stainless steel formed into a beautiful minimalist design, with side windows so everyone can see the stunning copper boiler that was custom-made for the diyPresso One. And the machine takes advantage of E61 brew group components, so users can swap them based on preferences or requirements.

Kits are now available on the diyPresso site, but stock is limited.

The post Finally, you can DIY your own espresso machine appeared first on Arduino Blog.

Teledatics HaloMax Wi-Fi HaLow LGA or M.2 module supports over 1000 clients, has been tested at a 100+ km range (Crowdfunding)

17 September 2024 at 00:01
TD-WRLS development board

Teledatics has launched a crowdfunding campaign for the TD-HALOM HaloMax Wi-Fi HaLow module available in LGA and M.2 form factors for long-range and low-power connectivity, as well as HaLow development boards based on the module and various daughterboards for expansion. The wireless module, powered by Newracom’s NRC7394 SoC, is the product of a collaboration between Newracom and Teledatics. According to Zac Freeman, VP of Marketing & Sales at Newracom, the HaloMax module is “the highest output power Wi-Fi HaLow module available on the market. The Teledatics TD-HALOM module transmits at the highest allowable FCC power output and offers a Maximum Range HaLow solution.” Earlier this year, Teledatics broke the record for the longest distance for a Wi-Fi HaLow connection using the HaloMax wireless module and TE Connectivity Yagi antennas. Two Raspberry Pi 4 Model B units were able to communicate over a distance of 106km between Mount Greylock and Mount [...]

The post Teledatics HaloMax Wi-Fi HaLow LGA or M.2 module supports over 1000 clients, has been tested at a 100+ km range (Crowdfunding) appeared first on CNX Software - Embedded Systems News.

Arm: One Year After the IPO

17 September 2024 at 00:00

As Arm CEO Rene Haas said on the day of our initial public offering (IPO) one year ago, “an IPO is just a moment in time,” with plenty of opportunities ahead to build the future of computing on Arm. Since September 14, 2023, we have moved at pace to fulfil this mission.

Across the entire stack from the foundational technology to the software, Arm has had a profound impact in the past year as a public company. This includes Arm Compute Subsystems (CSS) for multiple markets, the growing influence and adoption of Armv9, new Arm-powered silicon and technologies, rising software capabilities and various announcements and initiatives that showcase our leading role in AI and the global technology ecosystem.

However, this is just the start of our journey as a public company, with plenty of exciting new developments and growth opportunities in the future. This is made possible by our high performance, power-efficient technologies for the age of AI.

Check out our “One Year After the IPO” report below, which provides more details about these momentous achievements during the past year.

The post Arm: One Year After the IPO appeared first on Arm Newsroom.

“Catch me if you can!” — How Alvik learns to dodge trouble with AI, featuring Roni Bandini

16 September 2024 at 23:20

Have you ever discovered a cool piece of tech buried in your drawer and thought, “This could make for an awesome project”? That’s exactly what happened to Roni Bandini, maker, writer, electronics artist – and Arduino Alvik Star! 

Bandini began coding at 10 years old, and has always found automatons and robots fascinating. About Alvik, he has said, “I really like this little robot—the elegance of its concept and design. As soon as I encountered it, I dove into several projects aimed at expanding its default capabilities.”

One of those projects in particular caught our attention, and we are excited to share it with you.

Getting the building blocks ready

After stumbling upon a tiny Seeed Studio XIAO ESP32S3 with an OV2640 camera sensor, Bandini saw its potential right away. It was the perfect tool to upgrade Arduino’s Alvik robot with computer vision. His mission? To teach Alvik to evade law enforcement officials – or at least a LEGO® police figure!

Since both the Alvik main board and the XIAO cam board use ESP32, Bandini used ESPNow – a fast communication protocol – to connect the camera with the robot. He then 3D-printed two support bars and attached them with a pair of M3 screws.
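
Bandini’s exact pairing code isn’t reproduced in the post, but a minimal Arduino-style sketch of the camera board’s sender side might look like this (the peer MAC address and the message are placeholders):

```cpp
// Minimal ESP-NOW sender sketch (Arduino-ESP32 core); the Alvik-side MAC
// address and the message format are placeholders, not Bandini's actual code.
#include <WiFi.h>
#include <esp_now.h>

uint8_t alvikMac[] = {0x24, 0x6F, 0x28, 0xAA, 0xBB, 0xCC};  // replace with your board's MAC

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);                 // ESP-NOW requires station mode
  if (esp_now_init() != ESP_OK) {
    Serial.println("ESP-NOW init failed");
    return;
  }
  esp_now_peer_info_t peer = {};       // zero-initialize all fields
  memcpy(peer.peer_addr, alvikMac, 6);
  peer.channel = 0;                    // use the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void loop() {
  const char msg[] = "police";         // e.g. a detection label from the camera
  esp_now_send(alvikMac, (const uint8_t *)msg, sizeof(msg));
  delay(500);
}
```

ESP-NOW exchanges raw Wi-Fi frames between ESP32 boards without an access point, which is what makes it such a quick way to wire the two boards together.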

Learning to react fast!

But before the epic police chase could begin, Alvik needed some training. Bandini took pictures of the LEGO® police figure and a ball and uploaded them to Edge Impulse. He then exported the trained model as an Arduino library using the EON compiler, before importing the zip file into the Arduino IDE.
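
The exported library exposes a `run_classifier()` call; the sketch below shows its rough shape (the header name and the "police" label are hypothetical, since both come from your own Edge Impulse project, and the OV2640 frame-to-buffer plumbing is omitted):

```cpp
// Rough shape of an Edge Impulse Arduino-library inference call; the header
// name is hypothetical (it is generated from your project's name).
#include <string.h>
#include <alvik_police_detect_inferencing.h>

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];  // filled from a camera frame

static int getFeatureData(size_t offset, size_t length, float *out) {
  memcpy(out, features + offset, length * sizeof(float));
  return 0;
}

// Returns true when the model sees the LEGO police figure with high confidence.
bool policeDetected() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &getFeatureData;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return false;

  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (strcmp(result.classification[ix].label, "police") == 0 &&
        result.classification[ix].value > 0.8f) {
      return true;  // time to tell Alvik to make a getaway
    }
  }
  return false;
}
```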

Once everything was set up and the MicroPython script created, Alvik was ready to roll. As it moved forward, the robot took pictures and processed them through a machine learning (ML) model. If it detected the police figure, Alvik would turn around and flash a red light. In other words, it was time to make a quick getaway!

For more details on this exciting project, including a link to a YouTube demo, visit Bandini’s blog post here.

Making it useful

However, the action doesn’t stop there. Although Alvik can drive autonomously, Bandini has also adapted a remote control from the 1980s to give himself even more control. How? By writing C++ routines that translate the remote’s coordinates into commands. These commands are then sent via ESPNow to the MAC address of the ESP32 in Alvik, where they trigger functions to move the robot.
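
Those routines aren’t published in the post, but the translation step he describes presumably reduces to something like this (the command encoding and thresholds below are illustrative, not Bandini’s actual values):

```cpp
// Illustrative mapping from the vintage remote's stick coordinates to drive
// commands; the real encoding in Bandini's project may differ.
#include <Arduino.h>
#include <esp_now.h>

enum Command : uint8_t { STOP = 0, FORWARD, BACKWARD, LEFT, RIGHT };

Command commandFromCoordinates(int x, int y) {
  const int DEADZONE = 20;                   // ignore small stick jitters
  if (abs(x) < DEADZONE && abs(y) < DEADZONE) return STOP;
  if (abs(y) >= abs(x)) return (y > 0) ? FORWARD : BACKWARD;
  return (x > 0) ? RIGHT : LEFT;
}

// Send one command byte to the Alvik's ESP32 over an already-paired ESP-NOW link
void sendCommand(const uint8_t *alvikMac, Command cmd) {
  uint8_t payload = static_cast<uint8_t>(cmd);
  esp_now_send(alvikMac, &payload, sizeof(payload));
}
```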

Inspired by an old-school advertisement for the Omnibot 2000 robot, Bandini has even taught Alvik to bring him a glass of whiskey! While we don’t recommend this for anyone under the legal drinking age, there’s no reason you can’t swap in your favorite refreshment instead!

New to robotics? Explore the Arduino Alvik page to learn more or head straight to the store to start your own adventure today!

The post “Catch me if you can!” — How Alvik learns to dodge trouble with AI, featuring Roni Bandini appeared first on Arduino Blog.


Idea Raised For Reducing The Size Of The AMDGPU Driver With Its Massive Header Files

16 September 2024 at 22:30
Following the weekend news that the AMDGPU kernel driver has become so large that it's causing the Plymouth boot splash screen on slower Linux systems to time out, longtime AMD Linux graphics driver engineer Marek Olšák has expressed a new idea for reducing some of the bloat from this AMD kernel graphics driver...

Secure by Design for AI: Building Resilient Systems from the Ground Up

16 September 2024 at 21:23

As artificial intelligence (AI) adoption has erupted, Secure by Design for AI has emerged as a critical paradigm. AI is being integrated into every aspect of our lives, from healthcare and finance to developer tools, autonomous vehicles, and smart cities, and its integration into critical infrastructure has made it necessary to move quickly to understand and combat threats.

Necessity of Secure by Design for AI

AI’s rapid integration into critical infrastructure has accelerated the need to understand and combat potential threats. Security measures must be embedded into AI products from the beginning and evolve as the model evolves. This proactive approach ensures that AI systems are resilient against emerging threats and can adapt to new challenges as they arise. In this article, we will explore two contrasting examples — the developer tools industry and the healthcare industry.


Complexities of threat modeling in AI

AI brings new challenges and conundrums to building an accurate threat model. Traditional systems eventually reach a state in which data passes simple edit and validation checks that can be programmed systematically; AI validation checks instead need to learn along with the system, focusing on data manipulation, corruption, and extraction.

  • Data poisoning: Data poisoning is a significant risk in AI, where the integrity of the data used by the system can be compromised. This can happen intentionally or unintentionally and can lead to severe consequences. For example, bias and discrimination in AI systems have already led to issues, such as the wrongful arrest of a man in Detroit due to a false facial recognition match. Such incidents highlight the importance of unbiased models and diverse data sets. Testing for bias and involving a diverse workforce in the development process are critical steps in mitigating these risks.

In healthcare, for example, bias may be simpler to detect. You can examine data fields based on areas such as gender, race, etc. 

In development tools, bias is less clear-cut. Bias could result from the underrepresentation of certain programming languages, such as Clojure. Bias may even result from code samples based on regional differences in coding preferences and teachings. In developer tools, you likely won’t have the information available to detect this bias. IP addresses may give you information about where a person is living currently, but not about where they grew up or learned to code. Therefore, detecting bias will be more difficult.

  • Data manipulation: Attackers can manipulate data sets with malicious intent, altering how AI systems behave. 
  • Privacy violations: Without proper data controls, personal or sensitive information could unintentionally be introduced into the system, potentially leading to privacy violations. Establishing strong data management practices to prevent such scenarios is crucial.
  • Evasion and abuse: Malicious actors may attempt to alter inputs to manipulate how an AI system responds, thereby compromising its integrity. There’s also the potential for AI systems to be abused in ways developers did not anticipate. For example, AI-driven impersonation scams have led to significant financial losses, such as the case where an employee transferred $26 million to scammers impersonating the company’s CFO.

These examples underscore the need for controls at various points in the AI data lifecycle to identify and mitigate “bad data” and ensure the security and reliability of AI systems.

Key areas for implementing Secure by Design in AI

To effectively secure AI systems, implementing controls in three major areas is essential (Figure 1):

Figure 1: Key areas for implementing security controls (data flows from users through data management, model tuning, and model maintenance).

1. Data management

The key to data management is understanding what data needs to be collected to train the model, identifying the sensitive data fields, and preventing the collection of unnecessary data. It also means ensuring you have the correct checks and balances in place to keep unneeded or bad data out of the system.

In healthcare, sensitive data fields are easy to identify. Doctors’ offices often collect national identifiers, such as driver’s licenses, passports, and Social Security numbers. They also collect date of birth, race, and many other sensitive data fields. If the tool is aimed at helping doctors identify potential conditions faster based on symptoms, you would need anonymized data but would still need to collect certain factors such as age and race. You would not need to collect national identifiers.

In developer tools, sensitive data may not be as clearly defined. For example, an environment variable may be used to pass secrets or confidential information, such as an image name, from the developer to the AI tool. There may be secrets in fields you would not suspect. Data management in this scenario involves blocking the collection of fields where sensitive data could exist and/or ensuring the tool has built-in mechanisms to scrub sensitive data so that it never reaches the model.

Data management should include the following:

  • Implementing checks for unexpected data: In healthcare, this process may involve “allow” lists for certain data fields to prevent collecting irrelevant or harmful information. In developer tools, it’s about ensuring the model isn’t trained on malicious code, such as unsanitized inputs that could introduce vulnerabilities (a minimal sketch of such a check follows this list).
  • Evaluating the legitimacy of users and their activities: In healthcare tools, this step could mean verifying that users are licensed professionals, while in developer tools, it might involve detecting and mitigating the impact of bot accounts or spam users.
  • Continuous data auditing: This process ensures that unexpected data is not collected and that the data checks are updated as needed. 
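
As a hedged illustration of the “allow” list idea, here is a minimal pre-ingestion filter; the field names and logging are hypothetical, not taken from any particular product:

```cpp
// Minimal "allow" list filter run before records enter a training pipeline.
// Field names are hypothetical healthcare examples.
#include <iostream>
#include <map>
#include <set>
#include <string>

const std::set<std::string> kAllowedFields = {
    "age", "race", "symptoms", "diagnosis_code"};  // note: no national identifiers

// Drop (and log) any field that is not explicitly allowed.
std::map<std::string, std::string> filterRecord(
    const std::map<std::string, std::string>& record) {
  std::map<std::string, std::string> clean;
  for (const auto& [field, value] : record) {
    if (kAllowedFields.count(field)) {
      clean.emplace(field, value);
    } else {
      std::cerr << "audit: dropped unexpected field '" << field << "'\n";
    }
  }
  return clean;
}

int main() {
  auto clean = filterRecord({{"age", "42"}, {"ssn", "123-45-6789"}});
  // clean now holds only {"age": "42"}; the SSN never reaches the model.
  std::cout << "kept " << clean.size() << " field(s)\n";
}
```

The point is structural: anything not explicitly allowed is dropped and logged before it can reach training data, which also gives the continuous-auditing step something concrete to review.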

2. Alerting and monitoring 

With AI, alerting and monitoring is imperative to ensuring the health of the data model. Controls must be both adaptive and configurable to detect anomalous and malicious activities. As AI systems grow and adapt, so too must the controls. Establish thresholds for data, automate adjustments where possible, and conduct manual reviews where necessary.

In a healthcare AI tool, you might set a threshold before new data is surfaced to ensure its accuracy. For example, if patients begin reporting a new symptom that is believed to be associated with diabetes, you may not report this to doctors until it is reported by a certain percentage (15%) of total patients. 
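
As a minimal sketch of that threshold rule (the 15% figure and the data shapes are illustrative only):

```cpp
// Toy monitor implementing the "don't surface a new symptom until 15% of
// patients report it" rule; all names and numbers are illustrative.
#include <iostream>
#include <string>
#include <unordered_map>

struct SymptomMonitor {
  double thresholdFraction = 0.15;  // tune per deployment
  long totalPatients = 0;
  std::unordered_map<std::string, long> reports;

  void recordPatient() { ++totalPatients; }
  void recordSymptom(const std::string& symptom) { ++reports[symptom]; }

  bool shouldSurface(const std::string& symptom) const {
    auto it = reports.find(symptom);
    if (it == reports.end() || totalPatients == 0) return false;
    return static_cast<double>(it->second) >=
           thresholdFraction * static_cast<double>(totalPatients);
  }
};

int main() {
  SymptomMonitor m;
  for (int i = 0; i < 100; ++i) m.recordPatient();
  for (int i = 0; i < 14; ++i) m.recordSymptom("blurred_vision");
  std::cout << m.shouldSurface("blurred_vision") << "\n";  // 0: below 15%
  m.recordSymptom("blurred_vision");
  std::cout << m.shouldSurface("blurred_vision") << "\n";  // 1: threshold reached
}
```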

In a developer tool, this might involve determining when new code should be incorporated into the model as a prompt for other users. The model would need to be able to log and analyze user queries and feedback, track unhandled or poorly handled requests, and detect new patterns in usage. Data should be analyzed for high frequencies of unhandled prompts, and alerts should be generated to ensure that additional data sets are reviewed and added to the model.

3. Model tuning and maintenance

Producers of AI tools should regularly review and adjust AI models to ensure they remain secure. This includes monitoring for unexpected data, adjusting algorithms as needed, and ensuring that sensitive data is scrubbed or redacted appropriately.

For healthcare, model tuning may be more intensive. Results may be compared to published medical studies to ensure that patient conditions are in line with other baselines established across the world. Audits should also be conducted to ensure that doctors with reported malpractice claims or doctors whose medical license has been revoked are scrubbed from the system to ensure that potentially compromised data sets are not influencing the model. 

In a developer tool, model tuning will look very different. You may look at hyperparameter optimization using techniques such as grid search, random search, and Bayesian search. You may study subsets of data; for example, you may perform regular reviews of the most recent data looking for new programming languages, frameworks, or coding practices. 
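
As a toy example of the simplest of those techniques, grid search just evaluates every combination of candidate hyperparameter values and keeps the best; the objective below is a stand-in for a real validation run:

```cpp
// Toy grid search over two hyperparameters; validate() is a stand-in for
// training the model and scoring it on a validation set (higher is better).
#include <iostream>
#include <vector>

double validate(double learningRate, int depth) {
  // Hypothetical smooth objective peaking at lr = 0.1, depth = 6
  double dLr = learningRate - 0.1;
  double dDepth = depth - 6;
  return -(dLr * dLr) - 0.01 * dDepth * dDepth;
}

int main() {
  std::vector<double> learningRates = {0.01, 0.05, 0.1, 0.2};
  std::vector<int> depths = {4, 6, 8, 10};

  double bestScore = -1e18, bestLr = 0;
  int bestDepth = 0;
  for (double lr : learningRates) {    // exhaustive: every (lr, depth) pair
    for (int d : depths) {
      double score = validate(lr, d);
      if (score > bestScore) {
        bestScore = score;
        bestLr = lr;
        bestDepth = d;
      }
    }
  }
  std::cout << "best lr=" << bestLr << " depth=" << bestDepth << "\n";
}
```

Random and Bayesian search swap the exhaustive double loop for sampled or model-guided trials, which scales better as the number of hyperparameters grows.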

Model tuning and maintenance should include the following:

  • Perform data audits to ensure data integrity and that unnecessary data is not inadvertently being collected. 
  • Review whether “allow” lists and “deny” lists need to be updated.
  • Regularly audit and monitor alerts for algorithms to determine if adjustments need to be made; consider the population of your user base and how the model is being trained when adjusting these parameters.
  • Ensure you have the controls in place to isolate data sets for removal if a source has become compromised; consider unique identifiers that allow you to identify a source without providing unnecessary sensitive data.
  • Regularly back up data models so you can return to a previous version without heavy loss of data if the source becomes compromised.

AI security begins with design

Security must be a foundational aspect of AI development, not an afterthought. By identifying data fields upfront, conducting thorough AI threat modeling, implementing robust data management controls, and continuously tuning and maintaining models, organizations can build AI systems that are secure by design. 

This approach protects against potential threats and ensures that AI systems remain reliable, trustworthy, and compliant with regulatory requirements as they evolve alongside their user base.


Update on Native Matrix interoperability with WhatsApp

16 September 2024 at 07:00

Hi all,

Back at FOSDEM in February we showed off how Matrix could be used for E2EE-preserving messaging interoperability as required by the Digital Markets Act (DMA) - and we announced that Element had been working with Meta on integrating with its DMA APIs in order to connect WhatsApp to Matrix. You can see the video here, and we also demoed interop working at the technical level to the European Commission a few days beforehand.

Subsequently WhatsApp launched its DMA portal on March 8th, and the proposed Reference Offer (i.e. the terms you have to accept as a Requesting Party in order to interoperate) was revealed. The Reference Offer for Facebook Messenger was launched on September 6th. At the time of the WhatsApp launch we flagged up some significant unresolved questions - the main points being that:

  1. WhatsApp would require their users to manually enable DMA in settings before they can receive any traffic from interconnecting service providers (e.g. Element) - meaning that WhatsApp users would not be reachable by default.

  2. WhatsApp would require the client IP of any interconnecting users, in order to apply ‘platform integrity’ anti-abuse / trust & safety controls.

  3. WhatsApp would not allow an interconnecting service to buffer messages serverside.

  4. WhatsApp would require each Matrix server provider to sign a separate agreement in order to interconnect - i.e. you can’t bridge other server’s users unless those servers have signed a contract with Meta.

Now, the good news is that we’ve subsequently been talking with WhatsApp to see if we could progress these points - and we’re happy to say that they’ve listened to us and we’ve made progress on the first 3 items:

  1. Meta recently shared an update on the messaging interoperability user experience and will allow all EU users to be reachable by interoperable services by default. It’ll also give people the option of how they want to manage their inbox as well as a range of features like read receipts, typing indicators and reactions.

  2. We’ve come up with a plan with WhatsApp to reduce the amount of Matrix user data we share with WhatsApp. However, WhatsApp’s interop solution doesn’t yet support multi-device conversations or shared conversation history like normal Matrix, which means that normal Matrix server-side synchronised history won’t work for these conversations.

  3. In terms of not allowing open federation: this looks unlikely to change, given Meta needs to know who is responsible for the servers that connect to them, and ensure they agree to the terms of use as required by the DMA.

During discussions, another point came up which we’d previously overlooked: section 7.5.1 of the current reference offer states: “Partner User Location. Any Partner Users that Partner Enlists or provides access to the Interoperable Messaging Services must be located and remain in the EEA”. In other words, interop would only be available to Matrix users physically in the EEA, which is obviously against the Matrix Foundation’s manifesto to provide secure communication to everyone. Moreover, to demonstrate compliance the Matrix side would have to geolocate the client’s IP.

It turns out that this limitation to EU users ended up being the biggest obstacle for productising the native Matrix<>WhatsApp bridge, as it is unclear whether it’s financially viable for anyone (e.g. Element) to launch such a bridge if it only works for Matrix users physically within the EEA (not to mention the costs and privacy issues of geolocating Matrix users).

Now, on one hand, deploying Matrix as a mature standards-based protocol for WhatsApp interop with native E2EE feels like a worthy goal: indeed, it effectively gives DMA interoperators a stable standardised API with pre-existing SDKs to implement against, rather than having to implement against proprietary and potentially shifting vendor APIs. So overall it moves the needle towards the end goal of Matrix’s mission.

On the other hand, this all may be moot if the return on investment of building DMA interop with WhatsApp via Matrix is too far away for any company in the Matrix ecosystem to be able to afford the investment, and if there isn’t an appetite for anyone to fund it. Funding constraints on both the Foundation and the ecosystem today are such that this work will only happen if explicitly sponsored by an organisation who is willing to commit to fund it.

So: if you are an organisation with users in the EU who would like them to interoperate with EU WhatsApp users via Matrix, and have the funds to sponsor development of building out an official production-grade Matrix<->WhatsApp bridge, please get in touch with me.

Alternatively, if the geographic constraints are a showstopper for you, please let us know.

We’re assuming that there may be smaller messaging providers, or domain-specific messaging services, who want to connect their end-users through to WhatsApp end-users, and may be happy to be constrained to EU geography. However, bridge developers need evidence and financial support to progress this. Meanwhile, if you are interested in the strategic importance of the Digital Markets Act, this is an opportunity to put your money where your mouth is.

Looking forward to hearing feedback!

thanks,

Matthew
