
SimpleX network: large groups and privacy-preserving content moderation


Published: Jan 14, 2025

Many people believe that it is impossible to moderate and prevent abuse in end-to-end encrypted conversations. This belief is incorrect – there is a way to prevent abuse and the distribution of illegal content without any compromise to users' privacy or to the security of end-to-end encryption.

Anti-privacy lobbyists use this incorrect belief to advocate for the scanning of private communications, which not only would fail to prevent abuse, but would make it worse – because our private data would become available to criminals.

So it is very important to understand how privacy-preserving content moderation works, and to educate the politicians you voted for, and who are currently in office, that we do not need to compromise privacy or security in any way to substantially reduce online crime and abuse.

This post answers these questions:

Large groups on SimpleX network

When we designed groups, we expected them to be used primarily as small groups of people who know each other, with no more than 100 or so members.

But we learnt that people want to participate in public discussions while remaining anonymous – it protects their freedom of speech. As an experiment, we are curating a small directory of groups that currently has almost 400 public groups, with the largest ones having thousands of members. You can connect to this experimental directory via SimpleX chat address.

Can large groups scale?

Currently, groups are fully decentralized: every time you send a message to a group, your client has to send it to each group member individually, which is very costly in traffic and battery for large groups.

We are working on a new group architecture in which dedicated group members that run their clients on a server, or on a desktop with a good internet connection, re-broadcast messages to all other members – these members are "super-peers".

We will offer pre-configured super-peers via the app, and you will be able to use your own super-peers – for example, if you host a large private group, or to have better control and ownership of the group: if we decide to remove our super-peer from the group, it will continue to function thanks to your super-peer re-broadcasting messages.

This new design improves both the privacy of group participation and the censorship resistance of groups, and it also makes abusing a group harder.
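To make the difference in delivery cost concrete, here is a minimal sketch – all names are illustrative, not the actual SimpleX client code – contrasting the current per-member fan-out with super-peer re-broadcast:

```python
class Member:
    """Illustrative stand-in for a group member's client."""
    def __init__(self, name):
        self.name = name

    def deliver(self, message):
        print(f"{self.name} received: {message}")


def send_direct(sender, members, message):
    # Current design: the sending client uploads the message once per
    # member, so one message costs len(members) - 1 sends for the sender.
    for member in members:
        if member is not sender:
            member.deliver(message)


class SuperPeer:
    """A well-connected member that re-broadcasts on behalf of senders."""
    def __init__(self, members):
        self.members = members

    def rebroadcast(self, sender, message):
        # The super-peer bears the fan-out cost instead of the sender.
        for member in self.members:
            if member is not sender:
                member.deliver(message)


def send_via_super_peer(sender, super_peer, message):
    # Planned design: the sender uploads once; the super-peer fans out.
    super_peer.rebroadcast(sender, message)


members = [Member(f"m{i}") for i in range(4)]
send_direct(members[0], members, "hello")                  # 3 sends by the sender
send_via_super_peer(members[0], SuperPeer(members), "hi")  # 1 send by the sender
```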

Preventing abuse with anonymous participation

All public discussions are abused by spammers and trolls, whether anonymous or not. We have been evolving group owners' ability to moderate conversations: they can remove inappropriate and off-topic messages, block members who send spam, and prevent new members from sending messages until they are approved.

As support for large groups improves, we expect attempts at abuse to increase too, unless we add better moderation capabilities in advance.

v6.3 will add the ability for group members to send reports to group owners and administrators – the beta version we just released adds the ability to manage these reports, so group admins won't miss them when members start sending them.

Other features that we plan to add this year to improve both usability and safety of the groups:

  • message comments – some groups may choose to allow only comments, with the ability to send messages restricted to group owners or admins.
  • ability to limit the maximum number of messages members can send per day (see the sketch after this list).
  • ability to pre-moderate messages before they can be seen by all members.
  • "knocking" – approving new members before they can join the group.
  • sub-groups – smaller conversations with the same members.
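As an illustration of the second item, a per-member daily limit can be enforced with a simple counter that resets at day rollover. This is a minimal sketch with assumed names, not the planned implementation – it could run in whichever client relays group messages, e.g. a super-peer:

```python
from collections import defaultdict
from datetime import date


class DailyMessageLimit:
    """Sketch: cap the number of messages each member may send per day."""

    def __init__(self, max_per_day: int):
        self.max_per_day = max_per_day
        self.day = date.today()
        self.counts = defaultdict(int)

    def allow(self, member_id: str) -> bool:
        today = date.today()
        if today != self.day:  # day rollover: reset all counters
            self.day = today
            self.counts.clear()
        if self.counts[member_id] >= self.max_per_day:
            return False       # over the limit: drop or defer the message
        self.counts[member_id] += 1
        return True


limit = DailyMessageLimit(max_per_day=50)
print(limit.allow("member-1"))  # True until the member hits the cap
```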

Preventing server abuse without compromising e2e encryption

Some categories of content may be prohibited by server operators. An extreme case would be child sexual abuse materials (CSAM).

Many people believe that when a conversation is end-to-end encrypted, this problem is unsolvable. This incorrect belief is used by unscrupulous lobbyists and politicians who attempt to mandate various types of content scanning under the guise of preventing CSAM distribution.

We wrote before about how such measures not only would fail to solve the problem, but would make it worse. If our private photos become available to service providers, they will eventually become available to criminals too, and will be used to abuse and exploit the users and their children.

The absolute majority of CSAM distributed online is publicly accessible. Many large tech companies failed to act on it and to remove CSAM from their services before it became an epidemic. We see it as a very important objective to eliminate the possibility of distributing CSAM via publicly accessible groups, even if it hurts network growth.

When we receive a user complaint about CSAM shared in any group, we remove the files and, in some cases, the links to join the group from our servers. Our approach to moderation preserves user privacy and security of end-to-end encryption.

How does it work? Let's go over the process step by step.

  1. A user discovers a link to join a group that distributes CSAM and sends a complaint to our support email address, or via the app to the SimpleX Chat team contact.

  2. Once we receive the link to join the group, we instruct our automated bot to join it. If the complaint is confirmed as valid, the bot sends information about the files shared in this group to the servers that store them.

  3. Once the servers receive the file identifiers from the bot, they block the files.

File servers cannot look inside end-to-end encrypted files, and they don't even know file sizes – files are encrypted and sent in chunks across multiple servers. But if the bot that joined the group provides the address of a particular file, the server can delete that file. This does not allow the servers to access any other files.
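A minimal sketch of this server-side logic – with hypothetical names, not the actual SimpleX server code – shows why blocking needs no content access: the server acts only on the identifier the bot reported, and the stored chunks remain opaque to it:

```python
class FileServer:
    """Sketch: the server stores opaque encrypted chunks keyed by file
    identifier and cannot decrypt any of them."""

    def __init__(self):
        self.chunks = {}   # file_id -> encrypted chunk (opaque to the server)
        self.blocked = {}  # file_id -> reason shown to clients

    def upload(self, file_id: str, encrypted_chunk: bytes):
        self.chunks[file_id] = encrypted_chunk

    def block(self, file_id: str, reason: str):
        # Called with an identifier reported by the bot that joined the
        # group: only this file is removed; nothing is decrypted or scanned.
        self.blocked[file_id] = reason
        self.chunks.pop(file_id, None)

    def download(self, file_id: str) -> bytes:
        if file_id in self.blocked:
            raise PermissionError(self.blocked[file_id])
        return self.chunks[file_id]
```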

In this way, moderation is possible without any content scanning, and it preserves the privacy and security of end-to-end encryption.

Privacy-preserving content moderation

Right now, when we act on user complaints, we delete the uploaded files or the links to join the groups from our servers, and to the users it looks as if something simply stopped working.

We are currently rolling out a change to the servers that marks these files and group links as blocked, so that users who try to download the files or to join blocked groups can see that they were blocked for violating the server operator's conditions of use. This will improve the transparency of moderation and the reliability of the network.

Later this year we plan to go further than that – client-side restrictions on the clients that violated the conditions of use by uploading prohibited content.

How would it work? When the client discovers that an uploaded file was blocked, it may, optionally, depending on the information in the blocking record, disable further uploads from the app to the servers of the operator that blocked the file. Likewise, when the client that tried to receive the file sees that the file is blocked, it may refuse to receive further files from the same group member via the same servers.
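This is still an evolving design, but a rough sketch – with hypothetical record fields and names, not a final protocol – could look like this on the client side:

```python
from dataclasses import dataclass, field


@dataclass
class BlockingRecord:
    """Hypothetical shape of the information a server returns for a
    blocked file; the field names are illustrative."""
    file_id: str
    reason: str
    restrict_sender: bool  # operator asks the uploading client to stop uploads


@dataclass
class Client:
    disabled_operators: set = field(default_factory=set)  # no more uploads there
    muted_members: set = field(default_factory=set)       # (member, operator) pairs

    def on_upload_blocked(self, operator: str, record: BlockingRecord):
        # Sender side: optionally disable further uploads to this operator.
        if record.restrict_sender:
            self.disabled_operators.add(operator)

    def on_download_blocked(self, operator: str, member_id: str):
        # Recipient side: refuse further files from the same group member
        # via the same servers.
        self.muted_members.add((member_id, operator))

    def may_upload(self, operator: str) -> bool:
        return operator not in self.disabled_operators
```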

In this way, the servers can restrict the future actions of the users who violate the conditions of use, while preserving privacy and security of the users and content – even of those users who violated the conditions.

We discussed this plan with the users, and we really appreciate their feedback. The current plan is quite different from our initial ideas – the users had a real impact. They asked the questions below.

Can't users modify their client code to circumvent these restrictions?

Yes, they can, but for this to work both sender and recipient would have to modify their clients. It's technically complex, so most users won't do it, and it is also hard to coordinate between users who don't know and don't trust each other.

So these measures would be effective, even though they can in theory be circumvented, as any restrictions can be.

Other services, which identify their users, reduce abuse by blocking user accounts. That is even easier to circumvent than changing client code, and yet these measures still reduce abuse.

Can't users use other servers?

Yes, they can. But in the same way that a web browser is not responsible for the content you can access, the SimpleX app should not restrict your communications with other servers based on a blocking action from just one server.

That approach allows different server operators to have different content policies, depending on their jurisdiction and other factors. It also prevents the possibility of abuse by server operators.

Wouldn't these measures be abused?

With the proposed changes, server operators will only be able to prevent uploads to their own servers, which prevents any impact on other communications.

In the future we plan to increase the resilience to any server malfunction or abuse by using multiple different servers with each contact.

If servers were to apply any upload restrictions unreasonably, the users would simply stop using them.

At the same time, server operators need to have technical means to protect their servers from users' abuse, and the proposed client-side restrictions achieve it.

What additional measures are considered?

We published other technical ideas that could be used to prevent distribution of illegal content in this document. None of these measures compromise users' privacy or end-to-end encryption, and they can (and should) only be applied to publicly accessible content that other users complained about.

We technically cannot scan all content, and we won't. We actively campaign against any content-scanning proposals, because they violate our right to privacy, and they would result in a huge increase of online crime.

The belief that it is impossible to moderate conversations when they are e2e encrypted is incorrect. It is possible when users themselves share conversation contents with server operators, in which case the operators can identify and, if necessary, remove this content. It is also possible to moderate conversations that users made publicly accessible.

Send us comments and questions

Let us know any comments and feedback on the proposed measures. This is still an evolving design, and it won't be implemented until later this year.

Your comments will help to find the right balance between users' and server operators' requirements.

Privacy and security improvements we plan this year

To increase privacy and security we plan to add this year:

  • quantum-resistant e2e encryption in small groups.
  • receiving proxy for files, to protect users' IP addresses and other transport metadata from file senders' servers.

We see privacy and security as necessary for online safety and for the prevention of abuse. If you don't already use SimpleX network, try it now, and let us know what you need to make it better.

Oppose digital IDs – they break the law and lead to mass scale surveillance


Published: Dec 18, 2024

Starting next year, the UK government plans to introduce digital ID cards for young people to prove their age when visiting pubs. While officials claim this system will remain optional, it's part of a broader government initiative to move more state functions online so that people can prove their identity for everything from paying taxes to opening a bank account using the government-backed app. This will be a step toward a society where every pub visit, purchase, and social interaction becomes a permanent digital record linked to a government-issued ID – a step to normalizing mass surveillance at scale.

Digital IDs are promoted as a way to fight law violations, and some politicians support them as a way to tackle illegal immigration. But digital IDs themselves break the law. Article 8 of the European Convention on Human Rights says: “Everyone has the right to respect for his private and family life”. It means that not only is our right to privacy enshrined in the law, but the right to have our privacy respected is also part of the law. Asking people to present a digital ID when visiting a pub, even if it is optional, disrespects our privacy, and is therefore illegal.

Digital IDs would not stop people who decide to break laws. Pubs can already refuse to serve alcohol to young people and require an ID in case the age is in doubt. And illegal immigration can also be reduced without any digital IDs. But introducing digital IDs and collecting our actions, names and locations in one government-controlled database will make this information easier for criminals to access and to exploit for financial and identity crimes.

What starts as a “convenient option” is likely to end as a mandatory requirement. The digital ID systems being pushed by governments and corporations aren't about making our lives easier. They're about tracking, monitoring, and controlling every move we make. And we can see where this road leads in China, where IDs and social scores created for convenience are used to prevent access to basic services and bank accounts as punishment for legal social media posts that the government disagrees with. What started as a convenience is now trialled to track the duration of public toilet visits.

The United Kingdom is a democratic country, and the law protects our right to privacy and freedom of speech. If we accept digital IDs as something required for simple things, like buying a drink, it leaves the door wide open to a range of privacy violations.

We call on everyone to oppose the digital ID systems. Do not use them. Do not install these systems in your pub for as long as they are not legally required. Support local businesses that don’t use them. Protect your privacy and freedom by using software that respects them. Demand that your privacy is respected, as required by law.

To make your voice heard, email your MP expressing your rejection of digital IDs as a violation of the European Convention on Human Rights in three simple steps:

  1. Copy the text below or click this link to copy it into your email app:

Dear …,

I object to the introduction of digital IDs in pubs or any other public places for these reasons:

  1. It violates the European Convention on Human Rights, Article 8: “Everyone has the right to respect for his private and family life” (https://fra.europa.eu/en/law-reference/european-convention-human-rights-article-8-0).
    Asking people to present digital IDs when proof of identity is not legally required, even if it is optional, disrespects our privacy, and is therefore illegal.
  2. It will not be an effective measure for reducing violations of the law. People who want to circumvent it will find a way.
  3. It will increase crime, because combining a large amount of private information in a single system increases the risk of this information becoming available to criminals, who will exploit it for financial crimes and identity theft.

I kindly ask you to oppose this plan, both publicly and during any discussions in the UK Parliament.

Sincerely yours,
…

  2. Find the email address of your MP and copy it into the email.

  3. Fill in the blanks, edit the text if needed, and send it!

Public opposition has changed government decisions in many cases.

It is your opportunity to tell the government which country you want to live in – please use it!

SimpleX network: preset servers operated by Flux, business chats and more with v6.2 of the apps


Published: Dec 10, 2024


What's new in v6.2

SimpleX Chat and Flux improve metadata privacy in SimpleX network

SimpleX Chat and Flux (Influx Technology Limited) made an agreement to include messaging and file servers operated by Flux into the app.

SimpleX network is decentralized by design, but users of the previous app versions had to find other servers online or host servers themselves in order to use any servers other than those operated by us.

Now all users can choose between the servers of two companies, use both of them, and continue using any other servers they host themselves or find online.

To use Flux servers, enable them when the app offers it, or at any point later via the Network & servers settings in the app.

When both SimpleX Chat and Flux servers are enabled, the app will use servers of both operators in each connection to receive messages and for private message routing, increasing metadata privacy for all users.
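Conceptually, the client can pick the servers used in each connection from different operators, so that no single operator handles both receiving and forwarding. Here is a rough sketch under assumed names – the hostnames are placeholders, not the real preset servers:

```python
import random

OPERATORS = {
    "simplex-chat": ["smp1.simplex.example", "smp2.simplex.example"],
    "flux": ["smp1.flux.example", "smp2.flux.example"],
}


def pick_connection_servers() -> dict:
    # Choose the receiving server and the private-routing forwarding server
    # from two different operators, so neither operator sees both ends.
    receiving_op, forwarding_op = random.sample(list(OPERATORS), 2)
    return {
        "receive": random.choice(OPERATORS[receiving_op]),
        "forward": random.choice(OPERATORS[forwarding_op]),
    }


print(pick_connection_servers())
```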

Read more about why SimpleX network benefits from multiple operators in our previous post.

You can also read about our plan for how network operators will make money while continuing to protect users' privacy – based on the network design rather than on trust in operators, and without any cryptocurrency emission.

Business chats

We use SimpleX Chat to provide support to SimpleX Chat users, and we also see some other companies offering SimpleX Chat as a support channel.

One of the problems with providing support via general-purpose messengers is that customers don't see who they are talking to, as they can in all dedicated support systems.

This is not possible in most messengers, including SimpleX Chat prior to v6.2: every new customer joins a one-to-one conversation, where the customer sees only that they are talking to a company, without knowing who exactly replies, or whether it's a bot or a human.

The new business chats in SimpleX Chat solve this problem: to use them, enable the toggle under the contact address in your chat profile. It is safe to do, and you can always toggle it off if needed - the address itself does not change.

Once you do, the app will create a new business chat with each connecting customer, where multiple people can participate. A business chat is a hybrid of a one-to-one and a group conversation. In the list of chats you will see customer names and avatars, and the customer will see your business name and avatar, as in one-to-one conversations. But inside, it works as a group, allowing the customer to see who sent each message, and allowing you to add other participants from the business side, for delegation and escalation of customer questions.
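One way to picture this hybrid shape – with purely illustrative types, not the app's actual data model:

```python
from dataclasses import dataclass, field


@dataclass
class BusinessChat:
    """Sketch: displayed like a one-to-one conversation on both sides,
    but structured as a group inside."""
    business_name: str  # what the customer sees in their chat list
    customer_name: str  # what the business sees in its chat list
    members: list = field(default_factory=list)  # customer + business staff

    def add_staff(self, name: str):
        # The business side can add participants for delegation and
        # escalation; inside the chat every message shows its sender.
        self.members.append(name)


chat = BusinessChat("Acme Support", "alice", ["alice", "support-bot"])
chat.add_staff("bob")  # escalate from the bot to a human
```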

This can be done manually, or you can automate these conversations using bots that answer some customer questions and then add a human to the conversation when appropriate or when the customer requests it. We will be offering more bot-related features in the app and a simpler way to program bots very soon - watch our announcements.

Better user experience

Chat navigation

This has been a long-standing complaint from the users: why does the app open conversations at the last message, and not at the first unread message?

Android and desktop apps now open the chat on the first unread message. It will soon be done in the iOS app too.

Also, the app can now scroll to the replied-to message anywhere in the conversation (when you tap the reply), even if it was sent a very long time ago.

See who reacted!

This is a small but important change - you can now see who reacted to your messages!

Improving notifications in iOS app

iOS notifications in a decentralized network are a complex problem. We have supported iOS notifications from the early versions of the app, focussing on preserving privacy as much as possible. But the reliability of notifications was not good enough.

We solved several problems of notification delivery in this release:

  • messaging servers no longer lose notifications while notification servers are restarted.
  • Apple can drop notifications while your device is offline - about 15-20% of notifications are dropped because of it. The servers and the new version of the app work around this problem by delivering the last several notifications, to show notifications correctly even when Apple drops some of them (see the sketch after this list).
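A minimal sketch of this workaround – with illustrative names and fields, not the actual implementation: each push carries the metadata of the last few notifications, and the client deduplicates them, so a notification dropped by Apple is recovered from the next push:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NotificationMeta:
    """Sketch: the payload carries only e2e-encrypted metadata, never
    message contents; the fields here are illustrative."""
    queue_id: str
    msg_ts: float


class NotificationHandler:
    def __init__(self):
        self.seen = set()  # (queue_id, msg_ts) pairs already shown

    def process_push(self, last_notifications: list):
        # The server re-sends the last few notifications with every push;
        # deduplicate so each message is shown exactly once.
        for n in last_notifications:
            key = (n.queue_id, n.msg_ts)
            if key not in self.seen:
                self.seen.add(key)
                self.show(n)

    def show(self, n: NotificationMeta):
        print(f"notify: new message on queue {n.queue_id}")
```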

With these changes, iOS notifications remain as private and secure as before. The notifications only contain metadata, without the actual messages, and even the metadata is end-to-end encrypted between SimpleX notification servers and the client device, inaccessible to Apple push notification servers.

There are two remaining problems we will solve soon:

  • iOS only allows the app to use 25 MB of device memory when processing notifications in the background. This limit hasn't changed for many years, and it is challenging for a decentralized design. If the app uses more memory, iOS kills it and the notification is not shown – approximately 10% of notifications can be lost because of that.
  • for notifications to work, the app communicates with the notification server. If the user puts the app into the background too quickly, the app may fail to enable notifications for new contacts. We plan to change the clients and servers to delegate this task to the messaging servers, to remove the need for this additional communication entirely, without any impact on privacy and security. This will happen early next year.

SimpleX network

Some links to answer the most common questions:

How can SimpleX deliver messages without user identifiers.

What are the risks of having identifiers assigned to users.

Technical details and limitations.

Frequently asked questions.

Please also see our website.

Please support us with your donations

Huge thank you to everybody who donated to SimpleX Chat!

Prioritizing users' privacy and security, and also raising investment, would have been impossible without your support and donations.

Also, funding the work to transition the protocols to a non-profit governance model would not have been possible without the donations we received from the users.

Our pledge to our users is that SimpleX protocols are and will remain open, and in the public domain, so anybody can build future implementations of the clients and the servers. We are building the SimpleX platform based on the same principles as email and the web, but much more private and secure.

Your donations help us raise more funds – any amount, even the price of a cup of coffee, makes a big difference for us.

See this section for the ways to donate.

Thank you,

Evgeny

SimpleX Chat founder

Wired’s Attack on Privacy


Published: Oct 16, 2024

The Wired article by David Gilbert focusing on neo-Nazis moving to SimpleX Chat following Telegram's changes in privacy policy is biased and misleading. By cherry-picking information from the report by the Institute for Strategic Dialogue (ISD), Wired fails to mention that the SimpleX network design prioritizes privacy in order to protect human rights defenders, journalists, and everyday users who value their privacy – many people feel safer using SimpleX than non-private apps, being protected from strangers contacting them.

Yes, the privacy-focused SimpleX network offers encryption and anonymity – that’s the point. To paint this as problematic solely because of who may use such apps misses the broader, critical context.

SimpleX’s true strength lies in the protection of users' metadata, which can reveal sensitive information about who is communicating, when, and how often. SimpleX protocols are designed to minimize metadata collection. For countless people, especially vulnerable groups, these features can be life-saving. The Wired article ignores these essential protections, and overlooks the positive aspects of having such a unique design, as noted in the publication which they link to:

“SimpleX also has a significant advantage when it comes to protecting metadata – the information that can reveal who you’re talking to, when, and how often. SimpleX is designed with privacy at its core, minimizing the amount of metadata collected and ensuring that any temporary data necessary for functionality is not retained or linked to identifiable users.”

Both publications referenced by Wired also explore how SimpleX design actually hinders extremist groups from spreading propaganda or building large networks. SimpleX design restricts message visibility and file retention, making it far from ideal for those looking to coordinate large networks. Yet these important qualities are ignored by Wired in favor of fear-mongering about encryption – an argument we've seen before when apps like Signal faced similar treatment. Ironically, Wired just a month earlier encouraged its readers to adopt encrypted messaging apps, making its current stance even more contradictory.

The vilification of apps that offer critically important privacy, anonymity, and encryption must stop. That a small share of users may abuse these tools doesn’t justify broad criticism. Additionally, the lobbying for client-side scanning, which Wired’s article seems to indirectly endorse, is not only dangerous but goes against fundamental principles of free speech and personal security. We strongly oppose the use of private communications for any kind of monitoring, including AI training, which would undermine the very trust encryption is designed to build.

It’s alarming to see Wired not only criticize SimpleX for its strong privacy protections but also subtly blame the European Court of Human Rights for upholding basic human rights by rejecting laws that would force encrypted apps to scan and hand over private messages before encryption. Wired writes:

…European Court of Human Rights decision in February of this year ruled that forcing encrypted messaging apps to provide a backdoor to law enforcement was illegal. This decision undermined the EU’s controversial proposal that would potentially force encrypted messaging apps to scan all user content for identifiers of child sexual abuse material.

This commentary is both inappropriate and misguided – it plays into the hands of anti-privacy lobbyists attempting to criminalize access to private communications. Framing privacy and anonymity as tools for criminals ignores the reality that these protections are essential for millions of legitimate users, from activists to journalists to ordinary citizens. Client-side scanning can't have any meaningful effect on reducing CSAM distribution, instead resulting in an increase of crime and abuse when criminals get access to this data.

We need to correct this narrative. The real danger lies not in protecting communication, but in failing to do so. Privacy apps like SimpleX are crucial, not just for those resisting mass surveillance, but for everyone who values the right to communicate without fear of their conversations being monitored or misused. This is a right we must defend and incorporate into law, as we wrote before.

Wired could have stood on the right side of this battle and helped normalize the demand for privacy, genuinely protecting people from criminals and from the exploitation of increasingly AI-enabled mass surveillance. Instead, they chose the path of spreading fear and uncertainty about encrypted messaging and the tools that enable privacy and anonymity.

Spreading misinformation about privacy and security undermines trust in the tools that protect us, making it easier to justify more invasive surveillance measures that chip away at our civil liberties.

Wired did not respond to our request for comment.
