User:Robin Patterson/Mastodon privacy and security (CC-BY tips from EFF)

From Join the Fediverse
Revision as of 06:14, 2 December 2022 by Robin Patterson (talk | contribs) (Copying using CC-BY license)

From https://www.eff.org/deeplinks/2022/11/mastodon-private-and-secure-lets-take-look - license CC-BY (and there is a Spanish version)

This post is part of a series on Mastodon and the fediverse. EFF also has a post on what the fediverse is, and why the fediverse will be great—if we don't screw it up, and more are on the way. You can follow EFF on Mastodon here, but don't expect a follow back.

Is Mastodon private and secure? Let’s take a look

By Bill Budington, November 16, 2022

With so many users migrating to Mastodon as their micro-blogging service of choice, a lot of questions are being raised about the privacy and security of the platform. Though in no way comprehensive, we have a few thoughts we’d like to share on the topic.

Essentially, Mastodon is about publishing your voice to your followers and allowing others to discover you and your posts. For basic security, instances will employ transport-layer encryption, keeping your connection to the server you’ve chosen private. This will keep your communications safe from local eavesdroppers using your same WiFi connection, but it does not protect your communications, including your direct messages, from the server or instance you’ve chosen—or, if you’re messaging someone from a different instance, the server they’ve chosen. This includes the moderators and administrators of those instances, as well. Just like Twitter or Instagram, your posts and direct messages are accessible to those running the services. But unlike Twitter or Instagram, you can choose which server or instance you trust with your communications. Also unlike the centralized social networks, the Mastodon software is relatively open about this fact.

Some have suggested that direct messages on Mastodon should be treated more like a courtesy to other users instead of a private message: a way to filter out content from their feeds that isn’t relevant to them, rather than a private conversation. But users of the feature may not understand that intent. We feel that the intended usage of the feature will not determine people’s expectation of privacy while using it. Many may expect those direct communications to have a greater degree of privacy.

Mastodon could implement direct message end-to-end encryption in the future for its clients. Engineering such a feature would not be trivial, but it would give users a good mechanism to protect their messages to one another. We hope to see this feature implemented for all users, but even a single forward-looking instance could choose to implement it for its users. In the meantime, if you need truly secure end-to-end direct messaging, we suggest using another service such as Signal or Keybase.

Despite its pitfalls, until recently Twitter had long had a strong security team. Mastodon is a largely volunteer-built platform undergoing growing pains, and some prominently used forks have had (and fixed) embarrassing vulnerabilities of late. Though it’s been around since 2016, the new influx of users and the new importance the platform has taken on will be a trial by fire. We expect more bugs will be shaken from the tapestry before too long.

Two-factor authentication with an app or security key is available on Mastodon instances, giving users an extra security check when logging on. The software also offers robust privacy controls, allowing users to set up automatic deletion of old posts, set personalized keyword filters, approve followers, and hide their social graph (the list of their followers and those they follow). Unfortunately, there is no analogue to making your account “private.” You can make a post viewable only by your followers at the time of posting, but you cannot change the visibility of your previous posts (either individually or in bulk).
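As a sketch of how that visibility control looks in practice, the snippet below builds a followers-only post for Mastodon's documented statuses endpoint (POST /api/v1/statuses). The instance URL and access token are placeholders you would substitute; the visibility values follow Mastodon's published API documentation, where "private" means followers-only.

```python
import json
import urllib.request

# Placeholder instance URL and token -- substitute your own.
INSTANCE = "https://mastodon.example"
TOKEN = "YOUR_ACCESS_TOKEN"

def build_status_payload(text, visibility="public"):
    """Build the JSON payload for POST /api/v1/statuses.

    Mastodon's documented visibility levels are 'public', 'unlisted',
    'private' (followers-only), and 'direct'. Note that visibility is
    fixed at posting time and cannot be changed retroactively.
    """
    allowed = {"public", "unlisted", "private", "direct"}
    if visibility not in allowed:
        raise ValueError(f"visibility must be one of {sorted(allowed)}")
    return {"status": text, "visibility": visibility}

def post_status(payload):
    """Send the status to the instance (requires a valid token)."""
    req = urllib.request.Request(
        f"{INSTANCE}/api/v1/statuses",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)

# A followers-only post (the "private" visibility level):
payload = build_status_payload("Visible to followers only", "private")
```

The payload-building step is separated from the network call so the visibility check happens before anything leaves your machine.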

Another aspect of “fediverse” (i.e., the whole infrastructure of federated servers that communicate with each other to provide a service) micro-blogging that differs from Twitter and affects the privacy of users is that there is no way to do a text search of all posts. This cuts down on harassment, because abusive accounts will have a harder time discovering posts and accounts using key words typically used by the population they’re targeting (a technique frequently used by trolls and harassers). In fact, the lack of text search is due to the federated nature of Mastodon: to implement this feature would mean every instance would have to be aware of every post made on every other instance. As it turns out, this is neither practical nor desirable. Instead, users can use hashtags to make their posts propagate to the rest of the fediverse and show up in searches.

Instances of Mastodon are also able to “defederate” from other instances if they find the content coming from the other instance to be abusive or distasteful, or in violation of their own policies on content. Say server A finds the users on server B to consistently be abusive, and chooses to defederate with it. “Defederating” will make all content from server B unavailable on server A, and users of server B cannot comment on posts of, or direct message, users of server A. Since users are encouraged to join Mastodon instances which align with their interests and attitudes on content and moderation, defederating gives instances and communities a powerful option to protect their users, with the goal of creating a less adversarial and more convivial experience.
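For illustration, defederation corresponds to a domain block applied by an instance administrator. This sketch builds the payload an admin client might send to Mastodon's admin API (POST /api/v1/admin/domain_blocks); the endpoint and severity values follow Mastodon's published admin API documentation, but the domain names here are hypothetical.

```python
def build_domain_block(domain, severity="suspend"):
    """Payload for POST /api/v1/admin/domain_blocks (admin-scoped token).

    Per Mastodon's admin API docs, 'silence' hides the domain's posts
    from public timelines, while 'suspend' removes its content and cuts
    off interaction entirely -- i.e., full defederation. 'noop' records
    the block without enforcing anything.
    """
    allowed = {"noop", "silence", "suspend"}
    if severity not in allowed:
        raise ValueError(f"severity must be one of {sorted(allowed)}")
    return {"domain": domain, "severity": severity}

# Server A fully defederating from a hypothetical abusive server B:
block = build_domain_block("serverb.example", severity="suspend")
```

A "silence" block is the softer option when an admin wants to hide a domain's content from public timelines without severing existing follow relationships.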

Centralized platforms are able to quickly determine the origin of fake or harassing accounts and block them across the entire platform. For Mastodon, it will take a lot more coordination to prevent abusive users that are suspended on one instance from just creating a new account on another federated instance. This level of coordination is not impossible, but it takes effort to establish.

Individuals also have powerful tools to control their own user experience. Just like on the centralized platforms, Mastodon users can mute, block, or report other users. Muting and blocking work much as you’d expect: each is a list associated with your account; muting stops a user’s content from appearing in your feed, and blocking additionally prevents them from reaching out to you. Reporting is a bit different: since there is no centralized authority removing user accounts, this option allows you to report an account to your own instance’s moderators. If the user being reported is on the same instance as you, the instance can choose to suspend or freeze that user’s account. If the user is on another instance, your instance may block that user for all its users, or (if there is a pattern of abuse coming from that instance) it may choose to defederate as described above. You can additionally choose to report the content to the moderators of that user's instance, if desired.
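The reporting flow described above maps onto Mastodon's reports endpoint (POST /api/v1/reports). In this sketch the account and status IDs are made up for illustration; the "forward" field, per Mastodon's API documentation, asks your own instance to also forward the report to the moderators of the reported user's home instance.

```python
def build_report(account_id, status_ids=(), comment="", forward=False):
    """Payload for POST /api/v1/reports.

    forward=True asks your instance to forward the report to the
    moderators of the reported account's home instance -- the last
    option described in the paragraph above, useful when the offending
    user lives on another server.
    """
    return {
        "account_id": account_id,
        "status_ids": list(status_ids),
        "comment": comment,
        "forward": forward,
    }

# Report a remote user's post and forward it to their home instance
# (hypothetical IDs):
report = build_report("109384", status_ids=["110201"],
                      comment="Targeted harassment", forward=True)
```

Either way, the report first lands with your own instance's moderators, who decide whether to act locally, forward, or escalate to defederation.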

Federation gives Mastodon users a fuzzy “small town” feeling because your neighbors are those on the same instance as you. There’s even a feed just for your neighbors: “Local.” And since you’re likely to choose an instance with others of similar interests and moderators who want to protect their community of users, they are likely to tweak their moderation practices in a way that keeps their users’ accounts private from groups and individuals who may be predatory or adversarial.

There is a concern that Mastodon may promote insular communities and echo chambers. In some ways this is a genuine risk: encouraging users to join communities of their own interests may make them more likely to encounter other users who are just like them. However, for some people this will be a benefit. The universal and instantaneous ubiquity of posts made on Twitter puts everyone within swinging distance of everyone else, and the mechanisms to filter out hateful content have in the past been widely criticized as ineffective, arbitrary, and without recourse. More recently, even those limited and ineffective mechanisms have been met by Twitter’s new leader with open hostility. Is it any wonder why users are flocking to the small town with greener pastures, one that allows you to easily move your house to the next town over if you don’t like it there, rather than bulldozing the joint?

In 2022, user experience is a function of the privacy and content policies offered by services. Federation makes it possible to have a diverse landscape of different policies, attentive to their own users' attitudes and community customs, while still allowing communication outside those communities to a broader audience. It will be exciting to see how this landscape evolves.
