No, Hashtags Are Not Groups

In any social network, there is a basic problem: how do I share what I care about with other people, and how do I find the good stuff that interests me? It’s a big enough challenge in centralized social networks with hundreds of thousands or even millions of users, and federated systems face very real limitations on top of that.

Over the years, people on the fediverse have advocated using hashtags as a tool for group communication, in lieu of a proper federated groups feature. It’s a trend that goes back all the way to the early days of Diaspora.

If you squint, the two seem awfully similar, but they work in fundamentally different ways. Every now and then, somebody on Fedi lands on this idea anew, without realizing why it’s a terrible solution. Let’s get into the details and demystify things once and for all.

How Tags Work

Centralized networks have a pretty decent solution: let users add metadata about their topic of choice. This is usually done in two ways: hashtags and keyword matching.

With hashtags, users deliberately add posts to an index, which offers a stream for other people to check out. This is a fairly straightforward mechanism for discovery: the content listed inside matches the tag being used.

Sometimes, a service will go on to connect several tags together under a unified topic, and recommend posts to people who interact a lot with a related term: maybe the person who likes #cats might be interested in #kittens? This approach can help people drill down from a more generic term to something more specific to what they’re looking for.

Keyword matching works in a somewhat similar way, but through full-text search. Instead of a bunch of posts being indexed together under a tag, a query simply checks all of the posts a server knows about for a given keyword. This method has become more prevalent over the last few years, and offers a means of discovery for statuses that don’t have their important words tagged.
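The distinction can be sketched in a few lines of Python (a toy data model invented for illustration, not any real server’s code):

```python
# Toy data model (invented for illustration): a hashtag is an index
# posts are deliberately added to; keyword matching is a query run
# over every post the server knows about.
posts = [
    {"id": 1, "text": "My cat knocked over the tree", "tags": ["cats"]},
    {"id": 2, "text": "Adopted two kittens today!", "tags": ["kittens"]},
    {"id": 3, "text": "cats are liquid", "tags": []},  # forgot to tag it
]

def tag_stream(tag):
    """Only posts deliberately added to the #tag index."""
    return [p["id"] for p in posts if tag in p["tags"]]

def keyword_search(term):
    """Full-text: any post mentioning the term, tagged or not."""
    return [p["id"] for p in posts if term.lower() in p["text"].lower()]

print(tag_stream("cats"))     # → [1]
print(keyword_search("cat"))  # → [1, 3]: catches the untagged post too
```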

Shortcomings With Federation

The big thing to remember is that these solutions are most effective in centralized networks, where a server might have hundreds of thousands or even millions of people. After all, social networking companies collect all this data and store it for years and years. Making content easy to discover is part of their business, so long as it keeps people on their network and increases engagement.

Two different instances, same hashtag

With federated systems, there’s a bit of a breakdown. While tagging works in a very similar way to traditional networks, your local instance is only able to share what it knows about. Two different instances could have wildly different results for the same hashtag, because they might federate with completely separate collections of other servers.

This might be fine for a bunch of people whose social graph is a closed circle, but it falls short of a community that someone can join, get updates from, and read through post history.

How Federated Groups Work

The topography of groups differs from hashtags in a few key ways. Namely, a group is a distinct object representing a relationship between a Group Actor and the subscribers that are a part of it.

A development screenshot of Groups in Pixelfed, courtesy of Dansup

Typically, a Group Actor has one or more designated users acting as admins or moderators. These people can adjust memberships, decide whether a group allows people to join, and set a default privacy level for all posts made within the group.

When a post is made towards a Group Actor, that Actor performs an `Announce` activity, which pushes the status to all subscribers. In most of the implementations I know about, posting to the group is done either by mentioning the Group Actor itself (@Group), by using a bang tag (!Group), or by posting on the Group’s wall, if it has one.
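As a rough illustration, the relay step might look something like this toy Python sketch (all the URLs are invented, and real servers deliver signed activities over HTTP):

```python
# Hypothetical sketch: when a member's post reaches the Group Actor,
# the group wraps it in an Announce and pushes a copy to every
# subscriber's inbox.
def announce(group, post, deliver):
    activity = {
        "type": "Announce",       # the share/boost activity type
        "actor": group["id"],     # the Group Actor does the announcing
        "object": post["id"],
        "to": group["followers"],
    }
    for inbox in group["followers"]:
        deliver(inbox, activity)  # real servers POST signed JSON here
    return activity

group = {
    "id": "https://example.social/groups/gardening",
    "followers": ["https://a.example/inbox", "https://b.example/inbox"],
}
sent = []
act = announce(group, {"id": "https://a.example/posts/42"},
               lambda inbox, a: sent.append(inbox))
print(act["type"], len(sent))  # → Announce 2
```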

So, to summarize…a hashtag:

  • Is an index of posts about a specific term
  • Is usually contained somewhere within the post itself
  • Yields results derived from what a server knows about, which can vary
  • Allows anybody to post anything, with no real barriers beyond instance-level moderation

Meanwhile, a Federated Group:

  • Is a relay mechanism handled by an Actor and the people subscribed to it.
  • Barring network failures, all members should be able to receive (or, failing that, fetch) all posts relayed by the Group Actor
  • Typically has a notion of membership and moderation

Towards a Greater Federated Architecture

For some time now, I’ve been musing on how to move fediverse platforms forward – it feels as though we’re on the cusp of building something truly revolutionary, something that could render corporate social networks completely redundant for virtually everybody. But, despite all of our advancements, it remains just out of reach.

Over the years, I’ve been studying a handful of different fediverse platforms that bring a lot of interesting concepts to the table. As someone who has studied and reported on the development of these various systems, I’ve decided to put together a summary of things I’d like to one day put into my own federated platform, should I ever muster enough brainpower to actually build one.

Disclaimer: It’s entirely possible that I’m a complete moron, fail to grasp the domain areas these problems stem from, and that the solutions I’m talking about aren’t solutions at all. Feel free to ridicule me.

It’s also worth mentioning that this topic is largely about features, not the bigger and more complicated topic of how a bunch of independent platforms extend an existing standard and build for interoperability.

1. Unified User Account

One of the major boons that the fediverse advertises is that, hypothetically speaking, you can follow any platform from any platform. An end user can follow a PeerTube channel and watch videos from Mastodon, and can also comment on Plume blogs from the same account.

This is generally a great thing! However, the current implementation has two downsides:

  1. Type Asymmetry – Some fediverse platforms don’t support the same activity types as others. A PeerTube channel can’t get status updates from Mastodon, because it’s not really designed for that use-case. Plume users can’t upload videos, PeerTube accounts can’t listen to Funkwhale tracks, and PixelFed can’t play PeerTube videos.
  2. Account Proliferation – Because of the way things currently are, especially given the last point, it’s totally possible to have a bunch of accounts for different platforms. I personally have at least six: I post status updates on Pleroma, photos on PixelFed, videos on PeerTube, music on Funkwhale, community group posts on Lemmy, and book reviews on BookWyrm. There’s probably a handful of other accounts I can’t even think of right now! Regardless, this leaves a user with their content and interactions broken up into six or more distinct streams across several different apps, rather than unified in one place.

Type Asymmetry

The solution to the first problem seems fairly obvious: for a generic social server, it makes sense to support the maximum amount of Activity Object types, which could probably be summed up as: “Note, Article, Image, Audio, Video, Place, and Event”, with support for Actor types of at least “Person and Group”.

Granted, not every piece of software in the ecosystem necessarily has to support this – Mobilizon, a federated events system, probably has no reason to care about PeerTube video activities or Funkwhale music activities. But, Mastodon or Pleroma might be interested in activities from all three.
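To make the idea concrete, here’s a hypothetical Python sketch of a type dispatch table for such a generic server. Only the type names come from ActivityStreams; the renderer output format is made up:

```python
# Hypothetical dispatch: one renderer per supported ActivityStreams
# object type; everything else degrades gracefully instead of being
# silently dropped.
SUPPORTED = {"Note", "Article", "Image", "Audio", "Video", "Place", "Event"}

def render(obj):
    t = obj.get("type")
    if t not in SUPPORTED:
        # Unknown types still show up as *something* in the stream.
        return f"[unsupported {t}]"
    return f"<{t.lower()}>{obj.get('name', '')}</{t.lower()}>"

print(render({"type": "Video", "name": "My talk"}))  # → <video>My talk</video>
print(render({"type": "Question"}))                  # → [unsupported Question]
```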

Account Proliferation

The second problem is a little bit trickier, but I think the answer can be traced back to one of the core concepts that the Tent protocol was trying to achieve: link everything back to a user’s server.

Server at the top, with clients hooking into it at the bottom. Keep in mind that clients can literally be anything, and just act as various frontends to access the data.

In Tent’s world, a user has a core server that holds all of their data, and then basically does an OAuth dance with several applications that are all capable of rendering different activity types. The data stays on the user’s Tent server – apps are just different frontends for that data. Contrast this against the current fediverse, where each instance in the network is a fully-realized stack with built-in federation.

What I’m really envisioning is as follows: everything a user wants to do can be represented on the backend of their own instance, but broken out into different contexts on the client side through different frontends and applications.

A mockup showing the frontends of a status stream, image gallery, audio track sharing, and video service all showing different activity objects coming from the same Pleroma instance.

In fact, the same data can be re-used through different apps: maybe my mobile Twitter-Clone app only supports status updates, pictures, music, and videos, but maybe my mobile Instagram-Clone app only supports photos and videos. Both apps would still point to my server, and my profile would be a linear stream of objects relating to every service context that I’m using.
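A toy sketch of that vision, with made-up object data: every object lives in one outbox on the user’s server, and each frontend pulls only the types it knows how to render.

```python
# Toy sketch of one backend, many frontends: a single outbox holds
# every object type, and each client requests just the subset it renders.
outbox = [
    {"type": "Note",  "content": "hello fedi"},
    {"type": "Image", "name": "sunset.jpg"},
    {"type": "Video", "name": "demo.webm"},
    {"type": "Audio", "name": "track.ogg"},
]

def fetch(types):
    """What a given frontend would request from the shared backend."""
    return [o for o in outbox if o["type"] in types]

instagram_clone = fetch({"Image", "Video"})                 # photo/video app
twitter_clone = fetch({"Note", "Image", "Audio", "Video"})  # microblogging app
print(len(instagram_clone), len(twitter_clone))  # → 2 4
```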

To be clear: I don’t think the right direction is to solely move towards an “everything is an app, and the server itself is headless” model. Instead, I think a good path forward is more along the lines of: the core server supports a bunch of types, provides a bundled interface that can be swapped out (Pleroma is doing this now), and is supplemented by various desktop, mobile, or web clients that are built around a particular set of experiences.

2. Every User is a Moderator

Well, maybe that sounds a bit hyperbolic. I don’t mean that every user gets mod powers across instances (someone on one server moderating someone on another), or that users on every instance get moderator capabilities over their entire home server (imagine an instance with 621,000 moderators). That might be chaotic.

One of the things I’ve been thinking about is that federated social platforms should instead give users maximum moderation powers over two things:

  1. Who gets to see what?
  2. What do I want to see?

Who gets to see What?

User moderation tools have come a really long way since the early days of diaspora*, where all a person could do to manage a harasser was hit the “Ignore” button and reach out to the people running their instance about the problem. It’s still not a perfect situation, but modern platforms like Mastodon and Pleroma at the very least have capabilities for blocking bad actors and silencing them.

The question is, how can this be expanded upon in a useful way?

I think the central goal should be to bring more capabilities down to the user level, rather than uphold a system that relies solely on moderator teams and admins. What might this look like?

Avoiding Context Collapse

When it comes to traditional social networks, it’s not uncommon for different groups of people to fall into different buckets, based on what platforms they’re on. From there, different conventions and expectations arise within those respective pockets of the Internet. In a federated system, the lines between sites become increasingly blurred.

What I’m trying to describe here is Context Collapse – where different social groups with different norms and expectations become folded into one. What’s appropriate for one context (FetLife) might be inappropriate for another (LinkedIn).

Some fediverse users handle this by creating alt accounts dedicated to a particular context – but this has always felt more like a hack than anything. Aside from registering an “After Dark” account far, far away from the original account, what improvements could we make for people who want to do everything, all in one place?

There are two concepts here worth talking about that actually have a bit of history: Aspects, and Facets. Both of these things might be accomplished today using ActivityPub’s concept of Collections.


Aspects

Aspects were the premier feature of Diaspora* when it launched back in 2011. In short, an Aspect is basically a named list of contacts: Friends, Family, Work, whatever. A user can create as many Aspects as they’d like, and easily manage who belongs in which one.

When posting, this surfaced a dropdown menu to define visibility scope: which groups of contacts get to see this thing you’ve posted? From there, some key-signing magic happened in the backend, and private statuses would only get sent to those groups of people. In addition, you could use Aspects to filter your timeline, to see statuses coming directly from that bespoke collection of contacts.

If you’re not in the group and somehow stumble onto the exact URL where the post lives, you just see a generic 404 message saying nothing is there, and you won’t see the responses your mutuals might be making to it, either.
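The mechanics can be sketched in Python (a simplification of my own; real Diaspora posts involved per-recipient key signing):

```python
# Simplified sketch: an Aspect is a named contact list, and a private
# post is addressed to the union of the selected Aspects.
aspects = {
    "Work":   {"alice@a.example", "bob@b.example"},
    "Family": {"mom@c.example"},
}

def audience(names):
    """Everyone across the chosen Aspects."""
    return set().union(*(aspects[n] for n in names))

def can_see(viewer, post):
    return post["public"] or viewer in post["to"]

post = {"public": False, "to": audience(["Work"])}
print(can_see("alice@a.example", post))  # → True
print(can_see("mom@c.example", post))    # → False: she gets the 404
```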

Aside from Diaspora* and platforms derived from Friendica, no other fediverse platform really seems to leverage this kind of message scoping. Platforms aligned with Mastodon pretty much only use Public, Unlisted, Followers-Only, and Direct as privacy scopes.


Facets

This is an inverse concept to Aspects, and to date, I’ve really only seen platforms based on Friendica do this. Those platforms don’t even have a word for it beyond “multiple profiles for the same user”, which doesn’t begin to describe what it is. So, I’m calling these things Facets.

What is a Facet? To put it simply, it’s an additional profile tied to a single user account, where certain fields are only seen by certain groups. The visible data fields and posts align to form a coherent alternate persona.

So, I can throw a bunch of professional contacts into a Work aspect, and then associate them with a special profile that only shows up for them.

Everyone that’s not a work contact gets sorted into an alternate collection, and gets to see me as Wild Party Man.

I mean, yeah, extreme example, but you can totally do this with Facets.

All joking aside, it’s 100% possible to create a kind of social software that gives a user the ability to totally control how they present themselves to different groups of people simultaneously. This might be worth exploring, allowing one account to self-filter based on what’s important to it.
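Here’s a minimal Python sketch of how Facets could work, assuming Aspect-style contact lists; all of the names and fields are illustrative:

```python
# Illustrative sketch: the profile a viewer sees is assembled from
# base fields plus fields keyed to whichever Aspect the viewer is in.
profile = {
    "base":  {"name": "Sean"},
    "Work":  {"bio": "Mild-mannered professional", "avatar": "suit.png"},
    "Party": {"bio": "Wild Party Man", "avatar": "confetti.png"},
}
membership = {"alice@a.example": "Work"}  # everyone else falls through

def facet_for(viewer):
    aspect = membership.get(viewer, "Party")
    return {**profile["base"], **profile[aspect]}

print(facet_for("alice@a.example")["bio"])     # → Mild-mannered professional
print(facet_for("stranger@x.example")["bio"])  # → Wild Party Man
```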

Permission Scopes for Activities

From a user perspective, it makes sense to dictate who can see your statuses, and who can interact with them. I would make the case that this is fundamentally important.

Within the last few years, Twitter has been following a model where users can stipulate which people are capable of replying to their tweets. As stated above, Diaspora had Aspects, which dictated who could even see a private status in the first place.

Maybe some combination of those two things are necessary to reduce the amount of shitty interactions one can have when dealing with campaigns of harassment from trolls.

The trick is getting it to work reliably – what if one server doesn’t conform to another’s implementation for blocking, or private statuses, and simply evades those restrictions? The old attitude that GNU Social developers used to have was basically along the lines of “We can’t actually implement privacy in a meaningful way, so don’t bother implementing it.”, but I think that’s kind of a defeatist attitude.

Object Capabilities

The biggest advancement on this front might be what Christine Lemmer-Webber is doing with CapTP and Spritely. It’s a little over my head, but the core idea can basically be explained as: nodes in a network don’t have to fully trust each other; instead, they should operate with mutual suspicion and act accordingly, communicating through a standard set of instructions. This can be further described using the Object Capability Model.

In a nutshell, an object capability can be thought of as “an unforgeable reference that can be sent in a message”, where the message “specifies the operation to be performed.” The Spritely Goblins docs helpfully offer this explanation:

“Furthermore, “object” in “object capability security” refers to security based on holding onto references to an external entity. What you hold onto is what you can do.”

— Spritely Goblins documentation

In other words, it’s supposed to be a secure way for a server to specify what operations are allowed to be performed on a resource – kind of like a contract mechanism that only gives up the goods if very specific requirements are met. Christine is doing some amazing research into the subject, and the fruits of her work might ultimately be the mechanism that people use to stipulate a set of conditions for mutual interaction.

Maybe this work will provide a foundation, where user rules locally stipulate what happens to their posts – if a user blocks or silences a troll, perhaps the troll’s responses will fail to append to a conversation tree. Or, perhaps the user’s server will refuse to show the post to the troll at all.
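As a toy illustration of the capability idea (this is my own sketch, far simpler than CapTP): holding a valid reference is what lets you act, and a troll who never received one simply can’t append to the tree.

```python
import secrets

# Toy capability model: replying requires presenting an unforgeable
# token that only permitted actors ever receive.
class Post:
    def __init__(self):
        self.replies = []
        self._caps = {}  # actor → capability token

    def grant_reply(self, actor):
        token = secrets.token_hex(16)  # unguessable reference
        self._caps[actor] = token
        return token

    def reply(self, actor, token, text):
        # No valid capability, no append: the conversation tree never
        # grows a branch for the troll in the first place.
        if self._caps.get(actor) != token:
            return False
        self.replies.append((actor, text))
        return True

post = Post()
cap = post.grant_reply("friend@a.example")
print(post.reply("friend@a.example", cap, "nice post"))  # → True
print(post.reply("troll@x.example", "guessed!", "lol"))  # → False
```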

What does The User see?

Okay, so we’ve talked a little bit about having better control over who can see and interact with your own stuff, but what about better control over what you see? This is still something of a budding area: historically, people have relied on regexes and word-lists to filter out statuses containing terms they don’t want to see, but that approach is pretty limited.

One of the most brilliant tools I’ve seen come into the space is Pleroma’s own MessageRewriteFacility, henceforth known as MRF. Basically, there’s a component at the server level that can interpret rules. Let’s look at the example SimplePolicy config that the docs use:

config :pleroma, :mrf,
  policies: [Pleroma.Web.ActivityPub.MRF.SimplePolicy]

config :pleroma, :mrf_simple,
  media_removal: [{"illegalporn.biz", "Media can contain illegal content"}],
  media_nsfw: [{"porn.biz", "unmarked nsfw media"}, {"porn.business", "A lot of unmarked nsfw media"}],
  reject: [{"spam.com", "They keep spamming our users"}],
  federated_timeline_removal: [{"spam.university", "Annoying low-quality posts who otherwise fill up TWKN"}],
  report_removal: [{"whiny.whiner", "Keep spamming us with irrelevant reports"}]

There’s a couple of things going on here:

  1. media_removal: strips out all media attachments coming from illegalporn.biz
  2. media_nsfw: applies an nsfw tag that puts media attachments coming from porn.biz and porn.business behind a Content Warning.
  3. reject: straight-up drops all messages coming from the spam.com domain
  4. federated_timeline_removal: accepts messages from spam.university, but prevents those statuses from ever showing up in The Whole Known Network timeline.
  5. report_removal: ignores reports from whiny.whiner, which have been excessively queued up for spamming purposes.

MRF is incredibly powerful…but the downside is that all of this sits at the admin level, and requires admins to manually edit configurations themselves. Bringing these capabilities down to the user level, and putting them into a GUI that’s simple and straightforward, would go a long way towards filtering out unwanted crap without wholly relying on admins to do everything for you.
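A hypothetical sketch of what user-level rules in that spirit might look like – the rule shapes echo SimplePolicy, but the domains are invented and this is not Pleroma code:

```python
# Hypothetical per-user filter rules, applied to incoming statuses
# before they reach one account's timeline.
my_rules = {
    "reject": {"spam.example"},          # drop everything from here
    "media_nsfw": {"lewd.example"},      # force a content warning
    "keyword_drop": {"crypto giveaway"}, # drop matching text
}

def apply_rules(status):
    domain = status["actor"].split("@")[-1]
    if domain in my_rules["reject"]:
        return None  # never reaches my timeline
    if any(k in status["text"].lower() for k in my_rules["keyword_drop"]):
        return None
    if domain in my_rules["media_nsfw"]:
        status = {**status, "sensitive": True}
    return status

s = apply_rules({"actor": "bob@lewd.example", "text": "pic", "sensitive": False})
print(s["sensitive"])                                          # → True
print(apply_rules({"actor": "x@spam.example", "text": "hi"}))  # → None
```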

3. Migrating Between Servers

One of the age-old problems fediverse applications struggle with: “If I decide to move over to another server, how do I take all of my stuff with me?”

Whereas the term “stuff” can refer to:

  • Statuses / likes / messages
  • Media
  • A list of who that account is following
  • A list of accounts that are following that account
  • Profile Data (avatar, header image, bio)

This ends up actually being pretty messy. Historically, most fediverse platforms only let users export a little bit of their profile data and the list of people they follow. It’s possible to update your old profile to a Tombstone that points to your new one, but that doesn’t really help if your old server has been destroyed beyond repair, and it doesn’t bring the followers from your old profile over to your new one.

Nomadic Identity

Some years ago, Hubzilla came up with a concept called Nomadic Identity, and it’s pretty fucking wild. Essentially, a user takes their existing account, clones it to another server through a profile import system, and then does an authorization handshake to say “Yes, I created that clone, it belongs to me!”

That clone and its parent can then act as relays for one another. If you post in one place, it gets mirrored to the other place. Interactions and notifications show up in each place. And the best part: your followers barely notice a difference. In fact, your stream on both ends can look virtually the same, since the data is synced between both channels.
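The mirroring relationship can be sketched like so (a toy model of my own; Hubzilla’s actual system involves cryptographic handshakes and full data sync, not just post mirroring):

```python
# Toy model of the clone relationship: once linked, a post made on
# either channel is mirrored to the other.
class Channel:
    def __init__(self, addr):
        self.addr = addr
        self.stream = []
        self.clones = []

    def link_clone(self, other):
        # "Yes, I created that clone, it belongs to me!"
        self.clones.append(other)
        other.clones.append(self)

    def post(self, text, _mirrored=False):
        self.stream.append(text)
        if not _mirrored:  # avoid infinite echo between the pair
            for c in self.clones:
                c.post(text, _mirrored=True)

home = Channel("me@hub-a.example")
clone = Channel("me@hub-b.example")
home.link_clone(clone)
home.post("posted on A")
clone.post("posted on B")
print(home.stream == clone.stream)  # → True: both streams look the same
```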

Nomadic Identity seems like it might be the solution the fediverse needs for easily migrating around from one instance space to the next. Generally, the relay system it provides helps to bring censorship resistance to the network, and can help users move away from instances that might not be around for much longer. However, a few outstanding questions come to mind:

  1. How could a rogue admin mess this up for users? Is it possible to absolutely mangle the data that gets exported so that migration isn’t possible? Can an admin abuse secure access between two relays, and log into another instance through someone else’s relay on the admin’s instance?
  2. How might this look for migrating from one platform to another? Is there a future where it might be possible to easily migrate everything from Friendica into a Pleroma instance, for example?
  3. Generally speaking, the ability for users to resist censorship can be considered a positive thing. But, a larger question opens up: how might a user avoid being dogpiled by a troll using a bunch of different clone relays? It’s one thing to be able to block incoming content from a bad domain, but the problem becomes stickier when the attacker has clone relays scattered across friendly instances.

4. The Discovery Problem

I’ve written about The Discovery Problem in federated systems before, but here it is again in a nutshell: for a new user to stick around and continue to use a federated social network, there needs to be content that the person is actually interested in…but, sorting through the junk to find the good stuff is actually super hard. The situation can be so frustrating that people just give up on it.

Tag Following

Aside from Aspects, one of the other early innovations Diaspora* gave to us is the ability to follow tags, in addition to other people.

The mechanism itself was pretty neat: for any hashtag that you can think of, you can follow it, and tagged content will automatically insert itself into your stream. It took very little effort on the part of the user, and often brought a lot of interesting posts from the network directly to them.

Full-Text Querying

A newer development on platforms such as Mastodon and Pleroma includes the ability to conduct searches by keyword, instead of just relying on the tagging system. For discovery purposes, it’s great: you can see who’s talking about a given subject regardless of whether they bothered to use the tag index at all.

The feature remains a little bit controversial, and some instances actually disable it. The argument goes a little something like this: this method of querying opens up a can of worms for harassment, because a troll can simply look up a person’s name or some aspect of their identity and use that discovery mechanism to embark on a campaign of misery-making.

While that’s certainly a compelling argument, I think it actually makes a stronger case for Object Capabilities and more robust Privacy Scopes. The common factor that I find myself coming back to is that some user-level moderation tools are still kind of feeble, and some mechanisms purporting to deal with the problems once and for all end up getting subverted, because the core problem is deeper.

I’d like to believe that there’s a world where both ease of discovery and mechanisms to protect against harassment can in fact exist in tandem, and that this doesn’t have to be an either-or situation.

Topics, instead of just tags

Hashtags have long been a venerated way of appending content containing a tag to an index for discovery purposes, but unfortunately, it’s very limited. For example: let’s say that you were using the tag following mechanism that I just described, and you decided to follow #FreeSoftware, because you’re super interested in that.

But, suppose that not everybody used that particular tag? Maybe some people use #KDE to talk about the K Desktop Environment, and other people use tags for specific apps to talk about those? You potentially end up missing out on all kinds of stuff, simply because people use other tags in their posts.

The idea of a Topic object is simple: it’s similar to a hashtag in that it’s a content index, but instead of relating to one specific hashtag, it relates to an array of different hashtags. Heck, maybe it even contains query terms, to catch posts that mention the topic but don’t explicitly tag it.

Unfortunately, this concept doesn’t yet exist in any fediverse app, but the idea of joining indexes together in this way, and using it as a mechanism to bring interesting content to the user, might be a good way to cut through all the noise and show stuff they’re actually interested in.

Maybe this kind of thing could even serve as a type of Query Builder, where people can also stipulate what kind of topics they’d like to straight-up avoid seeing.
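Since no such object exists yet, here’s a speculative Python sketch of what matching against a Topic might look like; every field name here is invented:

```python
# Speculative sketch of a Topic object: a followable index that
# unions several hashtags, plus plain query terms, plus opt-outs.
topic = {
    "name": "Free Software",
    "tags": {"freesoftware", "kde", "gnome"},
    "queries": {"free software", "libre"},
    "exclude": {"flamewar"},  # query-builder style avoidance
}

def matches(topic, post):
    tags = {t.lower() for t in post.get("tags", [])}
    text = post.get("text", "").lower()
    if tags & topic["exclude"]:
        return False  # user opted out of this sub-topic entirely
    return bool(tags & topic["tags"]) or any(q in text for q in topic["queries"])

print(matches(topic, {"tags": ["KDE"], "text": "Plasma 6 is out"}))  # → True
print(matches(topic, {"tags": [], "text": "I love libre tooling"}))  # → True
print(matches(topic, {"tags": ["kde", "flamewar"], "text": "ugh"}))  # → False
```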

A Directory System

The Friendica family tree of platforms incorporates some kind of directory structure to make search and discovery easier for end users. Directories work like a rudimentary search engine crossed with a phone book.

Generally, the directory system offers two major scopes:

  • A local directory of people on the site
  • A global directory, shared between instances, of everybody.

These directories are also handy for discovering other types of actors, not just regular old user accounts: People, Organizations, Forums, and News appear as potential filters. As long as you opt in to being indexed, your profile shows up in directory results.

What’s interesting is that, aside from platforms originally developed by Mike Macgirvin, the only other fediverse platform that seems to offer some kind of user directory is Mastodon, and its listings are incredibly sparse – the interface looks a lot more slick, but there’s virtually no way to sort or filter the list, and very little information of any kind.

In short, I think that having a robust directory tool would significantly help more people connect with one another. Obviously, even a feature like directories needs to account for user privacy – for example, if somebody blocks you (or you block them), that person shouldn’t appear in the results.
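A small sketch of what a block-aware, opt-in directory query could look like (hypothetical data and semantics throughout):

```python
# Hypothetical directory query: opt-in, filterable by actor kind,
# and hiding anyone with a block in either direction.
people = [
    {"addr": "alice@a.example",  "kind": "Person",       "indexed": True},
    {"addr": "shop@b.example",   "kind": "Organization", "indexed": True},
    {"addr": "hidden@c.example", "kind": "Person",       "indexed": False},
]
blocks = {("me@x.example", "shop@b.example")}  # (blocker, blocked)

def directory(viewer, kind=None):
    return [p["addr"] for p in people
            if p["indexed"]
            and (kind is None or p["kind"] == kind)
            and (viewer, p["addr"]) not in blocks
            and (p["addr"], viewer) not in blocks]

print(directory("me@x.example"))            # → ['alice@a.example']
print(directory("me@x.example", "Person"))  # → ['alice@a.example']
```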

If you’ve been with me this far, and have a good grasp on where I’m coming from, you might say “Ah! These things already exist in a platform! You’re clearly talking about Hubzilla and the projects that descended from it!”, and to some degree, you’re right.

Unfortunately, I strongly disagree with Hubzilla’s approach: it tries to do everything in a singular context, and its interface suffers from the density of a black hole. I have much admiration for the project, and many of my thoughts are inspired by it, but I think the UX requires special treatment to get right.

The core problem of Hubzilla’s design is not necessarily that it tries to do so much, but that its individual parts sometimes don’t follow the design conventions of the larger system, and users end up having to wade through an ocean littered with disconnected configuration menus to get the behavior they really want. You can configure the heck out of it and eventually get to a pretty comfortable setup, but at times it feels more like a kit for assembling a platform than anything else.

What I’m interested in is a system that allows for consistency and simplicity as the default, where the actual magic trick is that the default is in fact just a preset of configurations sitting on top of an infinitely configurable system. Unfortunately, that means that said system has to be built from the ground up.

In Conclusion

This is merely a summary of some of the biggest “top of the head” musings that I could come up with at this time. There are a lot of other big moving parts to consider, too:

  • How can we make frontends more customizable, to the point that users can easily build their own and have stuff “Just Work” with an underlying platform? (This was originally Point #5, but the concept is so big that it warrants an entire write-up).
  • How does a loose collective of independent projects move forward on new features while maintaining compatibility?
  • How do we deprecate older ways of doing things for better methods if the biggest players don’t care about them?
  • How on earth can these projects and the people running the infrastructure make enough money to stay afloat for years?
  • How do we prepare a system to migrate everybody off of corporate social silos en masse, so that we may effectively kill those services forever?
  • Will the fediverse ever be private enough / robust enough / secure enough / decentralized enough?

I don’t have answers to these questions yet, only the loosest of stray thoughts. A lot of the bigger answers here go beyond what software is capable of providing, and rely more on mutual aid and collaboration between countless people.

PeerTube’s Content Wasteland

PeerTube is a video platform that offers an amazing promise: imagine a video portal like YouTube that isn’t monetized, isn’t curated by algorithms, doesn’t censor people, and allows anybody to host it themselves. As a bonus, different video sites can connect together, so that a person can watch DIY hardware hacker videos from Diode Zone, niche Linux videos from Linux Rocks, and documentaries from TILvids.

These different pools of content on the internet beautifully converge into a wonderful stream of things to watch, where the user can pick and choose what they actually want to watch. It’s an amazing promise, and there’s a huge amount of potential in it. If there’s any way that people are going to get off of walled garden networks like YouTube, it’s likely going to be through open source federated communication systems like this.

There’s a major problem, though: people that log on struggle to find anything worth watching, and gradually lose interest. This problem can be further understood as two separate problems:

  1. It’s hard to find the good stuff worth watching.
  2. There’s a lot of garbage everywhere

Let’s break these both down. Afterwards, I’ll highlight a few ideas on how to work around this problem.

Part I – It’s hard to find the good stuff

The central problem is that looking for things to watch on the network is kind of a chore. This isn’t to say that there’s nothing to watch at all – there are more than a few sites in the network making lots of progress on building up their own connected communities (more on this in a bit). The issue is that you have to go down a rabbit hole to figure out where they are.

To find instances, I generally follow four paths of inquiry:

  1. Keyword search from my fediverse instance
  2. PeerTube’s Reddit / Lemmy groups
  3. Sepia Search
  4. Instance List

Keyword Search

This one is generally the most straightforward – because I run a single-person Pleroma instance that connects to many other fediverse instances, I can perform full-text searches. When I’m logged in, such a search shows me every status my server knows about that matches the query, which is pretty handy.

My rule of thumb here is that, since I know most of the people on the instances I’m connected to, I can scrounge around to see if anybody is announcing new instances being founded, new channels being started, and new videos being uploaded.

The downside is that you’re basically scrolling through a bunch of random status updates looking to see if anybody’s doing anything with PeerTube, and this way of searching doesn’t always provide something useful. Still, it can be a pretty good way to incidentally discover something being casually discussed, rather than formally announced.

Dedicated Groups

Community groups dedicated to PeerTube are kind of the best way to find formal announcements of instances and videos. There are three worth knowing about: on Reddit, there’s r/PeerTube and r/PeerTubeVideos, and on Lemmy, there’s c/PeerTube. Occasionally, these can be a great way for people to announce what’s going on with the project, what problems users are experiencing, and where people can go to discover things to watch or even places to upload their own videos.

The downside, though, is that these groups are sometimes only sporadically updated. It’s useful as a supplemental resource, perhaps, but kind of disappointing.

Instance List

When things get really rough, and you’re hurting for content, it’s sometimes not a bad idea to just look up some instances and see what’s on them. The PeerTube project offers two entrypoints for parsing the instance list – the list itself, and the much more user-friendly Instance Picker.

So far, so good!

The picker is useful in the sense that it offers a filtering tool to narrow down the results between the 900+ reported public instances. It can help people find a place to upload videos to, or discover places to follow. However, the picker generally only gives a tiny bit of useful information about the instance. It doesn’t really tell you how well-maintained it is, whether there’s a terms of service, or what the most prominent videos or channels are.

An example result from the instance picker.

For me, the easiest thing to do from this point is to click the “See the Instance” button, and then click over to the “Local” tab. This at least tells us what kinds of videos are uploaded there, which gets us a little closer to content discovery.

BeerTube’s local page. This feels… really random…

Sometimes, the results are coherent. When it’s an instance that revolves around a specific theme, most of the content seems to align to it quite nicely. However, it’s a hot mess for general instances, like in the screenshot above. There’s a bunch of podcast entries, some random entries in Russian, and out of the frame of the screenshot, conspiracy theory videos about COVID vaccines.

At the very least, this technique can be useful to find instances to avoid.

Sepia Search

Last but not least is Sepia Search, which is the PeerTube project’s attempt at a solution to this very problem. It’s a search engine that crawls through the catalogues of every public instance that opts into sharing search results with it.

On paper, it seems like a great idea: all of these videos have tons of metadata associated with them, so a global search system that parses through it all might seem like an ideal way to surface the most relevant things to curious viewers. There’s a reason I’ve included this technique at the bottom of the list, though, and it gives me the perfect segue into Part II of this article.

Part II – There’s a lot of garbage everywhere

Let’s try a few search results from Sepia, using common search topics. Here’s one for fashion:

That first result is pretty goddamned funny.

Alright, how about comedy:

Eh. Mostly YouTube mirrors.

Hmm, okay. Slightly better. How about linux?

Still kind of mixed results.

So, there’s definitely stuff, even if the index isn’t always the most accurate. Here’s the rub, though: try to find videos that were exclusively created for PeerTube, for the communities that are there, and not simply random media that’s being mirrored from other platforms. It’s virtually impossible.

The infrastructure that makes Sepia Search possible also leads to a secondary mechanism that predates it: instance following. Because of the way that PeerTube is put together, videos themselves aren’t directly shared between servers. Instead, the catalog entry containing metadata for that video is what gets shared. If you follow another PeerTube instance as an admin, you can add that instance’s video catalogue to your own, which in theory can create a more robust search index.

A few years ago, a feature was introduced to build on this: automatic following. That is, if an instance tries to follow you, your instance can automatically follow it back, meaning both servers will populate their own catalogues with unique results that belong to the other website. Taken to its logical conclusion, it’s also possible to automatically follow every new instance that’s listed on the PeerTube Instance list. So, what happens if you decide to follow as many instances as possible with this mechanism enabled?

You get this. Have fun, I guess?
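The catalogue-sharing mechanism described above can be modelled in a few lines. To be clear, this is a hypothetical sketch, not PeerTube’s actual implementation: each toy instance keeps its own local videos plus a list of followed peers, and the searchable catalogue is the local list merged with every followed peer’s metadata. The instance names and video titles are made up. It illustrates why selective following keeps a catalogue curated, while auto-following everything turns it into a firehose.

```python
class Instance:
    """Toy model of a PeerTube server: videos stay local, metadata federates."""

    def __init__(self, name, videos):
        self.name = name
        self.local_videos = list(videos)
        self.following = []

    def catalog(self):
        # The searchable index: local videos plus entries from followed peers.
        entries = list(self.local_videos)
        for peer in self.following:
            entries.extend(peer.local_videos)
        return entries

    def follow(self, peer, auto_follow_back=False):
        self.following.append(peer)
        if auto_follow_back:
            # The "automatic following" behavior: the peer follows us in
            # return, so both catalogues grow with the other side's entries.
            peer.following.append(self)


diode = Instance("diode.zone", ["DIY walkie-talkie", "soldering basics"])
spectra = Instance("spectra.video", ["community meetup recording"])
firehose = Instance("general.example", ["podcast ep. 1", "mirrored clip", "vaccine rant"])

# Selective following: Spectra's catalogue stays curated.
spectra.follow(diode)
print(spectra.catalog())  # one local video plus Diode Zone's two entries

# Indiscriminate auto-following: everything the other site hosts pours in.
spectra.follow(firehose, auto_follow_back=True)
print(len(spectra.catalog()))  # now six entries, most of them noise
```

Scaled from three instances to the 900+ on the public list, the same merge is what produces the screenshot above.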

Part III – A Way Forward

So far, I’ve largely focused on the negative aspects, rather than the positive ones. In spite of everything I’ve just said, there are still things worth watching, because there are still communities of people out there making videos, uploading them, watching other videos, and commenting. What can we do to make the best of this nascent space? While none of these ideas is a silver bullet for The Discovery Problem of Decentralized Networks, they do help somewhat.

1) Build smaller communities with intention

It can be tempting, sometimes, for people running social nodes to try to make them as big as possible. If I get more users, I can get more videos, and that will somehow make my community the biggest and the most engaged, right?

The truth is: in comparison to large communities, smaller ones are easier to moderate. In the early stages of an instance community, a good way to do this is to be intentional about who the early members are. Something that I do with my own instance at Spectra Video is that I manually invite people who are already interested in joining. I also set early expectations on what the Code of Conduct is, because those conventions can be carried forward with new members over time.

Disputes might be inevitable as my community grows, but the invite-only nature of it means that there isn’t a flood of new people signing up at random every five minutes.

2) Selectively connect with other instances

Instance following, despite the described pitfalls, is a legitimately useful feature. It’s just that an admin should learn to research every instance they connect with. For me, the biggest rule of thumb is asking myself these two questions:

  1. Does this place have interesting stuff?
  2. Does this community seem to share my values?

A good example of a great instance to follow is Diode Zone. It’s a fantastic community of people hacking away at DIY hardware projects, like this awesome Fake Walkie Talkie Ghost Tour video where a guy 3D printed his own walkie-talkie that plays a different audio file based on the device’s GPS location. Additionally, Diode Zone has a Code of Conduct / Terms of Service that align pretty well with what I have for my own community. So, following that instance seems obvious.

My instance’s Recently Added tab, showing the videos coming in. It’s still kind of random, but it’s mostly aligned with people that I know within the community, or stuff I care about.

The benefit of doing this is that you’re effectively building bridges between islands, and kind of creating a larger community out of independently-run smaller communities. Spectra follows almost 40 of these little instances, and while it’s still a nascent effort, there’s a real opportunity for something great to happen.

Just don’t turn on the whole firehose of all content on the network, and you should be good.

3) Make your own stuff, too

This is probably the hardest thing, from my point of view. It takes time and energy to make videos, way more than just writing a status update or sharing a piece of art. Some people have tools and a pipeline that cuts down on the amount of work that needs to be done, but for most people, creating a video is still a massive undertaking. That doesn’t mean it’s not important.

The network still has something of a “First Mover” problem. Lots of people might be interested, but won’t join or post anything until they see something they’re interested in, created by someone they like. For those of us already here, this kind of puts the burden on us to try to make the network slightly better, by giving it something worthwhile for other people.

Creating videos and putting them out there is a fundamental step for getting engaged with other people in a community space that revolves around video. Similarly, watching videos and commenting with your own personal insights can cause other people in that space to interact back, and can give people incentive to return for more of that, if it’s good.

Understanding Fediverse Drama

There’s a spectre haunting the federated timeline. It started as an argument yesterday afternoon between two people, and now things have escalated to the point where no one can ignore it. “Person A said something racist!” someone proclaims. “Person B is queerphobic!” someone else yells.

The threads branch and split off. Dozens of takes are made by people with only tangential relationships to those in the argument. Screenshots are dug up, and entire oral histories are pieced together about one of the two people fighting — neither of which has stepped down from their respective soapboxes, by the way.

People begin to bicker and groan. Soon, a meta discourse happens in response to the discourse. Sometimes, a meta meta discourse spills over, and the tediousness wears everybody down. After a while, a few people remark about how they’re logging on less, or considering leaving.

What the fuck happened?

Anatomy of a Flamewar

The phenomenon of fediverse drama is itself nothing new. If anything, it draws from the rich and boundless tradition of people arguing online. By observing this tapestry of history, we can better understand what’s happening now.

Since the early days of Usenet and Bulletin Board Systems, various types of community members have aggressively asserted themselves in conversation. Sometimes, it’s initiated by a difference in beliefs or values: something as innocuous as preferring Captain Kirk over Captain Picard could turn into an arduous holy war between two frustrated grown-ass adults.

In situations such as this, the drama is driven by a member of the group raising a stink about someone else’s clearly incorrect selection of a binary choice, with well-meaning arguments gradually breaking down into a torrential downpour of insults. Sometimes, the aftermath of these kinds of interactions reinforced the personas and reputations of those that got caught up in it, with the loudest assuming that they were the toughest.

The divide can extend even further, when such disagreements manifest as a schism between two different online communities. As a member of both the Ubuntu Forums and another forum community, I vividly remember the kind of drama the former would pull on the latter. We would organize raids and put promotional branding in our accounts, and then just hang around boards in groups. Our raids were predicated on starting drama simply by being present while technically following the rules, like an annoying little brother who keeps saying “I’m not touching you.”

So it’s not necessarily just a factor of dissonance between values or identities between one group of people or the next. Often, there’s a minor mythology built from tales of past drama between factions.

Enter Twitter

Twitter took this whole dynamic into a new territory by obfuscating away the structure of a forum or different domains that mapped to specific communities. Instead of groups, instead of websites, the atomic units became user, tweet, and hashtag. While the tribal abstractions remained in the loose sense, the performative nature of the social network focused first and foremost on the individual, and by extension, their clout.

Bullying, harassment, and ostracism had existed on communication networks and mediums well before Twitter, but the network’s structure, user emphasis, and sheer availability to anyone with a smartphone gave rise to a dynamic where flamewars could be performed in a public square for everyone to see, where individuals could be harassed at home and at work for doubling down on a political position, and a cycle of endless dogpiling could continue for days on end.

Add in a recommendation algorithm that tells followers what the people they themselves follow engaged with, and you have a dumpster fire that can burn for years.

What about the fediverse?

As members of the fediverse, we sometimes really double down on this idea that the fediverse is special, compared to these other things that came before it. To some degree, that’s true!

We have a network of 400,000+ monthly active users spread over a couple thousand servers that run a medley of different open source server software and speak several federated protocols. More than just a forum or a microblogging platform, the fediverse also integrates with blogging platforms, video portals, music libraries, photo-sharing systems, link aggregators, and more. There are roughly 40+ projects in development right now to continue taking this grand experiment even further.

In spite of all that, though, the fediverse is more of the same. Although nobody initially planned it this way, we’ve all basically shoehorned a bunch of different forums together, with only casual regard for whether they should even be talking back and forth with one another. Imagine if 4Chan, Reddit, SomethingAwful, eBaum’s World, Tumblr, and Newgrounds were all shoved together in this way, and you’ll start to get an idea of why federation can make online communities more complicated.

This isn’t to say that we shouldn’t have federation. Indeed, having the ability to build and connect communities together, and even have some kind of meta-community based on the pockets in the network that you yourself connected to, is itself very powerful. In fact, I believe that this dynamic may hold the potential to eventually kill Facebook and Twitter, and render corporate social networks obsolete.

However, the reality is that every single instance with more than a certain number of users can itself be considered a community of sorts. Aside from differing policies on content and behavior, there can be a dramatic divide in identities and ideological values that can sometimes be completely incompatible. Many of these communities have their own characters, mythos, and power symbols to tell a story about who their leaders are, and those stories are revised and redistributed by people of all sorts, in positive and negative ways.

What can we do about it?

I guess to bring this rambling, barely-coherent mess to a close, my main point is that there will always be drama in online communities. Long gone are those halcyon days when we could blithely assume that every node in the network should federate together, no matter what. There will always be drama, and silencing, and blocking, and little fiefdoms of power within gated digital communities.

People are not a monolith, and the millions of data points where differences can surface between two individuals are just an element of life itself. To some degree, that variability is part of what makes the world go.

From my point of view, the best things we can do involve tooling, policy, and administration. At the user level, giving people the ability to police their own timelines and wield control over what they see or don’t see is paramount to keeping this whole network going. This part of the equation has been steadily improving: it used to really suck in the Diaspora days, and the early Mastodon days practically required people to know RegEx to filter unwanted content out of their feeds. Still, we can and should make this easier and better for everybody.
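For a sense of what that era demanded of users, here’s a minimal sketch of a regex-based keyword filter of the kind people once maintained by hand. The mute pattern and the sample statuses are invented for illustration; nothing here is a real client’s API.

```python
import re

# A hand-written mute pattern of the sort early clients expected users to
# maintain themselves. The keywords are made up for this example.
MUTE_PATTERN = re.compile(r"\b(crypto|nft|discourse)\b", re.IGNORECASE)

def filter_timeline(statuses):
    """Drop any status whose text matches the mute pattern."""
    return [s for s in statuses if not MUTE_PATTERN.search(s)]

timeline = [
    "New video up on my channel!",
    "Hot take on the latest Discourse",
    "Check out this NFT drop",
]
print(filter_timeline(timeline))  # only the first status survives
```

Modern server-side filters do roughly this on the user’s behalf, which is exactly the kind of “easier and better” tooling worth pushing for.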

When it comes to policy, it often makes sense for instances to federate with others that hold similar policies to their own. After all, if you’re bridging different communities together, it might make sense to agree on what behavior and terms are acceptable. Mastodon has a server covenant that aims to do exactly that. It’s not a silver bullet by any means, but it serves as one attempt to provide common ground for instances to work together.

This finally brings us to administration. It’s all very good to have policies that are agreed upon by different interconnected web communities, but policy falls by the wayside if there’s nobody to enforce it. Larger communities often have a team of moderators to deal with user reports and complaints, and sometimes feel obligated to take proactive measures to protect their own communities from harassment. Sometimes these measures go too far, such as when blocklists are haphazardly curated and shared with inadequate research into who should actually be blocked. Even so, it’s better than nothing.

As one last thought, the most important part of the network boils down to individual behavior. Conflict can’t always be avoided, and some principles can never be compromised on between two people. That being said, we are the ones driving this machine in the first place! It behooves us to try to develop a sense of empathy, patience, and kindness with people who are very different from ourselves, rather than endlessly feed into a clique dynamic where we all shit on each other to pass the time.