Deep dive · March 19, 2026

Our Federated Model

Federation Introduction

Server federation, as a concept, connects multiple independent server environments to enable unified access, resource sharing, or authentication across trust boundaries. The idea is simple, but it can be hard to visualize how it actually differs from a conventional setup, so let's walk through it.

Centralized Model vs Federated Model vs Our Model

The Typical Server

With a typical server, the server is the trusted party. It performs tasks for users after validating that they have permissions, and it manages the data however it sees fit.

Often a server will have a database of user logins, a database of user permissions, and a database of user data. To check that a user is who they claim to be, the server asks them to log in and compares the credentials they provide against those stored. To check whether a user has permission to do something, the server looks up the user's ID in a permissions table. The same applies to user data. The server functions like a clerk or secretary: it provides the service, and you trust it. Measures like password hashing or data encryption secure data in case of a breach, much like locking the cabinets and drawers in an office, but the server itself retains access. This is why it can do a password reset or account recovery — from the server's perspective, these things can be changed arbitrarily.
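As a concrete sketch of the "trusted clerk" model above, here is a minimal credential check: the server stores a salted password hash and compares login attempts against it. All names here (`USERS`, `register`, `verify_login`) are illustrative, not from any real codebase.

```python
import hashlib
import hmac
import os

USERS = {}  # user_id -> (salt, password_hash): the server's credential table

def register(user_id: str, password: str) -> None:
    # The server derives and stores a salted hash, not the raw password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    USERS[user_id] = (salt, digest)

def verify_login(user_id: str, password: str) -> bool:
    # The server re-derives the hash from the attempt and compares.
    if user_id not in USERS:
        return False
    salt, stored = USERS[user_id]
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(stored, attempt)

register("alice", "hunter2")
print(verify_login("alice", "hunter2"))  # True
print(verify_login("alice", "wrong"))    # False
```

Note that even with hashing, the server is the arbiter: it can overwrite the stored entry at will, which is exactly what makes password resets possible in this model.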

We disliked that.

Federated Servers

With federation, you retain the same model, but you diversify the implementation.

With a centralized service like Instagram, one group controls the servers. If you don't trust Meta, your only option is to not use Instagram. With federation, a party turns the server and app code into a protocol any group can use (they may also publish their source code). If users don't trust Group A, they can choose to use only Group B's servers. They still need to trust Group B, but they have diversity of choice. There can also be diversity of implementation, i.e., two different groups can make different apps that connect to the same servers. Consider email, where there are multiple providers to choose from on both the sending and receiving side. An example of a federated alternative to Instagram is something like Pixelfed.

Our Server Relay

We take that a step further by treating the server as untrusted: not a centralized controller, not a trusted service, but simply a party to the exchange. The reasoning is that we're trying to deploy quickly and at scale, and there isn't significant time for users to develop trust in new entities. Centralizing trust, even with a variety of groups to choose from, is therefore a gamble. We also have little interest in personally storing user data or running the backup and sharing services ourselves; if the funds become available we will, but we wanted to open it up. The solution we arrived at was removing (or at least dampening) the need for trust.

The server does not have a user's login information. Instead, users rely on their encryption keys: a user has a public key that can encrypt things, and a private key that can decrypt things. The user shares their public key with the server. If the server wants to verify a user, it generates a secret, say a random sentence, and encrypts it with the user's public key. It then asks the party claiming to be that user what the secret is, given the encrypted message. If the party on the other end can answer correctly, they either hold the (private) decryption key or made a lucky guess. This process is called a challenge. It also means a user can upload encrypted volumes to the server, and the server can be sure it's the user without knowing what's in the volumes. A similar process happens for sharing with other users.
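The challenge flow above can be sketched with textbook RSA. The tiny fixed primes here are purely for readability and are wildly insecure; a real implementation would use a vetted cryptography library with proper key sizes and padding.

```python
import random

# User's toy RSA keypair: public (e, n), private (d). From p=61, q=53.
n, e, d = 3233, 17, 2753

def encrypt(m: int) -> int:
    # Anyone holding the public key can encrypt.
    return pow(m, e, n)

def decrypt(c: int) -> int:
    # Only the private-key holder can decrypt.
    return pow(c, d, n)

# Server side: pick a random secret and encrypt it to the user's public key.
secret = random.randrange(2, n)
challenge = encrypt(secret)

# Claimed user: decrypt the challenge and answer with the secret.
answer = decrypt(challenge)

print(answer == secret)  # True: the responder holds the private key
```

The server learns nothing from the exchange except that the responder could decrypt the challenge, which is exactly the property the relay needs.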

An example exchange

When a Friend wants to download data, another process is needed.

First, the Friend can get the user's public key. The server knows which volumes the Friend can request, and will only serve those volumes. More importantly, the user holds another key — the VEK — which encrypts individual volumes. Without it, even a party who obtains the volumes can't read them (which protects against, e.g., a server that implements sharing poorly). The System can provide the Friend their VEK with a few layers of security. The first is TLS, ordinary website encryption. This protects against anyone watching the connection, but the server itself can decrypt it. However, the Friend and System can talk securely by exchanging keys.

Imagine encryption like putting something in a locked box. Your public key is a lock that only opens to the key used to make it, and your private key is that key. You can copy your lock or your key, and both you and your friend have your own versions. You want to send messages the server can't read, and the server is an armored car.

If your friend has your lockbox (which everyone does, because it's public), they can put a copy of their key in it. They send the box to you, the only person who can unlock it. You unlock the box, take out the copy of your friend's key, and combine it with a copy of yours to create a new key, say A+C. You can then lock the box using your friend's key and send it back to them. Your friend, C, did the same thing with System B, creating the new key B+C. Your friend never saw your private key, A, so they can't solve your challenges or make new keys using it. The server never saw any of the keys, and so has no idea what was exchanged.

Now you have a shared way to securely message, and can exchange your VEK for the volumes your friend has.
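The "combine my key with yours to get a shared key" step in the lockbox analogy is essentially a Diffie-Hellman exchange. Here is a minimal sketch with a toy 13-bit prime; real exchanges use large standardized groups or elliptic curves, and the specific parameters here are illustrative assumptions, not the app's actual protocol.

```python
import secrets

# Public parameters everyone (including the relay) may know.
p, g = 8191, 2  # toy prime and generator; far too small for real use

# Each side picks a private value it never sends anywhere.
a = secrets.randbelow(p - 2) + 2  # the System user's private value ("A")
c = secrets.randbelow(p - 2) + 2  # the Friend's private value ("C")

# Each side sends only its public value through the relay ("the armored car").
A_pub = pow(g, a, p)
C_pub = pow(g, c, p)

# Each side combines the other's public value with its own private value.
shared_system = pow(C_pub, a, p)  # the "A+C" key, computed by the System
shared_friend = pow(A_pub, c, p)  # the same key, computed by the Friend

print(shared_system == shared_friend)  # True: both arrive at the same secret
```

The relay sees `A_pub` and `C_pub` go past but can't derive the shared key from them, which is why the VEK can then be sent encrypted under that key without the server ever learning it.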


The Friend could do some malicious things, like saving the VEK to decrypt data even after their permission is revoked. But our app doesn't support this (we actively try to prevent it), the relay server's enforcement of permissions acts as a second layer of defense, and the user's ability to re-encrypt volumes with a new VEK acts as another (though this last isn't presently implemented).

The federated relay acts as an independent party to almost all operations. It’s expected to fulfill tasks, and the security model assumes it will try to snoop on data — which is why all data is encrypted even to the server.

The federation model is not what produces the security behavior; that comes from something closer to a zero-trust model. Federation is just the independent-servers model. We've combined the two. Hopefully this provides some explanation.

Model Perspectives

Relay Server

The server has some general ability to see metadata, and oversee its own health. Its feature set is:

  • Relay operators can see number of volumes (overall & by type), padded sizes of volumes, total storage (including metadata), but not contents of user volumes
  • Relay operators can see unique user IDs and selected handles, user-occupied storage, and whether a user has registered as a friend or system. However, UUIDs are random and regenerate if the app is reinstalled, meaning users cannot be identified across relays or device migrations unless they opt to share that information or reuse handles. Because relays can be independently operated, a handle is only reserved on the relay it's registered with, so users need to exercise discretion when sharing (a familiar handle doesn't guarantee it's your friend).
  • Relay operators can see number of active friend links but not links themselves
  • Relay operators can see the sharing relationships that exist on the server, and what has been shared, but not the content of what's shared.
  • Relays have built-in rate limits, which operators can configure. Relays support TLS, with self-signed or CA-signed certs
  • Relay operators can see the number of active sessions, can invalidate sessions en masse, clean up stale data, and spot unusual server/user behavior (mass challenge attempts, connection IPs (unavoidable; handling is relay-specific policy), unusual storage growth), and can delete users from the server.
  • Relay operators can perform general maintenance, but cannot fully assist users due to the above restrictions.

Clients

The clients are privy to significantly more data, assuming data has been properly shared & the server has allowed exchanges.

  • System users can always access their own data, and see it whenever the associated feature is enabled.
  • Users can share with volume-level granularity, giving permission to specific features. This means users can do things like share analytics insights but not member identities, share journals but not members, share members + journals + analytics but not polls, etc.
  • Federated servers can be run with self-signed certificates, and users can approve self-signed certificates. This allows users to run servers entirely locally, or within, say, their own house
  • Sharing & System friendship always requires System approval. Even if a user has a System's invite code, that System has to do a final confirmation to accept them as a friend
  • Users can have optional handles, which the server can see, and are reserved to the user
  • Data is private even to server operators
  • The Friend app client prevents screenshots and screen recording of shared data.
  • Users can sync manually, or the app can sync automatically at intervals
  • The Friend app does not write shared data to disk and does not cache decrypted data, so it never keeps live copies insecurely; unless the Friend captures the key exchange, the System user can invalidate their sharing to prevent further access and decryption