Discussion: A Next Step In Federated Data Protections

I am building out go-fed/apcore. apcore is a server framework that provides both an ActivityPub server and an OAuth2 server in one, with support for things like Webfinger, NodeInfo2 (soon), HTTP Signatures, and some basic ActivityPub handlers. It’s super hand-holdy. I’d like to propose a flow like the following for inclusion in apcore, and I welcome feedback.

motivating scenario

Given: I am operating an apcore app on Instance A; there is User B on an apcore app on Instance B; User B has Data1, Data3, and Data4 that have federated to my Instance A:

  1. User B goes to Instance A
  2. User B clicks on a “federated data checkup” button (I don’t have a good name for this)
  3. This initiates the OAuth2 Authorization Code grant flow between Instance A and Instance B: Instance A redirects User B to Instance B, User B logs into Instance B, and Instance A obtains an authorization code. An authorization code is issued, but no scopes are granted. Instance A just knows Instance B said “User B is authorized to act as User B”, and only for the current web session (the only time the auth code is tied to the user). A sketch of this exchange follows the list.
  4. Now User B can do the following, until their session ends:
  • Request Instance A to count/list all federated Activities and Objects where User B is the author or attributedTo.
  • Request Instance A to delete some or all of the data counted/listed, with the caveat that Instance A may still fetch and cache their content in the course of routine operations serving its users. (Sketches of both operations also follow the list.)
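
To make step 3 concrete, here is a minimal sketch of Instance A’s side of the grant using the standard golang.org/x/oauth2 package. This is not apcore’s actual API: the URLs, the client registration, the handler paths, and the constant state string are all illustrative assumptions (real code must bind a random state value to the session).

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/oauth2"
)

// Instance B's OAuth2 endpoints as configured on Instance A. The nil Scopes
// slice is the point: no scoped powers are requested, the grant only proves
// "User B is authorized to act as User B".
var instanceB = &oauth2.Config{
	ClientID:     "instance-a",                // assumed pre-registered client
	ClientSecret: "registered-client-secret",  // obtained during registration
	RedirectURL:  "https://instance-a.example/checkup/callback",
	Scopes:       nil,                         // no scopes granted
	Endpoint: oauth2.Endpoint{
		AuthURL:  "https://instance-b.example/oauth/authorize",
		TokenURL: "https://instance-b.example/oauth/token",
	},
}

func main() {
	// Steps 2-3: the "federated data checkup" button redirects User B to
	// Instance B to log in. A real server must use a random per-session
	// state; a constant is used here only to keep the sketch short.
	http.HandleFunc("/checkup", func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, instanceB.AuthCodeURL("per-session-random-state"), http.StatusFound)
	})

	// Instance B redirects back here with an authorization code, which is
	// exchanged and tied to the current web session only.
	http.HandleFunc("/checkup/callback", func(w http.ResponseWriter, r *http.Request) {
		tok, err := instanceB.Exchange(r.Context(), r.FormValue("code"))
		if err != nil {
			http.Error(w, "code exchange failed", http.StatusBadGateway)
			return
		}
		_ = tok // a real server would keep this in the session, never durably
		fmt.Fprintln(w, "checkup session established for the remote user")
	})

	http.ListenAndServe(":8080", nil)
}
```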
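And a sketch of the two step-4 operations, assuming a hypothetical federated_data table indexed by actor and attributedTo IRIs; apcore’s real storage layer will differ.

```go
package main

import (
	"context"
	"database/sql"
)

// countFederatedData counts cached federated Activities and Objects whose
// actor or attributedTo matches the remote user authenticated in step 3.
// Table and column names are hypothetical.
func countFederatedData(ctx context.Context, db *sql.DB, actorIRI string) (int, error) {
	var n int
	err := db.QueryRowContext(ctx,
		`SELECT COUNT(*) FROM federated_data
		  WHERE actor = $1 OR attributed_to = $1`,
		actorIRI).Scan(&n)
	return n, err
}

// deleteFederatedData deletes those cached copies. As noted above, Instance A
// may still re-fetch and re-cache the content during routine operation.
func deleteFederatedData(ctx context.Context, db *sql.DB, actorIRI string) (int64, error) {
	res, err := db.ExecContext(ctx,
		`DELETE FROM federated_data WHERE actor = $1 OR attributed_to = $1`,
		actorIRI)
	if err != nil {
		return 0, err
	}
	return res.RowsAffected()
}
```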

The benefits to User B:

  • Never has to create an account on Instance A
  • Does not grant Instance A any scoped powers
  • Sees their data on other instances (digital right to know)
  • Has the power to self-service deletion (digital right to delete) of their data cached on other instances.

a big can of worms

In addition to listing and deleting their cached data, apcore could also let User B opt into a setting that says:

  • Never process my federated data. I understand Instance A can still process derivative content produced by others that references or contains my data (such as Announces). This requires Instance A to record me as having opted in (sketched below).
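
To illustrate, here is a sketch of how Instance A might record and enforce that opt-in, checked in the inbound federation path before anything is persisted. The processing_opt_outs table and the hook point are assumptions, not apcore’s real extension points.

```go
package main

import (
	"context"
	"database/sql"
	"errors"
)

// errOptedOut signals that the author asked this instance to never process
// their federated data, so the incoming payload should be dropped unstored.
var errOptedOut = errors.New("author opted out of processing")

// hasOptedOut reports whether actorIRI is recorded as having opted out.
func hasOptedOut(ctx context.Context, db *sql.DB, actorIRI string) (bool, error) {
	var out bool
	err := db.QueryRowContext(ctx,
		`SELECT EXISTS(SELECT 1 FROM processing_opt_outs WHERE actor = $1)`,
		actorIRI).Scan(&out)
	return out, err
}

// checkBeforePersist would run when federated data arrives, before it is
// stored. Derivative content by other authors (e.g. an Announce wrapping the
// opted-out user's post) is attributed to the announcer and so still passes
// this check, matching the caveat in the bullet above.
func checkBeforePersist(ctx context.Context, db *sql.DB, authorIRI string) error {
	out, err := hasOptedOut(ctx, db, authorIRI)
	if err != nil {
		return err
	}
	if out {
		return errOptedOut
	}
	return nil
}
```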

how is this different than blocks?

Blocking, muting, ignoring, and similar actions would, in the scenario above, live on Instance B. This proposal focuses on a different problem (a user’s digital rights) and attempts to extend it to peer instances, bringing User B’s digital rights to Instance A as well. The end-result effects are similar, however.

closing thoughts

This is a path I’d like to pursue with apcore if possible, so that all app developers who use it are encouraged to interoperate and respect users’ digital rights in this way.

I understand other federated software isn’t necessarily an OAuth2 server and would require work to reach parity on this.

Edit: Link to fediverse discussion: https://mastodon.technology/@cj/103121031059732294

First: I’m not proceeding with this proposal.

I’m following up on this thread after some discussion. Unfortunately, I think I got blocked by some folks over entertaining this idea, which is a shame. But I’ll try to fairly characterize the conversations that did come out of it. I think it’s important to document for posterity, so that others can know this idea has been explored, feedback was gathered from the Fediverse, and the idea has been deemed terrible.

What was this proposal? It was a way to self-service GDPR-like end-user digital rights on the Fediverse, from peer servers.

Let’s go through the few pros I built up in conversations, letting the idea run on its own legs:

  • This is not a security feature. This is to let a peer know “the authorized user B can exercise the digital rights of user B”.
  • Quick, easy self-service of the “right to know” and “right to be forgotten” for one’s data on good-faith federated peers
  • Raises the industry standard to self-service, which could provide a helpful baseline if things wound up in court anywhere

And the cons:

  • Relying on the GDPR absolutely sucks, for multiple reasons.
    • Sending GDPR letters should be treated as dangerous (meatspace vulnerability).
    • GDPR is not worldwide comprehensive. Hello, USA.
    • To do legal enforcement against a bad actor would require significant personal/group resources.
  • The existing Delete activity with user targeting can already act as a “right to be forgotten”.
  • Bad actors will just ignore these requests anyway. This is particularly dangerous because they can use this information to target people.

It took me a while longer to realize that once the law gets involved, it is already way too late.

My conclusion: don’t do it. It’s not worth letting the good actors be good if it means the bad actors get their way.
