Autonomous identity for the pluriverse based on OAuth/OIDC

Both the primordial fediverse of ActivityPub and the federated Matrix have been mulling over various private-key approaches to the ideal of decentralized or nomadic identity, but I think we’ve been trying to solve too many deep-rooted problems in one go. This has kept us in a holding pattern for many years:

Meanwhile there’s a major convergence of OAuth/OIDC support across apub applications, Matrix is going all-in on it as its root default, and other social web protocols are tagging along as well.

ActivityPub

Matrix

Other

  • Solid-OIDC

  • 2023 Protocol Roadmap | AT Protocol
    “Auth refactor: We want to improve both third-party auth flows (eg, OAuth2), and to support verifiable inter-service requests (eg, with UCANs). These involve both authentication (“who is this”) and authorization (“what is allowed”). This work will hopefully be a matter of integrating and adapting existing standards.”

Core Identity

I’ve been thinking intently about identity since the start of this year:

This line of thinking brought me to a framing that helps me categorize different web applications for my own purposes:

  • Stream = Declarative, linear flow.
  • Bonfire = Discursive, omnidirectional flow.
  • Garden = Contemplative, bottom-up flow.
  • Identity = Conduit of flows.
    Your identity (ID) both inhabits and holds the places and behaviors which the Bonfire, Stream and Garden symbolize, simultaneously experiencing and expressing itself through those outlets.

These metaphorical constructs exist in the web as concrete protocols:

There are no strict boundaries in the Stream/Garden/Bonfire trio. A blog for instance can behave like a bonfire when it is more discursive through its comment sections, it can be consumed in stream-form via an RSS reader, and it takes the shape of a garden when it’s deeply interlinked and less concerned with chronology.

Likewise, Reddit’s individual threads are bonfires, its frontpage feed is a stream and its ‘best of last week/month/year’ is an organically structured garden.

There’s no need to agree on the exactness of these analogies. What I do hope we can agree on is that identity ought to be distinct from any specific authorship protocol.

Mitigating sysadmin authority

Like the separation of church and state, it seems prudent to keep the management of our digital identities separate from our social network servers.

My default ‘fedi ID’ is currently hosted on writing.exchange/@erlend - a Mastodon instance. This is already a big improvement over letting Twitter (or Google, or GitHub) be the chief custodian of my digital identity, since they won’t even let me move elsewhere if I’d like to. Mastodon makes that possible, thus giving me some genuine ownership over my contact list.

But I’m still beholden to an external server admin. Should writing.exchange go down, that’s my fedi ID gone, along with all my followers (contact list). This has already happened on several fediverse instances run by hobbyists who, for whatever reason (lost interest, finances, health, technical issues), stopped running their servers, in some cases with no warning whatsoever.

Users on a fediverse server are also disempowered in more subtle ways:

  • If an instance admin defederates from another instance in the fediverse, users won’t be notified of any followers they had from that instance; they’re just silently lost
    (Apparently Mastodon has plans for this, but I don’t think instances should be fully trusted with this responsibility anyhow.)

  • A user can be banned, their account made inaccessible, unable to take their data elsewhere. Regardless of legitimate reasons for banning users, invalidating their default online persona is a kind of digital death sentence without opportunity for appeal. Very few offenses actually merit that kind of punishment.

Bluesky solves this in a web3 kind of way as a self-authenticating social protocol, based on a combination of DNS names and Decentralized Identifiers. However, some number of unknown unknowns remain in this space, and for now their solution depends on a centralized authority to function.

That said, I do think some limited degree of centralized authority is necessary as a first step out of our current identity entanglement. The internet runs on all sorts of centralized technologies - e.g. Let’s Encrypt - and we go along with it because their innate openness grants us credible exit. The loss of Let’s Encrypt isn’t an existential threat to my online persona, but the loss of my Google/GitHub/Mastodon account very much is.

Domain-based accounts

The shortest path to a baseline of identity autonomy as far as my fedi ID is concerned would be if Mastodon allowed me to log in with my own OIDC account, instead of insisting on making one for me.

I can imagine three different degrees of OIDC autonomy:

  • Completely self-hosted OIDC provider.
  • 3rd party OIDC provider service, entrusted to do auth on behalf of my own domain.
  • 3rd party OIDC provider service that provides a (sub-)domain for me.

Even the last option is an improvement over the status quo: even though I’m still entrusting my identity management entirely to a 3rd party, at least my identity isn’t tightly bundled together with my social networking persona. And that 3rd party would generally be more trustworthy/reliable than your average ActivityPub instance if institutional actors similar to Let’s Encrypt (Mozilla, the Linux Foundation, etc.) were to step up as providers.

So what I want is for Mastodon to let me sign up on its server via my personal domain name, the same way I currently log into Mastodon/ActivityPub clients via my personal (but not autonomous) Mastodon domain.

I.e. when signing up for an account on an instance like writing.exchange or mastodon.social, I should get to sign up with erlend.sh via web sign-in.

:warning:
My inability to follow the exact technicalities of authentication specifications is a limiting factor here, so I will need some help correcting or expanding upon what comes next.

I know what I want is something IndieAuth-like, which Mastodon seems close to supporting already. But it seems more accurate to refer to the functionality I need as plain web sign-in, because IndieAuth is apparently not OIDC compatible.

But that general method of signing in with one’s personal domain is essential for self-hosted/indie OIDC to work as a form of SSO in spaces outside of your own control (e.g. a Mastodon instance), because we can’t rely on prefilled provider options such as Google, GitHub and Facebook.
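To make this concrete, here is a minimal sketch of the discovery step that web sign-in implies: treat the user-supplied domain as an OIDC issuer and look up its provider metadata at the well-known path defined by OpenID Connect Discovery 1.0. The domain `erlend.sh` is just my example from above; nothing here is Mastodon’s actual implementation.

```python
# Sketch of web sign-in, step 1: derive the OIDC discovery document URL
# from a bare domain the user typed into a login box.
REQUIRED_METADATA = ("issuer", "authorization_endpoint", "token_endpoint")

def oidc_discovery_url(domain: str) -> str:
    # OpenID Connect Discovery 1.0 publishes provider metadata at a
    # well-known path under the issuer URL.
    issuer = "https://" + domain.strip().strip("/")
    return issuer + "/.well-known/openid-configuration"

def usable_for_sign_in(metadata: dict) -> bool:
    # A relying party needs at least these endpoints to run an
    # authorization-code flow against a self-hosted provider.
    return all(key in metadata for key in REQUIRED_METADATA)
```

A consuming instance would fetch that URL, check `usable_for_sign_in` on the response, and then run an ordinary authorization-code flow against the listed endpoints.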

Web sign-in presents a middle road between the broken status quo and the ideal state we need to get to. It doesn’t solve the subtle lock-in effects of a traditional fediverse instance on its own, but it realizes the essential first step of making netizens’ online identity independent from their impermanent choice of fediverse instance.

As such, domain-based accounts, especially when self-hosted, serve the function of a minimum-viable ‘nomadic identity’.

A minimal definition of user agency:

  • Own your ID
  • Own your content
  • Own your contacts

Another way to put this is that I want to build, for the fediverse, the equivalent of what my former boss Jeff Atwood envisioned for Discourse as a “gravatar on steroids”.

Implementation

Whether Mastodon will be the pioneer of OIDC-powered web sign-in remains to be seen.

This Kitsune issue served as a seeding ground for the ideas I’m conveying with some degree of greater clarity here. Kitsune is a particularly exciting candidate for this exploration because it aspires to support multiple domains as well as domains as usernames, which aligns perfectly with domains as root authorities of identity.

It’s also made in Rust, aligning it particularly well with Rauthy, which is, to my knowledge, the most mature OIDC server/provider around that is optimized for self-hosting. I’ve previously written about how Rauthy could essentially be used as a compatibility layer between ActivityPub and Solid, but Solid is in no way a prerequisite for this ‘autonomous fedi-ID’ MVP to work. The storage layer backups of post content & contacts could just as well be built directly on top of Rauthy, or an experimental protocol like Solid-lite.

Proving this out will require:

  1. An implementation of web sign-in/up in the likes of Mastodon or Kitsune.
  2. Support for web sign-in in Rauthy.
  3. A hosted instance of Rauthy, configured for ActivityPub SSO.

Personally I’m also very interested in the fully self-hosted use case of Rauthy, which could be operated through a Tauri desktop app and bundled together with a basic site generator:

This topic serves as an open-ended call to action for anyone who might be interested in pursuing this together with me and a bunch of other folks such as the maintainers of the Rust projects mentioned herein.

4 Likes

These problems are not unsolvable and in fact were solved a long time ago (see Hubzilla and Zot).
FEP-c390 is almost 1 year old; it is a very simple mechanism that can be quickly implemented. It’s not like we’re desperately looking for solutions here in the Fediverse. We have them.

OIDC is not really a solution, it is just kicking the can down the road. As an ordinary user, you still don’t own your identity; instead of the instance operator, your identity is controlled by a 3rd party (identity provider). It will open the door for things like “Sign In With Google”, literally a recipe for re-centralization and corporate capture.

2 Likes

I’m a big fan of FEP-c390 and have been advocating for it all year. I do believe it’s key to a longer term solution, strong emphasis on long term. It has yet to get any traction in the wider fediverse. Meanwhile, OIDC is implemented (or on its way) practically everywhere.

I see it as an intermediary step. Something for the ecosystem to converge on, leading up to the much larger coordination challenge of truly decentralized identity.

You are making perfect the enemy of good here. If you self-host your own OIDC provider you do indeed own your identity, and doing so is much more realistic than running a resource-intensive, always-on apub server.

And even if you rely on a 3rd party host, an important separation of concerns has taken place, thus taking one step closer to full autonomy.

That’s simply not true. Web sign-in (like IndieAuth) via OIDC is the antidote to the auth-monopoly of the megacorps.

OAuth/OIDC is already implemented in most of the popular apub applications, and they’ve opted not to add ‘social logins’. Nothing I’m proposing here would change that.

2 Likes

What do you mean by “full autonomy”? And how is OIDC one step closer to that?

Autonomy is when you control your private key; OIDC is a very different thing. Given an option, most people would choose convenience and use a 3rd party provider (no autonomy). Then lock-in happens and it gets much harder to switch to a better technology. Also, people who are able to self-host have no reason to deploy an OIDC provider, because the footprint of a good AP server is comparable to that of Rauthy (which you used as an example).

OAuth, but not OIDC.

I don’t understand how just signing in with OIDC changes anything about whether you own your identity, or have autonomy with it, or whatever. I think you’re making some specific assumptions about what an identity even is, and it would help if you could unpack those.

As far as anyone on the fediverse is concerned, my identity is my actor ID, which is a URL controlled by my server. OIDC doesn’t change that. If I sign up for a 2nd service, the fediverse sees that as two actors. So this makes OIDC nice to have, but not advancing any real goal related to identity. Even if you host your own IDP.

2 Likes

It’s not completely orthogonal though. OIDC can serve as an arbitrator of private keys:

This sort of thing paves a path towards incremental adoption of private keys rather than working in opposition to it as some either-or proposition. It’s not a lock-in scheme, it’s an onboarding mechanism.

An AP server requires persistence. An identity server does not. You only need to run an identity server when you’re authenticating yourself online, which makes it particularly well suited to self-hosting even by non-technical people:

  • Persistent uptime is not a concern.
  • Attack surface is minimized by limited windows of uptime.
  • Total resource usage is orders of magnitude smaller than an always-on server :leaves:

Right, thanks for making this clear. That’s the kind of technical exploration I need help with thinking through here, in a ‘it could work if…’ kind of way rather than ‘it won’t work because’.

For instance, could a lightweight OIDC provider like Rauthy be extended to act as an ‘actor ID provider’? That would let me extricate my actor ID from whichever apub server I am effectively renting as a pipe-on-demand, and properly own it.

The Rauthy maintainer has clearly expressed openness towards this type of functionality. They already added WebID logic to accommodate Solid.

2 Likes

Could it work? Maybe. Other servers are going to retrieve your AP objects by making GET requests to the ID URL*. So you could conceivably have some situation where the IDP hosts your actor object while the fedi server does all the federated messaging, and presumably hosts all your content. But then the fedi server likely also needs to be able to modify the actor object stored by your IDP, because the fedi server has to have access to your private keys so that it can authenticate you to its peers, while the IDP server needs to serve the corresponding public key as part of your actor object so that the peers can verify your signatures.
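A rough sketch of the split actor object this describes, with entirely hypothetical domains (`id.example` as the IDP, `fedi.example` as the messaging server): the IDP serves the actor document and its public key, while the inbox/outbox endpoints point at the fedi server.

```python
# Hypothetical split: the identity provider hosts the actor + public key,
# the fedi server hosts the message-passing endpoints. Not a spec, just
# the shape implied by the discussion above.
actor = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        "https://w3id.org/security/v1",
    ],
    "id": "https://id.example/users/erlend",  # served by the IDP
    "type": "Person",
    "preferredUsername": "erlend",
    "inbox": "https://fedi.example/users/erlend/inbox",    # delegated
    "outbox": "https://fedi.example/users/erlend/outbox",  # delegated
    "publicKey": {
        "id": "https://id.example/users/erlend#main-key",
        "owner": "https://id.example/users/erlend",
        "publicKeyPem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n",
    },
}
```

The coordination problem described above is visible here: whoever holds the private key matching `publicKeyPem` can speak as this actor, so the IDP and the fedi server have to agree on key custody.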

This all gets very complicated very quickly, and for a benefit that seems pretty marginal. I think it’s also straining the role that identity providers play, which is really to be an authentication authority within an IAM system. I think in the past you’ve proposed a standard of being able to effect a credible exit from one fedi host to another. I think that’s a lot more manageable, and the benefit is clearer. The main technical requirement to accomplish that is to be able to take your private key with you and import it to the new server so that it can generate activities for you that 3rd parties will recognize as authentic.

*As I understand it, Mastodon doesn’t do this consistently, and will only do actor lookups via webfinger in many situations. But that’s not a behavior envisioned by the AP spec, and not the way any other fedi software works, as far as I know.

1 Like

All very complicated. Why not simply use nostr public keys, like soapbox/ditto? This is going to work, is already in progress in major clients, and adds a massive developer base, ecosystem and apps. Also compatible with Solid (OS / lite).

This will be working quite soon; I’d suggest hopping on board.

Nevertheless, it will be interesting to see if other approaches are tried.

It’s one 64 char field in the profile. Allows sigs, encryption, migration, portability, single sign on, and much more.
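As a quick illustration of how lightweight that field is: a nostr public key is a 32-byte x-only secp256k1 key, conventionally stored as 64 lowercase hex characters, so validating it is a one-line check. The field name and this check are just a sketch, not a fixed ActivityPub property.

```python
import re

# A nostr pubkey in hex form: exactly 64 lowercase hex characters
# (32 bytes). This is the "one 64 char field" referred to above.
NOSTR_PUBKEY_RE = re.compile(r"^[0-9a-f]{64}$")

def valid_nostr_pubkey_field(value: str) -> bool:
    return bool(NOSTR_PUBKEY_RE.fullmatch(value))
```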

An issue relevant to this topic was raised by @bumblefudge in swicg:

A reply from @bengo seems to frame the technical possibilities at hand much better than I was able to:

  • @evanp’s FEP-d8c2 here is relevant and, upon skim, generally aligned with the way I’d expect someone to try ‘oauth 2.0’ against the oauth... affordances in the AP spec. fep/fep-d8c2.md at main - fep - Codeberg.org

  • in SocialWG in 2015/2016, I was implementing an OpenID Connect provider (using pyoidc) at the time (to power accounts.livefyre.com), and I was quite happy with it for providing a much more specifically implementable set of behaviors than the ‘oauth 2.0’ framework, while still being an implementation of that framework, and iirc I suggested at the time maybe we should encourage OIDC for Dynamic Client Registration, etc.
    I believe there was a lack of consensus in the group both on whether to make normative language around auth and, if we could agree on that, on what would make sense to recommend. On top of that, to go to WG, I recall that quite a few normative references had to change to informative because, for example, there weren’t necessarily good ‘final’ TR/RFCs of some of this stuff for the ActivityPub TR to rely on. i.e. we needed to go make auth standards that would be official enough to be referred to from W3C TRs (or FEPs) in the future.

  • +1 OIDC Dynamic Client Registration and e.g. I also like how it specifies encoding the OAuth 2.0 Authorization Request in JSON (because the application/x-www-form-urlencoded encoding is less expressive and tedious to translate between as your auth{n,z} requests get more complex)

  • In general I’d like for there to be well defined profiles for going through this process and just using a cryptographic public key as the oauth client id. ‘Client Registration’ for a noncryptographic client id shouldn’t be required for everyone. Not everyone needs a database of metadata about clients, and in fact it is an operational expense and data liability that many would probably rather not take on. (I would rather not)

    • I would also like to put on people’s radars ~OAuth 3~ GNAP which encourages cryptographic authentication for clients as well. I encourage anyone to sketch a GNAP profile for ActivityPub. But I also think it’s important to have OAuth2-related profiles that build upon the already-in-TR oauth vocabulary items defined by AP. Since this issue is mostly about OIDC/OAuth2, i encourage GNAPPy discussion elsewhere to not derail the thread.
  • I also strongly recommend folks become familiar with ‘Self-Issued OpenID Connect Providers’ aka SIOP, which is one of the things about OIDC I liked so much from 2015-2016, but felt abandoned for awhile until getting more adoption recently e.g. by the European Union

    • Final: OpenID Connect Core 1.0 incorporating errata set 1
    • Self-Issued OpenID Provider v2 - draft 13 (quote edit)
    • imho more ActivityPub apps, whether they are clients or the web-ui served by ‘instances’, should have affordances for an end-user bringing their own authentication process to the instance, instead of assuming that your ‘identity server’ and your ‘activitypub server’ are always (or even should be) operated by the same folks. My preference is to have a social web where I bring my own identifier to any number of ActivityPub Clients and Servers (which do not try to be my identifier-provider as well), and thus I have both ‘single sign on’ and ‘portable identity’ and whatnot by virtue of my ActivityPub Actor Identifier not being controlled by the provider of my inbox/outbox servers (and I can switch/compose providers whenever I want sans lock in).
  • Actor Identifier Resolver, Authorization Server, ActivityPub Inbox Server, and ActivityPub Outbox Server, should all be able to be provided by different domains and service providers, and I should be able to bring my own identifier to all of them, whether that identifier is rooted in my own domain name (e.g. https://bengo.is/actor.json), from which you can find an Actor and related oauthAuthorizationEndpoint, but also could be rooted in my own cryptographic identity (e.g. did:key:z6MkfbsERaJ7rtJQWMWtYMxNED56bhQMgrNu8CRdUjB5LfRp or perhaps a dweb: URL). One of the benefits of the latter over the former is that claims by that identity may be verifiable even if you are lacking connectivity to resolve a domain name via DNS or ability to connect to any IP addresses you find via DNS, and it removes the Actor Identifier Service and the Authorization Service as semitrusted intermediaries that could act maliciously without accountability (e.g. not allow certain folks to resolve your identifier to an Actor Object without you knowing)
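For reference, the Dynamic Client Registration step discussed above boils down to POSTing a JSON document like the following to the provider’s registration endpoint (RFC 7591 / OIDC Dynamic Client Registration). Every value here is a hypothetical example:

```python
# Hypothetical RFC 7591-style registration request body: a public client
# (no client secret) registering a redirect URI for the code flow.
registration_request = {
    "client_name": "Example Fedi Client",
    "redirect_uris": ["https://client.example/callback"],
    "grant_types": ["authorization_code", "refresh_token"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",  # public client
    "scope": "openid profile",
}
```

The point about cryptographic client ids is that for such clients this whole round-trip, and the server-side client database it implies, could be skipped entirely.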

1 Like

Whoops, sorry I missed this; I opened that github issue impulsively while on a SocialCG call, without checking for prior art/backlinks. Just chiming in to say that SIOPv2 is great. It was created for exactly this purpose (a shim to allow cryptographic wallets/authenticators to manage their own keys in a way that OIDC RPs could recognize and interoperate with), with the explicit goal of making an upgrade path from OIDC to portable/self-managed identifiers and authentication. It’s been mainline in the current version of OIDC for a year now, give or take, and might be helpful in letting mastodon interoperate with users that “BYO keys”, as ben puts it in the github issue linked above

2 Likes

All good. I think some people are easier to reach on github/swicg than here, plus this topic starts off with some long-winded meanderings, so I’m happy to relegate this discussion to a point of reference.

I’ve shared some thoughts on how the MIMI/MLS standard pertains to nomadic identity. Similarly, it’s also related to OAuth/OIDC:

draft-barnes-mimi-identity-arch-01 - Identity for E2E-Secure Communications

5.3. Verifiable Credentials

Certificates and PKI protocols tend to be a bad fit for
authenticating user identities. Systems like SAML [saml] and OpenID
Connect [oidc] are more commonly used for user identity, but only
produce bearer tokens, not the public key credentials required for
E2E identity – using bearer tokens for E2E identity would allow the
verifying client to impersonate the presenting client! Likewise,
because the verifier needs to check a bearer token's validity directly
with the issuer, the identity authority learns every verifier to whom
a client authenticates.

More recently, there has been work to apply the W3C Verifiable
Credentials (VC) framework to this problem [W3C.vc-data-model]. The
VC model aligns well conceptually with the above architecture, and
some of the required protocols are in development:

  • Credentials would be verifiable credentials or verifiable
    presentations.

  • The identity authorities would be Issuers in the VC model.
    (Likewise, the presenting client would be a Holder and the
    verifying client a Verifier.)

  • The issuance process here corresponds to the issuance interaction
    in the VC model, for example using OpenID for Verifiable
    Credential Issuance [openid-4-vci]

  • The presentation process here corresponds to the presentation
    interaction in the VC model, for example using an integration with
    the E2E encryption protocol analogous to the X509Credential
    integration in MLS mentioned above.

  • The verification process here corresponds to VC verification,
    using a mechanism such as [StatusList2021] for revocation.

A VC-based model for E2E identity is clearly still incomplete, but
given the good conceptual alignment and potential for a better fit
with user identity than PKI, it seems like a promising candidate for
further development.

It has two authors in common with the aforementioned ‘Self-Issued OpenID Provider’. Exactly how they relate to each other, however, is beyond me. My point is just that the OpenID standard shouldn’t be disregarded as orthogonal to decentralized identity.

Interesting post

More creative use cases for delegating authorization (authz) and authentication (authn) to an external identity provider will become available when OpenID Federation receives wider adoption after release:

This will come with better support for the OpenID Connect Registration extension, which depends on the OpenID Connect Discovery extension.

This would allow for dynamic client registration for third parties and eventually offer users the choice to log in with the identity provider at hand; here: their Mastodon instance.

When sites offer login via OpenID Federation, it will allow identity provisioning to be decentralised, making it possible to:

establish trust between an RP and an OP that have no explicit configuration or registration between them in advance

Mastodon instances that activate their OP (OpenID Provider) could then dynamically create client configurations for the intended RPs (OAuth 2.0 Clients using OpenID Connect; websites one visits and where one wants to log in with Mastodon), so that sites offering Mastodon login would not need to know in advance where authentication attempts will come from, while still allowing for secure and private authentication/authorization.

This maps quite well to the decentralised deployment topology of Mastodon instances across the web.
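The trust-establishment step works roughly like this: every participating entity publishes a signed entity configuration at a well-known path, and the RP and OP resolve each other’s trust chains from there. A sketch of just the URL derivation, per the OpenID Federation draft:

```python
def entity_configuration_url(entity_id: str) -> str:
    # Per the OpenID Federation draft, an entity publishes its signed
    # entity configuration (a JWT carrying its metadata and authority
    # hints) at this well-known path under its entity identifier.
    return entity_id.rstrip("/") + "/.well-known/openid-federation"
```

An RP that has never seen a given Mastodon instance before would fetch this document and walk the advertised authority hints up to a trust anchor it already knows.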

Given the course of the discussion, can we update the title here to name OpenID Connect specifically, and delegate all OpenID 1.x discussion around rel=me logins to

OpenID Federation is still a draft, and the whole chains of trust thing likely poses an implementation issue for the fediverse as there’s no central source of trust. There’s this ticket open about it: swicg/general#38

Was raised during the joint Solid / SWICG meeting a few months ago.


On a similar note, the ATprotocol has published its own approach to OIDC in the context of a decentralized application.

ATPROTO describes how decentralized entities can communicate in order to provide a world class social networking technology. Users’ messages are exchanged from their own/shared Personal Data Servers (PDS) using a federated networking model.

As of writing of this document, there are basically two ways to obtain credentials to control a user’s PDS:

  • Using the user’s credentials (user handle + user password)
  • Using an “app password”

Only the latter method is safe to use with a third party application or service (client) that would like to act on the user’s behalf. However, this method of obtaining credentials is not very user friendly and does not provide the UX end users are used to.

OAuth2 is a well known, well documented framework specifically made for granting user credentials to third party clients. However, it was not initially designed to work in a fully decentralized environment, such as the ATPROTO network. The main issue blocking its direct adoption is that, in OAuth2, clients need to be pre-registered and known by the Authorization Server (AS) before credentials can be granted. The OAuth 2.0 Dynamic Client Registration Protocol describes how clients can dynamically register into an AS. However, this method has several disadvantages:

  • Clients need to keep a state for all the AS they registered to, making them harder to implement and maintain.
  • The client credentials obtained from the AS during the registration can get lost, without any way for the clients to recover them autonomously.
  • The protocol does not provide a good protection against “theft of identity” (a non-legitimate client registering with the same name, logo, etc. as another, legitimate, client).

This proposal describes an alternative way of performing client registration. This method relies on being able to derive the client metadata document from its client id, allowing clients to be registered on the fly.

This proposal also describes the minimal OAuth requirements that clients and ATPROTO servers must implement in order to be able to interact with each other. The choices in this document are based on state of the art security practices and were largely influenced by DRAFT-OAUTH-BROWSER-BASED-APPS.

The goals we try to achieve through this framework are:

  • Allow clients to obtain user credentials in order to interact with users’ PDS, without having to be registered with those AS beforehand.
  • Allow the Authorization Server (Entryway) to verify that authorization requests are coming from a legitimate client, and are properly formatted & scoped for that client.
  • Ensure that clients never lose their ability to interact with the Authorization Server (avoid loss of credentials).
  • Allow backend, browser based & native apps to obtain credentials using state of the art security practices.
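The key move in the quoted proposal, deriving client metadata from the client id, can be sketched as follows: the client_id is itself an https URL pointing at the metadata document, and the AS checks that the fetched document declares the same client_id it was fetched from. This is only a hedged sketch of that consistency check, not the actual atproto implementation:

```python
def metadata_matches_client_id(client_id: str, metadata: dict) -> bool:
    # The AS fetches `client_id` over HTTPS and verifies the document
    # self-identifies with the exact URL it was retrieved from, which
    # binds the client's identity to a URL instead of a registration DB.
    return (
        client_id.startswith("https://")
        and metadata.get("client_id") == client_id
    )
```

Because registration state reduces to an HTTP GET against a URL the client already controls, the "lost credentials" and "stateful registration" problems from the list above largely disappear.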
1 Like

The possible alignment at hand seems clearer now thanks to the emergence of “B.Y.O. Actor ID”:

Unbundle the services and concerns of a typical instance

  1. Sign everything: Recognize client-side cryptographic signatures as proof of authorship (by implementing FEP-8b32: Object Integrity Proofs), in addition to the current practice of relying solely on the instance URL.
  2. B.Y.O. Actor ID: Using Object Integrity proofs enables Identity Hosting to be separated from the other instance concerns. Actor profiles can now be hosted separately from the instance (including as a static JSON object on a personal website), which in turn enables service providers to offer their users a “BYO (Bring Your Own) domain name” feature.
  3. Separate Inbox/Outbox: (Optional) The previous steps enable message transfer and Inbox/Outbox hosting to be outsourced to separate service providers (the Actor profile links to these in the usual manner).
  4. Separate Object and Collection hosting: (Optional) Similarly, AP Objects and Collections can now be stored on domains separate from the Actor’s domain (since authorship and controller-ship can be proven cryptographically, in a domain-independent way). This enables the user to migrate storage service providers without having to change their Actor ID.
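Step 1 above, "sign everything", might look structurally like this. This is only a sketch: FEP-8b32 prescribes a proper data-integrity canonicalization and an eddsa signature, whereas here the canonicalization is plain sorted-key JSON and the signing function is supplied by the caller.

```python
import hashlib
import json

def attach_integrity_proof(obj: dict, verification_method: str, sign) -> dict:
    # Structural sketch of an FEP-8b32-style object integrity proof.
    # `sign` is a caller-supplied signing function (a real deployment
    # would sign with an eddsa key over a spec-defined canonicalization;
    # the sha256 digest here is only a stand-in).
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    digest = hashlib.sha256(canonical).hexdigest()
    signed = dict(obj)  # leave the input object untouched
    signed["proof"] = {
        "type": "DataIntegrityProof",
        "cryptosuite": "eddsa-jcs-2022",
        "verificationMethod": verification_method,
        "proofValue": sign(digest),
    }
    return signed
```

Once objects carry such proofs, their authorship can be verified independently of which domain happens to serve them, which is what unlocks steps 2-4.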
2 Likes

This has been largely completed:

Just need a fedi-enabled app to be compatible with our thing (Weird) as a Relying Party.

My summary talk at IPFS Camp:


For a deeper dive into what we’re working towards now:

4 Likes

@erlend_sh I am slowly moving Mastodon’s OAuth implementation closer to standards and towards using ideas from OIDC (e.g., profile scope or userinfo endpoint), but this work is really slow as we need to ensure we don’t accidentally introduce breaking changes.

We will eventually be migrating to using Client ID Metadata Documents, I just need to figure out how much funding it’ll take to get those implemented, which is currently hard for me to estimate (because I’ve not been able to actively develop the doorkeeper codebase yet)

2 Likes

@thisismissem @apitman @aaronpk @FenTiger
any thoughts on this doc in a post-atproto (OAuth+did:plc) world?

I did some hand waving about a possible coupling of OIDC and did:tdw here:

@erlend_sh just reading the abstract, that makes absolutely no sense at all to me. Then again, I’ve briefly seen the OpenID4VP drafts and those make very little sense to me too, since they’re basically going “we want a verifiable presentation from this wallet” and then somehow have gotten to “bolting this on top of OpenID is the right thing to do”, even though there’s little overlap in my mind.

Hah no worries, I’m just on the wrong track with this one then.

edit: it seems like my associations made much more sense in the context of did:tdw

Yes, that is correct. I would guess that any SIOP deployments using DIDs are using did:web and any use of did:web can be improved by transitioning to did:tdw.

Interesting stuff!