Achtung! I’m going to reply to a whole bunch of comments in one, so this will be an epic wall of text. Take as directed and see your doctor if pain persists
It’s the nature of the beast. A W3C Working Group (which, as already discussed, is required in order to issue a formal W3C standard) has a charter with a time limit. Perhaps to avoid paralysis by analysis, and to make it more likely that something is actually produced? In the particular case of AP, it was also… for lack of a better word… the politics of the situation.
When the IndieWeb, OStatus fediverse, Diaspora federation and (proto-)Zot-verse were all totally separate, that was frustrating, because the potential network effects of all of them were reduced by the fragmentation (like we’re seeing again now with AP, BlueSky and Nostr). But it was also uncomplicated. You were clearly in either one or the other. Once apps started to go multi-protocol, and the various networks started inconsistently overlapping, it became hella confusing.
Plus, a lot of legacy projects wanted to add new cross-server functionality, or federate existing local-only features, in ways that weren’t possible with the protocol(s) they were using. On top of that, a lot of existing projects wanted to add federated social functions using an open standard (eg GNU MediaGoblin), and there were developers with new project ideas wanting to do the same.
All of this and more added up to a lot of pressure to get a W3C Working Group chartered as soon as possible. Once that happened, there was a formal deadline, and a feeling that producing a spec, however imperfect, was better than demoralising everyone by letting the charter expire with nothing published.
Maybe, but it’s the only way to design a standard. Because there’s no point in writing a theoretically perfect spec from first principles if the only people who ever use it are its authors.
Exactly right. So what happens in a standards process is that you try to gather all the developers whose software is getting the most real world use (plus others keen to make their software interoperate with it using a standardised protocol set). In the case of AP, that included the clusters of projects listed above.
Then you hammer out an agreement they can all live with and agree to implement. Which necessarily results in a spec built around what those developers have tried in their own projects. Designing in the successful approaches and avoiding the failed ones. It also requires compromise; not letting the perfect be the enemy of the published.
This comment suggests you’re not quite clear on the function of FEPs. Rather than incrementally changing the basics of the AP spec, they’re designed to add things on top of a vanilla AP implementation, in a standard way. AFAIK, where possible, they don’t overlap or contradict each other, and they certainly aim to avoid contradicting the published AP spec.
The FEP approach was inspired by the way things are done in the XMPP world. The core XMPP protocol defines only the minimal set of functions that would be used in any federated messaging or presence system, so it hardly ever needs to be changed. To federate the functions you need for a particular kind of app (eg a modern, E2EE messaging app with voice/video chat), you work with a group of developers building XMPP apps like that, and chunk those functions into XMPP extensions, or XEPs.
With both the XEP and FEP processes, groups of implementers can define meta-standards, made up of the core protocol plus a list of extensions: some essential, some recommended, some optional. I haven’t seen that for the fediverse yet, but the FEP process is still fairly young. In the XMPP world there is the Modern XMPP meta-standard (which I believe is itself codified as an XEP).
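To make the meta-standard idea concrete, here’s a minimal sketch of how such a profile could be expressed as data and checked against an implementation. The FEP identifiers and compliance levels below are invented for illustration; no real profile like this has been published for the fediverse yet.

```python
# Hypothetical "microblogging profile": core protocol plus extensions
# at three compliance levels. The FEP names are placeholders, not real
# FEP identifiers.
REQUIRED, RECOMMENDED, OPTIONAL = "required", "recommended", "optional"

MICROBLOG_PROFILE = {
    "core": "ActivityPub",
    "extensions": {
        "fep-aaaa": REQUIRED,     # eg actor discovery
        "fep-bbbb": RECOMMENDED,  # eg quote posts
        "fep-cccc": OPTIONAL,     # eg emoji reactions
    },
}

def check_compliance(profile, implemented):
    """Return the required extensions an implementation is missing."""
    return [fep for fep, level in profile["extensions"].items()
            if level == REQUIRED and fep not in implemented]

# An app that only implements fep-bbbb still lacks the required fep-aaaa:
check_compliance(MICROBLOG_PROFILE, {"fep-bbbb"})
```

The point is that a profile is just the core plus a graded list of extensions, so “compliance with the microblogging profile” becomes mechanically checkable.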
I apologise for expressing myself so unclearly, because that’s exactly the opposite of what I meant. Which was more analogous to “don’t try to shoehorn every possible use of a website into HTTP”. Implementing vanilla HTTP requires no less and no more than the minimum requirements of any website. Other functionality used by some websites, but not all, is defined in other protocols, which can have their own standards (eg WebRTC, AP).
cough Diaspora protocol cough
Good, 100% agree. AP should be the only spec you must implement to federate at all (within the fediverse, for as long as AP is its standard protocol).
Exactly. What you add on top of vanilla AP (from the FEP tool shed or elsewhere) will depend entirely on what kind of app you are building, and what kind of UX you want to provide to the people using it.
One advantage of this approach is that adding AP support to a federation library would be fairly simple, and wouldn’t need to be updated often. It also means developers can build domain-specific federation libraries that implement vanilla AP, plus whichever FEPs are appropriate to the target use cases (eg one for federating long form video services, or one for podcast services). Most decentralised social media developers want to build UX, not federation plumbing, so if they can pick the right set of plumbing off the shelf, they’re more likely to integrate AP federation into their apps.
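As a rough sketch of what that layering could look like: a thin vanilla-AP core with FEP-style modules plugged in per use case. Every class and field name here is invented for the example; this isn’t the API of any real federation library.

```python
# Illustrative only: vanilla AP core + pluggable FEP-style extensions.
class VanillaAP:
    """Core that knows only how to prepare activities for delivery."""

    def __init__(self):
        self._extensions = []

    def use(self, extension):
        """Plug in an FEP-style extension; returns self for chaining."""
        self._extensions.append(extension)
        return self

    def prepare(self, activity):
        """Run an outgoing activity through every installed extension.
        (A real library would then POST it to the recipients' inboxes.)"""
        for ext in self._extensions:
            activity = ext.outgoing(activity)
        return activity


class VideoMetadataExt:
    """Imaginary extension for a long-form video service: ensures Video
    objects carry a duration field before federation."""

    def outgoing(self, activity):
        obj = activity.get("object", {})
        if obj.get("type") == "Video":
            obj.setdefault("duration", "PT0S")  # ISO 8601 duration
        return activity


# A video platform composes the core with only the extensions it needs:
ap = VanillaAP().use(VideoMetadataExt())
out = ap.prepare({"type": "Create", "object": {"type": "Video"}})
```

A podcast-oriented library would swap in different extensions against the same core, which is exactly why the core rarely needs to change.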
As @stevebate says, it might not actually be a problem. But if it is, there are two ways to fix it.
One is to delete OAuth from the spec, and hardcode in another authentication standard. But then in a few years’ time, when that one’s no longer flavour of the month either, you’re back in the same position: waiting for the glacial processes of the W3C to charter a Working Group to swap in yet another one.
OTOH, since client authentication is not the core business of AP, you could instead delegate the exact choice of authentication protocol for fediverse apps to an FEP. That way, as best practice evolves, new FEPs can define how newer authentication protocols are used in the fediverse. It’s up to project devs to decide when to implement them, and how long to maintain backwards compatibility with older ones. While always remaining AP-compliant.
What would be useful in the vanilla AP spec is a standard way for implementations to signal which authentication methods they support, so clients can implement more than one for different purposes, based on FEPs standardising the uses of each. They could even define a hierarchy of preferences: ideally Protocol N, failing that Protocol O, and as a last resort Protocol P.
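That preference-hierarchy negotiation can be sketched in a few lines. The `authMethods` field and the method names below are hypothetical; nothing in the published AP spec or any FEP defines them — this just shows the shape of the idea.

```python
# Sketch: a client walks its own preference list against whatever the
# server advertises. "authMethods" and the method names are invented.
def pick_auth_method(client_prefs, server_actor):
    advertised = set(server_actor.get("authMethods", []))
    for method in client_prefs:
        if method in advertised:
            return method
    return None  # no common method; fail or fall back to a default

server = {
    "id": "https://example.social/users/alice",
    "authMethods": ["oauth2", "http-signatures"],
}

# Client would ideally use oauth2-pkce, but settles for what's offered:
pick_auth_method(["oauth2-pkce", "oauth2", "http-signatures"], server)
```

Because the negotiation is just list intersection in preference order, newer methods can be added at the front of a client’s list without breaking servers that only advertise older ones.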