CORS restrictions

> Degrades performance how? Implementing a trusted proxy allows you to implement caching, prefetching, and other kinds of performance improvements that are impossible when each client makes its own independent request.

Cache invalidation is one of the two hard things in computer science, the others being naming things and off-by-one errors. Introducing another layer of caching on top of the ones browsers already have is counter-productive: many objects require authentication for access, in which case you cannot share them between users, and you're adding additional latency to every request. It's another service for every server operator to implement, and to secure so that people cannot just create an account and get an anonymous proxy with which to access the web (how would you lock such a service down so that it cannot reach anything other than ActivityPub endpoints?). And every C2S client needs a way to discover it.
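To put that "nothing but ActivityPub endpoints" question in concrete terms, here is roughly the kind of gatekeeping such a proxy would need before it's even safe to run. This is a minimal sketch assuming Node 18+ (global fetch) and Express; the /proxy route, the ?url= parameter, and the bare Authorization check are placeholders of mine, and none of it touches the cache-invalidation or discovery problems:

```ts
import express from "express";

// Media types the proxy is willing to relay.
const AP_TYPES = ["application/activity+json", "application/ld+json"];

// Accept only well-formed https URLs; anything else is rejected outright.
function parseHttpsUrl(value: unknown): URL | null {
  try {
    const url = new URL(String(value ?? ""));
    return url.protocol === "https:" ? url : null;
  } catch {
    return null;
  }
}

const app = express();

app.get("/proxy", async (req, res) => {
  // Require an authenticated session, so the endpoint is not an open relay.
  if (!req.headers.authorization) {
    return res.status(401).end();
  }

  const target = parseHttpsUrl(req.query.url);
  if (!target) {
    return res.status(400).send("invalid or non-https url");
  }

  // Ask the remote server only for ActivityPub representations...
  const upstream = await fetch(target, {
    headers: { accept: AP_TYPES.join(", ") },
  });

  // ...and refuse to relay anything that comes back as some other media
  // type, so the service cannot be used as a general-purpose web proxy.
  const contentType = upstream.headers.get("content-type") ?? "";
  if (!AP_TYPES.some((t) => contentType.startsWith(t))) {
    return res.status(502).send("not an ActivityPub object");
  }

  res.status(upstream.status).type(contentType).send(await upstream.text());
});

app.listen(3000);
```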

> As a user, I would find it very surprising if viewing a post from a user, or even just viewing my home feed, exposed my IP address to untrusted remote servers.

You’re in for a major surprise then: browsers already pull resources of all kinds from all over the web, from CDNs and from static hosting services like S3. There are solutions to those problems already: VPNs, request headers, and blocking third-party cookies. None of those problems gets any better by proxying requests, and not proxying requests doesn’t make the current situation any worse than it already is.
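For what it's worth, a browser-based client already has some control over what leaks when it fetches remote media directly, no proxy required. A minimal sketch, assuming a standard browser fetch; the function name is mine:

```ts
// Fetch remote media without sending cookies or a referrer to the remote origin.
async function loadRemoteMedia(src: string): Promise<Blob> {
  const response = await fetch(src, {
    mode: "cors",                  // still subject to the remote server's CORS policy
    credentials: "omit",           // never send cookies to the remote origin
    referrerPolicy: "no-referrer", // don't reveal which page or instance you came from
  });
  if (!response.ok) {
    throw new Error(`fetch failed: ${response.status}`);
  }
  return response.blob();
}

// The markup equivalent is <img src="..." crossorigin="anonymous"
// referrerpolicy="no-referrer">. None of this hides your IP address;
// only a VPN (or a proxy) does that.
```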

Lastly, many ActivityPub objects require authentication to fetch, so the receiving server will likely know the identity of the user making the request anyway; their IP address is of minimal value at that point. If you do want to hide your IP address, you should be using a VPN, because otherwise the <img> tags in whatever you're viewing will give it away anyhow. There is also nothing preventing a server from appending a unique identifier to each piece of media in any object it serves, e.g., https://example.com/my-image.jpeg?id=123456.
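To show how cheap that kind of tracking is, here's a hypothetical sketch; the function name and id value are invented for the example:

```ts
// Tag a media URL with a per-recipient identifier before delivering the post.
function tagMediaUrl(mediaUrl: string, recipientId: string): string {
  const url = new URL(mediaUrl);
  url.searchParams.set("id", recipientId); // yields ?id=123456 as above
  return url.toString();
}

// Every recipient of the same post gets a distinct media URL, so any later
// fetch of that URL (proxied or not) tells the server who the post was
// delivered to, no IP address required.
console.log(tagMediaUrl("https://example.com/my-image.jpeg", "123456"));
// => https://example.com/my-image.jpeg?id=123456
```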
