A namespace for things defined in FEPs

Ok, now I understand.
Let’s stick to your example and avoid a few misconceptions, so I’m adding my “assumptions”:

  • We want to extend the type "Video".
  • The Community voted for the FEP process to do so.
  • One type Video can be extended by many FEPs.
  • Each implementation has a different "@context".

Your example would now put a “constraint” into the FEP process, saying:
“The community has already voted on how to use framesPerSecond in Video; either specify it under another name or use the available definition.”
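
To illustrate that constraint, a minimal sketch (the FEP numbers and context URLs here are made up): if FEP-xxx1 already defines framesPerSecond, a later FEP-xxx2 would have to map its own term to another name:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {"framesPerSecond": "https://w3id.org/fep/xxx1#framesPerSecond"},
    {"fep2FrameRate": "https://w3id.org/fep/xxx2#frameRate"}
  ],
  "type": "Video",
  "name": "Example clip",
  "framesPerSecond": 25,
  "fep2FrameRate": 25
}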

To stick to this concrete example, I would then never use this extension.
Why?
My personal opinion is that “framesPerSecond” should always come from the most authoritative source. That is the camera which produced the video. Most modern cameras, but also modern video editing software, already store it as Linked Data in the property xmpDM:videoFrameRate.
These things coming from devices live in the EXIF, IPTC and XMP namespaces.
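
As a rough sketch of how that could look in an ActivityPub object (the object itself is made up; the xmpDM prefix maps to the published XMP Dynamic Media namespace IRI):

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {"xmpDM": "http://ns.adobe.com/xmp/1.0/DynamicMedia/"}
  ],
  "type": "Video",
  "name": "Clip straight from the camera",
  "url": "https://example.com/media/clip.mp4",
  "xmpDM:videoFrameRate": "25"
}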

Short history: the old namespaces were for image metadata coming from the device (EXIF) and from the editor (IPTC); both had different standards bodies, and for video there were even more. Then the “Extensible Metadata Platform” (aka XMP) consortium formed. Over time they worked closely with the W3C Linked Data working groups, but with their own specs.
Each namespace is well documented, but I recommend the Overview Page first.

So, there is mostly no need for the user to specify it, except when choosing to change the fps with your software.
The “industry” chose xmpDM and produces the devices accordingly, which means cameras may write the fps in e.g. native EBML/Matroska or MP4 metadata, and in Linked Data it ends up in xmpDM:videoFrameRate.
This is the only source coming from devices.
The framerate is a piece of metadata which has to be in every video.
Technically you can destroy the XMP bag and the file is still fine (“bag” is the nicer IPTC/XMP word for an LD container).
But this will not delete the info from the file. Things like width, height, codec and framerate are needed to make a video a video; if you delete these bits, you destroy the video file.
But otherwise we should give users control over metadata, as described in would-media-captioning-make-a-good-fep
After writing this I will go on with GitHub - redaktor/mediaproxy: A proxy to cache media and deliver ActivityPub markup.
This is the building block, like in Mastodon or PeerTube, to cache media from different sources or to resize it, stream it, etc. But it can do a ton of things more because it has content negotiation. I had described it for Will here ← that example is for Image, but we added the Video part in the meantime. Because the framerate is mandatory, it would be in xmpDM:videoFrameRate: as you can see in the source, we parse the following filetypes bit by bit: mp4, mkv, mov, 3gpp, 3gpp2, enc-isoff-generic, jp2, mj2, quicktime, vnd.dvb.file, webm, 'x-m4a', 'x-m4v' and 'x-matroska' (the few new Apple formats are in work). So we have the fps for every favourite webvideo type, exactly as it is used by the player.
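
Purely as an illustration (the proxy URL and the exact object shape are hypothetical, not the actual mediaproxy output), the delivered ActivityPub markup for a cached video could look roughly like this:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {"xmpDM": "http://ns.adobe.com/xmp/1.0/DynamicMedia/"}
  ],
  "type": "Video",
  "name": "Cached webvideo",
  "duration": "PT2M10S",
  "url": {
    "type": "Link",
    "href": "https://proxy.example/media/abc123.webm",
    "mediaType": "video/webm",
    "width": 1920,
    "height": 1080
  },
  "xmpDM:videoFrameRate": "25"
}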

Anyway, I would prefer to have an option like
{"type": ["Video", "asExt:FEPxxx1Video", "asExt:FEPxxx2Video"]}
so that FEPs can live independently.
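
Spelled out a bit more (the asExt IRI and the FEP numbers are placeholders), each FEP could then bring its own context entry and properties without touching the others:

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {"asExt": "https://w3id.org/fep#"},
    {"xmpDM": "http://ns.adobe.com/xmp/1.0/DynamicMedia/"}
  ],
  "type": ["Video", "asExt:FEPxxx1Video", "asExt:FEPxxx2Video"],
  "name": "A Video extended by two independent FEPs",
  "xmpDM:videoFrameRate": "25"
}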