Linked Data: Undersold, Overpromised?

I don't think there are actually any real linked data 'experts'. And anyone that calls themselves one probably isn't.

The Solid discourse forum was set up by a commercial entity. In my experience, they are reluctant to help out or support people using Solid. The vibe in this forum is better, IMHO.

Fundamentally, consider a programming language where every variable you use MUST be a URL, and SHOULD link to another, quite complicated, page of metadata for that URL. And where the only data structure you are allowed to use is a Set. Arrays are an afterthought, shoe-horned in, that no one understands. This programming language does not allow things like addition without specialist servers with atomic updates, which still have not been built.

That’s the state of linked data

It's certainly useful in some situations. But to say it's useful in all situations is wrong. That's the mistake that Linked Data "experts" make. They make promises they don't understand, and for the most part, don't even use. Then when stuff breaks, there's often no one to help.

LD should be used to solve a narrow set of problems, such as merging data from different websites, or scenarios where links are under-used and high value.

At the time of making ActivityPub, the idea was that if everyone used this 'standard' it would be possible to create a rich network effect through interop, even with the limitations. Understandably, developers struggled with the limitations, and some rejected them. Rather than keep pushing linked data where the earth has been salted, a better way is to accept its usefulness in some situations, explain it, and also accept the limitations.

Linked data should be viewed as a variable scope, one higher than global variables. Then programmers have a range of tools to achieve their goals.

Edit: A possible solution.

  1. Recognize JSON-LD as a form of linked data, which has a syntax for representing hyperlinks, things, types, and some common properties
  2. Recognize JSON as a superset of JSON-LD, with all the features, plus more on top, such as lists or arrays of typed things
  3. Match slow-changing vocabs to slow-changing software, and allow new types of innovation and interop through JSON
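As a sketch of points 1 and 2 above (all URLs here are made-up placeholders, apart from the ActivityStreams context), a plain JSON document can keep the JSON-LD conventions for hyperlinks, things, and types, while also using an ordinary ordered array, following the author's framing:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.org/notes/1",
  "type": "Note",
  "content": "Hello world",
  "replies": [
    { "id": "https://example.org/notes/2", "type": "Note" },
    { "id": "https://example.org/notes/3", "type": "Note" }
  ]
}
```

The `id`/`type` keys are the JSON-LD part; the ordered `replies` array is the plain-JSON extra that point 2 refers to.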

What do you mean by "the forum"?

@naturzukunft I already have LOD on the list of candidates to add. I am behind on README maintenance, but keep adding entries to the issue. By "the forum" I mean the Solid community forum, where hardly anyone from the core team or Inrupt seems to really want to interact.

@melvincarvalho thank you for that elaboration. Some good food for thought for me there.

I did not want to draw attention to LOA, but to the links I collected in the linked chapter.


Just bumped into a listing of various ways to serialize RDF Linked Data:

Copying the summary:


  • Use Hex-Tuples if you want high performance in JS with dynamic data.
  • Use JSON-AD if you don’t have to support existing RDF data, but do value JSON compatibility and type safety.
  • Use HDT if you have big, static datasets and want the best performance and compression.
  • Use N-Triples / N-Quads if you want decent performance and high compatibility.
  • Use JSON-LD if you want to improve your existing JSON API, and don’t need performant RDF parsing.
  • Use Turtle if you want to manually read & edit your RDF.
  • Use Notation3 if you need RDF rules.
  • Use RDFa to extend your existing HTML pages.
  • Use RDF/XML if you need to use XML.
  • If you can, support all of them and use content negotiation.
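To make the trade-offs concrete, here is one and the same triple in two of the plain-text formats above (the subject, predicate, and object URLs are made-up examples; note that every N-Triples document is also valid Turtle):

```turtle
# N-Triples: one full triple per line, no abbreviations, trivially streamable
<https://example.org/alyssa> <https://schema.org/knows> <https://example.org/bob> .

# Turtle: the same triple with prefixes, easier to read and hand-edit
@prefix schema: <https://schema.org/> .
@prefix ex: <https://example.org/> .
ex:alyssa schema:knows ex:bob .
```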

This is a very nice piece.
But I think in a federated world we can leave out a bit:

  • Hex-Tuples (draft) is the format by the writer of the piece; 'high performance' means billions of triples, data is static in our case, and nobody uses NDJSON yet.

  • JSON-AD (Atomic Data) solves what AP already solved (see also 'Advocacy' later)

  • HDT – probably billions and billions of triples, as in a Twitter-scale world

  • RDF/XML, because I can't think of plain XML use cases

  • JSON-LD is the default anyway

and then

  • "Turtle if you want to manually read & edit your RDF." This includes e.g. manually reading and editing the vocabulary used in the fediverse, but 'before JSON-LD'.
  • “N-Triples / N-Quads if you want decent performance and high compatibility”
  • “RDFa to extend your existing HTML pages” (e.g. w. AP objects, schema or mf2) but ‘after JSON-LD’.

What is left out here is Advocacy.
This is why we can also use ActivityStreams itself for tuples (doubles and triples) in the form of Profile and Relationship.

Usually a single software developer defines the @context, but ActivityPub instances/groups and users do not.
To keep the @context small and let everyone "extend ad hoc", we can use Profile and Relationship as attachments.
The benefit is that each edge can be a reusable public ActivityPub object owned by anyone.
Profile could say Alyssa:Portrait describes Bob:Bob, or
Relationship could say Alyssa:Alyssa wdt:director Universal:NextBigThing, or whatever.
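A minimal sketch of that second edge as a standalone ActivityStreams Relationship object, using the `subject`/`relationship`/`object` properties from the AS2 vocabulary (every URL except the @context is a made-up placeholder; `wdt:P57` is Wikidata's "director" property):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.org/relationships/42",
  "type": "Relationship",
  "subject": "https://example.org/users/alyssa",
  "relationship": "http://www.wikidata.org/prop/direct/P57",
  "object": "https://example.org/films/next-big-thing",
  "attributedTo": "https://example.org/users/alyssa"
}
```

Because the object has its own `id` and `attributedTo`, it is a public, addressable edge that anyone can own and reuse, as described above.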

Hex-Tuples IMHO is overkill and lots of technical debt, which is a source of bugs.

The main thing that's needed for a social web is JSON with a standardized way of expressing hyperlinks. In JSON-LD that's done by using @id (or id) as a key.
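For instance, a minimal sketch (with placeholder URLs) of plain JSON where the `id` key marks a value as a hyperlink to another thing, rather than just a string:

```json
{
  "id": "https://example.org/users/alyssa",
  "name": "Alyssa",
  "knows": { "id": "https://example.org/users/bob" }
}
```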

A big issue with RDF is that it's not compatible with plain old JSON, as that mapping has yet to be standardized. There's not really a will to do it, so we are stuck with Hex-Tuples.

However, if the AP community got together, we could do that for the social web. What would be required is a way to take plain old JSON keys and put them in a triple store (which requires URIs).

Something like:

key <-> URI

foo <-> json:key:foo
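A rough sketch of that mapping in Python (the `json:key:` prefix is the hypothetical scheme from the post, not an existing standard, and the URLs are placeholders):

```python
# Hypothetical scheme from the post: mint a URI for every plain JSON key
# so ordinary JSON objects can be loaded into a triple store.
KEY_PREFIX = "json:key:"

def json_to_triples(obj, subject):
    """Flatten a plain JSON object (as a Python dict) into
    (subject, predicate, object) triples."""
    triples = []
    for key, value in obj.items():
        if key == "id":
            continue  # "id" names the subject itself, per the JSON-LD convention
        predicate = KEY_PREFIX + key
        if isinstance(value, dict):
            # Nested objects are linked by their own "id" (or a blank-node label)
            child = value.get("id", "_:b%d" % len(triples))
            triples.append((subject, predicate, child))
            triples.extend(json_to_triples(value, child))
        elif isinstance(value, list):
            for item in value:
                triples.append((subject, predicate, item))
        else:
            triples.append((subject, predicate, value))
    return triples

doc = {
    "id": "https://example.org/alyssa",
    "name": "Alyssa",
    "knows": {"id": "https://example.org/bob", "name": "Bob"},
}
for triple in json_to_triples(doc, doc["id"]):
    print(triple)
# first line printed: ('https://example.org/alyssa', 'json:key:name', 'Alyssa')
```

The point of the sketch is only that a fixed, mechanical key-to-URI rule would let any plain JSON document round-trip through a triple store without each developer having to write a @context first.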