
MillenniumDB: Property graph and RDF engine, still in development

j-pb

These guys write really great papers!

We implemented a simplified version of their ring index for our data space (https://github.com/triblespace/tribles-rust/blob/master/src/...), and it's a really simple and cool idea once you wrap your head around it. Funnily enough, we built this even before the paper was officially published, because we found a preprint on one of the authors' blogs. They had published the idea before, but their new paper makes it a lot easier to understand (Burrows-Wheeler transforms vs. stable column sorting).

It's really too bad that the whole linked-data space is completely gunked up with RDF.

PS: If anyone plans on implementing their ring index, using 0-based offsets makes the formulas much more streamlined; their paper uses 1-based indexing and has to +/-1 all over the place.
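To make the offset point concrete, here is a toy 0-based LF-mapping over a plain BWT (a generic Python sketch, not the actual tribles-rust or MillenniumDB code):

    # Toy 0-based LF-mapping over a BWT. With 0-based indexing,
    # LF(i) = C[c] + rank(c, i) needs no +1/-1 corrections.
    def bwt(s: str) -> str:
        # last column of the sorted rotations of s
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(r[-1] for r in rotations)

    def lf(last: str) -> list[int]:
        # C[c] = number of symbols in `last` strictly smaller than c
        C, total = {}, 0
        for c in sorted(set(last)):
            C[c] = total
            total += last.count(c)
        # rank(c, i) = occurrences of c in last[:i]; a real index would
        # use a succinct rank structure instead of counting
        return [C[last[i]] + last[:i].count(last[i]) for i in range(len(last))]

    print(bwt("banana$"))      # annb$aa
    print(lf(bwt("banana$")))  # [1, 5, 6, 4, 0, 2, 3]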

bawolff

The whole ring index thing is one of the more fascinating ideas I've read about (I didn't realize MillenniumDB was by the same authors). Sent me down a whole rabbit hole of learning about succinct data structures and the Burrows-Wheeler transform.

Sometimes you encounter a computer science idea that just sounds like pure magic.

smarx007

I think if someone is just trying out RDF, it is better to start with Apache Jena/Fuseki or Eclipse RDF4J. Maybe https://github.com/oxigraph/oxigraph if you like to live dangerously (i.e. to use pre-1.0 DBMSs).

Other systems involve tradeoffs and considerations that newcomers probably shouldn't have to factor in. For example, QLever, mentioned here, has good query performance and relatively low disk use, but once the import is done it's essentially a read-only DB, completely unsuitable for a typical OLTP scenario.

Having said that, the Chilean research group that is driving the development of MillenniumDB is very well-regarded in the RDF/semantic web querying space.

FjordWarden

If you expect Jena to be more battle-tested because it is older, forget it: if the process is killed by an unexpected shutdown or for some other reason, it results in data corruption. At least this was my experience a few years ago.

I found graph databases a beguiling idea when I first learned about them, and this is a welcome addition, but I've since tempered my excitement. They are not as flexible and universal a model as is often promised. Everything is a graph, sure, but the result of your SPARQL query is not necessarily one.

I found classical DBMSs based on sets/multisets much easier to compose from a querying point of view. A table is a set/multiset and the result of a query is also a set/multiset; SPARQL guarantees no such composability. Maybe you can get around this by mucking around with inference engines, but then you run into problems of undecidability.
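To make that closure property concrete (a toy sqlite3 sketch; table and data made up):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (a INT, b INT)")
    con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 2), (3, 4)])
    # the inner SELECT is itself a table, so it composes with the outer one
    rows = con.execute("SELECT a FROM (SELECT * FROM t WHERE b > 2)").fetchall()
    print(rows)  # [(3,)]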

PaulHoule

Jena lets you make little in-memory triple stores that you can use the way people use the list-map-scalar trinity. I've been working on this publication about that (RDF for difficult cases and when ordering counts) for years, and it just got published last week:

https://www.iso.org/standard/76310.html
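Jena is Java, of course, but the same little in-memory store pattern looks roughly like this in rdflib (an illustrative sketch; the URIs are made up):

    from rdflib import Graph, Literal, URIRef

    g = Graph()  # an in-memory triple store, used like a plain data structure
    alice = URIRef("urn:example:alice")
    g.add((alice, URIRef("urn:example:name"), Literal("Alice")))
    g.add((alice, URIRef("urn:example:age"), Literal(41)))

    for name in g.objects(alice, URIRef("urn:example:name")):
        print(name)  # Alice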

I'll call out my collaborator Liju Fan for being the only person I've met who knew how to do anything interesting with OWL. (Well, I can do interesting things now, but I owe it all to her.)

(For the research for that paper I used rdflib under PyPy because CPython was not fast enough.)

When I needed big persistent triple stores (that you use the way you might use postgres) I used to use

https://en.wikipedia.org/wiki/Virtuoso_Universal_Server

and had pretty good luck loading a billion triples if I used plenty of 'stabilizers' (create a new AWS instance with ample RAM, use scripts to load the billion triples starting from an empty database, shut it down, make an AMI, start a new instance from the AMI, and expect it to warm up for 20 minutes or so before query performance is good).

I don't regularly build systems on SPARQL today because of problems with updating. In particular, SQL has an idea of a "record", which is a row in a table; document-oriented databases have an idea of a "record" that is a bit more flexible. Updating a SPARQL database is a little dangerous because there is no intrinsic idea of what a record is. I mean, you can define one by starting at a particular URI, traversing to the right across blank nodes, and calling that a 'record', and it works OK. But it's a discipline I impose with my libraries; it ought to be baked into the standards, baked into the databases, wrapped up in transactions, etc. For anything OLTP-ish I am still using SQL or document-oriented databases, but I hate that document-oriented databases lack the namespaces and similar affordances that make SPARQL scale at "smashing together a bunch of data from different sources", whereas SPARQL is missing the affordances document-oriented databases have for handling ordered collections. We badly need a SPARQL 2 that makes the kind of work I talk about in that technical report easy.
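Roughly, that record discipline looks like this (an illustrative rdflib sketch, not the actual library code):

    from rdflib import BNode, Graph

    def record(g: Graph, root) -> Graph:
        # collect the subgraph reachable from `root`, traversing onward
        # only across blank nodes (roughly a concise bounded description)
        out, stack, seen = Graph(), [root], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            for p, o in g.predicate_objects(node):
                out.add((node, p, o))
                if isinstance(o, BNode):
                    stack.append(o)
        return out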

smarx007

> Updating a SPARQL database is a little bit dangerous because there is no intrinsic idea of what a record is

SPARQL has a notion of a transactional boundary just like SQL has. You can combine multiple SPARQL queries in one transaction; they will all succeed or all fail, just like you'd expect.
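For example, a single SPARQL 1.1 Update request can carry several operations separated by ';', and the request is meant to be applied as a unit (how strictly ACID that is varies by store). A quick rdflib sketch with made-up URIs:

    from rdflib import Graph

    g = Graph()
    g.update("""
        INSERT DATA { <urn:example:alice> <urn:example:city> "Paris" } ;
        INSERT DATA { <urn:example:alice> <urn:example:age> 41 }
    """)
    print(len(g))  # 2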

svilen_dobrev

Datomic (and partially XTDB, formerly Crux) are OLTP-ish and use only such "tuples"; essentially it's up to the user to define what constitutes an entity, if anything ("row", "object", "document", whatever): maybe some entity ID and everything linked to it, but maybe other, less identity-related stuff. That might feel freeing to an extent but, as you said, it also expects great responsibility/discipline to cobble the proper properties together.

smarx007

I said suitable for newcomers, i.e. people touching RDF for the first time. If you want production-ready, you probably want Stardog, Ontotext GraphDB, or AWS Neptune; none of them is cheap. https://github.com/the-qa-company/qEndpoint is also an interesting project that's used in production.

zozbot234

> SPARQL guarantees no such composability.

SPARQL has a CONSTRUCT clause which gives you RDF as your query output. Isn't that compositional enough?
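A quick rdflib sketch of the point (data made up): the CONSTRUCT result is itself a graph, so you can query it again.

    from rdflib import Graph

    g = Graph()
    g.update("INSERT DATA { <urn:example:a> <urn:example:knows> <urn:example:b> . "
             "<urn:example:b> <urn:example:knows> <urn:example:c> }")

    # CONSTRUCT yields a graph...
    derived = g.query("""
        CONSTRUCT { ?x <urn:example:reaches> ?z }
        WHERE     { ?x <urn:example:knows>/<urn:example:knows> ?z }
    """).graph

    # ...which composes: the output can be queried like any other graph
    for row in derived.query("SELECT ?x WHERE { ?x <urn:example:reaches> ?z }"):
        print(row.x)  # urn:example:a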

FjordWarden

OK, that is true, but how do I tell my graph database that the result of the CONSTRUCT query is some other graph in my DB?

UltraSane

Property graphs map to relational databases pretty well. Using Neo4j's terminology:

table name -> node label

table row -> node

table column -> node property

The result of a query is a sub-graph and is very composable.

AtlasBarfed

I think it maps much better to document databases.

Nodes are just documents.

You just need to slap on a relations document type for the graph edges, which can also store the edge properties.
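Shape-wise, something like this (an illustrative sketch, not tied to Cassandra or DynamoDB):

    # nodes as documents, edges as a separate document type with properties
    nodes = {
        "alice": {"type": "Person", "name": "Alice"},
        "usa":   {"type": "Country", "name": "United States"},
    }
    edges = [
        {"type": "LIVES_IN", "from": "alice", "to": "usa",
         "valid_from": 2020, "valid_to": 2024},
    ]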

I was close to finishing at least version 1.0 of a document/graph database on top of Cassandra and DynamoDB.

hobofan

As someone who has built production systems with Oxigraph (and, to a lesser extent, with Jena), I'd recommend Oxigraph over Jena any day, especially if you are working with a Rust-based tech stack.

You can save so much time and headache thanks to the lower operational complexity and the architectural options it opens up. If you reinvest only part of that into building a framework for versioning, backups, etc., you'll have a much better overall package.

Karrot_Kream

Interesting. Do you think RDF is easier to work with in Rust's ecosystem than in Java's as a whole? I've only touched Jena and worked with Java and Go systems with RDF.

iddan

Systems like Apache Jena are not production-ready for anything serious. It makes total sense to start something different.

spothedog1

Definitely do not start with Jena/Fuseki; it's a pain in the ass to set up. Start with Oxigraph or rdflib in memory to play around with how to query and interact with graphs.

WhatIsDukkha

Here is an issue with some back and forth between MillenniumDB and QLever about starting a benchmarking attempt. I don't see results, though they managed to build and import:

https://github.com/MillenniumDB/MillenniumDB/issues/10

https://github.com/ad-freiburg/qlever

sunshine-o

I got very interested in RDF about 20-25 years ago.

Obviously it did not really succeed, but it seems some industries invested a lot in the tech and it is still around, especially since AWS built a service around it.

I am really curious: what are the top use cases for it today?

bawolff

I think wikidata (https://query.wikidata.org) is one of the more well known ones.

jerven

MillenniumDB is an interesting engine, as is QLever, mentioned in other comments. I think both are good candidates for making RDF graphs one or two orders of magnitude cheaper to host as SPARQL endpoints.

Both seem to have arrived at the stage of transitioning from research to production code.

Very exciting for those of us providing our data in RDF and exposing SPARQL.

AWS Neptune Analytics is also very interesting, allowing Cypher on RDF graphs. Even Oracle's built-in RDF+SPARQL support seems to have improved greatly in 23ai.

UltraSane

It seems like writing Cypher to query RDF would be hard.

jitl

Is it any good?


UltraSane

What is a domain graph?

throwaway867500

"Domain Graph" [1] was renamed to "Multilayer Graphs" [2].

The "Multilayer Graphs Model" was aimed to address the limitations found in prior Graph Models (e.g. RDF, Property Graphs) in representing higher-arity graphs without having to resort to reification or reserved words/vocab.

Skipping over the formal math definitions: in practical terms, a multilayer graph is represented by statements of "quads", {edge id, source, label, target}; similar on the surface to RDF named graphs, but not the same.

The source and target may also refer to other edges, in addition to referring to the "entities" of real-world concepts; targets may also be data values (e.g. strings, ints, etc.). I believe MillenniumDB puts edges, sources, targets, and some simple values under the same packed int64 namespace.

This is useful for MillenniumDB's under-the-hood design/architecture, and probably for Wikidata, whose data model involves qualified facts: a qualifying statement (e.g. "valid from 2020 to 2024") about a factual statement ("Alice lived in America").
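In plain tuples, that Wikidata-style example would look something like this (ids made up):

    # quads: (edge id, source, label, target); an edge id can itself be a source
    e1 = ("e1", "Alice", "lived_in", "America")
    e2 = ("e2", "e1", "valid_from", 2020)  # a statement about statement e1
    e3 = ("e3", "e1", "valid_to", 2024)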

But this is just me, a non-expert, trying to cut to the core points after discovering and reading the papers recently, since knowledge graphs are a hot topic right now.

[1] https://arxiv.org/abs/2111.01540 [2] https://users.dcc.uchile.cl/~ahogan/docs/mutlilayer_graphs.p...

UltraSane

Allowing edges to point to other edges is something that RDF* supports.

Property graph DBs like Neo4j don't support it, but you can emulate it by using a node as a relationship; this is called a metanode or a hypernode. The need for this is mitigated somewhat by the fact that property graphs allow edges to have properties themselves, so you would use:

(Alice)-[:LIVES_IN{valid_from:2020,valid_to:2024}]-(USA)

Edit: I read the paper, and it is actually an attempt to unify the RDF, RDF*, and property graph data models, which is VERY interesting.

leetrout

Weird title here. The repo says "Property Graph and RDF engine, still in development" with no mention of domain.

dang

We've changed the title to that of the page. (Submitted title was "MillenniumDB: A graph database engine using domain graphs")
