Sovereignty Score

What is the Sovereignty Score?

The Sovereignty Score is a number from 0.0 to 10.0 (in steps of 0.5) that summarises how well a tool supports digital sovereignty. It is computed automatically from the data we hold for each tool (e.g. where the vendor is based, whether the software is open source, where data is stored). No one edits the score by hand; it is fully transparent and reproducible.

You can use it to compare tools at a glance. The breakdown by dimension (see below) shows where a tool is strong or weak.

Score colors. On tool cards and detail pages, the score is shown as a colored badge. The color reflects the score band: red (0.0–2.0), amber (2.5–5.0), lime/green (5.5–8.0), green (8.5–10.0). If we have little data for a tool, the badge appears slightly faded (low confidence); the exact confidence is shown on the tool page.
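
The band boundaries above can be expressed as a small lookup. This is a minimal sketch (the function name and exact cut-offs between bands are taken from the bands listed above; scores only occur in 0.5 steps, so the boundaries are unambiguous):

```python
def badge_color(score: float) -> str:
    """Map a sovereignty score (0.0-10.0, in 0.5 steps) to its badge color band."""
    if score <= 2.0:
        return "red"
    if score <= 5.0:
        return "amber"
    if score <= 8.0:
        return "lime"
    return "green"
```

For example, `badge_color(2.5)` falls in the amber band, while `badge_color(8.5)` is green.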


Why we use this score

We want to give you a clear, comparable view of tools, based on documented rules rather than editorial judgment.

Scores are generated when we build the site. They are not stored in our source data; they are derived from it.


The five dimensions

Each dimension is scored from 0 to 2 points (in 0.5 steps). The total score is the sum of these five scores, so the maximum is 10.0.
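
The aggregation is a straight sum. A minimal sketch, assuming the five dimension scores have already been computed:

```python
def total_score(dimension_scores: list[float]) -> float:
    """Sum the five dimension scores (each 0.0-2.0 in 0.5 steps); maximum 10.0."""
    assert len(dimension_scores) == 5, "expected exactly five dimensions"
    return sum(dimension_scores)
```

For example, a tool scoring 2.0, 2.0, 2.0, 1.5 and 1.5 across the five dimensions gets a total of 9.0.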

Dimension | What it measures
Legal jurisdiction | Where the vendor is based and operates (EU, EEA, countries with EU adequacy, or mixed/non-EU).
Data control | Where data is stored (EU, EEA, non-EU, or unknown) and whether you can self-host.
Lock-in | How easy it is to leave the tool (open standards and data portability reduce lock-in). Higher points = less lock-in.
Openness | Whether the software is open source and whether it uses open standards or protocols.
Operational autonomy | How independently you can run and control the tool (self-hosting, open source, or at least an EU-based vendor).

How each dimension is scored

The following tables describe the exact rules we use. They are the same rules used in our build process.

Legal jurisdiction

We look at whether the company is registered in the EU/EEA and in which countries it operates. Countries are grouped into: EU only, EEA (EU plus Iceland, Liechtenstein, Norway), adequacy (countries with an EU adequacy decision, e.g. UK, Switzerland), mixed (a mix of these and others), or non-EU.

EU-based company? | Countries classification | Points
Yes | EU or EEA only | 2.0
Yes | Adequacy only | 1.5
Yes | Mixed | 1.0
Yes | Non-EU only | 0.5
No | EU or EEA only | 1.5
No | Adequacy only | 1.0
No | Mixed | 0.5
No | Non-EU only | 0.0
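
The table above is a pure lookup. A minimal sketch (the classification labels such as "eu_eea" are illustrative identifiers, not names from our data schema):

```python
# (EU-based?, countries classification) -> points, exactly as in the table.
LEGAL_JURISDICTION_POINTS = {
    (True,  "eu_eea"):   2.0,
    (True,  "adequacy"): 1.5,
    (True,  "mixed"):    1.0,
    (True,  "non_eu"):   0.5,
    (False, "eu_eea"):   1.5,
    (False, "adequacy"): 1.0,
    (False, "mixed"):    0.5,
    (False, "non_eu"):   0.0,
}

def legal_jurisdiction_score(eu_based: bool, classification: str) -> float:
    """Score the legal-jurisdiction dimension (0.0-2.0)."""
    return LEGAL_JURISDICTION_POINTS[(eu_based, classification)]
```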

Data control

We combine self-hostable (can you run it yourself?) with data residency (EU, EEA, non-EU, or unknown).

Self-hostable? | Data residency | Points
Yes | EU | 2.0
Yes | EEA | 2.0
Yes | Non-EU | 1.5
Yes | Unknown | 1.0
No | EU | 1.5
No | EEA | 1.0
No | Non-EU | 0.5
No | Unknown | 0.0
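
As with legal jurisdiction, this is a direct lookup. A minimal sketch (residency labels are illustrative identifiers):

```python
# (self-hostable?, data residency) -> points, exactly as in the table.
DATA_CONTROL_POINTS = {
    (True,  "eu"):      2.0,
    (True,  "eea"):     2.0,
    (True,  "non_eu"):  1.5,
    (True,  "unknown"): 1.0,
    (False, "eu"):      1.5,
    (False, "eea"):     1.0,
    (False, "non_eu"):  0.5,
    (False, "unknown"): 0.0,
}

def data_control_score(self_hostable: bool, residency: str) -> float:
    """Score the data-control dimension (0.0-2.0)."""
    return DATA_CONTROL_POINTS[(self_hostable, residency)]
```

Note that self-hosting with unknown residency (1.0) still beats a non-self-hostable tool with EEA residency data missing entirely (0.0): the ability to run it yourself counts.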

Openness

We use open source (yes/no) and open standards (does the tool use or implement open protocols or standards?).

Open source? | Open standards? | Points
Yes | Yes | 2.0
Yes | No | 1.5
No | Yes | 1.0
No | No | 0.0
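
The openness rule fits in a few lines. A minimal sketch of the table above:

```python
def openness_score(open_source: bool, open_standards: bool) -> float:
    """Score the openness dimension (0.0-2.0) from the two booleans."""
    if open_source and open_standards:
        return 2.0
    if open_source:
        return 1.5   # open source without open standards
    if open_standards:
        return 1.0   # open standards in a closed-source tool
    return 0.0
```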

Lock-in

Lock-in is scored additively: each factor that reduces lock-in, such as support for open standards and data portability, adds points.

The total is capped at 2.0. So: higher score = less lock-in risk.
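
The additive shape with a cap can be sketched as follows. The exact per-factor point values are not listed in this section; the weights below (open standards +1.0, full data portability +1.0, partial +0.5) are illustrative assumptions, and only the 2.0 cap comes from the rules above:

```python
def lock_in_score(open_standards: bool, data_portability: str) -> float:
    """Score the lock-in dimension; higher = less lock-in. Weights are illustrative."""
    points = 0.0
    if open_standards:
        points += 1.0  # assumed weight
    points += {"full": 1.0, "partial": 0.5}.get(data_portability, 0.0)  # assumed weights
    return min(points, 2.0)  # cap at 2.0, as documented
```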

Operational autonomy

We look at self-hosting, open source, and (if neither) whether the vendor is EU-based.

Self-hostable? | Open source? | EU-based vendor (if neither)? | Points
Yes | Yes | n/a | 2.0
Yes | No | n/a | 1.5
No | Yes | n/a | 1.0
No | No | Yes | 0.5
No | No | No | 0.0
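
The EU-based fallback only matters when a tool is neither self-hostable nor open source. A minimal sketch of the table above:

```python
def operational_autonomy_score(self_hostable: bool, open_source: bool,
                               eu_based_vendor: bool) -> float:
    """Score the operational-autonomy dimension (0.0-2.0)."""
    if self_hostable and open_source:
        return 2.0
    if self_hostable:
        return 1.5
    if open_source:
        return 1.0
    # Neither self-hostable nor open source: fall back to vendor location.
    return 0.5 if eu_based_vendor else 0.0
```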

Confidence level

Next to the score we show a confidence level: high, medium, or low.

Missing booleans are treated as "no", and missing data portability is treated as "unknown". Scores therefore never rely on guesswork; they stay deterministic even when data is incomplete.
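
The defaulting step can be sketched as a small normalisation pass over a tool's raw data. The field names here are illustrative, not necessarily the names in our schema:

```python
def normalize(tool: dict) -> dict:
    """Apply conservative defaults so scoring stays deterministic with missing data."""
    return {
        "open_source":      tool.get("open_source", False),      # missing boolean -> "no"
        "self_hostable":    tool.get("self_hostable", False),    # missing boolean -> "no"
        "open_standards":   tool.get("open_standards", False),   # missing boolean -> "no"
        "data_portability": tool.get("data_portability", "unknown"),
    }
```

So a tool record with no openness data at all is scored as closed source without open standards, never skipped.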


What the score ranges mean

Score range | Interpretation
0.0 – 2.0 | Low sovereignty; strong dependency on non-EU or non-transparent providers.
2.5 – 5.0 | Limited sovereignty; some EU or control aspects, but notable lock-in or lack of openness.
5.5 – 8.0 | Good sovereignty; EU/EEA or strong openness and control, with some gaps.
8.5 – 10.0 | High sovereignty; EU/EEA where relevant, open, self-hostable, low lock-in.

Worked examples

Element (Matrix) – score 9.0

Nextcloud – score 10.0

GitLab – score 8.0


FAQ

Where do you get the data?
From the structured data we maintain for each tool (e.g. EU company yes/no, countries, data residency, open source, self-hostable, open standards, data portability). You can inspect and suggest changes via our repository.

Do you ever adjust scores manually?
No. The score is computed only from the documented rules. If a score changes, it is because the underlying data or the (documented) rules changed.

Why is “lock-in” scored so that higher is better?
We score “low lock-in” positively. So a higher number in that dimension means it is easier to leave the tool or move your data; a lower number means more lock-in risk.

What if data is missing?
We still compute a score using conservative defaults (e.g. missing booleans count as “no”). The confidence level (high/medium/low) tells you how much we know. Low confidence means the score could change if we add more data.

How often do scores change?
Scores are regenerated every time we build the site. So any change to tool data or to the scoring logic is reflected in the next build.