Diffix is a bundled set of mechanisms for anonymizing structured data. It was jointly developed by Aircloak GmbH and the Max Planck Institute for Software Systems. Diffix exploits mechanisms that national statistics offices have used for decades: aggregation, generalization, noise, suppression, and swapping. It automatically applies these mechanisms as needed, on a query-by-query basis, to minimize noise while ensuring strong anonymity. Here is a brief overview.
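To make two of these mechanisms concrete, here is a minimal illustrative sketch of noisy counting and low-count suppression in Python. It is not Diffix's actual algorithm; the function name, noise magnitude, and suppression threshold are invented for illustration only.

```python
import random

def anonymized_count(user_ids, noise_sd=1.0, suppress_threshold=4):
    """Illustrative sketch only, not the Diffix algorithm:
    report a count with Gaussian noise, and suppress the answer
    entirely when too few distinct individuals contribute."""
    distinct_individuals = len(set(user_ids))
    if distinct_individuals < suppress_threshold:
        return None  # suppressed: too few individuals to report safely
    noisy = len(user_ids) + random.gauss(0, noise_sd)
    return max(0, round(noisy))

# A bin with many users is reported (with a little noise added);
# a bin with only two users is suppressed.
print(anonymized_count(["u%d" % i for i in range(100)]))
print(anonymized_count(["u1", "u2"]))  # prints None
```

The point of the sketch is the trade-off described above: noise hides any single individual's contribution, and suppression prevents answers that describe only a handful of people.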
Open Diffix is a project to make Diffix anonymization free and open. The Open Diffix project develops two Diffix query engine implementations, one based on .NET and the other a PostgreSQL extension. We have developed Diffix for Desktop, a GUI-based application on the .NET query engine. Diffix for Desktop is aimed at ease of use, while Diffix for PostgreSQL targets higher complexity and scale. Both are strongly anonymous, and satisfy the GDPR definition of anonymity.
Major versions of Diffix are named after trees. Diffix Elm is the latest version (after Aspen, Birch, Cedar, and Dogwood), and is the basis for the first releases of Diffix for Desktop and Diffix for PostgreSQL. Diffix Elm represents a kind of "complexity reset": it is much simpler than previous versions, making it easier to use and easier to analyze (though less feature-rich).
A good overview of Diffix Elm can be found here. A detailed description is available on ArXiv. Besides a full specification, it includes a complete privacy analysis and guidance for writing a risk assessment.
K-anonymity uses generalization and suppression. Systems based on Differential Privacy use noise, and often generalization as well. Diffix uses all three, and so combines the benefits of both k-anonymity and Differential Privacy without formally adhering to either model. In this respect, Diffix is patterned more after how national statistics offices approach anonymization. While Diffix does not offer the mathematical guarantees of low-epsilon Differential Privacy, it also does not have the drawback of a privacy budget.
Diffix supports descriptive analytics over structured data like relational databases or CSV files: selecting columns, requesting counts or sums over those columns, putting data in bins of different sizes, and so on. Descriptive analytics is used to produce visualizations like bar graphs or scatter plots or heat maps. Diffix does not support machine learning, synthetic data generation, data masking, pseudonymization, image fuzzing, or anonymization of free-form text (redacting).
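As an example of the kind of descriptive analytics meant here, the following Python sketch computes counts per generalized bin, the sort of histogram that feeds a bar chart. The column values and bin size are invented for illustration.

```python
from collections import Counter

def bin_value(value, bin_size):
    """Generalize a numeric value to the lower edge of its bin."""
    return (value // bin_size) * bin_size

# Hypothetical ages from some dataset (illustrative values only)
ages = [23, 27, 31, 34, 35, 38, 41, 44, 59, 62]

# Count of individuals per 10-year age bin
histogram = Counter(bin_value(age, 10) for age in ages)
print(sorted(histogram.items()))
# prints [(20, 2), (30, 4), (40, 2), (50, 1), (60, 1)]
```

In Diffix the analyst expresses this as a query (select a column, bin it, count per bin); widening the bins is the generalization lever that trades detail for less suppression.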
All anonymization mechanisms reduce data quality, by generalizing or distorting, and Diffix is no exception. The data quality of Diffix is similar to data released by national statistics offices (e.g. census data), and usually far exceeds that of k-anonymity and Differential Privacy. Diffix for Desktop displays the amount of distortion, both as summary statistics and by displaying the original and anonymized data side-by-side. This way, you can observe Diffix's data quality for yourself.
Descriptive analytics over structured data covers a wide range of use cases. At one extreme, a non-technical user may wish to release simple summary statistics over data from a CSV file on their own machine. Diffix for Desktop satisfies this use case. At the other extreme, someone may wish to stream summaries of dynamic data covering millions of users into an SQL-based dashboard application. For this, Diffix for PostgreSQL is appropriate.
Diffix has two modes of operation, Trusted Analyst Mode and Untrusted Analyst Mode. Trusted Mode protects against accidental release of personal data. Untrusted Mode protects against intentional, malicious exposure of personal data. A Trusted Mode analyst does not require any expertise in anonymization in order to safely release data queried through Diffix.
Trusted Mode is easier to use. It has more query features, and in Diffix for Desktop it allows an analyst to view the anonymized and original data side-by-side. In this way the analyst knows exactly how much the data is distorted through suppression and noise, and can more easily adjust column selection and generalization as needed.
The short answer to whether Diffix is GDPR-anonymous is 'yes'. The longer answer is that there are no concrete criteria for GDPR anonymity. Ultimately it is up to a Data Protection Officer (DPO) or Data Protection Authority (DPA) to make the call. Diffix as implemented by Aircloak was almost always evaluated as GDPR anonymous, and the same will hold for Open Diffix releases.
The full specification of Diffix Elm is designed to support risk assessment by DPOs and DPAs for GDPR or any other privacy standard. It describes the anonymization mechanisms in detail, and gives an analysis of the anonymization properties against an exhaustive set of attacks. For assistance in this process you can contact us at email@example.com.
For the first few years, Open Diffix is funded by the Max Planck Institute for Software Systems as a research initiative. Our goal is to become self-sustaining through sponsorships, consultancy, or licensing.
Please contact us at firstname.lastname@example.org.