The ancients’ practice of publicizing set-in-stone personal records would run afoul of modern data privacy laws. These days, in lieu of using contemporary personally identifiable records, I anonymized a 4,000-year-old tax record from ancient Babylon to illustrate three principles for effective data anonymization at scale:
1. Embracing rare attributes: values and preserves unique data points for insights.
2. Combining statistics and machine learning: enhances the accuracy and effectiveness of data analysis and anonymization.
3. Stewarding anonymization fidelity: ensures the quality and integrity of anonymized data, preventing re-identification.
Modern governments make tradeoffs between data openness, efficiency, and privacy protections. In carving a path for data governance, state-mandated privacy protections, like the EU General Data Protection Regulation (GDPR), China’s Personal Information Protection Law (PIPL), and the California Consumer Privacy Act (CCPA), introduce considerations that shape the balance between data privacy and open access. Personally identifiable data exempted from the US Freedom of Information Act (FOIA) is no less meaningful simply because it contains names, addresses, or rare disease indications. What if we could honor the FOIA spirit of openness while protecting privacy? And what if non-coders could use cloud computation resources to perform the job in the agencies where they work?
As the anonymization of that 4,000-year-old Babylonian tax data set demonstrates, government agencies can open anonymized citizen-level data, and do it well, by applying those three principles. Let’s look at each in detail.
1. Embracing rare attributes:
In modern statistical departments, aggregated summaries are commonly used to present data to the public. While these summaries are efficient for broad analysis, they often wash out rare attributes at lower unit levels: aggregation limits data richness and stifles the representation of diverse elements. In contrast, data synthesis preserves an account of those rare attributes while severing any link to an individual’s actual record, as the sketch below illustrates.
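To make the contrast concrete, here is a minimal sketch in Python using pandas. The table, column names, and values are invented stand-ins, not the actual Babylonian records. It contrasts a grouped summary, which collapses a rare occupation into a single row, with simple per-stratum synthesis that keeps the rare category at record level without pointing back to any real person.

```python
# Hypothetical toy data: one "scribe" is a rare attribute among many "farmers".
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
real = pd.DataFrame({
    "occupation": ["farmer"] * 50 + ["scribe"],
    "tax_paid": np.append(rng.normal(10, 2, 50).round(1), 37.5),
})

# Aggregation: efficient for publication, but the lone scribe becomes one summary row.
print(real.groupby("occupation")["tax_paid"].agg(["count", "mean"]))

def synthesize(group: pd.DataFrame, n: int) -> pd.DataFrame:
    """Draw n synthetic rows from a normal fit of this stratum's tax values."""
    mu = group["tax_paid"].mean()
    sigma = max(group["tax_paid"].std(ddof=0), 1.0)  # floor the spread for tiny strata
    return pd.DataFrame({
        "occupation": group["occupation"].iloc[0],
        "tax_paid": rng.normal(mu, sigma, n).round(1),
    })

# Per-stratum synthesis: the rare occupation is still represented, but no row
# corresponds to a real person's actual payment.
synthetic = pd.concat(
    [synthesize(g, len(g)) for _, g in real.groupby("occupation")],
    ignore_index=True,
)
print(synthetic["occupation"].value_counts())
```

A per-stratum fit is the simplest possible synthesizer; real projects would use richer models, but the point stands: the rare stratum survives in the output while its provenance does not.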
2. Combining statistics and machine learning:
Even without being a skilled coder or an interpreter of ancient Near East documents, I could count on the multimodal analytics capabilities in SAS® Viya® to iteratively prep, explore, model, and make decisions on data synthesis. Statistical processes and machine learning-based algorithms, such as the Synthetic Minority Oversampling Technique (SMOTE), are available in SAS to enhance data analysis and anonymization; the sketch below shows the same oversampling idea in open-source form.
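This project applied SMOTE inside SAS Viya; as a hedged illustration of the same idea outside that platform, the sketch below uses the open-source scikit-learn and imbalanced-learn packages on invented data. SMOTE interpolates between a minority-class record and its nearest minority neighbors to create new, plausible records rather than duplicating existing ones.

```python
# Open-source illustration of SMOTE on synthetic, invented data.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced data standing in for a table where one class
# (e.g., a rare occupation flag) is heavily underrepresented.
X, y = make_classification(
    n_samples=1_000,
    n_features=6,
    weights=[0.95, 0.05],   # roughly 5% minority class
    random_state=42,
)
print("before:", Counter(y))

# SMOTE builds new minority records by interpolating between a minority row
# and its nearest minority neighbors.
smote = SMOTE(k_neighbors=5, random_state=42)
X_resampled, y_resampled = smote.fit_resample(X, y)
print("after: ", Counter(y_resampled))
```

In a synthesis workflow, those interpolated minority rows can serve as candidate synthetic records for rare strata, not merely as balanced training data.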
3. Stewarding anonymization fidelity:
To ensure quality, the data synthesis team could create additional strata to offset sampling bias and test re-identification potential against managed data or subsets containing only the most plausible synthetic records. Back-end fidelity checks confirm that the synthetic data maintains the important attributes of the real data while preventing re-identification; one generic version of such a check is sketched below.
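As a hedged example of what a back-end check might look like, and not the team’s exact procedure, the sketch below computes a distance-to-closest-record test: each synthetic row is compared with its nearest real row, and the closest matches are flagged for manual re-identification review. All data and names here are invented.

```python
# Generic distance-to-closest-record check on stand-in numeric data.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def flag_risky_synthetic_rows(real: np.ndarray, synthetic: np.ndarray,
                              quantile: float = 0.05) -> np.ndarray:
    """Return a boolean mask of synthetic rows unusually close to a real row."""
    scaler = StandardScaler().fit(real)
    nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(real))
    distances, _ = nn.kneighbors(scaler.transform(synthetic))
    threshold = np.quantile(distances, quantile)  # flag the closest 5% for review
    return distances.ravel() <= threshold

# Example with random stand-in data (numeric features only).
rng = np.random.default_rng(0)
real_X = rng.normal(size=(500, 4))
synth_X = rng.normal(size=(500, 4))
risky = flag_risky_synthetic_rows(real_X, synth_X)
print(f"{risky.sum()} synthetic rows flagged for manual re-identification review")
```

Any synthetic row that sits suspiciously close to a real record would be regenerated or dropped before release.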
A historical perspective:
Less well known are the mathematical innovations of the ancient Fertile Crescent, such as when Sumerians began representing “whos” by “whats,” row by column, in a table. These easy-tally tablets named people, their occupations, and aspects of each citizen’s relationship to the state. The Babylonians, and eventually the written world, inherited the Sumerian innovation of tabular data. In hindsight, we might celebrate these societies’ computational efficiency and openness to data access while noting their complete disregard for individual privacy.
The multimodal analytics system of SAS Viya gave me, a low-skill coder unfamiliar with Babylonian tax records, a way to generate plausible anonymized records in six hours. The result was a synthetic tabular data set that maximized data openness, efficiency, and privacy protection. Imagine the possibility of toppling every unnecessary agency firewall so that citizens, including citizen data scientists, could connect the dots for themselves. The technology is ready. Are we?