Generative AI is receiving significant attention for its capacity to produce text and images. However, these forms of media represent only a small portion of the data generated in our society today. Data is created every time a patient undergoes a medical procedure, a flight is affected by a storm, or an individual interacts with a software application.
Using generative AI to produce realistic synthetic data for these scenarios can help organizations treat patients more effectively, reroute flights, or improve software platforms, particularly when real-world data is limited or sensitive.
For the past three years, the MIT spinout DataCebo has provided a generative software system known as the Synthetic Data Vault to help organizations generate synthetic data for purposes such as testing software applications and training machine learning models.
The Synthetic Data Vault, or SDV, has been downloaded over 1 million times, with more than 10,000 data scientists using the open-source library to generate synthetic tabular data. The founders — Principal Research Scientist Kalyan Veeramachaneni and alumna Neha Patki ’15, SM ’16 — attribute the company’s success to SDV’s ability to revolutionize software testing.
SDV goes viral
In 2016, Veeramachaneni’s group in the Data to AI Lab introduced a suite of open-source generative AI tools to help organizations create synthetic data that mimicked the statistical properties of real data.
Companies can use synthetic data in place of sensitive information in their programs while preserving the statistical relationships between data points. They can also run new software through simulations on synthetic data to evaluate its performance before releasing it to the public.
Veeramachaneni’s group first encountered the need for synthetic data while collaborating with companies that wanted to share their data for research.
“MIT exposes you to various use cases,” Patki explains. “You work with financial companies and healthcare companies, and all those projects are valuable for developing solutions across industries.”
In 2020, the researchers established DataCebo to develop additional SDV features for larger organizations. Since then, the applications of SDV have been as diverse as they have been impressive.
With DataCebo’s new flight simulator, airlines can prepare for rare weather events in a manner that would be impossible using only historical data. In another example, SDV users synthesized medical records to predict health outcomes for patients with cystic fibrosis. A team from Norway used SDV to create synthetic student data to assess the fairness of various admissions policies.
In 2021, the data science platform Kaggle hosted a competition in which SDV was used to create synthetic data sets, so that proprietary data never had to be released. Approximately 30,000 data scientists participated, developing solutions and making predictions based on the company’s realistic synthetic data.
As DataCebo has expanded, it has remained loyal to its MIT origins: all current employees of the company are MIT alumni.
Supercharging software testing
Although its open-source tools are used across many applications, the company is concentrating on deepening its impact in software testing.
“You need data to test these software applications,” Veeramachaneni states. “Traditionally, developers manually write scripts to generate synthetic data. With generative models created using SDV, you can learn from a sample of collected data and then generate a large volume of synthetic data (with the same properties as real data), or create specific scenarios and edge cases, and use the data to test your application.”
For example, if a bank wanted to test a program designed to reject transfers from accounts with insufficient funds, it would need to simulate many accounts transacting simultaneously. Creating that data by hand would be time-consuming. With DataCebo’s generative models, customers can create any edge case they want to test.
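The bank scenario above can be sketched in a few lines. This is an illustrative stand-in, not DataCebo's tooling: the account fields, the `edge_fraction` knob, and the toy `should_reject` program under test are all assumptions made for the example. The point is that a generator can deliberately oversample a rare edge case (a transfer exceeding the balance) that manual scripts or historical data would rarely cover.

```python
import random

random.seed(1)

def make_accounts(n, edge_fraction=0.2):
    """Generate synthetic accounts, forcing a share of 'insufficient
    funds' edge cases that are rare in historical data."""
    accounts = []
    for i in range(n):
        balance = round(random.uniform(0, 10_000), 2)
        transfer = round(random.uniform(0, 5_000), 2)
        if random.random() < edge_fraction:
            # Forced edge case: request more than the balance.
            transfer = round(balance + random.uniform(0.01, 1_000), 2)
        accounts.append({"id": i, "balance": balance, "transfer": transfer})
    return accounts

def should_reject(account):
    # Toy program under test: reject transfers exceeding the balance.
    return account["transfer"] > account["balance"]

accounts = make_accounts(1_000)
rejected = [a for a in accounts if should_reject(a)]
```

Because the edge cases are generated on demand, the test suite exercises the rejection path thousands of times without touching any real customer record.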
“It is common for industries to possess data that is sensitive in some manner,” Patki remarks. “In domains with sensitive data, there are often regulations in place, and even in the absence of legal regulations, it is in the best interest of companies to be cautious about data access. Therefore, synthetic data is always preferable from a privacy standpoint.”
Scaling synthetic data
Veeramachaneni believes that DataCebo is advancing the field of what they refer to as synthetic enterprise data, or data derived from user interactions with large companies’ software applications.
“Enterprise data of this nature is intricate, and it is not universally available, unlike language data,” Veeramachaneni notes. “When individuals utilize our publicly accessible software and provide feedback on how it functions with a specific pattern, we gain insights into these unique patterns, enabling us to enhance our algorithms. In a way, we are building a repository of these complex patterns, which is readily available for language and image data.”
DataCebo has also recently introduced features to enhance the utility of SDV, including tools to evaluate the “realism” of the generated data, known as the SDMetrics library, as well as a method for comparing model performance called SDGym.
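To illustrate what evaluating "realism" means, here is a deliberately crude, self-contained sketch, not the SDMetrics library itself: score a synthetic column against a real one by penalizing gaps in mean and spread. The scoring formula and all the data are invented for the example; real evaluation tools compare full distributions and cross-column relationships.

```python
import random
import statistics

random.seed(2)

# One "real" column and two synthetic candidates: one drawn from a
# similar distribution, one from a clearly different distribution.
real = [random.gauss(50, 12) for _ in range(1000)]
good_synth = [random.gauss(50, 12) for _ in range(1000)]
bad_synth = [random.gauss(80, 3) for _ in range(1000)]

def realism_score(real_col, synth_col):
    """Crude 0-to-1 score: penalize differences in mean and spread,
    measured relative to the real column's standard deviation."""
    mean_gap = abs(statistics.mean(real_col) - statistics.mean(synth_col))
    sd_gap = abs(statistics.stdev(real_col) - statistics.stdev(synth_col))
    scale = statistics.stdev(real_col)
    return max(0.0, 1.0 - (mean_gap + sd_gap) / (2 * scale))

good = realism_score(real, good_synth)
bad = realism_score(real, bad_synth)
```

Even this toy score separates faithful synthetic data from unfaithful data, which is the kind of confidence check the article describes: before an organization trusts synthetic data, it wants a quantitative signal that the data behaves like the real thing.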
“It is about ensuring that organizations have confidence in this new data,” Veeramachaneni remarks. “[Our tools provide] programmable synthetic data, enabling enterprises to incorporate their specific insights and expertise to create more transparent models.”
As companies in every sector rush to embrace AI and other data science tools, DataCebo is ultimately aiding them in doing so in a more transparent and responsible manner.
“In the coming years, synthetic data generated from generative models will revolutionize all data-related activities,” Veeramachaneni predicts. “We believe that 90 percent of enterprise operations can be conducted using synthetic data.”