Posted by Katherine Heller, Research Scientist, Google Research, on behalf of the CAIR Team
Artificial intelligence (AI) and machine learning (ML) technologies are increasingly influential in today’s world. It is crucial that we consider the potential impacts on society and individuals in all aspects of the technology we create. The Context in AI Research (CAIR) team focuses on developing novel AI methods throughout the entire AI pipeline, from data to end-user feedback.
Data
The CAIR team is dedicated to understanding the data on which ML systems are built, and we prioritize improving transparency standards for ML datasets. To that end, we employ documentation frameworks such as Datasheets for Datasets and Model Cards for Model Reporting. We developed Healthsheets, a health-specific adaptation of Datasheets, to address the limitations of existing regulatory frameworks for AI and health. By collaborating with stakeholders across a range of job roles, we surface and address ethical concerns that arise during dataset development and analysis.
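To make the idea of structured dataset documentation concrete, here is a minimal sketch of a datasheet-style record as a Python dataclass. The field names are hypothetical illustrations in the spirit of Datasheets for Datasets and Healthsheets, not the published schemas.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetSheet:
    """Illustrative documentation record inspired by Datasheets for
    Datasets / Healthsheets. Field names are hypothetical, chosen to
    show the kinds of questions such frameworks ask."""
    name: str
    motivation: str              # why was the dataset created?
    collection_process: str      # how was the data gathered?
    known_limitations: list = field(default_factory=list)
    consent_obtained: bool = False

# Example record for a hypothetical health dataset.
sheet = DatasetSheet(
    name="example-clinical-notes",
    motivation="Benchmark symptom-severity prediction.",
    collection_process="De-identified notes from a partner clinic.",
    known_limitations=["Single site; may not generalize."],
    consent_obtained=True,
)

# asdict() turns the record into a plain dict, ready to serialize
# alongside a dataset release.
print(asdict(sheet))
```

The benefit of a structured record over free-form prose is that required fields (consent, limitations) cannot be silently omitted.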
Model
When ML systems are deployed in the real world, unexpected behavior and consequences can arise. The CAIR team focuses on identifying and mitigating underspecification, in which models that perform equally well in training can fail in different ways in new contexts. We have demonstrated the importance of causal mechanisms in diagnosing and mitigating fairness and robustness issues, and in identifying cases where bias is unintentionally introduced. Our research aims to develop inclusive models and techniques that mitigate bias during model development and evaluation.
Deployment
The CAIR team aims to build technology that improves the lives of all people through mobile device technology. We have explored the use of consumer technologies for chronic disease management, such as multiple sclerosis (MS). We extended the FDA MyStudies platform to make it easier for researchers to collect high-quality data, and developed the MS Signals app to interface with patients and predict high-severity symptoms. Additionally, we have partnered with organizations like Learning Ally to build systems that benefit children with learning disabilities.
Human feedback
As ML models become more prevalent, it is important to ensure that voices from less developed countries are not left behind. The CAIR team prioritizes community-driven approaches and works closely with grassroots ML organizations such as Sisonkebiotik, collaborating with these communities to address ML-related concerns and develop inclusive research initiatives.