Google Research Introduces FAX: A JAX Library for Scalable Distributed and Federated Computations
Recently, a team of researchers from Google Research introduced FAX, a software library built on top of JAX to scale the computations used in federated learning (FL). FAX is specifically designed to support large-scale distributed and federated computations in both data-center and cross-device applications.
By leveraging JAX's sharding features, FAX integrates seamlessly with TPUs (Tensor Processing Units) and sophisticated JAX runtimes such as Pathways. Its key advantage is that it embeds the essential building blocks of federated computations directly into JAX as primitives.
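To give a flavor of those building blocks, here is a minimal sketch of the broadcast/map/aggregate pattern in plain JAX. The function names (`broadcast`, `federated_map`, `federated_mean`) are illustrative stand-ins, not FAX's actual API:

```python
# A toy version of the federated-computation pattern FAX builds on,
# written in plain JAX. Names and shapes are illustrative only.
import jax
import jax.numpy as jnp

def broadcast(server_value, num_clients):
    # Replicate a server-placed value to every client (leading axis = clients).
    return jnp.broadcast_to(server_value, (num_clients,) + server_value.shape)

def federated_map(fn, client_values):
    # Apply the same function independently at each client.
    return jax.vmap(fn)(client_values)

def federated_mean(client_values):
    # Aggregate client-placed values back to the server.
    return jnp.mean(client_values, axis=0)

# Example: each of 4 clients scales a server model by its own local scalar.
server_model = jnp.ones((3,))
client_data = jnp.arange(4.0).reshape(4, 1)  # one value per client
per_client = federated_map(
    lambda args: args[0] * args[1],
    (broadcast(server_model, 4), client_data),
)
print(federated_mean(per_client))            # back on the "server"
```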
The library provides scalability, straightforward just-in-time (JIT) compilation, and automatic differentiation (AD). In FL, clients collaborate on machine learning (ML) tasks without revealing their raw data, and federated computations typically involve many clients training models in parallel with periodic synchronization. Although FL applications ultimately target on-device clients, high-performance data-center software remains crucial.
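To make that synchronization pattern concrete, here is a minimal sketch of one FedAvg-style round in plain JAX. The model, data shapes, and hyperparameters are all illustrative assumptions, not taken from FAX:

```python
# One round of federated averaging: every client takes local gradient
# steps on its own data shard, then the server averages the results.
import jax
import jax.numpy as jnp

def local_loss(params, x, y):
    return jnp.mean((x @ params - y) ** 2)

def client_update(params, x, y, lr=0.1, local_steps=5):
    # Each client trains privately on its own (x, y) shard.
    for _ in range(local_steps):
        params = params - lr * jax.grad(local_loss)(params, x, y)
    return params

@jax.jit
def fedavg_round(server_params, client_xs, client_ys):
    # Broadcast, train in parallel across the leading clients axis, average.
    updated = jax.vmap(client_update, in_axes=(None, 0, 0))(
        server_params, client_xs, client_ys
    )
    return jnp.mean(updated, axis=0)

key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (8, 16, 4))  # 8 clients, 16 examples, 4 features
ys = jnp.sum(xs, axis=-1)                # synthetic targets
params = jnp.zeros((4,))
params = fedavg_round(params, xs, ys)    # one synchronization round
```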
FAX addresses these challenges by providing a framework for specifying scalable distributed and federated computations in the data center. Through JAX's Primitive mechanism, it embeds a federated programming model into JAX, allowing FAX computations to be JIT-compiled, sharded, and lowered to XLA.
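As a rough illustration of the Primitive mechanism itself, the sketch below registers a toy "federated mean" primitive with a concrete implementation, a shape rule, and an XLA lowering so it can be JIT-compiled. This is a stand-in for illustration; FAX's real primitives additionally carry placement and sharding information:

```python
# A minimal JAX Primitive: the extension point FAX uses to embed
# federated building blocks. "federated_mean_p" here is a toy example.
import jax
import jax.numpy as jnp
from jax import core
from jax.interpreters import mlir

federated_mean_p = core.Primitive("federated_mean")

def _impl(client_values):
    # Concrete behavior: average over the leading (clients) axis.
    return jnp.mean(client_values, axis=0)

def _abstract_eval(aval):
    # Shape/dtype rule: the clients axis disappears after aggregation.
    return core.ShapedArray(aval.shape[1:], aval.dtype)

federated_mean_p.def_impl(_impl)
federated_mean_p.def_abstract_eval(_abstract_eval)
# Reuse the eager implementation as the XLA lowering so jit works.
mlir.register_lowering(
    federated_mean_p, mlir.lower_fun(_impl, multiple_results=False)
)

def federated_mean(x):
    return federated_mean_p.bind(x)

@jax.jit
def round_avg(client_models):
    return federated_mean(client_models)

print(round_avg(jnp.arange(6.0).reshape(3, 2)))  # mean over 3 clients -> (2,)
```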
FAX can shard computations between models and clients, as well as within-client data, across logical and physical device meshes, incorporating innovations from distributed data-center training such as Pathways and GSPMD. FAX also supports Federated Automatic Differentiation (federated AD), implementing forward- and reverse-mode differentiation through JAX's Primitive mechanism while preserving information about where data is placed.
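The sketch below shows the underlying jax.sharding machinery such an approach builds on: a per-client array is placed along a "clients" mesh axis so that server aggregation becomes a cross-device reduction. The axis name and shapes are assumptions for illustration, not FAX's configuration:

```python
# Sharding a per-client array along a hypothetical "clients" mesh axis.
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D device mesh named "clients" over whatever devices exist.
mesh = Mesh(np.array(jax.devices()), axis_names=("clients",))
sharding = NamedSharding(mesh, P("clients"))  # shard the leading axis

num_clients = 2 * len(jax.devices())          # e.g. 2 clients per device
client_models = jax.device_put(jnp.ones((num_clients, 4)), sharding)

@jax.jit
def server_mean(x):
    # The compiler inserts the cross-device reduction automatically.
    return jnp.mean(x, axis=0)

print(server_mean(client_models))
```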
The team has outlined their primary contributions as follows:
- Efficient translation of FAX computations into the XLA HLO format for hardware accelerators such as TPUs
- A full implementation of federated automatic differentiation in FAX (see the sketch after this list)
- Compatibility with existing cross-device federated computing systems
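As a conceptual illustration of what federated AD provides, the sketch below differentiates a server-side loss through a broadcast, per-client evaluation, and aggregation pipeline using plain jax.grad. All names and shapes are illustrative; FAX does this through its placement-aware primitives rather than the plain JAX shown here:

```python
# Reverse-mode AD through a broadcast -> local-loss -> aggregate pipeline.
import jax
import jax.numpy as jnp

def round_loss(server_params, client_xs, client_ys):
    # Broadcast server params, evaluate each client's loss, then average.
    def client_loss(x, y):
        return jnp.mean((x @ server_params - y) ** 2)
    return jnp.mean(jax.vmap(client_loss)(client_xs, client_ys))

key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (4, 8, 3))  # 4 clients, 8 examples, 3 features
ys = jnp.sum(xs, axis=-1)
params = jnp.zeros((3,))

# The gradient flows through the aggregation back to every client and
# then to the server parameters; federated AD additionally tracks where
# (server vs. clients) each intermediate value lives.
server_grad = jax.grad(round_loss)(params, xs, ys)
print(server_grad.shape)                # (3,)
```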
In conclusion, FAX is a versatile tool for a wide range of ML computations in the data center, capable of expressing distributed and parallel algorithms beyond federated learning. To learn more, check out the research paper and GitHub repository. All credit for this research goes to the dedicated researchers behind the project.
If you enjoy our work, consider subscribing to our newsletter and joining our ML community on Reddit and other platforms.
Author: Tanya Malhotra, a final-year undergraduate specializing in Artificial Intelligence and Machine Learning at the University of Petroleum & Energy Studies, Dehradun.