Using health-monitoring apps on smartphones can help individuals manage chronic diseases or stay on track with fitness goals. However, these apps can be slow and energy-inefficient due to the large machine-learning models they depend on, which need to be transferred between a smartphone and a central memory server.
To address this issue, engineers often use specialized hardware, known as machine-learning accelerators, to reduce the need for extensive data movement. While these accelerators can improve computation efficiency, they are vulnerable to attacks that can extract sensitive information.
To enhance security, researchers from MIT and the MIT-IBM Watson AI Lab have developed a machine-learning accelerator that is resistant to common types of attacks. This chip ensures the privacy of a user’s data while still enabling efficient operation of large AI models on devices.
By implementing optimizations that prioritize security without significantly affecting device speed or accuracy, this machine-learning accelerator is ideal for demanding AI applications such as augmented reality, virtual reality, and autonomous driving.
The researchers focused on preventing side-channel and bus-probing attacks on a type of machine-learning accelerator called digital in-memory compute (IMC). This kind of chip performs computations directly within a device's memory, reducing the amount of data that must move between the chip and off-chip memory.
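To make the data-movement advantage concrete, here is a rough conceptual sketch in Python (not the researchers' hardware design): in an IMC array, the weights are written into memory once and stay resident, so only the activations and the result cross the memory boundary, instead of every weight being re-fetched for each operation.

```python
# Conceptual sketch (not the MIT design): digital in-memory compute (IMC)
# performs multiply-accumulate operations where the weights are stored,
# so only inputs and outputs cross the memory boundary.

class IMCArray:
    """Toy model of a memory array that computes dot products in place."""

    def __init__(self, weights):
        # Weights are written into the array once and remain resident.
        self.weights = list(weights)

    def mac(self, activations):
        # The multiply-accumulate happens inside the array; only the
        # activations (input) and the scalar result (output) move.
        return sum(w * a for w, a in zip(self.weights, activations))

# A conventional flow would re-fetch all weights from off-chip memory
# for every operation; here they are loaded only once.
array = IMCArray([2, -1, 3])
print(array.mac([1, 4, 2]))  # 2*1 + (-1)*4 + 3*2 = 4
```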
Their approach combined three techniques: splitting data into random pieces so that no single piece reveals anything on its own, using a lightweight cipher to encrypt the model stored in off-chip memory, and generating the decryption key directly on the chip from random physical variations introduced during manufacturing. By combining these measures, they were able to enhance security without compromising performance.
During testing, the researchers attempted to steal sensitive information using side-channel and bus-probing attacks but were unsuccessful, demonstrating the effectiveness of their security measures. While the added security features did reduce the accelerator's energy efficiency and increase its chip area, the team plans to explore ways to shrink these overheads in future iterations.
The research, funded by the MIT-IBM Watson AI Lab, the National Science Foundation, and a MathWorks Engineering Fellowship, highlights the importance of designing secure and efficient machine-learning accelerators for edge devices.