Data privacy is a major concern in today's world, with many countries enacting laws such as the EU's General Data Protection Regulation (GDPR) to protect personal information. In machine learning, a key issue arises when customers want to leverage pre-trained models by fine-tuning them on their own data. Sharing features extracted from that data with a model provider can expose sensitive customer information through feature-inversion attacks.
Previous approaches to privacy-preserving transfer learning have relied on techniques such as secure multi-party computation (SMPC), differential privacy (DP), and homomorphic encryption (HE). While SMPC requires significant communication overhead and DP can reduce accuracy, HE-based methods have shown promise but face computational challenges.
A team of researchers has now developed HETAL, an efficient HE-based algorithm (shown in Figure 1) for privacy-preserving transfer learning. Their method allows clients to encrypt extracted features and send them to a server for fine-tuning without compromising data privacy.
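To make the client/server data flow concrete, here is a minimal plaintext sketch of this kind of protocol. No real encryption is performed, every name is hypothetical, and this is not HETAL's API: a fixed random projection stands in for the pre-trained backbone, and NumPy arrays stand in for ciphertexts so the structure of the computation is visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_raw, n_feat, n_classes = 64, 32, 16, 3

# --- Client side: extract features with a public pre-trained backbone.
# A fixed random projection stands in for the real backbone here.
images = rng.normal(size=(n_samples, n_raw))
labels = rng.integers(0, n_classes, size=n_samples)
backbone = rng.normal(size=(n_raw, n_feat)) / np.sqrt(n_raw)
features = images @ backbone
onehot = np.eye(n_classes)[labels]
# In the real protocol the client would now encrypt `features`
# and `onehot` before uploading them to the server.

# --- Server side: fine-tune a softmax classification layer.
# In an HE setting, every operation below runs on ciphertexts.
W = np.zeros((n_feat, n_classes))
losses = []
for _ in range(200):
    logits = features @ W                          # encrypted matmul
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)              # softmax (approximated under HE)
    losses.append(-np.mean(np.sum(onehot * np.log(p + 1e-12), axis=1)))
    grad = features.T @ (p - onehot) / n_samples   # encrypted matmul
    W -= 0.1 * grad
# The server returns the still-encrypted weights W; only the client
# holds the secret key needed to decrypt the tuned classifier.
```

The two matrix products in the loop (features times weights, and transposed features times the error) are exactly the operations HETAL accelerates with its encrypted matrix multiplication algorithms.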
At the core of HETAL is a training process optimized for encrypted matrix multiplications, the dominant operation in training neural networks. The researchers propose two novel algorithms, DiagABT and DiagATB, which significantly reduce computational costs compared to previous methods. Additionally, HETAL introduces a new approximation algorithm for the softmax function, a critical component of neural networks. Unlike previous approaches with limited approximation ranges, HETAL's algorithm can handle input values spanning exponentially large intervals, enabling accurate training over many epochs.
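Why the wide approximation range matters can be illustrated with one standard domain-extension trick (a sketch only; HETAL's actual approximation algorithm differs, and the function names below are hypothetical). A low-degree polynomial for exp is accurate only near zero, but the identity exp(x) = (exp(x / 2^k))^(2^k) stretches its valid interval by a factor of 2^k at the cost of just k squarings, each a single ciphertext multiplication under HE:

```python
import math
import numpy as np

def exp_poly(x, degree=8):
    """Low-degree Taylor polynomial for exp; accurate only near 0."""
    return sum(x**i / math.factorial(i) for i in range(degree + 1))

def exp_wide(x, k=6, degree=8):
    """Extend exp_poly to inputs ~2^k times larger via
    exp(x) = (exp(x / 2^k)) ** (2^k): scale the input down,
    apply the polynomial, then square the result k times."""
    y = exp_poly(x / 2.0**k, degree)
    for _ in range(k):
        y = y * y
    return y

def softmax_approx(logits, k=6, degree=8):
    """Softmax built from the wide-range exp approximation."""
    e = exp_wide(np.asarray(logits, dtype=float), k, degree)
    return e / e.sum()
```

With k = 6, logits anywhere in roughly [-64, 64] are handled by a degree-8 polynomial that would otherwise only be accurate on [-1, 1]; without such range extension, logits drifting outside the approximation interval during training would make the softmax output meaningless.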
The researchers demonstrated the effectiveness of HETAL through experiments on five benchmark data sets, including MNIST, CIFAR-10, and DermaMNIST (results are shown in Table 1). Their encrypted models achieved accuracy within 0.51% of their unencrypted counterparts, while maintaining practical runtimes, often less than an hour.
HETAL addresses a crucial challenge in privacy-preserving machine learning by enabling efficient, encrypted transfer learning. The proposed method protects the privacy of client data through homomorphic encryption while allowing model tuning on the server side. Additionally, HETAL's novel matrix multiplication algorithms and softmax approximation technique can potentially benefit other applications involving neural networks and encrypted computations. While there may be limitations, this work represents an important step toward practical, privacy-preserving solutions for machine learning as a service.
Review the Paper and GitHub. All credit for this research goes to the researchers of this project.
Vineet Kumar is a Consulting Intern at MarktechPost. He is currently pursuing his bachelor's degree from the Indian Institute of Technology (IIT), Kanpur. He is a machine learning enthusiast and is passionate about research and the latest advances in Deep Learning, Computer Vision, and related fields.