Privacy-preserving machine learning (PPML) algorithms use secure computation protocols to allow multiple data parties to collaboratively train machine learning (ML) models while keeping their data confidential. However, current PPML frameworks couple secure protocols with ML models in PPML algorithm implementations, making it challenging for non-experts to develop and optimize PPML applications and limiting their accessibility. We propose Sequoia, a novel PPML framework that decouples ML models from secure protocols to optimize the development and execution of PPML applications across data parties. Sequoia offers JAX-compatible APIs for users to program their ML models, while using a compiler-executor architecture to automatically apply PPML algorithms and system optimizations for model execution over distributed data. The compiler in Sequoia incorporates cross-party PPML processes into user-defined ML models by transparently adding computation, encryption, and communication steps under extensible policies, and the executor efficiently schedules code execution across multiple data parties, taking data dependencies and device constraints into account. Compared to existing PPML frameworks, Sequoia requires 64%-92% fewer lines of code for users to implement the same PPML algorithms, and achieves an 88% speedup in training throughput for horizontal PPML.
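To make the "JAX-compatible APIs" claim concrete, below is a minimal sketch of the kind of plain-JAX model a user might hand to a framework like Sequoia. The abstract does not show Sequoia's actual API, so every identifier here is hypothetical and none come from Sequoia itself; the point is that the user writes ordinary JAX, and the comments mark where a Sequoia-style compiler would transparently insert the cross-party encryption and communication steps.

```python
# Illustrative only: ordinary JAX code of the kind a decoupled PPML
# framework could consume. No Sequoia APIs appear here, since the
# source does not name them.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Plain logistic-regression forward pass in standard JAX.
    w, b = params
    return jax.nn.sigmoid(x @ w + b)

def loss_fn(params, x, y):
    # Binary cross-entropy; nothing privacy-specific appears in user code.
    p = predict(params, x)
    return -jnp.mean(y * jnp.log(p + 1e-7) + (1 - y) * jnp.log(1 - p + 1e-7))

@jax.jit
def train_step(params, x, y, lr=0.1):
    # In a PPML setting, a Sequoia-style compiler would trace this function
    # and add the secure computation, encryption, and communication steps
    # needed for x and y to remain split across data parties.
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Local usage example with synthetic data standing in for one party's share.
key = jax.random.PRNGKey(0)
params = (jax.random.normal(key, (4,)), jnp.float32(0.0))
x = jax.random.normal(key, (32, 4))
y = (x @ jnp.ones(4) > 0).astype(jnp.float32)
params = train_step(params, x, y)
```

This separation is what the abstract's compiler-executor design implies: the model code above contains no protocol logic, so the reported savings in lines of code would come from the compiler injecting the cross-party steps rather than the user writing them.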