Privacy Preserving n-Party Scalar Product Protocol

F. van Daalen*, L. Ippel, A. Dekker, I. Bermejo

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Privacy-preserving machine learning enables the training of models on decentralized datasets, both horizontally and vertically partitioned, without revealing the underlying data. However, it requires specialized techniques and algorithms to perform the necessary computations. The privacy-preserving scalar product protocol, which enables computing the dot product of vectors without revealing them, is a popular example owing to its versatility. For example, it can be used to perform analyses that require counting the number of samples that fulfill certain criteria defined across various sites, such as calculating the information gain at a node in a decision tree. Unfortunately, the solutions currently proposed in the literature focus on two-party scenarios, even though scenarios with a higher number of data parties are becoming more relevant. In this article, we propose a generalization of the protocol to an arbitrary number of parties, based on an existing two-party method. Our proposed solution relies on a recursive resolution of smaller scalar products. After describing our proposed method, we discuss potential scalability issues. Finally, we describe the privacy guarantees, identify remaining concerns, and compare the proposed method to the original two-party solution in this respect. Additionally, we provide an online repository containing the code.
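
To give a concrete sense of the two-party building block that such n-party generalizations start from, the sketch below illustrates a classic commodity-server style secure scalar product (in the spirit of Du and Zhan's two-party protocol), not the exact construction proposed in this article. The function names (`commodity_server`, `secure_dot_two_party`) and the use of integer vectors are illustrative assumptions; the idea is that a semi-trusted initializer hands out correlated randomness so Alice and Bob can compute A·B while only ever exchanging masked vectors.

```python
import numpy as np

def commodity_server(n, rng):
    """Semi-trusted initializer: generates correlated randomness offline.

    Produces random masks Ra, Rb and scalars ra, rb with ra + rb = Ra . Rb.
    (Illustrative helper, not part of the article's protocol.)
    """
    Ra = rng.integers(-100, 100, size=n)
    Rb = rng.integers(-100, 100, size=n)
    ra = int(rng.integers(-100, 100))
    rb = int(Ra @ Rb) - ra          # ensures ra + rb == Ra . Rb
    return (Ra, ra), (Rb, rb)

def secure_dot_two_party(A, B, rng=None):
    """Two-party secure scalar product, commodity-server style sketch.

    Alice holds A, Bob holds B; neither party reveals its raw vector.
    """
    rng = rng or np.random.default_rng(0)
    A, B = np.asarray(A), np.asarray(B)
    (Ra, ra), (Rb, rb) = commodity_server(len(A), rng)

    # Alice -> Bob: her vector masked with the server's randomness.
    A_hat = A + Ra

    # Bob -> Alice: his masked vector and a blinded partial result.
    B_hat = B + Rb
    v = int(A_hat @ B) + rb         # = A.B + Ra.B + rb

    # Alice recovers A.B: v - Ra.B_hat + ra = A.B + (ra + rb - Ra.Rb) = A.B
    return v - int(Ra @ B_hat) + ra

if __name__ == "__main__":
    A = [1, 0, 1, 1]                # Alice's private binary vector
    B = [1, 1, 0, 1]                # Bob's private binary vector
    assert secure_dot_two_party(A, B) == np.dot(A, B)   # == 2
```

In this style of protocol, binary vectors encoding "sample satisfies my local criterion" let the dot product count the samples that satisfy criteria held at different sites, which is the kind of cross-site counting the abstract refers to; extending it beyond two parties is what the article addresses via recursive resolution of smaller scalar products.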
Original language: English
Pages (from-to): 1060-1066
Number of pages: 7
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 34
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2023

Keywords

  • Federated learning
  • n-party scalar product protocol
  • privacy preserving
