Sebastian Hasler, Pascal Reisert, Marc Rivinius, and Ralf Küsters, “Multipars: Reduced-Communication MPC over Z2k,” Cryptology ePrint Archive, Technical Report 2023/1932, 2023.
Abstract
In recent years, actively secure SPDZ-like protocols for dishonest majority over base rings Z2k, like SPDZ2k, Overdrive2k, and MHz2k, have become more and more efficient. In this paper, we present a new actively secure MPC protocol Multipars that outperforms these state-of-the-art protocols over Z2k in terms of communication by more than a factor of 2 in the two-party setup. Multipars is the first actively secure N-party protocol over Z2k that is based on linear homomorphic encryption (LHE) in the offline phase (instead of oblivious transfer or somewhat homomorphic encryption as in previous works). The strong performance of Multipars relies on a new adaptive packing for BGV ciphertexts that allows us to reduce the parameter size of the encryption scheme and the overall communication cost. Additionally, we use modulus switching for further size reduction, a new type of enhanced CPA security over Z2k, a truncation protocol for Beaver triples, and a new LHE-based offline protocol without sacrificing over Z2k.
We have implemented Multipars and thereby provide the fastest preprocessing phase over Z2k. Our evaluation shows that Multipars offers at least a factor of 8 lower communication costs and up to a factor of 10.2 faster runtime in the WAN setting compared to the currently best available MPC implementation over Z2k.
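As background on the offline/online split referred to in the abstract, the sketch below shows how a Beaver triple generated in the offline phase is consumed in the online phase to multiply two additively shared values over Z2k (i.e., modulo 2^k). This is an idealized two-party illustration only: there are no MACs and no actual communication, and all names and parameters are ours, not taken from the Multipars implementation.

```python
# Idealized two-party illustration (names and structure are ours, not from
# the Multipars codebase): consuming a Beaver triple (a, b, c) with
# c = a*b mod 2^k, produced in the offline phase, to multiply two
# additively shared values in the online phase.
import random

K = 64
MOD = 1 << K                      # arithmetic over Z_{2^k}

def share(v):
    """Split v into two additive shares modulo 2^k."""
    s0 = random.randrange(MOD)
    return [s0, (v - s0) % MOD]

def reconstruct(shares):
    return sum(shares) % MOD

# Offline phase (idealized): a random, secret-shared Beaver triple.
a, b = random.randrange(MOD), random.randrange(MOD)
c = (a * b) % MOD
a_sh, b_sh, c_sh = share(a), share(b), share(c)

# Online phase: multiply secret-shared x and y using the triple.
x, y = 12345, 67890
x_sh, y_sh = share(x), share(y)

# The parties open d = x - a and e = y - b (the only communication here).
d = reconstruct([(xi - ai) % MOD for xi, ai in zip(x_sh, a_sh)])
e = reconstruct([(yi - bi) % MOD for yi, bi in zip(y_sh, b_sh)])

# Each party computes its share of z = x*y locally; only party 0 adds the
# public term d*e.  Correctness: x*y = c + d*b + e*a + d*e (mod 2^k).
z_sh = [(ci + d * bi + e * ai + (d * e if i == 0 else 0)) % MOD
        for i, (ai, bi, ci) in enumerate(zip(a_sh, b_sh, c_sh))]
assert reconstruct(z_sh) == (x * y) % MOD
```

Since d and e are the only values opened, all expensive work (producing the triples) sits in the input-independent offline phase, which is exactly the phase Multipars optimizes.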
Marc Rivinius, Pascal Reisert, Sebastian Hasler, and Ralf Küsters, “Convolutions in Overdrive: Maliciously Secure Convolutions for MPC,” Proceedings on Privacy Enhancing Technologies, vol. 2023, no. 3, pp. 321–353, 2023. Runner-Up for the PETS 2023 Best Student Paper Award.
Abstract
Machine learning (ML) has seen a strong rise in popularity in recent years and has become an essential tool for research and industrial applications. Given the large amount of high-quality data needed and the often sensitive nature of ML data, privacy-preserving collaborative ML is of increasing importance. In this paper, we introduce new actively secure multiparty computation (MPC) protocols which are specially optimized for privacy-preserving machine learning applications. We concentrate on the optimization of (tensor) convolutions, which are among the most commonly used components in ML architectures, especially in convolutional neural networks but also in recurrent neural networks and transformers, and therefore have a major impact on the overall performance. Our approach is based on a generalized form of structured randomness that speeds up convolutions in a fast online phase. The structured randomness is generated with homomorphic encryption using adapted and newly constructed packing methods for convolutions, which might be of independent interest. Overall, our protocols extend the state-of-the-art Overdrive family of protocols (Keller et al., EUROCRYPT 2018). We implemented our protocols on top of MP-SPDZ (Keller, CCS 2020), resulting in a full-featured implementation with support for faster convolutions. Our evaluation shows that our protocols outperform state-of-the-art actively secure MPC protocols on ML tasks like evaluating ResNet50 by a factor of 3 or more. Benchmarks for depthwise convolutions show order-of-magnitude speed-ups compared to existing approaches.
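The structured randomness mentioned above can be illustrated with a convolution triple, the convolution analogue of a Beaver triple: since convolution is bilinear, one precomputed triple (A, B, C) with C = conv(A, B) suffices to convolve two secret-shared vectors with a single opening. The sketch below is an idealized two-party illustration in Python with a plain 1-D convolution; it is not the paper's protocol, does not use the MP-SPDZ API, and all names are ours.

```python
# Illustrative sketch: a "convolution triple" consumed Beaver-style.
# Correctness uses bilinearity:
#   conv(X, Y) = conv(A, B) + conv(D, B) + conv(A, E) + conv(D, E)
# with D = X - A and E = Y - B opened.
import random

MOD = 1 << 64                                   # arithmetic over Z_{2^k}, k = 64

def conv(u, v):
    """Full 1-D convolution over Z_{2^k}."""
    out = [0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] = (out[i + j] + ui * vj) % MOD
    return out

def vadd(u, v): return [(a + b) % MOD for a, b in zip(u, v)]
def vsub(u, v): return [(a - b) % MOD for a, b in zip(u, v)]

def share_vec(v):
    s0 = [random.randrange(MOD) for _ in v]
    return s0, vsub(v, s0)

# Offline: one random convolution triple, secret-shared between two parties.
n, m = 4, 3
A = [random.randrange(MOD) for _ in range(n)]
B = [random.randrange(MOD) for _ in range(m)]
C = conv(A, B)
A_sh, B_sh, C_sh = share_vec(A), share_vec(B), share_vec(C)

# Online: inputs X, Y are shared; the parties open D = X - A and E = Y - B.
X = [1, 2, 3, 4]; Y = [5, 6, 7]
X_sh, Y_sh = share_vec(X), share_vec(Y)
D = vadd(vsub(X_sh[0], A_sh[0]), vsub(X_sh[1], A_sh[1]))
E = vadd(vsub(Y_sh[0], B_sh[0]), vsub(Y_sh[1], B_sh[1]))

# Each party computes its share of conv(X, Y) locally; party 0 adds conv(D, E).
Z_sh = []
for i in range(2):
    z = vadd(C_sh[i], vadd(conv(D, B_sh[i]), conv(A_sh[i], E)))
    if i == 0:
        z = vadd(z, conv(D, E))
    Z_sh.append(z)
assert vadd(Z_sh[0], Z_sh[1]) == conv(X, Y)
```

The same pattern applies to tensor convolutions: one structured correlation replaces the many scalar Beaver triples a naive element-wise approach would consume, which is where the online savings come from.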
Marc Rivinius, Pascal Reisert, Sebastian Hasler, and Ralf Küsters, “Convolutions in Overdrive: Maliciously Secure Convolutions for MPC,” Cryptology ePrint Archive, Technical Report 2023/359, 2023.
Abstract
Machine learning (ML) has seen a strong rise in popularity in recent years and has become an essential tool for research and industrial applications. Given the large amount of high-quality data needed and the often sensitive nature of ML data, privacy-preserving collaborative ML is of increasing importance. In this paper, we introduce new actively secure multiparty computation (MPC) protocols which are specially optimized for privacy-preserving machine learning applications. We concentrate on the optimization of (tensor) convolutions, which are among the most commonly used components in ML architectures, especially in convolutional neural networks but also in recurrent neural networks and transformers, and therefore have a major impact on the overall performance. Our approach is based on a generalized form of structured randomness that speeds up convolutions in a fast online phase. The structured randomness is generated with homomorphic encryption using adapted and newly constructed packing methods for convolutions, which might be of independent interest. Overall, our protocols extend the state-of-the-art Overdrive family of protocols (Keller et al., EUROCRYPT 2018). We implemented our protocols on top of MP-SPDZ (Keller, CCS 2020), resulting in a full-featured implementation with support for faster convolutions. Our evaluation shows that our protocols outperform state-of-the-art actively secure MPC protocols on ML tasks like evaluating ResNet50 by a factor of 3 or more. Benchmarks for depthwise convolutions show order-of-magnitude speed-ups compared to existing approaches.
Sebastian Hasler, Toomas Krips, Ralf Küsters, Pascal Reisert, and Marc Rivinius, “Overdrive LowGear 2.0: Reduced-Bandwidth MPC without Sacrifice,” Cryptology ePrint Archive, Technical Report 2023/462, 2023.
Abstract
Some of the most efficient protocols for Multi-Party Computation (MPC) follow a two-phase approach where correlated randomness, in particular Beaver triples, is generated in the offline phase and then used to speed up the online phase. Recently, more complex correlations have been introduced to optimize certain operations even further, such as matrix triples for matrix multiplications. In this paper, our goal is to improve the efficiency of triple generation in general, and in particular for classical field values as well as for matrix operations. To this end, we modify the Overdrive LowGear protocol to remove the costly sacrificing step and thereby reduce the round complexity and the bandwidth. We extend the state-of-the-art MP-SPDZ implementation with our new protocols and show that the new offline phase outperforms state-of-the-art protocols for the generation of Beaver triples and matrix triples. For example, we save 33% in bandwidth compared to Overdrive LowGear.
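As background on the sacrificing step that this work removes: in SPDZ/Overdrive-style offline phases, each Beaver triple is commonly verified against a second triple using a random public challenge, roughly doubling the cost per usable triple. The Python sketch below shows one common form of this pairwise check with all values in the clear and without MACs; the field size, function names, and simplifications are ours and are not taken from the paper or from MP-SPDZ.

```python
# Idealized sketch of the classical pairwise "sacrifice" check on a Beaver
# triple (a, b, c).  In this variant the auxiliary triple (a_hat, b, c_hat)
# shares the same b; rho and sigma would be opened (and MAC-checked) by the
# parties in a real protocol.
import random

P = 2**61 - 1                       # a prime field, as in LowGear's field setting

def sacrifice_check(a, b, c, a_hat, c_hat):
    """Accept the triple (a, b, c) iff sigma opens to zero."""
    t = random.randrange(1, P)      # public random challenge
    rho = (t * a - a_hat) % P       # opened value
    sigma = (t * c - c_hat - rho * b) % P
    return sigma == 0

a, b = random.randrange(P), random.randrange(P)
a_hat = random.randrange(P)

# A correct triple passes the check ...
assert sacrifice_check(a, b, a * b % P, a_hat, a_hat * b % P)

# ... while a triple with an additive error e is rejected: if c = a*b + e,
# then sigma = t*e, which is nonzero for every nonzero challenge t.
assert not sacrifice_check(a, b, (a * b + 1) % P, a_hat, a_hat * b % P)
```

The auxiliary triple is "sacrificed" (it cannot be reused afterwards), and the check costs extra openings and rounds; avoiding it is the source of the round-complexity and bandwidth savings claimed in the abstract.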
Pascal Reisert, Marc Rivinius, Toomas Krips, and Ralf Küsters, “Overdrive LowGear 2.0: Reduced-Bandwidth MPC without Sacrifice,” in ACM ASIA Conference on Computer and Communications Security (ASIA CCS 2023), 2023, pp. 372–386.
Abstract
Some of the most efficient protocols for Multi-Party Computation (MPC) follow a two-phase approach where correlated randomness, in particular Beaver triples, is generated in the offline phase and then used to speed up the online phase. Recently, more complex correlations have been introduced to optimize certain operations even further, such as matrix triples for matrix multiplications. In this paper, our goal is to improve the efficiency of triple generation in general, and in particular for classical field values as well as for matrix operations. To this end, we modify the Overdrive LowGear protocol to remove the costly sacrificing step and thereby reduce the round complexity and the bandwidth. We extend the state-of-the-art MP-SPDZ implementation with our new protocols and show that the new offline phase outperforms state-of-the-art protocols for the generation of Beaver triples and matrix triples. For example, we save 33% in bandwidth compared to Overdrive LowGear.