Contact

University of Oldenburg
Faculty II – Department of Computing Science
Safety-Security-Interaction Group
26111 Oldenburg

Secretariat

Ingrid Ahlhorn

+49 (0) 441 - 798 2426

I 11 0-014

Industriestrasse 11, 26121 Oldenburg

News

Paper at DBSec 2025!

Paper "Encrypt What Matters: Selective Model Encryption for More Efficient Secure Federated Learning" with SSI participation accepted at DBSec 2025!

F. Mazzone, A. Al Badawi, Y. Polyakov, M. Everts, F. Hahn, and A. Peter, "Encrypt What Matters: Selective Model Encryption for More Efficient Secure Federated Learning," in Proc. of the 39th IFIP WG 11.3 Annual Conference on Data and Applications Security and Privacy (DBSec 2025), 2025.

Short summary:

The notion that federated learning ensures privacy simply by keeping data local is widely acknowledged to be flawed. Cryptographic techniques such as Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) address this issue by concealing the model during the training procedure, but their extreme computational and communication overhead makes them impractical for real-world deployment.

However, we argue that such strong guarantees are unnecessary. Even with full-model encryption, black-box attacks remain possible during the prediction phase, since model outputs are eventually revealed to the querier. This suggests that instead of enforcing perfect privacy during training, it is sufficient to ensure that the leakage during training is no higher than the leakage during prediction.

To achieve this, we generalize POSEIDON (NDSS 2021), a state-of-the-art FHE-based federated learning approach, by selectively encrypting only the components of the model necessary to match the privacy level of the prediction phase. Our method identifies the parts of the model that contribute most to information leakage and prioritizes their encryption, significantly reducing computational and communication overhead.

Our experiments on dense neural networks show that encrypting only the last layer is often sufficient to hinder white-box attacks, improving efficiency by a linear factor in the number of layers. For deeper models, multiple layers may require encryption, but our approach still achieves a substantial speedup compared to full-model encryption.
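To make the idea of selective encryption concrete, here is a minimal, purely illustrative sketch (not the paper's implementation, and not real FHE): a toy additive masking with a shared key stands in for homomorphic encryption, so the server can still sum the masked last-layer updates without ever seeing them in the clear, while all other layers are aggregated as plaintext. The names `federated_average`, `mask`, and the `keys` mapping are assumptions introduced for this example.

```python
def mask(vec, key):
    # Toy stand-in for FHE encryption: elementwise additive masking.
    # Real systems like POSEIDON use multiparty homomorphic encryption.
    return [v + k for v, k in zip(vec, key)]

def federated_average(clients, keys):
    """Average client models layer by layer.

    `clients` is a list of models, each a list of per-layer weight vectors.
    Layers whose index appears in `keys` are uploaded masked; the server
    only ever sums those ciphertexts, and the shared key is removed after
    aggregation (mimicking joint decryption of the aggregate).
    """
    n = len(clients)
    averaged = []
    for layer_idx in range(len(clients[0])):
        if layer_idx in keys:
            # Client side: encrypt the sensitive layer before upload.
            uploads = [mask(c[layer_idx], keys[layer_idx]) for c in clients]
        else:
            # Non-sensitive layers are sent in plaintext (the efficiency win).
            uploads = [c[layer_idx] for c in clients]
        # Server side: elementwise sum of all uploads.
        summed = [sum(vals) for vals in zip(*uploads)]
        if layer_idx in keys:
            # Additivity: sum of n ciphertexts carries n copies of the key.
            summed = [s - n * k for s, k in zip(summed, keys[layer_idx])]
        averaged.append([s / n for s in summed])
    return averaged

# Two clients, a two-layer model; only the last layer (index 1) is "encrypted".
clients = [[[1.0, 2.0], [3.0]], [[3.0, 4.0], [5.0]]]
avg = federated_average(clients, {1: [0.7]})
```

The point of the sketch is only the control flow: which layers the aggregator handles as ciphertext and which as plaintext, which is exactly the dial the paper tunes per model depth.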

(Last updated: 20.08.2024)  Shortlink: https://uole.de/p87900n11963