Contact

University of Oldenburg
Faculty II - Department of Computer Science
Safety-Security-Interaction Group
26111 Oldenburg

Secretariat

Ingrid Ahlhorn

A03 2-208

+49 (0) 441 - 798 2426

Safety-Security-Interaction

Welcome to the Safety-Security-Interaction Group!

The Safety-Security-Interaction group develops theoretically sound technologies for maintaining the security of IT systems in the context of safety-critical systems and the Internet of Things. The focus is on security solutions that are tailored to context-specific conditions and that take into account both the various forms of user interaction and the functional safety of the systems to be protected.

News

Article at DBSec 2025!

SSI-co-authored paper "Encrypt What Matters: Selective Model Encryption for More Efficient Secure Federated Learning" accepted at DBSec 2025!


F. Mazzone, A. Al Badawi, Y. Polyakov, M. Everts, F. Hahn, and A. Peter, "Encrypt What Matters: Selective Model Encryption for More Efficient Secure Federated Learning" in Proc. of the 39th IFIP WG 11.3 Annual Conference on Data and Applications Security and Privacy (DBSec 2025), 2025.

Short Summary:

The notion that federated learning ensures privacy simply by keeping data local is widely acknowledged to be flawed. Cryptographic techniques such as Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) address this issue by concealing the model during the training procedure, but their extreme computational and communication overhead makes them impractical for real-world deployment.

However, we argue that such strong guarantees are unnecessary. Even with full-model encryption, black-box attacks remain possible during the prediction phase, since model outputs are eventually revealed to the querier. This suggests that instead of enforcing perfect privacy during training, it is sufficient to ensure that the leakage during training is no higher than the leakage during prediction.

To achieve this, we generalize POSEIDON (NDSS 2021), a state-of-the-art FHE-based federated learning approach, by selectively encrypting only the components of the model necessary to match the privacy level of the prediction phase. Our method identifies the parts of the model that contribute most to information leakage and prioritizes their encryption, significantly reducing computational and communication overhead.

Our experiments on dense neural networks show that encrypting only the last layer is often sufficient to hinder white-box attacks, improving efficiency by a linear factor in the number of layers. For deeper models, multiple layers may require encryption, but our approach still achieves a substantial speedup compared to full-model encryption.
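The core idea of protecting only the sensitive layers during aggregation can be illustrated with a small, self-contained sketch. This is not the paper's implementation: instead of FHE it uses pairwise additive masking (in the spirit of secure aggregation) as a stand-in for encryption, and all names (`LAYER_SIZES`, `ENCRYPTED_LAYERS`, `local_update`) are illustrative assumptions. Earlier layers are uploaded in plaintext; only the last layer is masked, yet the server still recovers the correct average because the masks cancel in the sum.

```python
# Toy sketch of selective protection in federated averaging.
# Assumption: pairwise additive masking stands in for FHE; all
# identifiers here are illustrative, not from the paper's code.
import random

NUM_CLIENTS = 3
LAYER_SIZES = [4, 4, 2]                    # last layer is the sensitive one
ENCRYPTED_LAYERS = {len(LAYER_SIZES) - 1}  # indices of layers to protect

def local_update(client_id):
    # Stand-in for local training: deterministic pseudo-updates per client.
    rng = random.Random(client_id)
    return [[rng.uniform(-1, 1) for _ in range(n)] for n in LAYER_SIZES]

def pairwise_masks(num_clients, size, seed):
    # Build masks that sum to zero across clients: for each client pair
    # (i, j), add a shared random vector to i's mask and subtract it from j's.
    rng = random.Random(seed)
    masks = [[0.0] * size for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            shared = [rng.uniform(-1, 1) for _ in range(size)]
            for k in range(size):
                masks[i][k] += shared[k]
                masks[j][k] -= shared[k]
    return masks

updates = [local_update(c) for c in range(NUM_CLIENTS)]

# Clients mask only the protected layers before uploading.
uploads = []
for layer_idx, size in enumerate(LAYER_SIZES):
    if layer_idx in ENCRYPTED_LAYERS:
        masks = pairwise_masks(NUM_CLIENTS, size, seed=layer_idx)
        layer_uploads = [
            [updates[c][layer_idx][k] + masks[c][k] for k in range(size)]
            for c in range(NUM_CLIENTS)
        ]
    else:
        layer_uploads = [updates[c][layer_idx] for c in range(NUM_CLIENTS)]
    uploads.append(layer_uploads)

# The server averages each layer; the masks cancel in the sum, so the
# aggregate is correct even for the protected last layer, while the
# server never sees that layer's individual client updates.
aggregate = [
    [sum(uploads[l][c][k] for c in range(NUM_CLIENTS)) / NUM_CLIENTS
     for k in range(size)]
    for l, size in enumerate(LAYER_SIZES)
]

expected = [
    [sum(updates[c][l][k] for c in range(NUM_CLIENTS)) / NUM_CLIENTS
     for k in range(size)]
    for l, size in enumerate(LAYER_SIZES)
]
assert all(
    abs(a - e) < 1e-9
    for la, le in zip(aggregate, expected)
    for a, e in zip(la, le)
)
```

The cost saving in the sketch mirrors the paper's argument: the expensive protection step runs only on the 2 parameters of the last layer rather than all 10, while the aggregate model is unchanged.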

(Changed: 20 Aug 2024) Shortlink: https://uole.de/p81251n11963en