Guide

Appendix C. Attack vectors

PET solutions do not guarantee protection against every potential attack at every point in the process. But what would such an attack entail? Some types of attack are more serious than others, and not all of them are equally likely to occur.

To facilitate this discussion, we include a non-exhaustive list of potential attack vectors:

  • Reconstruction attack: reconstruct the input data from the output data.
  • Membership inference: find out whether a certain record (person) is present in the input or output data set.
  • Property inference: retrieve the value of a certain attribute of a record (person) in the input or output data set.
  • Model poisoning or backdooring (in machine learning): tamper with the training phase of a machine learning model to poison it. A poisoned model may leak detailed information about some of the training data, or produce forced (malicious) outputs for certain inputs.
  • Infrastructure attack: an attack that aims to weaken the infrastructure software, for example to insert a malicious algorithm, to weaken checks on repeated queries (which guard against reconstruction attacks), or to weaken authentication/authorization to the infrastructure.
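To make one of these vectors concrete, the sketch below illustrates a membership inference attack in its simplest form: an attacker queries a model and guesses that a record was in the training set when the model's confidence on it is unusually high, exploiting the tendency of overfit models to be more confident on data they were trained on. All names, data, and the stand-in model here are hypothetical; a real attack would query an actual trained model, not a simulated one.

```python
# Hypothetical sketch of a confidence-thresholding membership inference attack.
# The "model" below is a toy stand-in that mimics an overfit model: it is
# more confident on records it was trained on than on unseen records.

def model_confidence(record, training_set):
    # Toy surrogate for a model's confidence in its prediction for `record`.
    # Real attackers observe only the model's output, not the training set.
    return 0.95 if record in training_set else 0.60

def infer_membership(record, training_set, threshold=0.8):
    # The attacker guesses "member" when confidence exceeds the threshold.
    return model_confidence(record, training_set) > threshold

training_set = {"alice", "bob"}
print(infer_membership("alice", training_set))    # trained-on record
print(infer_membership("mallory", training_set))  # unseen record
```

The same thresholding idea underlies more advanced variants (e.g. training "shadow models" to calibrate the threshold); defences such as limiting repeated queries or adding noise to outputs aim to blunt exactly this confidence gap.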