by Kristof Meding and Christoph Sorge
Abstract:
When does a digital image resemble reality? The relevance of this question increases as the generation of synthetic images—so-called deep fakes—becomes increasingly popular. Deep fakes have gained much attention for a number of reasons—among others, due to their potential to disrupt the political climate. In order to mitigate these threats, the EU AI Act implements specific transparency regulations for generating synthetic content or manipulating existing content. However, the distinction between real and synthetic images is—even from a computer vision perspective—far from trivial. We argue that the current definition of deep fakes in the AI Act and the corresponding obligations are not sufficiently specified to tackle the challenges posed by deep fakes. By analyzing the life cycle of a digital photo from the camera sensor to the digital editing features, we find that: (1.) Deep fakes are ill-defined in the EU AI Act. The definition leaves too much scope for what a deep fake is. (2.) It is unclear how editing functions like Google's "best take" feature can be considered an exception to transparency obligations. (3.) The exception for substantially edited images raises questions about what constitutes substantial editing of content and whether or not this editing must be perceptible by a natural person. Our results demonstrate that complying with the current AI Act transparency obligations is difficult for providers and deployers. As a consequence of the unclear provisions, there is a risk that exceptions may be either too broad or too limited. We intend our analysis to foster the discussion on what constitutes a deep fake and to raise awareness about the pitfalls in the current AI Act transparency obligations.
Reference:
Kristof Meding and Christoph Sorge: What constitutes a Deep Fake? The blurry line between legitimate processing and manipulation under the EU AI Act, in Proceedings of the 2025 Symposium on Computer Science and Law, Association for Computing Machinery, pp. 152–159, 2025.
Bibtex Entry:
@InProceedings{medingsorge25deepfake,
title = {{What constitutes a Deep Fake? The blurry line between
legitimate processing and manipulation under the EU AI Act
}},
author = {Kristof Meding and Christoph Sorge},
year = {2025},
isbn = {9798400714214},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3709025.3712218},
abstract = {When does a digital image resemble reality? The relevance
of this question increases as the generation of synthetic
images---so-called deep fakes---becomes increasingly
popular. Deep fakes have gained much attention for a number
of reasons---among others, due to their potential to
disrupt the political climate. In order to mitigate these
threats, the EU AI Act implements specific transparency
regulations for generating synthetic content or
manipulating existing content. However, the distinction
between real and synthetic images is---even from a
computer vision perspective---far from trivial. We argue
that the current definition of deep fakes in the AI Act and
the corresponding obligations are not sufficiently
specified to tackle the challenges posed by deep fakes. By
analyzing the life cycle of a digital photo from the camera
sensor to the digital editing features, we find that: (1.)
Deep fakes are ill-defined in the EU AI Act. The definition
leaves too much scope for what a deep fake is. (2.) It is
unclear how editing functions like Google's "best take"
feature can be considered an exception to transparency
obligations. (3.) The exception for substantially edited
images raises questions about what constitutes substantial
editing of content and whether or not this editing must be
perceptible by a natural person. Our results demonstrate
that complying with the current AI Act transparency
obligations is difficult for providers and deployers. As a
consequence of the unclear provisions, there is a risk that
exceptions may be either too broad or too limited. We
intend our analysis to foster the discussion on what
constitutes a deep fake and to raise awareness about the
pitfalls in the current AI Act transparency obligations.},
booktitle = {Proceedings of the 2025 Symposium on Computer Science and
Law},
pages = {152--159},
keywords = {Deep Fakes, EU AI Act, Image Processing, Legal Aspects,
Transparency Regulations},
url = {https://dl.acm.org/doi/10.1145/3709025.3712218}
}