
Position Regarding the “Chat Control” EU Regulation Proposal

by Bozhidar Bozhanov
License: CC BY

Interest in a highly sensitive digital topic has been gaining momentum in recent weeks – the so-called “chat control”, a draft EU regulation under which every message we send, even through end-to-end encrypted applications, would be scanned for child sexual abuse material (CSAM).

I will offer a brief retrospective and explain the technical problems, but before that I must state that the political party I represent holds the position that invasive measures against private correspondence, which create the conditions for mass surveillance, must not be introduced. The proposal – both in its original form and in the version put forward by the Danish presidency – is therefore unacceptable.

Even setting aside the provisions concerning encrypted applications, the regulation takes significant steps toward more effective action against the spread of CSAM. At the upcoming Council of the EU meeting in the fall, the contentious issue will therefore be precisely encrypted applications – on the rest there is broad consensus, since it is indisputable that more serious and effective action against such crimes is needed. The remaining provisions of the regulation should therefore be supported.

Initially, the proposal included the possibility of sending images to a central European body for scanning. This was met with strong disapproval, since in practice it eliminates end-to-end encryption – if every message containing a photo or a link is sent somewhere for inspection, encryption is effectively nullified.

Therefore, under a previous Council presidency, there was a working proposal to limit the measure to already known CSAM and to carry out the scanning only on the device, before encryption, without sending anything anywhere. At first glance this sounded more reasonable, as it moved the proposal away from mass surveillance. It even seemed that artificial intelligence could be applied directly on the device. At the time, I made such an assumption, with the caveat that careful analysis was needed.

But once such careful analysis is done, it becomes clear that this approach is both dangerous and not particularly useful for achieving the goal. I will list a few details:

  1. Organized crime groups involved in the distribution of CSAM would simply start using their own applications, which, thanks to another EU regulation (the Digital Markets Act, DMA), they would be able to install on their phones without complying with the new requirements. In other words, the protection of ordinary people’s private correspondence would be weakened, and risks of mass surveillance and abuse would be created, while criminal groups would simply bypass it.

  2. At present, no technology exists that can implement the Danish presidency’s and the Commission’s vision in a workable way. Algorithms for so-called perceptual hashing (or fuzzy hashing) were not designed to withstand malicious modifications – small visual effects or transformations are enough for an image to go undetected. Likewise, both these algorithms and AI models running on end devices produce false positives, which risks flooding law enforcement with entirely legal photos. For such a technology to be mandated by regulation, it must meet all these (and other) challenges – we cannot allow proprietary, experimental technologies to become part of legal frameworks, especially when fundamental constitutional rights are at stake.

  3. The technology, if a sufficiently good one is ever developed, must be open source, and if it uses AI, the model must also be open, with a very clear and transparent process for auditing the training data. The perceptual hashing algorithm must be resistant to malicious image alterations, because otherwise it is pointless even to attempt such techniques. Furthermore, the central database must be subject to very strict procedures for the submission and verification of content, because otherwise a member state with a weak rule of law could submit other content, including political content, that it wishes to monitor or censor. Last summer’s example in Bulgaria, with the takedown of a satirical website about New Beginning (the party of the strongest local oligarch), is just an indication of how such abuse could happen. Beyond the initial takedown, the website also appeared on cybersecurity companies’ lists as “adult content” and was blocked on networks where those companies’ software was installed.

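To illustrate why perceptual hashes are fragile under malicious modification, here is a toy sketch of a “difference hash” (dHash), one of the simplest perceptual-hashing schemes. The 8×9 images, the ±3 pixel perturbation, and the matching threshold are all illustrative assumptions on my part – real CSAM-matching algorithms such as PhotoDNA are proprietary and more elaborate, but they share the same basic weakness:

```python
# Toy dHash sketch (illustrative only, not a real CSAM-scanning algorithm).
# An "image" here is an 8x9 grid of grayscale values (lists of ints).

def dhash(img):
    """64-bit dHash: one bit per comparison of horizontally adjacent
    pixels (8 comparisons per row, 8 rows)."""
    bits = 0
    for row in img:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A toy image with smoothly increasing brightness.
img = [[r * 9 + c for c in range(9)] for r in range(8)]

# Benign change: uniform brightening. The pixel *comparisons* are
# unchanged, so the hash still matches -- this is what dHash is for.
bright = [[px + 40 for px in row] for row in img]

# "Malicious" change: add +/-3 to alternating pixels. Visually this is
# imperceptible noise, yet it flips half of the 64 hash bits, so any
# sane match threshold (e.g. <= 10 differing bits) no longer fires.
perturbed = [[px + (3 if c % 2 == 0 else -3) for c, px in enumerate(row)]
             for row in img]

print(hamming(dhash(img), dhash(bright)))     # 0  -> still detected
print(hamming(dhash(img), dhash(perturbed)))  # 32 -> evades detection
```

The threshold trade-off is exactly the policy problem: set it tight and trivial perturbations evade detection; loosen it and unrelated, entirely legal photos start colliding, producing the false positives mentioned above.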
These are only some of the arguments why the proposal is ill-conceived. A much longer debate on the issue is needed, as well as many more academic studies building the technological readiness for such approaches. The good news is that many countries are still hesitant, among them Germany, so there is no majority in the Council, while the European Parliament’s mandate is against this type of invasive change.

Where there is legitimate criticism of the EU, it is that regulations of this type are possible at all. But the answer to that criticism is that member states evidently value guarantees of personal freedom, and that through serious debate across the entire European Union, Orwellian measures can be stopped and working solutions found in place of good-sounding but unworkable technological regulations.

The post Position Regarding the “Chat Control” EU Regulation Proposal appeared first on Bozho's tech blog.