18.02.2026 19:02
Lawful access to encrypted data: General Considerations
Last week, I wrote a blog post on why the problem of lawful access to encrypted data is so tricky. This week, I want to continue with a discussion of the general considerations you should keep in mind when thinking about this topic.
Important note: I think LE is well aware of these considerations and agrees with most of my conclusions.
Regulatory Overreach
As long as citizens are free to write their own programs, or download software from the Internet, and run those applications on their own devices, there will be ways to evade any mechanism the state can think of to give LE access to encrypted data.
I see no appetite to restrict generic computing or start a government licensing scheme for communication software on any side of the discussion.
Similarly, there is no push to tightly regulate the properties of communication services. Even in Australia, where there is a clear legal mandate to force the operators into compliance, the law states that “Importantly, a TCN is expressly prohibited from requiring the building of a capability to decrypt information or remove electronic protection.”
No perfect solution
As a corollary to the first point, there cannot be a solution that works in 100% of the cases in which LE wants to access encrypted data (and has obtained a proper warrant).
There will always be skilled criminals, state actors, technology & privacy enthusiasts, niche platforms or emerging technology that will not be covered by the access solution.
Open Source is also incredibly hard to tackle here.
We just need to accept that.
No Weakening of Encryption Algorithms
Previously, the United States government tried to regulate or influence the strength of widely deployed encryption algorithms. These efforts ranged from the design of DES, to “export versions” of cryptography, to actively backdooring algorithms (the NOBUS backdoor in Dual_EC_DRBG).
There is a wide consensus that this is the wrong approach for lawful access due to the following reasons:
- A NOBUS backdoor makes the whole ecosystem brittle: if the secret backdoor leaks, then everything becomes instantly insecure.
- Weak crypto is really hard to get right:
  - With all the private investments in GPUs for AI, there is now a lot of cracking power available to rent in the cloud.
  - With the advances in computing power, the right strength is a moving target, and choices need to be right for years to come.
  - Quantum computers add further unpredictability.
- Historically, the “export versions” of cryptography led to ugly problems.
#define GOODGUYS
Who should be able to use the lawful access technology?
At first glance, this is simple, but if you think a bit more about this, it becomes really tricky.
So, who are we talking about?
Will this be a tool that the normal police force will be able to access? If yes, then perhaps we’re giving them too much power – we don’t want to fight petty crime with such a powerful tool.
Or is this something that will be reserved for the special units which are tasked with fighting terrorism or organized crime? Think FBI, BfV, DSN and equivalents.
But then you have the defensive military services: they are tasked with counterespionage and similar security functions. They, too, will want to be able to use those surveillance tools to perform their supremely important mission.
Next in line will be the intelligence agencies. If they hear that LE can have such spiffy toys to break open encrypted communication, then they will also knock on the door of the politicians. And why should they be denied? They also have a mission and why shouldn’t they also get all the tools that could make their job so much easier?
And God forbid we find ourselves in a hot war – or just a short “special military operation” – then clearly all those departments for offensive cyber warfare inside our militaries will also want to use these interfaces.
Whose good guys?
The other dimension is geography: I might trust my own police, but surely not the police from some authoritarian regime.
And the other players from the list above? Are we really ok with military intelligence services from other EU countries being able to remotely listen to encrypted communication in our country? The threshold for them in terms of legal safeguards and requirements is completely different from what applies to our national police.
Going back in history, WW2 shows another complication: alliances can shift. Former partners became adversaries overnight. Governments can be toppled, and anything you might have trusted the old one with might be used by the new one against you. The same can also apply to commercial entities from other countries.
How do we restrict the solution to the intended organizations?
Is it enough to just have legal and/or contractual restrictions in place to keep the wrong organizations from accessing the LE access mechanisms?
I fear that this is not enough. Legal boundaries can be swept aside by emergencies, political changes or simple corruption.
We need technical means to restrict the reach of the LE access methods. For example, a method that needs physical access to a device to start a wiretap is much less likely to be abused from abroad.
Jurisdiction is really, really tricky
As I already wrote in the previous article, jurisdiction is a hard problem. Consider these examples:
- A judge from Country A issues a warrant that the communication of X should be wiretapped. LEA in A installs spyware on X’s phone. X travels to a different country. Can the surveillance continue, or does it have to stop?
- A citizen of Country A uses an OTT service hosted in Country B. Can the LEA in Country A force the operator in Country B to help with the wiretap?
- A group of enthusiasts in Country A publishes open-source software which enables encrypted communication. Can a law in Country B mandate that backdoors be built in to enable wiretapping there?
- A company in Country A, where the legal mandate doesn’t apply, sells its products locally. A travelling citizen of Country B, where the law applies, buys a device and brings it home. Is this now illegal contraband?
- A device stolen in Country A is brought into Country B, where LE is bribed to break it open.
- When installing a full-disk encryption solution in Country A, the key is escrowed to some entity in Country A. The device is then brought to Country B. Will the disk have to be re-encrypted and escrowed to the equivalent entity in Country B?
- How will devices even know where they are? How can the system be subverted by messing with the location detection algorithms? (VPNs, spoofed GPS, etc.)
- Who holds the master configuration data for all of this? This includes mapping coordinates to jurisdictions and then from jurisdiction to the keys / contact points of the proper entities. Any abuse of this position completely breaks open the whole scheme.
We need to come up with a list of all these permutations and define in advance where the access scheme MUST work, SHOULD work and MUST NOT work.
Implementation is hard
Pluck suitable key-sharding algorithms, key escrow or zero-knowledge proofs from the academic literature and you can quite easily come up with something that looks perfectly workable on a PowerPoint slide. Getting it to work in a lab might also be doable.
The math of the algorithms might be manageable, but getting an implemented system that is truly secure is next-level difficult. Way too often, the security of an encryption scheme has failed not because of the algorithm, but because mistakes were made in the implementation. Typical ones are the selection of bad parameters, timing side-channels, and insufficient entropy in the creation of nonces and initialization vectors.
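To make the nonce point concrete, here is a toy illustration (the keystream construction is invented for this sketch and is not a real cipher): in any XOR-based stream construction, reusing a nonce means reusing the keystream, and XORing two ciphertexts together cancels the key entirely.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream for illustration only -- NOT a real cipher.
    return hashlib.shake_256(key + nonce).digest(length)

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream (decryption is the same operation).
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"supersecretkey!!"
nonce = b"\x00" * 12          # bug: the same nonce is reused for every message

m1 = b"attack at dawn!!"
m2 = b"retreat at dusk!"
c1 = encrypt(key, nonce, m1)
c2 = encrypt(key, nonce, m2)

# An eavesdropper who never sees the key XORs the two ciphertexts:
# the shared keystream cancels out, leaving m1 XOR m2 -- enough to
# recover both messages via classic crib-dragging.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```

The algorithm was never “broken” here; a single careless implementation choice (the fixed nonce) destroyed the security on its own.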
And then there is scaling the solution to billions of devices in the real world.
How will it be deployed in global production pipelines and supply chains?
How will it handle events like key-rollovers, security incidents or algorithm changes?
What happens when institutions or borders change?
Expanding our Attack Surface while Maintaining Security
We’re creating “intended breaking points” in device & communication security to enable LE to access unencrypted data. This is expanding the attack surface of our IT systems.
If – and this is a big if – vendors are asked to mediate LE access to devices and data (see the recent press reports on Microsoft disclosing BitLocker recovery-keys to LE), then they become even more critical.
To use an analogy: we are hoping that we’re buying good bullet-proof vests from our vendors. Now we’re asking them to modify their vests so that they cease to block bullets fired by a police officer whenever the proper judicial warrant is waved in the direction of the vest.
(Sometimes this reminds me of trying to secure weapons by adding fingerprint readers – an idea which hasn’t really taken off, either.)
Once those LE access paths are in place, our attack surface also includes:
The IT security of the vendors themselves.
If they hold the special key that can give LE access to a suspect’s phone, then they also hold the key that can enable malicious actors to access the phones of politicians, CEOs, journalists and everyone else.
Of course, we are already in many respects dependent on the internal security of our vendors – as the increased focus on supply-chain security in various policy documents shows.
Still, this is a massive expansion. It paints an even bigger bullseye on their backs. Getting into the systems that broker that access is a huge jackpot for any threat actor.
Regulatory Environment of Vendors
Do you remember the scary stories regarding our dependency on vendors from foreign countries? How Kaspersky was ruled out of various contracts because the Russian state might force it to act against the interests of its customers? The EU built the whole 5G Toolbox on the premise that letting “high-risk vendors” from third countries supply and operate networking equipment is a strategic vulnerability we cannot afford.
What about Apple, Google, Microsoft, Samsung and others? How much can they be forced by the governments in their home countries to do something that does not align with our interests? That question might have been hypothetical in the past, but those calculations have surely changed over the last 13 months.
In plain text: can foreign vendors be forced by their own national security apparatus to hand them the keys that the EU might legislate into existence?
Law Enforcement Agencies themselves
And then there are the law enforcement agencies themselves. They will hold (directly or indirectly) the keys to access any organisation’s network if this scheme ever goes live.
Thus, the integrity of LEAs is now part of everybody’s attack surface. Yes, that’s not really new, and in the physical world this always becomes relevant when a country slides from democracy into authoritarianism (just ask Minnesota). The tricky thing about the cyber side is that you can’t film misbehaving police officers with your mobile phone. The abuse would happen undetected.
So, how much do we trust the security and integrity (IT, processes, people) of LEAs?
Liability
Trust is good, liability is better.
This is something I hear a lot from CISOs, and the NIS2 directive points in the same direction: we need the people who decide to take certain risks to also assume the liability if something goes haywire.
Are the entities involved in the whole scheme (regulatory authorities, key escrow providers, certification authorities, vendors implementing the solution) in any way willing to say: “Yes, this is a good idea. And if it leads to damages caused by a security failure on my side, I will compensate the injured party.”?
If nobody is willing to do that, then perhaps we as a society didn’t complete a full risk assessment.
Corollary: no master key
In order to keep the risk and thus the potential liability somewhere on the manageable side, we cannot have a system where a single breach anywhere can completely destroy the security properties of millions of devices.
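One standard building block for exactly this property is threshold secret sharing: split a sensitive key so that, say, any 3 of 5 independent escrow entities must cooperate to reconstruct it, while a breach of one or two of them reveals nothing. Below is a minimal sketch of Shamir’s scheme in Python (illustrative only, not production-grade cryptography; the prime and the share counts are arbitrary choices for the example):

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the finite field

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, n=5, k=3)
assert combine(shares[:3]) == key   # any 3 shares suffice
assert combine(shares[2:]) == key
assert combine(shares[:2]) != key   # 2 shares reveal nothing useful
```

The point of the design is that there is no single database, HSM or官 official whose compromise unlocks everything; an attacker must breach k independent parties, ideally in different organisations and jurisdictions.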
Freedom to Innovate
The EU Commission recently published its Digital Omnibus Regulation Proposal to reduce the regulatory burden on EU-based technology companies. Whatever solution the expert group might come up with, it should not introduce a new regulatory hurdle for innovation made in the EU. If we don’t manage to thread this needle, we could drive companies out of the EU. See Proton’s move away from Switzerland for an example.
We might need something like the size caps in NIS2 or the “very large online platforms” category from the DSA to avoid targeting private operators and start-ups. A regulation that is appropriate for WhatsApp might not be sensible for the Nextcloud Talk instance of an SME.
International Impact of regulations
If a democratic, well-run country can force the global operators of OTT software to give it access to plaintext (in legally defined situations), how can those operators refuse to do the same for a country with a less stellar reputation, to say nothing of borderline authoritarian countries?
I remember that a few years back the US government suffered from a split personality regarding the Tor network: The State Department was funding Tor to help dissidents in non-free countries to be able to securely communicate and reach the open Internet, while at the same time the FBI was railing against Tor, threatening to outlaw it, because criminals in the US were using it to hide criminal marketplaces.
If we break open, e.g., Signal for our LE, how can we expect it to save the lives of civil rights campaigners in difficult countries? This circles back to “Who are the good guys?”.
Effect on User Behaviour
The solution might change the behaviour of law-abiding citizens in a negative way. For example, if it utilizes targeted software updates to devices under surveillance, then we must expect that some less bright people will choose to just “disable updates to be secure”.
Or, if the official version of a chat app includes the LE access “feature”, then people will start to side-load a version of the app which claims to be free of such “features”. This will be a bonanza for scams and will lead to an overall decline in cyber hygiene.
Yes, this might not be rational, but if the Covid pandemic showed us anything, it is this: a sizable part of our population is vulnerable to conspiracy theories that will drive their behaviour in an unsafe direction.
Thus: whatever the solution is, the effect on the broader population is something we need to think about.
Will we eat our own dog food?
According to the EU Blueprint (“XII: Secure communication”), we’re supposed to develop a secure communication infrastructure. Will we include LE access in the design?
According to press reports, Macron and von der Leyen used Signal to communicate. Would we be ok if LE could technically start to wiretap that communication (getting a warrant might be hard in this case)?
Also interesting from a legal point of view are protected communications: journalists talking to sources, lawyers writing to their clients, doctors, priests.
Prevention of Abuse / Tracking the Usage
A fully functional system to access encrypted communication can be a huge advantage for LE investigations. It has the potential to become the go-to tool whenever LE hits a rough spot in a case. It can be like a cheat code in a computer game, or the Ring of Power from the Tolkien books: powerful, but dangerous. Reading the final report from the EU Parliament on the abuse of surveillance tools is a sobering exercise.
We thus need built-in speed-brakes and accountability tools. There needs to be an audit trail that cannot be manipulated.
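As a sketch of what “cannot be manipulated” could mean technically, here is a toy hash-chained audit log in Python (all field names are made up for illustration). Each record commits to the hash of its predecessor, so a retroactive edit invalidates every later record; a real deployment would additionally anchor the chain with an external party or a transparency log, so that the log operator cannot simply rewrite the whole chain.

```python
import hashlib
import json

def append(log, entry):
    """Append an entry, chaining it to the hash of the previous record."""
    prev = log[-1]["hash"] if log else "0" * 64
    record = {"entry": entry, "prev": prev}
    record["hash"] = hashlib.sha256(
        (prev + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    log.append(record)

def verify(log):
    """Recompute the chain; any retroactive edit breaks every later link."""
    prev = "0" * 64
    for record in log:
        expected = hashlib.sha256(
            (prev + json.dumps(record["entry"], sort_keys=True)).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev"] != prev:
            return False
        prev = record["hash"]
    return True

log = []
append(log, {"who": "officer_17", "action": "wiretap_start", "warrant": "W-123"})
append(log, {"who": "officer_17", "action": "wiretap_stop"})
assert verify(log)

log[0]["entry"]["warrant"] = "W-999"   # tamper with history ...
assert not verify(log)                 # ... and verification fails
```

The cryptography here is the easy part; the governance question of who gets to read the log, and who reacts when verification fails, is exactly the accountability problem described above.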