A few weeks ago, The Guardian reported on a so-called WhatsApp “backdoor.”
This brought a flurry of high-profile security experts into the conversation, starting with Moxie Marlinspike, who helped create the encryption protocol that WhatsApp uses.
The Guardian reacted by publishing an opinion piece by Tobias Boelter, the researcher who discovered the flaw.
Some other security researchers have since written some thorough and insightful responses about The Guardian’s article.
Basically, the WhatsApp “backdoor” vulnerability brought up by The Guardian is not a backdoor per se, and the flaw had been known since April 2016.
Before we dive into the argument, it might be worth spending a bit of time explaining what exactly a backdoor is. This word has been used a lot over the past few years.
A scene about back doors from the 1983 movie War Games
Below is a concise explanation of backdoors by the Electronic Frontier Foundation (EFF):
It was originally used — along with “trapdoor” — throughout the 1980s to refer to secret accounts and/or passwords created to allow someone unknown access into a system.
Their broader interpretation of the term today:
Any mechanism someone designs into a system that allows for access via bypassing normal security measures.
As the EFF mentions in their article, a backdoor does not have to be secret:
The government’s ability to bypass the Clipper Chip’s security wasn’t a secret back in the 1990s. It was part of the system’s basic design.
If you feel like you’d want a deeper definition of the term, you can dive into the 7,000-word essay by security expert Jonathan Zdziarski.
The usage of the word backdoor was described as “supremely inaccurate” by Open Whisper Systems’ (OWS) founder Moxie Marlinspike, who explained why WhatsApp has no backdoor, and how the implementation of the end-to-end encryption protocol in fact detects man-in-the-middle attacks:
The fact that WhatsApp handles key changes is not a “backdoor,” it is how cryptography works. Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system.
By the way, an “attempt to intercept messages” is only detectable if you activate security notifications (in WhatsApp, go to Settings > Account > Security > Show security notifications: on).
Then Moxie explains why this is solely a design decision to improve the usability of WhatsApp:
The only question it might be reasonable to ask is whether these safety number change notifications should be “blocking” or “non-blocking.” In other words, when a contact’s key changes [this happens when a user reinstalls the app or changes phones], should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.
Given the size and scope of WhatsApp’s user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user’s communication, along with a simple user experience. The choice to make these notifications “blocking” would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn’t, effectively telling the server who it could MITM transparently and who it couldn’t; something that WhatsApp considered very carefully.
Note that OWS is the company that built the end-to-end encryption protocol that WhatsApp uses.
Most experts agree that there is no backdoor, but Tobias Boelter argues there is actually a flaw: a vulnerability to a man-in-the-middle attack, made possible by WhatsApp’s retransmission behavior. He published a blog post outlining the flaw.
But the flaw that The Guardian misleadingly reported in January 2017 as a backdoor had actually been known since April 2016.
Back in April 2016, Facebook even acknowledged the flaw and replied to a white-hat report from Boelter:
“[…] We were previously aware of the issue and might change it in the future, but for now it’s not something we’re actively working on changing.[…]”
The Guardian’s misleading report
As Alec Muffett outlines in his article, the core argument of The Guardian’s piece is simply that surveillance will ultimately win. In the process, the paper got its main facts wrong.
“Nobody has benefited from this article, except the author, the newspaper, and the state surveillance industry as a whole.” — Alec Muffett
The Guardian’s report is questionable. As a mea culpa, they invited Tobias Boelter, the security researcher who discovered the flaw, to write a column. Boelter set out to describe what the vulnerability is and why it matters:
A user’s public key can be used to encrypt messages which can then only be made readable again with the associated secret key. A difficult problem in secure communication is getting your friend’s public keys. Apps such as WhatsApp and Signal make the process of getting those keys easy for you by storing them on their central servers and allowing your app to download the public keys of your contacts automatically.
The problem here is that the WhatsApp server could potentially lie about the public keys. Instead of giving you your friend’s key, it could give you a public key belonging to a third party, such as the government.
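Boelter’s point can be sketched in a few lines. The toy model below is illustrative only (it is not WhatsApp’s actual code, and the names are invented): it models public keys as opaque byte strings, a key server as a dictionary, and shows how comparing key fingerprints out of band (like WhatsApp’s “safety numbers” or a QR-code scan in person) detects a substituted key.

```python
# Toy model (NOT real cryptography, not WhatsApp's implementation) of how a
# key server could substitute a public key, and how out-of-band fingerprint
# comparison detects the substitution.
import hashlib
import os

def fingerprint(public_key: bytes) -> str:
    """Short, human-comparable digest of a public key (like a safety number)."""
    return hashlib.sha256(public_key).hexdigest()[:16]

# Each party has a key pair; we only model the public half as random bytes.
bob_pub = os.urandom(32)
attacker_pub = os.urandom(32)

# An honest server returns Bob's real key; a malicious one substitutes its own.
honest_server = {"bob": bob_pub}
malicious_server = {"bob": attacker_pub}

def verify_contact(server: dict, contact: str, trusted_fp: str) -> bool:
    """Compare the fingerprint of the key the server handed us against
    one verified out of band (e.g. read off Bob's phone in person)."""
    return fingerprint(server[contact]) == trusted_fp

# Alice verified Bob's fingerprint in person earlier.
bobs_real_fp = fingerprint(bob_pub)

print(verify_contact(honest_server, "bob", bobs_real_fp))     # True
print(verify_contact(malicious_server, "bob", bobs_real_fp))  # False: MITM detected
```

This is the whole crux of the debate: the cryptography makes the substitution detectable, but only if the user actually compares fingerprints or is told when they change.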
And he also explained how WhatsApp failed to sufficiently inform users of their option to be notified when keys change, and to verify their keys with friends:
You should be notified when you are sent a friend’s new public key, and given the option to validate again that this new key indeed belongs to your friend and not some other party. This behavior is called “blocking”. The problem with WhatsApp is that you are not given this option.
Instead, your WhatsApp will automatically accept this new key and resend all “in transit” messages (those marked with only one tick), encrypted with the new, potentially malicious key. This behavior is called “non-blocking”.
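The difference between the two policies can be modeled in a short sketch. This is a hypothetical simplification (class and field names are my own, and real clients do far more): a blocking client holds undelivered messages until the user verifies the new key, while a non-blocking client accepts the key, posts a notice, and resends automatically.

```python
# Illustrative sketch (not WhatsApp's or Signal's actual code) contrasting
# "blocking" and "non-blocking" handling of a contact's key change.
from dataclasses import dataclass, field

@dataclass
class Client:
    blocking: bool                    # True = Signal-style, False = WhatsApp-style
    known_key: str = "key-v1"
    pending: list = field(default_factory=list)  # "in transit" msgs (one tick)
    log: list = field(default_factory=list)

    def on_key_change(self, new_key: str) -> None:
        if self.blocking:
            # Blocking: hold pending messages until the user verifies new_key.
            self.log.append(f"BLOCKED: verify {new_key} before resending")
        else:
            # Non-blocking: accept the new key, show a notice, and resend
            # pending messages encrypted to it automatically.
            self.known_key = new_key
            self.log.append(f"notice: key changed to {new_key}")
            self.log.append(f"resent {len(self.pending)} pending messages")
            self.pending.clear()

signal_like = Client(blocking=True, pending=["hi"])
whatsapp_like = Client(blocking=False, pending=["hi"])
signal_like.on_key_change("key-v2")
whatsapp_like.on_key_change("key-v2")
print(signal_like.pending)    # ['hi']  (still held back)
print(whatsapp_like.pending)  # []      (already resent under the new key)
```

The automatic resend in the non-blocking branch is exactly the retransmission behavior Boelter objects to: if the new key belongs to an attacker, the held-back messages are re-encrypted to that attacker before the user sees any warning.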
Again, you can enable the “show security notifications” in WhatsApp’s setting — or switch to Signal, OWS’s own secure messaging tool.
Whoever is right — this whole debate is happening because many users favor network effects, convenience, and usability over privacy.
This argument was also corroborated by Frederic Jacobs, a former Signal staffer.
But again, Tobias Boelter refutes the user experience vs. security argument:
Signal chooses to handle key changes with blocking and so does not have this vulnerability, while WhatsApp chooses to go with non-blocking and therefore has it. So how are they different? How much more difficult is Signal to use?
I’ll leave it to you to decide.
While it’s great to have security experts raising questions over the security of ubiquitous messaging applications, I think we are still missing the bigger picture. WhatsApp collects extensive metadata about its users’ communication.
As I mentioned in my previous post, end-to-end encryption only protects the content of messages: it does not prevent messaging services from collecting metadata about who talks to whom, and when.
This blog post was edited by freeCodeCamp.