There are two iron laws of security that are often tragically ignored:
I. “There is no abstract ‘security’ – only security from some specific threat”
II. “There is no security in obscurity.”
Bridgefy, an app that’s been billed as a way for protesters to communicate securely, illustrates both of them.
Bridgefy is an offline messaging tool – a mobile app that uses Bluetooth to pass encrypted messages around a crowd where there is no internet access.
It was originally billed as being useful for big festivals and concerts out in the countryside, where there were lots of people but little or no internet connectivity.
However, as protests have spread around the world, the company has promoted its product as a tool for at-risk protesters seeking to coordinate uprisings for which they might face severe retaliation, including imprisonment, torture and murder.
In April, a group of Royal Holloway researchers audited the app and found it severely unsuitable for these contexts, potentially exposing users to life-threatening hazards. They told the company about these flaws then, but have only now published their findings.
The researchers’ findings reveal that the threats to users from using the app at festivals are very different to the threats that protesters face in repressive regimes (“There is no abstract ‘security’ – only security from some specific threat”).
They also find that the product team made a bunch of mistakes that they overlooked, a common problem (it’s why I can’t find my own typos!) that exposed users to attacks from anyone who knew how to hunt for these errors (“There is no security in obscurity”).
For example, the app sends the ID of both the sender and recipient of every message “in the clear” (without encryption). That allows an attacker who intercepts this metadata to assemble social graphs: Alice knows Bob, Bob knows Carol.
This might expose concertgoers to some risk (for example, if Carol is arrested for selling drugs, Alice and Bob’s messages to her might put them under suspicion). But in a protest context, that exposes the whole movement to risk.
What’s more, the identifiers the app uses are tied to users’ phone numbers: an attacker at a concert would need access to a database that maps phone numbers to real identities. A state-level adversary can simply demand these connections from the phone company.
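To make the metadata risk concrete, here's a minimal sketch of how an eavesdropper could assemble a social graph from those cleartext sender/recipient IDs. The message format here is invented for illustration; it is not Bridgefy's actual wire protocol.

```python
# Hypothetical sketch: building a social graph from intercepted,
# unencrypted (sender_id, recipient_id) metadata. The IDs and packet
# shape are illustrative, not Bridgefy's real format.
from collections import defaultdict

# Each intercepted packet exposes who is talking to whom, in the clear.
intercepted = [
    ("alice", "bob"),
    ("bob", "carol"),
    ("alice", "carol"),
]

# Build an undirected "who knows whom" graph.
graph = defaultdict(set)
for sender, recipient in intercepted:
    graph[sender].add(recipient)
    graph[recipient].add(sender)

# If Carol is arrested, everyone who ever messaged her is now exposed:
print(sorted(graph["carol"]))  # → ['alice', 'bob']
```

Note that the attacker never decrypts anything: the message bodies can be perfectly encrypted and the graph still falls out of the headers.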
But not all the flaws in the system stem from the differences in threats at concerts and protests. Some of Bridgefy’s flaws threaten users in ANY context, and stem from the developers’ own blind spots about errors in their thinking.
For example, the system doesn’t have any “out of band” way to initialize keys between users. That means that when Alice wants to send a secret message to Bob, she first announces to the whole network that she is Alice and this is her public key that Bob should use.
An attacker in the network can – rather than passing that message on – replace it with a message that substitutes their OWN key, and thereafter intercept, read, and relay all the messages from Alice to Bob (a “man in the middle” attack).
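Here's an illustrative sketch of that key-substitution attack. The names and message shapes are invented for clarity; this is not Bridgefy's actual protocol, just the general shape of a MITM against an unauthenticated key announcement.

```python
# Hypothetical sketch of a man-in-the-middle key substitution.
# There is no out-of-band channel, so nothing binds "alice" to her key.

def announce_key(name: str, public_key: str) -> dict:
    # Alice broadcasts her key; nothing proves the key is really hers.
    return {"from": name, "key": public_key}

def mitm_relay(announcement: dict, attacker_key: str) -> dict:
    # A node on the path forwards the announcement -- but swaps in
    # its own key first.
    tampered = dict(announcement)
    tampered["key"] = attacker_key
    return tampered

alice_msg = announce_key("alice", "ALICE_PUBKEY")
forwarded = mitm_relay(alice_msg, "MALLORY_PUBKEY")

# Bob receives a message that still claims to be from Alice...
print(forwarded["from"])  # → alice
# ...but now encrypts his replies to Mallory's key, not Alice's.
print(forwarded["key"])   # → MALLORY_PUBKEY
```

From then on Mallory can decrypt each message with her own key, read it, re-encrypt it to Bob's real key, and pass it along, with neither party any the wiser. This is exactly why protocols like Signal's add authenticated key exchange and safety-number verification.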
Worse than that, the messages are encrypted using PKCS #1 v1.5 padding, a scheme that has been known to be unsalvageably flawed since Bleichenbacher’s padding-oracle attack in 1998 and has been deprecated ever since.
The app also fails to do vital forms of input sanitization: it doesn’t check for “zip bombs” – small compressed files that, when decompressed, expand to junk files that are millions of times larger. These bombs could crash enough devices in the network to shut it down.
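The missing defense here is simple: cap how much a compressed payload is allowed to expand before you give up on it. Here's a minimal sketch using Python's standard `zlib` module; the 1 MiB limit is an illustrative choice, not a value from the app.

```python
# Minimal sketch of zip-bomb-resistant decompression: refuse any
# payload whose decompressed size exceeds a fixed cap.
import zlib

MAX_DECOMPRESSED = 1 << 20  # 1 MiB cap (illustrative)

def safe_decompress(payload: bytes) -> bytes:
    d = zlib.decompressobj()
    # max_length stops decompression once the cap is reached.
    out = d.decompress(payload, MAX_DECOMPRESSED)
    if d.unconsumed_tail:
        # Compressed input remains: output would exceed the cap.
        raise ValueError("decompressed size limit exceeded")
    return out

# A toy "bomb": 10 MiB of zeros compresses down to a few KiB.
bomb = zlib.compress(b"\x00" * (10 * 1024 * 1024))
try:
    safe_decompress(bomb)
except ValueError as e:
    print("rejected:", e)
```

A node that decompresses without such a cap can be made to exhaust its memory or storage by a payload a few kilobytes long, which is how a handful of tiny messages can knock enough devices offline to partition the whole mesh.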
Though Bridgefy has known of the vulnerabilities since April, they are only now announcing them. They attribute the delay to their fruitless internal efforts to remediate these defects, and their ultimate conclusion that their system needs to be rebuilt from the ground up.
They say they are now doing that work, rebuilding the app around the Signal protocol, which is very robust and has been widely probed to identify and shore up weaknesses.
It’s good that they’re doing this. A third iron law of security is that “Security is a process, not a product” – that is, security is always contingent, and requires constant tending and upgrading to patch newly identified defects.
We can’t and shouldn’t expect products to be perfectly secure – all we can ask is that product teams are transparent about which threats they considered in their design, how their products work, and which defects have been identified in them.
Unfortunately, while Bridgefy is doing the right thing by acknowledging the bugs, thanking the research team, and fixing them, the rest of their conduct is less than exemplary.
It was wrong to promote an app designed for concerts as a tool for protesters without considering the differences in the threats to those user populations.
Worse, though the team has known of these defects since April, they didn’t start correcting the record on end-to-end encryption promises until June. And, as Dan Goodin points out on Ars Technica, their messaging continues to imply that it is safe to use.
Bridgefy: even worse than previously believed.
(They lost me at “must have Internet during installation” [link]; I didn’t even get as far as security.)
((*reads articles* wait, hang on, verification is optional now? did Bridgefy become an actual functional mesh system in December and not tell anyone?? Bridgefy: *better* than previously believed???))
(((of course the *other* part of my misgivings about them were vague shady-corporation vibes, which have now intensified)))
#promoted the above from a tag ramble because I thought it ought to be fully part of the thread #and also to be able to include that very relevant and timely link #101 Uses for Infrastructureless Computers #reply via reblog #oh look an update