This article is out of date. Check here for more recent information about Signal.
There are dozens of messaging apps for iPhone and Android, but one in particular continues to stand out in the crowd. Signal is easy to use, works on both iOS and Android, and encrypts communications so that only the sender and recipient can decipher them.
It also has open source code, meaning it can be inspected to verify security. You can download Signal from the Android Play Store and the iPhone App Store.
Although Signal is well-designed, there are extra steps you must take if you want to maximize the security for your most sensitive conversations — the ones that could be misinterpreted by an employer, client, or airport security screener; might be of interest to a snooping government, whether at home or abroad; or could allow a thief or hacker to blackmail you or steal your identity.
I discuss these steps at length below, in order of importance.
Signal uses strong end-to-end encryption, which, when properly used, ensures that no one involved in facilitating your conversation can see what you’re saying — not the makers of Signal, not your cellphone or broadband provider, and not the NSA or another spy agency that collects internet traffic in bulk.
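To make the end-to-end idea concrete, here is a minimal sketch using the PyNaCl library. This is not Signal's actual protocol (Signal layers the Double Ratchet on top of primitives like this one to add forward secrecy), but it shows the core property: anyone relaying the ciphertext learns nothing about its contents.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Not Signal's actual protocol; Signal adds forward secrecy via the
# Double Ratchet. The core property is the same: the relaying server
# only ever sees ciphertext.
from nacl.public import PrivateKey, Box

juliet_key = PrivateKey.generate()  # Juliet's keypair
romeo_key = PrivateKey.generate()   # Romeo's keypair

# Juliet encrypts with her private key and Romeo's public key.
sending_box = Box(juliet_key, romeo_key.public_key)
ciphertext = sending_box.encrypt(b"Meet me at the balcony")

# Whoever carries `ciphertext` cannot read it. Only Romeo can:
receiving_box = Box(romeo_key, juliet_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'Meet me at the balcony'
```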
But Signal’s encryption scheme can’t stop someone from picking up your phone and opening the app to read through your conversations. You have to take additional precautions.
If you’re using Android: open Settings, tap Security, then Screen lock, and set a PIN or, better yet, a password. (The exact path may vary slightly by Android version.)
If you’re using an iPhone: open Settings, tap Touch ID & Passcode, and turn on a passcode, ideally a custom alphanumeric one (under Passcode Options).
Signal’s powerful encryption won’t necessarily help you if other people can see incoming Signal messages displayed on your lock screen. Displaying messages on the lock screen is Signal’s default behavior, but you should change this if your phone is frequently in physical proximity to people who shouldn’t see your Signal messages — roommates, coworkers, or airport screeners, for example.
Left: Signal notification on locked Android phone. Right: Signal notification on locked iPhone.
Here’s how to lock down your Signal notifications.
If you’re using Android: open Settings, tap Sound & notification, tap When device is locked, and choose “Hide sensitive notification content” (you need a screen lock set for this option to appear).
If you’re using an iPhone: open Settings, tap Notifications, tap Signal, and turn off “Show on Lock Screen.”
Left: Hidden Signal notification on locked Android phone. Right: Hidden Signal notifications on locked iPhone.
I said earlier that Signal ensures your communications stay private when it is properly used. Using Signal properly involves verifying that your communications are not subject to a “man-in-the-middle attack.”
A man-in-the-middle attack is where two parties (Romeo and Juliet, for example) think they’re speaking directly to each other, but instead, Romeo is speaking to an attacker, Juliet is speaking to the same attacker, and the attacker is connecting the two, spying on everything along the way. In order to fully safeguard your communications, you have to take extra steps to verify that you’re encrypting directly to your friends and not to impostors.
Most messaging apps don’t provide any way to do this sort of verification. Signal provides two: one for verifying voice calls and one for verifying text conversations.
It’s easy to verify the security of phone calls on Signal, but you have to verify every call.
For each call, the Signal app displays two words on the callers’ phone screens. In the screen shot below, for example, each screen shows the words “shamrock paragon.” Juliet and Romeo read these words to one another; if the words are the same, and they recognize one another’s voices, the call is secure. If the words are different, someone is attacking the encryption in the call and you should hang up and try calling again, but this time from a different internet connection.
It’s not required, but a popular convention is for the receiver to answer the phone by reading the first word, as in, “Shamrock?”, and for the caller to respond with the second word, as in, “Paragon.”
Left: Encrypted Signal voice call in Android. Right: Encrypted Signal voice call on an iPhone.
I admit that this sounds like magic, but I assure you that it’s only mathematics. Here’s how it works: When Juliet calls Romeo using Signal, her app communicates with his app and comes up with a shared secret that no one else can possibly learn, even if they’re spying on this exchange — watch this five-minute video if you want to get some information about how this works. The Signal app on each phone takes this shared secret and converts it into the two-word authentication string. As long as the shared secret is exactly the same, the authentication string will be exactly the same as well.
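For the curious, here is a simplified sketch of that last step. Signal's real construction (inherited from the ZRTP protocol) uses a standardized word list and differs in detail; the tiny word lists below are hypothetical stand-ins.

```python
# Simplified sketch: deriving a two-word authentication string from the
# shared secret. These word lists are hypothetical stand-ins for the
# much larger list Signal actually uses.
import hashlib

FIRST_WORDS = ["shamrock", "acrobat", "buzzard", "crusade"]
SECOND_WORDS = ["paragon", "molecule", "treadmill", "uproar"]

def auth_string(shared_secret: bytes) -> str:
    digest = hashlib.sha256(shared_secret).digest()
    # Map the first two bytes of the hash onto the two word lists.
    first = FIRST_WORDS[digest[0] % len(FIRST_WORDS)]
    second = SECOND_WORDS[digest[1] % len(SECOND_WORDS)]
    return f"{first} {second}"

# Both phones hash the same shared secret, so they display the same
# words. A man in the middle produces two different secrets, and
# therefore two different word pairs.
print(auth_string(b"output of the key agreement"))
```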
It’s more complicated to verify the security of Signal text chats, but once you’ve verified a text chat correspondent, you won’t have to re-verify them again until they get a new phone or re-install Signal.
Each person you text with in Signal has something called an identity key. When Juliet sends Romeo a message for the first time, her Signal app downloads a copy of his identity key and stores it on her phone, and vice versa. So long as these identity keys are valid — the key that Juliet has stored for Romeo is actually Romeo’s real key and not some attacker’s key — then the messages they send to each other are secure.
Because it’s unlikely that anyone is trying to attack your encrypted messages the very first time you send a contact a message, Signal automatically trusts the identity key that it downloads. This makes Signal easy to use: All you need to do to have an encrypted conversation is send someone a message, and that’s it. But if you discuss anything sensitive, you still might want to confirm that the key really belongs to your contact.
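This policy is often called “trust on first use.” A minimal sketch of the idea (the storage and names here are illustrative, not Signal's actual code):

```python
# Trust-on-first-use (TOFU) identity-key pinning, sketched. Names and
# storage are illustrative; Signal keeps this state in its own database.
known_keys = {}  # contact -> pinned identity key

def check_identity(contact: str, presented_key: bytes) -> str:
    pinned = known_keys.get(contact)
    if pinned is None:
        known_keys[contact] = presented_key  # first contact: trust it
        return "trusted on first use; verify out of band if sensitive"
    if pinned == presented_key:
        return "key matches the one seen before"
    return "WARNING: identity key changed (reinstall, new phone, or attack)"
```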
To verify the identity key, you first navigate to the verification screen.
If you’re using Android:
If you’re using an iPhone:
Left: Signal identity verification in Android. Right: Signal identity verification on an iPhone.
Next, you want to confirm you have the correct identity key for your contact. You can do this either by scanning “QR codes,” which work similarly to the bar codes used to ring up groceries, or by comparing “fingerprints,” which are 66-character blocks of text.
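The 66-character length is no accident: as far as I can tell, a fingerprint at this time was simply the 33-byte identity public key (a one-byte key type plus a 32-byte Curve25519 key) rendered as hexadecimal. A toy illustration, using a random stand-in key:

```python
# Why 66 characters? A 33-byte identity key rendered as hex yields
# exactly 66 hex digits. The key below is random, for illustration only.
import os

identity_key = os.urandom(33)         # stand-in for a real identity key
fingerprint = identity_key.hex()
print(len(fingerprint), fingerprint)  # 66 <hex digits>
```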
If you’re able to meet up in person, here’s how you verify identity keys using QR codes:
If you’re using Android:
If you’re using an iPhone:
When you successfully verify a contact, Signal should pop up a message that says, “Verified!”
If you can’t meet up in person, you can still verify that you have the right identity key by comparing fingerprints — however, it’s kind of annoying.
You need to share your fingerprint with your contact using some out-of-band communication channel — that is, don’t share it in a Signal message. Instead, share it in a Facebook message, Twitter direct message, email, or phone call. You could also choose to share it using some other encrypted messaging app, such as WhatsApp or iMessage. (If you’re feeling paranoid, a phone call is a good option; it would be challenging for an attacker to pretend to be your contact if you recognize their voice.)
Once your contact gets your fingerprint, they need to navigate to the verification screen and compare, character by character, what you sent them with what they see. If they match, your conversation is secure.
Your contact should share their fingerprint with you in the same way, and you should confirm that what they sent you matches what’s on your verification screen as well.
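Since apps and people often write fingerprints with spaces or in differing case, the comparison should ignore those cosmetic differences. A hypothetical helper:

```python
# Hypothetical helper for checking a fingerprint received out of band:
# ignore whitespace and letter case, then compare character by character.
def fingerprints_match(mine: str, theirs: str) -> bool:
    normalize = lambda fp: "".join(fp.split()).lower()
    return normalize(mine) == normalize(theirs)

assert fingerprints_match("AB CD EF", "abcdef")
assert not fingerprints_match("ABCDEF", "ABCDE0")
```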
If you’re using Android, unfortunately there’s no way to copy your own fingerprint to your phone’s clipboard to paste into another app. If you want to share it using another app on your phone, you’ll have to manually type it.
If you’re using an iPhone, you can copy your own fingerprint to your phone’s clipboard like this: Open the Signal app and tap the gear icon in the top-left to get to Signal’s settings. Tap Privacy, then tap Fingerprint.
From time to time, you might see a warning in a Signal conversation that says “Identity key changed. Tap to verify new key.” This can only mean one of two things: either your contact got a new phone or reinstalled Signal, or someone is attacking your connection and impersonating your contact.
The latter is less likely, but the only way to rule it out completely is to again go through one of the verification processes for text contacts described above.
After Juliet sends a message to Romeo using Signal, copies of this message exist in only two locations: on Juliet’s phone and on Romeo’s phone. Unlike other messaging apps, Signal doesn’t store a copy of your messages on internet servers (“in the cloud”). Still, if you have a sensitive conversation, it may be a good idea to delete it when you no longer need it.
You can also archive conversations that you want to keep around but don’t want cluttering your Signal app. Here’s how to delete and archive Signal conversations.
When you open the Signal app, you will see a list of your conversations — your inbox, essentially. You can swipe a conversation to the right to archive it, which moves it out of your inbox and into an “archived conversations” list. Deleting a message or conversation varies depending upon your phone’s operating system:
If you’re using Android:
To delete a message, open the conversation, pick the message you’d like to delete, and long-touch it. This will select the message and give you the option to delete it. Similarly, to delete a conversation, pick a conversation from your inbox and long-touch it. This will select the conversation and give you the option to delete it.
If you’re using an iPhone:
To delete a message, open the conversation, pick the message you’d like to delete, long-touch it, and choose “Delete.” To delete a conversation, pick the conversation you’d like to delete from your inbox and swipe to the left to delete it.
Deleting messages is permanent. If you delete a message from your Signal app, and the person you’re talking to deletes it from their Signal app, the message will be completely gone.
You are making it seem like the truth is in the details.
Part of my point (which I will try to make even more explicit) is that they don’t even have to put backdoors in Signal itself, because of the hosting OS:
1) they don’t own it
2) it is fully or partially closed source and/or proprietary
3) to take just one example, Google (whose Android is for the most part open source) doesn’t even care about “security patches and upgrades,” leaving most of their phones at the mercy of the NSA (oh, now wait, I have no proof for that and it sounds too sinister, sick. They do it “obviously” because they want people to upgrade to “their latest, greatest” …)
https://en.wikipedia.org/wiki/Android_(operating_system)
Android’s source code is released by Google under open source licenses, although most Android devices ultimately ship with a combination of open source and proprietary software, including proprietary software required for accessing Google services.[3]
…
At the same time, as Android has no centralised update system most Android devices fail to receive security updates: research in 2015, concluded that almost 90% of Android phones in use had known but unpatched security vulnerabilities due to lack of updates and support.[23][24]
~
RCL
All this is true, but it isn’t evidence that there are backdoors in Signal. Just because the app is distributed on Google Play does not mean they ship a backdoor in it.
Backdoors in Google Android are not the same as backdoors in Signal. I agree there probably are backdoors in Google (not AOSP/CM/Replicant) Android in the Google apps, but this does not affect the status of Signal.
It’s much easier just to watch people through the Google Play Services, which runs as root, than compel Open Whisper Systems, who might tell people that they are being coerced into making Signal malicious.
“… whose OS internals are owned by the NSA,” you should have said. I would also add that Signal knows that very well, as do “Sound” and “Symphony”
Even (someone pretending to be) your “Richard Stallman” hero, pointing to docs on his site, has pointed out to Lee the wasteful irrelevance of trying to encrypt anything on a compromised system:
https://theintercept.com/2015/12/28/recently-bought-a-windows-computer-microsoft-probably-has-your-encryption-key/?comments=1
In that case, believe it or not, Lee was trying to persuade people into “outsmarting MS”!
I have a simple, basic question for you, smart people. All those encrypted messaging “apps” are technically doing basically the same damn thing Lavabit was. Lavabit’s business was not only shut down by the USG; they even tried to gag-order Ladar Levison
https://en.wikipedia.org/wiki/Lavabit
Of course, Lee knows about Lavabit. This is what he wrote about them while talking about the legal aspects of warrant canaries:
Don’t those “actions” and “occurrences” speak louder than words, to you?
RCL
Er… Do you have a better solution? Just because most people use a proprietary OS doesn’t mean that we have completely lost the fight against mass surveillance. Certainly, Signal should make it so Google apps are not required. Certainly, Signal should recommend Replicant and mobile OSs not controlled by Google. Yet Signal is better than nothing.
Signal has no access to the messages as far as I know, because they are encrypted client-side. So unlike Lavabit, which held the keys to the kingdom in its private keys (as most of its users weren’t using end-to-end encryption), Signal can’t divulge the messages.
I don’t know if they put backdoors in. It’s possible but you are not providing evidence for this, just vague FUD. Conspiracy theorists are people who make vague assertions without evidence. Investigators and researchers are people who uncover details and provide evidence. You fall into the former category, I’m sorry to say.
Actually, all U.S.-based companies MUST, “by the rule of law”, submit to snitching. Don’t take it from me, but from Lavabit’s Ladar Levison:
// __ Secrets, lies and Snowden’s email: why I was forced to shut down Lavabit
theguardian.com/commentisfree/2014/may/20/why-did-lavabit-shut-down-snowden-email
and Qwest Communications’ Joseph Nacchio
http://en.wikipedia.org/wiki/Joseph_Nacchio#Qwest
Joseph P. Nacchio was the only head of a communications company to demand a court order, or approval under the Foreign Intelligence Surveillance Act, in order to turn over communications records to the NSA.[9]
[9] “NSA has massive database of Americans’ phone calls”
http://www.usatoday.com/news/washington/2006-05-10-nsa_x.htm
“In June 2002, Nacchio resigned amid allegations that he had misled investors about Qwest’s financial health. But Qwest’s legal questions about the NSA request remained.”
Also, what is the point of citing our “representatives” when they themselves admitted to not even knowing what N-S-A stood for, let alone that it was recording the metadata and content of all domestic, and as many international, comms as they could, en masse.
RCL
Fair enough; but that is a different issue, as it doesn’t address the OP’s concern: the fact that (to my knowledge) there exists no legislation yet “that makes it illegal to produce NSA-proof encryption and illegal to talk about it being illegal to make systems that are actually secure.”
Bull. The NSA can crack anything with ease. The fact that Signal exists is likely evidence that NSA can not only crack it, but has a back door.
I wouldn’t be surprised if there’s a piece of classified legislation that makes it illegal to produce NSA-proof encryption and illegal to talk about it being illegal to make systems that are actually secure.
These steps will keep kids, thieves, and opportunists out of your business, but nothing will keep an interested government agency out of your business if you’re communicating via electronic means. Nothing.
Don’t kid yourself. The protestations against supposedly strong encryption by the government are designed to ensure people use those systems to do their dirt so they can be exploited. Seriously. Secure against US government snooping. No. Wishful thinking. Cracking civilian and commercial systems is child’s play for the US government.
I’m sorry, but this is, to use your own words, “bull”.
The messages are encrypted by the client. What this means is that Signal has no access to the messages. The keys are stored on the device; Signal cannot decrypt the messages unless you were to give them the key. The source code is publicly available. This gives a reasonable guarantee that it does not include backdoors.
There is the possibility that compiled copies available from Google Play have a backdoor but this is unlikely. You can build from source and use it.
While there are legitimate criticisms of OWS’s attitude toward versions of Signal they don’t control, you are not making those criticisms. You are spreading FUD, quite frankly.
Right? I’m sure that they have keys to my house, too. Why do I bother locking my doors? The fact that locks exist proves I’m right in thinking this way.
Not yet, so it wouldn’t be legislation, it would be “off-the-books” CIA/NSA policy. Ask Ed Snowden?
Seems obvious now that you mention it; we should all simply give up trying.
“Not yet, so it wouldn’t be legislation, it would be “off-the-books” CIA/NSA policy. Ask Ed Snowden?”
I’m no Ed Snowden but my guess is their interest would likely be in identifying popular apps and taking advantage of things like weakening the encryption and/or finding ways to make use of stuff like the ‘Keys Under Doormats’ and ‘Trusting Trust’ research. Through Ed’s (I hope he wouldn’t object to me using his first name; ‘Snowden’s’ seems gauche to me too) disclosures, for instance, we know that they have a habit of targeting sysadmins and developers. It seems logical to me that they’d be more interested in attempting to subvert crypto at the compiling stage or thereabouts (one reason reproducible builds is such a big deal) and/or doing something with libs and/or PRNG so as to be in control of crypto in that way than trying to deal with a bunch of random stuff popping up all over the place (takes more man hours, more work). Winds up being especially easy on Windows where people almost never compile from source (even their crypto apps).
Forgot to add, but it’s especially important given the topic of this article: as few people as compile things for Windows, not many more compile their own on OS X, and few compile even their own crypto on Linux — and almost none compile their own packages for their smartphones. Which (ironically) is where so many people communicate in the first place (with regard to crypto used in chat and web applications specifically, not just storage containers and the like).
Regarding compiling, currently there is a chain of trust. Everything with compiled copies requires trust – do you trust the people compiling more than the original source code? Personally, I trust the original source code more than the compiled copies, but not much more. From the point of view of a malicious figure, it is more practical to infect the source code with a backdoor than target one person who maintains the software on one particular operating system distribution (although this would be useful for malware distribution as well).
At any rate, the answer to mistrust of compiled copies is reproducible builds. This way, multiple trusted people from around the world can build the software independently, and then each cryptographically sign the release of the compiled software. Anyone can build it and achieve the same result. There are efforts to make the Tor Browser Bundle’s build process reproducible, and some GNU/Linux distributions such as GuixSD and NixOS have functional package managers. What this means is that the build process is treated as a mathematical problem, where identical inputs (library versions, dependencies, build options, etc.) always produce the same result. So, when you attempt to install a package, the package manager first attempts to find a compiled version with this output (a specific ID number is assigned to some version of the package with given conditions) and if no compiled version is found, it builds from the source code. As more packages are made reproducible this gets better and better. This is a practical solution to these problems, in my opinion.
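To illustrate the payoff the commenter describes, here is a sketch of checking that independently built artifacts agree. The filenames are hypothetical:

```python
# Sketch of the payoff of reproducible builds: independent builders
# publish digests of the artifact they built; anyone can check that
# their own build matches. Filenames here are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Digests of the same release, built independently (hypothetical files).
builds = ["signal-builder-a.apk", "signal-builder-b.apk", "signal-mine.apk"]
digests = {path: sha256_of(path) for path in builds}

if len(set(digests.values())) == 1:
    print("reproducible: all builds are bit-for-bit identical")
else:
    print("builds differ; one of them may have been tampered with")
```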
“Because it’s unlikely that anyone is trying to attack your encrypted messages the very first time you send a contact a message, Signal automatically trusts the identity key that it downloads.”
That’s pretty naive of Signal’s developers. With intercepts on over 95% of the fibre-optic cables and connection points like ISPs around the world, and with systems like XKEYSCORE and QUANTUMINSERT, the NSA can automatically fingerprint Signal traffic and then automatically MITM that initial connection to make it display whatever public keys they want.
Also, if you’re attempting to verify a public key via an already easily compromisable method such as a phone call or an internet website, then you’re doing it wrong. The only provably secure method is an in-person (face-to-face) meeting. There exist methods to automatically re-create or fake someone’s voice based on existing voice samples, and the NSA would already have those samples if you’ve ever used an insecure phone or Skype, etc. Verifying a fingerprint via some random internet site like Twitter is also MITMable. I may accept a blockchain-based identity such as keybase.io as somewhat more secure, but then again quantum computers will ruin that. Also, you would need to download the application, verify its source code, then run it locally, not just look up the identity on the keybase.io website to confirm the public key. HTTPS is terribly insecure against nation-state attackers; they can intercept or alter data on the page.
For journalists dealing in very important data I would expect in-person meetings to verify the public keys as the preferable option, or an existing secure channel in which the public key fingerprints were verified properly e.g. OTR or PGP.
This method is called Trust On First Use (TOFU), and it’s most commonly used in SSH. The alternative is to make it similar to OTR, where you start out in an “unverified” conversation and after you verify you can change it to “verified.” However, then nearly everyone will use it the way that most people use OTR, where nearly all their conversations are “unverified” and they won’t get meaningful warnings when there actually is an attack.
All communication between the Signal app and server is itself encrypted with TLS, so all passive surveillance of ISPs and XKEYSCORE can pick up is the fact that a device is using Signal, not anything else. If they wanted to do active MITM attacks they would have to compromise the Signal server and do them from there. But also, this isn’t happening. If they were, those of us that regularly verify Signal fingerprints would discover the attack (thanks to the ability to verify that the encryption works), and no one seems to have discovered it.
I think, both technically and ethically, it is to some extent the Signal hackers’ fault for encouraging people to entertain untrue illusions while knowing very consciously well that they are lying.
I think there is plenty that can be done from both a hardware and software point of view. Yet we still have to truthfully educate “We the people” (instead of bullsh!ting them about this and that “app”) and try to persuade them out of “easiness” and “ubiquity” into some technical hippie “awareness” (which I don’t think we technical people are good at), but at the end of the day that is what we will have to do.
Basically what I have in mind is:
1) start a culture of like-minded people: one could start by oneself, but the fewer people there are, the easier it is for them to mess with us. They have set a pesky noise on my phone ever since right after I posted this comment:
https://theintercept.com/2015/05/08/u-s-government-designated-prominent-al-jazeera-journalist-al-qaeda-member-put-watch-list/?comments=1#comment-131280
2) our culture would be of totally open individuals openly documenting everything in an accessible, readable way, so that even if we would, say, use Linux/Debian as primary distro, anyone could do it in other Linux or Unix Open source OSs
3) we should make everything culturally friendly
4) first, networking should be moved out of the kernel and into user land (I tried to make them do so):
// __ RFE: moving networking out of the kernel and into user land …
https://lists.debian.org/debian-user/2013/08/msg00381.html
https://lists.debian.org/debian-user/2013/08/msg00417.html
https://lkml.org/lkml/2013/8/12/476
~
5) we should strategize vertical contextualizations towards a clear separation of:
5.1) input (I) and output (O) features of the kernel/OS
5.2) networked and local services (such as the clipboard, keyboarding history …)
5.3) networked and local users (root, if it exists in init 5, should not be networked)
6) we should exploit the initialization process to set up a totally stealthy logging context in the background (probably even dedicating a processor, USB ports, a NIC … to it, which is not reported by the OS after you are past a certain init step)
7) include as part of the GUI easy ways for people to monitor bs (get snapshots of their screens, clips)
8) Linux/Debian should be totally encrypted and checks should be made to make sure BIOS and installation directories haven’t been tampered with
9) tor should not only be part of it, but some sort of filtering web-access interface should also proxy all access to the Internet (so that, say, google’s redirecting URLs and all those crappy ads are parsed out) …
10) we will have to “fix” the Internet using functional FoF networks (instead of facebook’s so-called “AI” crap)
http://stackoverflow.com/questions/31106584/foftn-friends-of-friends-and-topical-networks-any-implementations
…
~
RCL
Richard Stallman, the founder of the free software movement, says that “freedom is worth the inconvenience.” We must make it as easy as possible to switch to an operating system that is free as in freedom, but people must be committed to freedom; they must want it. Our society puts too much emphasis on consumer qualities rather than what is good for us; Signal is a good app, but it is not ideal, considering the policies of Signal’s developers. We must recommend and promote true libre alternatives such as ChatSecure and Silence, and the libre Android system Replicant.
I think we must educate people. Educating about privacy means being honest with them. Sometimes this makes people annoyed, because they see it as criticism. But if each of us can make even 3 people (and ideally, around 10) value privacy and software freedom (security starts with libre software, but it is a different issue to an extent), then that is very good, as they will help to spread the word.
One other thing I think is important is to mention the contribution of GNU. First, some history; this is addressed to everyone here. In the 1980s, Richard Stallman started work on the GNU Project, which aimed to produce an operating system made up of just free software. Over the years, they produced almost everything, including a C compiler (GCC), a C standard library (glibc), and various system utilities to replace UNIX tools (coreutils, as well as GNU make, GNU autotools, text editors like GNU Emacs, and GNU Wget). All that was missing was the kernel, the program that controls hardware.

In the 1990s, Linus Torvalds produced the Linux kernel (indeed, Torvalds said that if the GNU kernel had been ready, he would not have produced Linux), and on mailing lists, people combined this with many of the GNU utilities and programs. They called this whole system Linux, but that name does not describe the system well, as it gives no credit to GNU, which provides the basic system. Although Linux is really important, it is one program only (and hosts a few others, such as util-linux, but those are not part of the kernel). You can’t actually run the OS without GNU. Therefore, the accurate way to describe modern “Linux” systems (such as Fedora, Debian, Ubuntu, SuSE, CentOS) is GNU/Linux. Think of it like a tower: the very bottom is Linux, the foundations and first few floors are GNU, and anything added further up is your miscellaneous programs and high-level tools. Mention GNU, for their contribution to freedom, rather than those who merely made it more popular and are happy to take a lot of credit.
There are various other systems, such as Android, that use Linux only, and their own libraries, or Busybox utilities. These should not be called GNU/Linux if they use no GNU, and instead be called Android/Linux and Busybox/Linux more accurately. Some systems that use the GNU kernel, Hurd, or the FreeBSD kernel can be called GNU/Hurd (or just GNU as Hurd is part of the GNU project) or GNU/kFreeBSD.
Confusing it may be, but the contribution of GNU is really important.
Sorry for the long post, but as a like-minded person I thought it would be beneficial to bring this up for other people reading the comments.
More info found here: https://www.gnu.org/gnu/gnu.html
Hi Micah, another great article, thanks for everything you do.
Thanks, Micah. Another informative article. I still use a VM set up per a previous article of yours; it’s pretty easy to use and update, which given my tech experience is mandatory.
On another note: I’ve been trying out Opera’s developer browser w/free VPN (at least they say it’s a VPN) and native ad-blocking and have found it to be quite stable – and more importantly, as fast as my current connection.
It would be nice to see you do a security and functionality review of this and similar browsers.
Regards,
Happy national holiday, Micah, and thank you for everything. I hope you and those close to you can relax a bit and enjoy it like most others. I’m taking it easy this evening and just grillin’ brats and HN hot dogs.
Maybe I’ll have a Mitty or two – where I daydream my communications with anyone are truly private, again.
not making any sense when they DO own you … I should have written
https://yro.slashdot.org/story/10/08/07/1625245/saudi-says-rim-deal-reached-blackberry-ok-if-we-can-read-the-messages
RCL
or don’t own [a mobile phone] …
Simple!
I also don’t trust JavaScript-dependent pages. At the least, the reply capability should “degrade gracefully.” I am sure engineers working for TI know that very well.
I also agree with you when you point out that it doesn’t make any sense to talk about “securing” software running on a proprietary OS, BIOS, and hardware. I think Micah Lee knows this very well (notice the explanation he gives about MITM attacks: encryption doesn’t make any sense when they can own you …)
RCL
First, it’s a shame that one needs Javascript (JS) to reply in the appropriate place in a thread. I don’t wish to run JS so perhaps an Intercept admin can reparent this post to avoid breaking a thread. Form submission over HTTPS certainly doesn’t require JS to work. Please Intercept code hackers, make it so that the JS is completely unnecessary to fully work with your site including posting followups in the right place so they thread properly. It’s great that the rest of the site (reading articles, getting the RSS feed, posting thread-starting articles) doesn’t require JS at all.
Being as good as one can get to provide desired security and safety might not be good enough; with proprietary software, it is impossible to assess the state of one’s safety or correct problems if it is found to be inadequate (this is not the Signal hackers’ fault; these are problems only the OS licensing and distribution can solve). I encourage people not to get a tracker in the first place. As Richard Stallman, founder of the free software movement, points out in his talks, sometimes freedom requires a sacrifice; giving up a tracker isn’t a big sacrifice to make in exchange for increasing one’s software freedom (which allows one to gain more privacy, and more security). If users won’t do without their tracker, they should consider trackers to be devices which gather more data than you realize, and only use their tracker to ultimately help out free software developers for the time when we get trustworthy hardware running fully free OSes. Developers need feedback on how to get non-technical computer users to work securely, in addition to all of the normal debugging complex programs require.
Regarding the Unaphone: I was unable to find information on the UnaOS licensing. UnaOS appears to be a modified Android OS which is said (in its press release) to be “affordable”, “secure”, and have “No connections to Google’s office in Mountain View, no connections to US Department of Defence, no connections to US Department of Justice” but (as far as I can tell) is just another proprietary OS running on problematic hardware. UnaOS won’t let users install programs of their own choosing on the OS, as far as I can tell. UnaOS developers appear to have conflated security with locking computer owners out of controlling their own computers! I don’t see how that design choice respects a user’s freedom to run, share, and modify their computer’s software. A demo (https://www.youtube.com/watch?v=bPkg4Utr_yM) shows the user interface and appears to come with Adobe Reader, a proprietary PDF reader. Being proprietary is bad enough, but this company and this program are also notoriously bad at security. Keeping the user from completely controlling their own computer will not keep users safe from malicious activity. Free software is a prerequisite for the security computer users all need and deserve. Proprietary software users eventually learn that the software they aren’t free to inspect, alter, or share is the software which exposed them to malicious activity.
Signal should incorporate a feature that will allow the user to specify if any message should self-destruct a few minutes after being read.
One convenient way to catch the Wahaabi and Salafi Muslim terrorists is to provide them with free public wifi under conveniently located security cameras.
Edward Snowden covered himself with a towel while typing in his password in the HK hotel as you may confirm. We don’t want to alert the ISIS-type folks to this method. Thanks Micah for not mentioning this.
I don’t understand half of this article and even less of the comments.
I envy your ignorance and naivety. All these apps and devices have added a lot of complexity without any significant increase in peace and happiness.
Is there any part in particular that you find confusing? (Also, some of the commenters have no idea what they’re talking about, but pretend that they do.)
Wait till -Mona- turns up here, then you’ll see.
I don’t understand most of the computer language. I never heard of Signal before. I don’t know anything about the apps under discussion. I’m afraid I use very few of them on my phone.
This is a good, sane article, and I don’t agree with the repeated comments of the form,
– “signal could totally be intercepted, I could root you with a printer, omghax,” or
– “wah wah some other flashpan shatchat app is slightly more convenient,” or
– “I feel primarily allied with the brand identity of Blackberry, not rational adulthood.”
However, “Verifying a Text Contact Who Gets a New Phone” shows that Signal devs have work to do. It should be possible to store a backup of your identity outside the phone, because “switched to backup identity” is vastly more useful to the other party than “switched to random identity,” and losing or wiping phones is common for everyone but particularly for people under threat and, or because of, frequent travel. It isn’t a matter of how hard-core you are to do the “extra work” of reverifying everyone. It’s about Signal dropping the ball on its core function of verifying endpoints. Most secure messaging systems (without a centralized and therefore vulnerable PKI) in practice slowly fall back to “unverified” because of device-loss or multiscreen-phonebook-skew chaos, and Signal makes it worse by fetishizing phones, binding credentials to them as if they were real TPMs. Signal should address the problem they have worsened, for the sake of security, not convenience.
Signal is a messaging app, not a backup service. If you want a backup then get a fucking backup.
Signal is TOTALLY interceptable nowadays. I could intercept the Signal protocol and even listen to your calls.
NEVER trust software that asks for your REAL phone number :)
That’s why we are going to build a Wall, to make sure the Mexicans cannot hear us clearly. People like Vincente Fox are always eavesdropping from across the border.
While it’s great to try and bring privacy-supporting computer use to the masses, I’m not convinced this promise can be realized right now.
I find it unlikely such efforts as Signal succeed where the OS is proprietary (aka not freedom respecting, user-subjugating software), and the hardware is under the control of proprietors via schemes like a second processor that controls the processor running user-facing applications (see https://www.fsf.org/blogs/community/replicant-developers-find-and-close-samsung-galaxy-backdoor for an example). It seems to me writing such programs for non-free OSes on non-freedom-respecting hardware is akin to running a program to secure one’s privacy on a computer one cannot trust; it’s futile and at best ends up presenting a facade of security to the user while (perhaps inadvertently) helping share the user’s secrets.
It seems to me the best advice right now is to simply not trust one’s privacy to trackers (aka mobile phones, cell phones). Either don’t own one, or understand that whatever data one puts into the tracker (no matter the input mechanism: texting, anything in mic/camera range, GPS location data, etc.) can be spied on regardless of the program running atop the proprietary OS. Sadly, given the information in the aforementioned link I’m not convinced even Replicant OS apps can be trusted to keep one’s privacy either. But I don’t blame the Replicant hackers for that, that is the untrustworthy hardware design paying off for the proprietors.
I’d like to see development of free OSes (such as Replicant) and free software (free as in freedom) continue so that when trustworthy hardware is available we’ll have free software ready to use with those devices.
Hi,
I too agree that proprietary OSes pose a great risk to privacy and security, especially iOS, where Apple basically owns you. At least on Android you can install applications from outside of Google Play, from F-Droid.
However, I don’t agree that it is pointless to write libre software for these operating systems. It is good to do this as you can protect people’s privacy, security and freedom by providing libre alternatives to bad applications and services.
What I would say is that The Intercept should write about libre operating systems, such as GNU+Linux and Replicant. Similarly, I think they should promote ChatSecure and Silence over Signal, as these more adequately respect freedom and are available on F-Droid (unlike Signal who are hostile to independent builds as they don’t control it).
You actually can have mostly-trustworthy hardware already such as computers that have been reverse-engineered to support Libreboot. Similarly, although it would be nice to have free software running on the modems of mobile phones, location tracking can happen regardless. The bootloaders on mobile phones are proprietary but they don’t do much so aren’t a concern of mine. Things are bad especially with Intel hardware but it is possible to protect privacy and freedom if you know how.
I think Micah Lee should write about these issues. (Especially Replicant and Signal alternatives) What do you say, Micah?
You could get an Unaphone. I don’t mean to advertise for them, but it might be what you’re asking for.
Hi,
The Unaphone does not solve the problems of proprietary software and hardware. The sad thing is that all these projects are just to make money. We need proper investment to produce libre hardware and software.
To use a mobile phone in relative freedom, Replicant is the best we have.
We very very VERY VERY much need open-source/libre baseband. It almost feels like we’re giving away EVERYTHING without it, no matter how much we may think our operating systems are safe.
I know we need libre baseband software. However, as it currently stands, although libre baseband software prevents remote control, location tracking occurs regardless, and that is far more of a danger to everyday life, provided we can prove the baseband software has little access to the system (“modem isolation”). Remember, just because software is libre doesn’t mean it isn’t malicious or backdoored, or that it is secure, although obviously security starts with free software.
One cannot patch, update, modify, or even reflash (at least not easily, and certainly not with much actual trust) their own baseband for the most part. I wasn’t suggesting libre baseband is *the* answer, but I think it’s an important part of the answer, since IMHO one should be able to recover from anything at the firmware and BIOS-or-equivalent level on all of one’s devices (and be able to inspect that code). I guess I’m saying that while no one can (at least as things are now) wipe out EVERY possible way to backdoor or hack someone from ring 0, ring 1, ring 2, on up, it shouldn’t be impossible to rebuild a system that has been modified at such a low level.
I agree location tracking is one of the most egregious invasions of privacy we all face (not just people who might be considered ‘interesting’ or merely unfortunate). I’m not sure I see a solution if every option is either backdoored, cooptable, or exploitable, though. I’ve seen your other posts (I enjoy them :)) and I think we can both agree that it’s very, very difficult to rule out the possibility of wiping out every bug and every bug class (including bugdoors), but I think it’s far better for everyone if they can have fully audited, open-source options. I suspect those with a good knowledge of the concepts outlined in, e.g., TAOSSA would be greatly offended by at least some of the code.
I thought I’d mentioned reproducible builds in one of my other posts but I can’t find it; I’m sure it’s here somewhere. I saw your recent reply near the top though and wanted to say I agreed with your comments there as well. :)
I see your point. This is the ideal, but we are probably not going to get there without community hardware or some SoC manufacturer releasing full source code. That would probably be a Chinese one, or perhaps Texas Instruments, but never Qualcomm or Intel. This is partly because there are laws and regulations from the FCC requiring them to be compliant, to “protect the network.”
Under the current circumstances, a mobile device with an isolated modem and a libre or extremely simple bootloader, with the rest fully free, is the best we have. Ironically, this is probably less malicious than most Intel computers, provided the modem has no access to the hardware and the bootloader is so simple as to have less functionality than many circuits.
However, it is better to have a computer powered by Libreboot, in order to place VoIP calls e.g. through WebRTC and other secure alternatives, and messages through TorChat, Internet Relay Chat and XMPP. One could use Qubes OS or one of the free GNU+Linux distributions in order to do this.
At any rate, current mobile devices are very creepy. We should bin them as soon as possible, especially ones with Qualcomm chips (these contain two processors, of which the modem processor can spy on your whole system), and get dumb phones with removable batteries or phones supported by Replicant. And help Replicant by sending them devices they have evaluated as possible candidates, by funding them, and by helping develop driver replacements.
Just my thoughts on mobile spying devices.
Real reply tomorrow (it’s getting late), but I wanted to ask you first (if you see this before my reply): you’re not worried about the security and privacy implications (including information leaks) that can exist as a result of enabling (or not disabling) WebRTC? With Qubes, I’m (and have been) quite leery of the fact that it’s built on top of Xen, which has had quite a spate of vulnerabilities. I agree it’s still better than most of the other options, but it does worry me; the lack of a more secure virtualization solution in general worries me (one could make an argument for qemu on top of Hardened Gentoo, but I suspect that if Qubes intimidates most people, that’d be a far more difficult sell than even Qubes, even to other techies).
More of a reply later :)
I have a BlackBerry Passport, so Signal is unfortunately not among the options that I can consider. Because of this, I have a paid subscription for BBM Protected, which offers additional encryption on the regular BBM consumer version. Have you ever heard of this and if so, what is your opinion of it?
Get rid of your subscription; it is simply not worth it. It is not secure so long as BlackBerry can read it, as they probably can, because you use their crypto keys. It is secure from your average criminal, but not from spooks, state spies, and employees of the BlackBerry corporation. You probably aren’t in much danger of targeted state surveillance, but I don’t think such power should be at the disposal of anyone.
I think you should get a dumb phone with an easily removable battery to prevent location tracking. If you need smartphone features, get a phone powered by Replicant. You can send encrypted text messages using Silence, and encrypted IM using ChatSecure. You can make encrypted VoIP calls using LinPhone and CSipSimple. These are not controlled by corporations (although they may contribute to and fund some of these applications), so you minimise surveillance and increase privacy.
Sad that it’s come to this, but it has. Unfortunately, whatever you do, it will be like locking up your possessions: that only dissuades the lazy crooks. For example, a $10 lock will secure a $100 bicycle. For a $7,000 bicycle they will come with cutters, grinders, torches, and freezing sprays, and the real bad guys will get what they want. We are in the “Brave New World” now, with bread and circuses sufficing, but as things get worse (and they will), the 1984 mailed fist will come out. When the swat squad comes to your door, they won’t ask nicely.
Put seventy locks on the bike ?
Signal has caused me great trouble with contacts not receiving my messages. If one of your contacts uninstalls Signal, they will no longer receive text messages from you if you are using Signal, and there is no warning on either end. The messages just disappear. When I send messages to Signal users who have iPhones, they are never notified. I’m afraid to uninstall Signal because I’m worried I will stop receiving text messages in this way. I am also only able to text people who are also using Signal when I have a data connection. I find Signal to be very unreliable and I wish that I had not started using it. Encryption is great, when it doesn’t break standard operation.
If you uninstall Signal, here are instructions for how to deregister your phone number: http://support.whispersystems.org/hc/en-us/articles/213190627-How-do-I-unregister- — this fixes the problem of messages disappearing.
I did find that tool after some searching when the problem was first identified. But I can’t expect my less-than-technical friends to even think to look for something like that after they uninstall the product. It’s great that this app provides end-to-end encryption, but in my opinion, it is not worth recommending to others. You simply can’t offer an app that is going to make text messaging unreliable. I’m in trouble with my friends for suggesting they use Signal after they stopped receiving some of their text messages. Simply unacceptable; I will not recommend the app again.
It seems like cops could remove the SIM from a locked phone, unregister it, and receive future messages sent to it in plaintext. This is another way Signal’s poor handling of the “lost phone” case is a security problem, not a convenience problem.
Besides that, any agency with SMS passive surveillance capability could unregister phones at will. The target may notice this, but more likely she will lose it in the noise of general phonespaz, or blame Signal for being flakey.
These publicly-documented backdoors have never existed for more traditional encryption ecosystems like PGP, S/MIME, XMPP OTR, or Pond. Admittedly the traditional ecosystems came with other problems that Signal solves, but what happened to the spirit of “no new problems” and building on earlier work?
I think using phone numbers to identify endpoints was a poor decision because it gives Whisper Systems / Twitter an excuse to resist federation, but it also creates this security corner-case they have failed to solve. Signal is probably the most practical app currently available, but it doesn’t meet my expectations. Nothing I know of does. I think the expectations are pretty basic, but unfortunately it seems messaging app designers need to spend >95% of their effort on fashion and self-promotion to get anywhere, and Signal is certainly no exception to that.
Signal is anything but easy to use. My contacts with iPhones never get message notifications if they use Signal. If a contact uninstalls Signal because they are tired of not receiving text messages like normal, they simply stop receiving my Signal messages entirely. I have a friend who had not replied to my texts for some time. Finally we figured out that after he found Signal to be buggy and uninstalled it, messages from anyone who had previously been sending him Signal messages would simply not get through. Signal is a buggy POS and I wish I had never started using it. I’m afraid to uninstall it because then I will be stuck never receiving text messages anymore. I highly advise anyone reading this to think twice before installing Signal. It’s not ready for prime time.
Signal can be recorded and intercepted on an iPhone. 100% guarantee. I could teach you how to intercept it
How about Wickr?
With your permission, Micah, I’ve cc’d a copy of this article to presumptive Democratic presidential candidate Secretary of State Hillary R. Clinton … care of the FBI.
*normally, I don’t know what you’re talking about … but this seems relatively clear and easy to follow.
Hi Micah,
If you have the Erase Data setting turned on on your iPhone, do you need a passcode that is longer than four digits?
It depends on your threat model, on how much resources you think an attacker might be willing to spend on hacking your phone. The Erase Data setting, as well as the progressively longer time delays, aren’t enforced in hardware. If an attacker is willing to take apart your phone and do some complicated attacks, they can bypass the Erase Data feature. Here’s more information about how that works: https://www.aclu.org/blog/free-future/one-fbis-major-claims-iphone-case-fraudulent
But having a long enough passcode protects you against even this, because the longer your passcode, the longer it will take your attacker to try every possibility. 4 digits allows only 10,000 possible passcodes, but 11 digits allows 100,000,000,000 possibilities, which shouldn’t be crackable by anyone until long after you’ve died. More information: https://theintercept.com/2016/07/02/security-tips-every-signal-user-should-know/
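As a rough illustration of that math, assuming the attacker is limited by iOS's roughly 80 milliseconds of key derivation per guess (about 12.5 guesses per second; real attack speeds may differ):

```python
# Back-of-the-envelope passcode math. The 12.5 guesses/second rate is an
# assumption based on iOS performing roughly 80 ms of key derivation per
# attempt; real attack speeds vary.
GUESSES_PER_SECOND = 12.5
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for digits in (4, 6, 11):
    keyspace = 10 ** digits
    worst_case_seconds = keyspace / GUESSES_PER_SECOND
    print(f"{digits} digits: {keyspace:,} codes, "
          f"{worst_case_seconds / SECONDS_PER_YEAR:.6f} years worst case")

# 4 digits: about 13 minutes to try them all; 11 digits: about 250 years.
```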
So, what’s the best brand of phone to buy, if the overriding goal is to ensure secure communications?
BlackBerry (non-Android operating system)
This is what Obama uses because ALL others are a security risk.
Haven’t you wondered why BlackBerry is being driven out of the market?
ibtimes.com/president-barack-obama-not-allowed-use-iphone-relies-blackberry-2016-2347945
Right now for a normal user, I would say either a Nexus Android phone or an iPhone.
If you’re a techie nerd, I’m particularly excited about CopperheadOS, which is a security-focused Android distribution that runs on Nexus phones. However, you can’t both enable all the security features and get Google Apps, which ironically makes using Signal way more difficult.
Hi Micah,
Have you heard of Replicant? It is an Android distribution that does not have any proprietary software in it. Most importantly, there is no proprietary code running in kernel space, so you are less likely to be spied on by manufacturers.
It also runs on phones with reputed good modem isolation, meaning the Replicant developers are reasonably certain that the phone can’t be attacked and controlled remotely.
Will you write an article on Replicant?
Thanks!
Hey Micah,
If I have the Erase Data setting turned on on my iPhone, do I still need a password that’s more than four digits long? Supposedly 10 (<< 10^4) failed attempts will simply delete everything.
Thanks.
Be careful with that Romeo & Juliet example. These days, Romeo would be considered a pedophile, and Juliet was a victim of child abuse and grooming. You don’t want to be teaching people how to abuse children now do you!?
A few notes:
– While Signal is great for easily contacting someone by text message or phone call based on their phone number, it is not anonymous. The Signal-like protocol OMEMO on XMPP is the next step up. ChatSecure/Conversations and Zom are great and are working on OMEMO, but have OTR at the moment. Zom is like XMPP for dummies. It sets you up with a free account at dukgo.com (a Duck Duck Go service) and a password (changeable) with auto-login, and it generates a 1024-bit DSA key and shows you the SHA-1 hash.
– WhatsApp is actually a more complete implementation of the Signal protocol. Correct me if I’m wrong, but it has key continuity for calls and uses the same Ed25519 key (a type of 256-bit ECC signing key used with a SHA-256 hash) that is used for texts.
SAS is a pain to re-establish every time, and since it is the same app and the same contact, using a common key makes sense.
SAS is also a no-go for non-English speakers. I can’t believe this isn’t addressed more often. Not to mention a lot of people aren’t terribly bright in certain departments and won’t pronounce half of the words right or be comprehensible. “Paragon”? I wish all my friends knew how to say that word, let alone what it meant. Most do, but others don’t. Grown people who are college educated native speakers can’t spell “you’re” or pronounce/spell “intensive” or “conscience”. Having to explain to your friend how to pronounce the word defeats the purpose. And again, most people on Earth do not speak English fluently.
Nobody should ever use anything that employs SHA-1 because it is vulnerable to what is called the birthday attack.
Hahahahahaha. I’m quite good at math, don’t you worry.
Birthday attacks are only applicable if the attacker makes two documents and asks you to sign one.
I make a contract saying I will buy your bicycle for $50, but I also made a contract saying I will buy your house for $50.
I add null characters at the end of each contract until I get a matching hash. It is only by doing this to two documents that I make the chance of a collision exponentially greater (halving the theoretical security).
If you are trying to come up with a collision and one thing is already fixed, I have 160 bits of security.
SHA-1 has flaws and is less than 160-bit secure.
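For readers following along, here is the arithmetic behind this exchange (ignoring SHA-1's known structural weaknesses, which reduce the collision figure further):

```python
# The arithmetic behind the birthday-attack point above: finding *any*
# colliding pair costs about 2^(n/2) hash evaluations, while finding a
# second preimage for one already-fixed document costs about 2^n.
n = 160  # SHA-1 output size in bits

collision_work = 2 ** (n // 2)   # attacker controls both documents
second_preimage_work = 2 ** n    # one document is already fixed

print(f"collision:       ~2^{n // 2} = {collision_work:.3e} operations")
print(f"second preimage: ~2^{n} = {second_preimage_work:.3e} operations")
```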
DSA1024 and DH1536 as used in OTR version 3 are also not good long term. You should rotate your keys once a year I’d say.
But! I made a note of the upcoming OMEMO, which uses HMAC-SHA256, Curve25519 (ephemeral ECC Diffie-Hellman keys), Ed25519 (ECC signing keys with SHA-256/512), and AES-256 (extra protection from meet-in-the-middle).
A wonderful article Micah, wonderful details and instructions, thank you. Am bookmarking it for sharing and future reference.
Have been using Signal for messaging with my Android friends and it’s very impressive.
Signal client can AES encrypt its database, yes, at a file level.
It is optional.
If you have full disk encryption (block level), then this is unnecessary, but it’s fine to have both running, though terribly inconvenient.
I activated full-disk encryption since no phone and no apps encrypt contacts or emails on their own. I just full-disk encrypt everything.
Also be aware of swap space.
Swap is a hard disk copy of bits of data in RAM. Some things in RAM are referenced or used infrequently and so get moved to a section on the hard disk.
Without full disk encryption anyone can lift that data.
Not as dangerous as a cold boot attack (freezing and extracting RAM chips and dumping active data before it disintegrates).
Hi Micah and thanks for the great article. What’s your take on MITM detectability when Snowden has repeatedly talked about government agencies such as the NSA stealing keys from the end points, and that smartphones can be taken over by sending a single SMS to it. What threat model is Signal solving? What threat models is it not solving?
Having malware on an endpoint isn’t the same as a MITM attack. If your iPhone or Android phone has been hacked (and rooted), the attacker can spy on everything you are doing on it. In that case, you wouldn’t detect a MITM attack in Signal because there wouldn’t be a MITM attack (the attacker would be logging keystrokes/reading files/looking at your screen, not attacking the crypto, so the fingerprints would match).
Also, keep in mind that it’s not necessarily trivial to take over a phone. If you update the software on your phone promptly, you fix the known vulnerabilities. This means you’re only vulnerable to zero-day exploits, which are rarer, and often expensive, compared to public ones that anyone can find. iOS and Android also use sandboxing, which means exploiting an app often isn’t enough to root your phone. An attacker needs to string together multiple exploits: a vulnerability in the app, plus a vulnerability in the OS sandbox, before they can start spying on Signal messages undetected. (But it depends on the exploit; I think SMS vulns often exploit OS-level code rather than app-level code, which might give the attacker root with a single exploit.)
This is exactly what you want. You want to make it so that 1) an attacker can’t passively spy on you (like they can with normal text messages and phone calls), and 2) an attacker can’t do an active network attack (like a MITM) to spy on you. You want them to be stuck with hacking your endpoint as the last resort, the only option. And then you want to make your endpoint as hard as possible to hack. That’s the best anyone can do.
Unaphone Zenith looks “as hard as possible to hack”… kind of expensive, and I’m scared of unpatchable bugs, though.
I understand the difference between CNE and MITM. I guess I’m wondering how difficult an invisible MITM would be if keys are stolen from the endpoint. The reason government agencies might want to do that is that, compared to a MITM, actively transmitting everything you do on the phone to the network would be much noisier. Over WiFi you could view the traffic yourself, and over mobile data the excessive data usage might be visible.
“This means you’re only vulnerable to zero-day exploits, which are rarer, and often expensive, compared to public ones that anyone can find”
They are rarer and more expensive. However, as far as I know, 0-days are not one-time-use only; they remain unpatched for a long time before they’re discovered. Very few users actively monitor their devices or run an IDS (if one even exists for smartphones).
If there were a 0-day OS-level vulnerability in the SMS handling of the latest version of Android/iOS, how difficult would it be to scale the attack to mass surveillance, provided the exploit doesn’t raise warnings on the target system?
“You want them to be stuck with hacking your endpoint”
I understand. But it appears governments like the UK’s are moving towards bulk equipment interference, so it’s no longer just individual targets that are getting hacked; soon it will be entire cities. How do we drag governments back to targeting?
In their eyes they *are* targeting, which I think is part of the problem. By going for retroactive access, they’re basically giving themselves a free pass to say they’re not surveilling everybody when in fact (and I’m sure you know) they are. It’s part of what led to Clapper’s egregious forehead-sweating performance while being questioned about ‘collection’. So to answer your question, I think we first have to get them to see what ‘targeting’ even means. From what I’ve seen from a little bit of a distance, they’re pretty fast and loose about what might constitute ‘suspicious’; trying to avoid ubiquitous surveillance of everything, for instance, would in their eyes make you a ‘hard target’.

I’m not sure how we can roll that back. If it can be done, I think it has to be started at a local level. I’d really like it if there were towns and cities that flat-out banned street cameras and public transportation (and vehicular) cameras/mics etc., where certain people in certain parts of certain cities were willing to ‘risk’ a bit more crime, or a bit less crime being solved, in exchange for that level of privacy. Ironically, the lower-middle class may have the best chance at making a difference in that regard, since back/side streets and non-development-type suburban areas at least have fewer cameras in residential areas.

It’s funny to me how the wealthy and the impoverished both seem to be the most surveilled, for different reasons: the wealthy want it to protect them from the impoverished, but at the same time they’re giving up their privacy to get it. And of course the impoverished are the most affected, since they’re perceived to be shadier. That this extends to people waiting for a bus or in a subway or train station or what-have-you is outrageous.
The fact that those agencies have access to THIS data is something I’d like to see covered in more depth. I never really worried that much about surveillance cameras when they were basically separate and on taped loops that wrote over themselves every day or two (I didn’t like it, but I didn’t feel like my privacy was forever invaded). With the current setup, however, one can assume access would be a combination of attacking the endpoints of the places with cameras (many of them connect the cameras to their internal network, not realizing they’re exposing it to the entire internet if their router’s hacked) and just getting access via methods similar to what we saw in the Snowden documents about accessing online services (in the case of municipal and larger networks).
I’m a bit off-topic, but not really; you mentioned entire cities. Can we be sure they’re not already hacked in that manner? I strongly suspect some are. Which goes back to my statement about localities: the best defense may be to encourage our localities NOT to network all of this stuff (even if they INSIST on having it, something more like a black-box system might be imposed, creating a physical onus to check it ONLY in case of something truly egregious; it’s not a compromise I’d like, but it’s difficult to shake people’s mindset from ‘more is better’, unfortunately), to not even have it in the first place, and to pass more local laws preventing longer-term storage of surveillance footage and requiring that such cameras be posted wherever they’re used.
With regard to 0-days: they are not one-time-use only, but they’re rarely used en masse. They can be, and sometimes are, but my understanding is that the government prefers not to use them, so as to make them last longer, unless they’re truly after someone or some piece of information. But ‘prefers not to use’ doesn’t mean ‘doesn’t use’ (and one assumes they have quite a stockpile at the NatSec level; I don’t believe the FBI often has that level of access unless it’s acting as a ‘customer’, but I’m sure it occasionally does, e.g. the iPhone case (where it kind of was still a customer), some of the Tor cases, FinFisher and its ilk (again, as a customer)). They can last a long time or a little time depending on many factors, including luck and the skill of the person being attacked, not just how many times their use has been attempted. 0-days can die in a day.
There WAS a major SMS vulnerability a year or two ago, published by jduck & co. (Stagefright). One assumes that was used here and there on various people, including by various governments (they actually probably prefer not to use a 0-day if they can help it, since a public exploit is even more deniable; it’s especially easy to avoid burning one (potentially) in the case of mobile, since mobile devices are (comparatively) so rarely up to date (or even updateable)).
FWIW, you can hide both mobile data and WiFi data if you have the correct level of operating-system access, especially on a mobile device, where it’s hard to attach anything outside the device to monitor the traffic (one can argue there’s adb and the like, but then you risk breaching the integrity of whatever machine you attach it to in order to do said monitoring).
Forgot to mention: I find it horrible that various states have two-party consent laws for monitoring phone calls, but just walking outside or taking a bus assumes consent to be surveilled in video, audio, or both (and sometimes more).
Re: encrypt your whole phone.
Good advice although there seem to be issues: http://www.slashgear.com/qualcomm-powered-android-devices-found-to-have-faulty-full-disk-encryption-02446903/
That said, I would think that Signal stores its messages in an encrypted format and only decrypts them when you enter your password. Can anyone confirm if this is true or not?
As the author of the article says: keep your device (Android or iPhone) updated. The Android vulnerability being passed around by Apple fanboys has already been patched. Don’t take my word for it; check it out.
Sure, that’s all fine. But what I like is the belt-and-suspenders approach, because all software of any size has bugs. So I’d like to have the messages encrypted by Signal, and the whole device encrypted as well, because we don’t know how many vulnerabilities exist that we haven’t discovered and/or patched yet.
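As a sketch of what that second layer amounts to: the app seals each record with its own key before the bytes ever reach storage, so the data stays protected even if the disk-level layer fails. This is illustrative Python, not Signal’s actual storage code (I believe Signal for Android uses SQLCipher for this); the filename and key handling are made up for the demo.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# App-layer encryption at rest: seal each record with AES-256-GCM before
# writing it, independent of any full-disk encryption underneath. A real
# app would derive the key from the user's passphrase via a KDF; here
# it's just generated for the demo.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # GCM needs a unique nonce per record
sealed = nonce + AESGCM(key).encrypt(nonce, b"stored message", None)

with open("messages.db.enc", "wb") as f:  # hypothetical local store
    f.write(sealed)

# Reading it back requires the key, not just access to the disk.
raw = open("messages.db.enc", "rb").read()
assert AESGCM(key).decrypt(raw[:12], raw[12:], None) == b"stored message"
```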
UPDATES to this article ought to be AT THE READY from that main menu, as you can bet your bippy the NSA, CIA & F…B…I… have just finished reading this and have convened the teams to get past the security in all kinds of ways, incl. impersonation.
Disclaimer: good security is required for persons, businesses, and countries, and police authorities need to be able to stop serious crimes before they cause real damage. At the same time, corporations and governments should not be employing police services to stop competition, disrupt democratic processes, or violate freedom of speech and of the press.
Surely an even more important security tip is not to let anyone know if you know someone else’s password. I mean in some countries, when the moon is full and the wolfsbane blooms, you might be able to claim some kind of right against self-incrimination not to reveal your password. But it’s 100% sure that if you know someone else’s password, you’ll be offered a choice between betraying them and going to jail, and you won’t even get the lousy pieces of silver.
Yet people will constantly invade other people’s lives and harass and bully and stalk them online. I’ve been thinking there need to be new laws that treat that kind of invasion as a form of digital rape. I’m talking about those ‘lulz’ and ‘activist’ and ‘hacker’ types who go after their own; the very sorts of people most likely to get thrown into that sort of predicament, I assume.
Hacking should be legalized so that our computers will be more secure.
The laws against hacking were the first great mistake of the internet. We could have had companies humiliated into doing the right thing by punk teenagers posting little graphics on their front pages. Instead, the businessmen managed to keep their “respect” by such means as it is usually maintained (brutalizing those who would disagree) until such time as crews in nonextraditable countries got down to work.
The thing is, I don’t know if the American kids and coders have the skills, the attitude, and the free spirit left to properly test corporate security, even if the laws were completely repealed tomorrow.
I think prohibiting hacking is a violation of freedom of speech. It’s manipulative speech, but regardless of what the hacker’s computer said to the recipient’s computer, anything the recipient’s computer does in response is essentially voluntary. It’s the technological equivalent of, “He said something confusing and I did something stupid in response! Punish him!”
I think you misinterpreted me. I said ‘lulz and activist and hacker types who go after their own’, not hackers. The CFAA and most of the current hacking laws are awful. At the same time, we don’t have strong enough laws for cyber-harassment. And it’s really screwed up when people help destroy what they say they care about by eroding trust instead of building it. And the current trend towards political correctness in the hacker community can be just as dangerous as the malicious stuff I was talking about above. The funny thing is, I am absolutely against extradition for hacking cases (btw, please support Lauri Love’s bid not to be extradited, if whoever reads this is in the UK), especially when it’s by the US. But I think I might understand it if it were for SWATting and/or online harassment of a citizen of another country (yes, that means I also think Americans should be extradited for the same sorts of offenses). Those cause actual grievous injury.
It’s the SWAT team causing the injury, though. They should be held criminally responsible for any negligent or malicious damage they cause. But that’s a whole other can of worms.
I don’t disagree with the meat of your argument, and SWAT teams now are totally overused and incredibly violent (and usually unnecessary; they’re terrifying to begin with). But that doesn’t change my point, and in fact it sort of emphasizes it: SWATting is done specifically BECAUSE of all of these things. Nobody who does it is surprised when it happens. They find that part funniest of all. That’s flat-out sociopathy, and I’ve yet to hear of any way to rehabilitate it short of actual punishment. Too many kids consider it a joke, even when they SWAT one another, like it’s some sort of status symbol. They derive JOY from terrifying, terrorizing, and destroying property at someone else’s hands. If it were murder, there’d be a strong statute and punishment; I’m not sure why it shouldn’t be treated as though it were a contract kill carried out by an unwitting bunch of killers. I won’t go into the erosion of public trust or the crying-wolf aspects. Sorry, I feel a bit strongly about this… mostly because I’m frustrated by seeing these sorts of things called ‘part of hacker culture’ to some of the new generation. It’s a horrific premise.
I think that maliciously providing false information to police or 911 operators should be a crime in itself. The hacking, though, is primarily the fault of whoever was in charge of securing the device, if it is expected not to get hacked, because you can’t reasonably expect every single person in the entire world, without exception, to be nice, and everyone is safer if we fix these bugs instead of hiding them.
DDoS/spam is kind of a separate thing from unauthorized-access hacking, though.
Can we agree that SWAT teams and the people SWATting others are both, to varying degrees, legally, morally, and ethically responsible for harm (including harm of a psychological nature)? Can we further agree that (probably) most SWAT personnel think they’re doing something ‘good’ (despite how people who’ve been terrified by a SWAT team might feel), whereas someone SWATting another person at least knows what will happen, what they hope will happen, what might happen, and that they’re not being ‘good’ by doing it (putting aside the quite bad but slightly less horrible call to get a day off from school by phoning in a false threat; still a crime, though)?
I think we can both agree that computer hacking can be not too bad, somewhat neutral, bad, really bad, or deadly, but also sometimes good (in certain cases) even by the criminal definition, and that ‘hacking’ as a concept in and of itself is not bad. (Though IMHO, unlike with guns, when people DO overstep they tend to conflate ease with amount of harm, which is one reason I’ve been pretty much anti-Metasploit but not anti-hacking, personally.) Learning how systems work for one’s own edification shouldn’t ever be illegal as long as you’re not accessing other people’s property; when you are, there should be varying levels of offense based not only on intent but on actual harm. I’ve seen these get mixed up all too often. At the same time, I’d consider someone robbing a bank at gunpoint far more terrifying than robbing one online. And I think most people consider spam a nuisance (if occasionally a big one) but not in and of itself necessarily all THAT harmful: illegal, yes, but not something many people would die over. I wouldn’t send it or want it, but I don’t think it’s something to extradite someone over; that should be handled locally.
We can only fix the bugs we know about, and sometimes we can’t even fix those. Yes, the owner of a system bears some responsibility, but only some; at the end of the day, you’re not let off the hook for robbing someone because they left their door wide open… it’s still an invasion. I don’t think it’s even intentionally malicious a lot of the time, but I do think the ease with which people can do things makes it very easy for an unskilled, lightly skilled, or even moderately skilled person to unintentionally and completely screw up a system. Sometimes that can have unintended consequences. I’m not sure how that should be punished. Certainly it shouldn’t be punished the way the US often punishes people. It shouldn’t get someone extradited.
I wouldn’t mind seeing some of the intelligence-agency personnel who’ve hacked people in other countries (regardless of where those people are from) extradited to those foreign countries to stand trial under those countries’ rules and laws, before those countries’ citizens. There’s a double standard there that’s a big part of the problem.
There’s also a double standard (back to the kids stuff) in claiming to encourage things like operational security and good privacy practices while going around destroying the lives of your peers by violating theirs: ‘doxing’ them publicly, affecting their physical lives and safety, and harassing them online, as happens when factions of groups like Anonymous get out of control and turn on their own. I’ve never been part of any of that (not a fan of drama), but some of those people are probably our future activists once they mature a bit, and maturity isn’t always rewarded in some subcultures; sometimes the opposite gets rewarded instead.
I agree one can’t reasonably expect that every single person in the entire world will be nice, without exception (heck, I don’t even expect most people to be), but I think hacker ‘culture’ has taken a nosedive in the past decade or so, and that worries me, because at least some of those people are going to be the ones working for the three-letter agencies eventually.
I’m using a Windows phone a guy gave me last summer for helping him move. What can work with it? It’s a Lumia 650.
Windows Phone is tough, even though I loved the UI. Microsoft has basically killed the platform off, and there aren’t many developers making apps or updates for it (things you need for a secure messaging app). And Microsoft is not a good company when it comes to respecting its customers’ privacy with regard to the government:
https://www.theguardian.com/world/2013/jul/11/microsoft-nsa-collaboration-user-data
If you want privacy on a smartphone, I’d suggest you sell the Windows Phone and get a used iPhone or Android (Nexus only)…JMHO…
Unfortunately there isn’t a version of Signal for Windows Phone. But WhatsApp is available, it’s also end-to-end encrypted, and you can verify the crypto similarly to how I described in this article. It has somewhat worse privacy around metadata (here’s my last article explaining it all), but it’s not bad, and a lot more people use it than use Signal.
I cracked WhatsApp and the end-to-end is FAKE. See my log on forensicfocus.
Hi Micah,
What’s your opinion on the app called Wickr?
Play around with it and have fun. Just use it as a phone/media player/GPS/computer. Don’t use it for anything that requires security, like banking.
Do a factory reset every so often. A rebuild agent would be nice, so that it reloads your installed apps after the reset. I used to have a rebuild script for Debian that would tear down the computer and rebuild it. That should wipe out most rootkits and malware.
Not everything requires hyper-security.
All messages sent using Signal go through Signal’s servers. That means Signal is spying for the government. If they were good guys, they would have created a peer-to-peer system. Of course, if they did that they wouldn’t earn (steal) any money…
It’s encrypted by the client, so they can’t read it (this is what end-to-end encryption means).
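A minimal sketch of that property: the endpoints share a key the server never sees, so the server can only relay opaque ciphertext. Illustrative Python with a generic AEAD, not Signal’s actual protocol (which adds a double ratchet and fresh keys per message on top):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# The key lives only on the two endpoints; the relaying server handles
# nothing but ciphertext, which is the end-to-end property.
key = ChaCha20Poly1305.generate_key()  # shared by sender and recipient only

def seal(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)

relayed = seal(b"meet at noon")        # this is all the server ever sees
print("server sees:", relayed.hex())

plaintext = ChaCha20Poly1305(key).decrypt(relayed[:12], relayed[12:], None)
assert plaintext == b"meet at noon"
```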
However, Signal is against federation, so you can’t use your own Signal server. Similarly, they don’t provide an .apk file without the malicious Google Play Services and library dependencies. They took down rebuilds that removed the Google apps.
If you want a private messenger that also respects your freedom, try ChatSecure or Silence, both available on F-Droid, a FOSS app repository.
Any tips on how to get my teenage kids to install Signal?
Ask them nicely?
Tell them this is so you can send them drug deals, and order your organised crime syndicate around. Tell them it’s so you can call Donald Trump without feeling embarrassed.
I wasn’t aware there was ever any method by which calling Donald Trump wouldn’t be a reason for personal embarrassment.
Tell them you want to use this for family conversations, since it makes them private. Normally there isn’t too much pushback on that. Going beyond that (friends using it, etc.) really depends on them, but once they know it makes things private, that helps.
Just tell them you want to use Signal for family messages, since it makes them private and you want that. I’ve had good luck with that.
Remind them of ‘the fappening’ and related incidents, and tell them their odds of a similar incident are reduced if they go to Signal/TextSecure/etc., and reduced even further if they go decentralized and communicate with XMPP/Jabber and OTR (e.g. ChatSecure). Ask them how they’d feel if you could see their messages if that happened ;). Chances are there’s something they do consider private, but they don’t really think about it till you mention their parents might see it (it may seem hyperbolic, but hacks are often random). Something that doesn’t store a chat history on a middle server is always better.
BTW, in the same vein (or similar, anyway), remind them that chances are you know their security questions/answers. This might get them to actually choose something harder to guess (than something their friends also know; high school is cruel, so help them protect themselves from bullies and online harassment). It’s always nice when a parent wants to help a kid learn good habits. (And btw, sorry for the Trump comment; it wasn’t directed at you, just offhand, and I didn’t have time for a proper reply.)
Good work Mr. Lee, thanks for the information.