Transcript: Kee Jeffreys on privacy, messaging and the messy internet
The Session co-founder talks about why privacy matters, how messaging apps track you, and the challenge of building tools that protect users.
In this conversation, Kee Jeffreys, co-founder of the Session messaging app, talks about building a private, decentralised messaging platform. Listen to the full podcast here.
Callan Quinn: Do you want to start by sharing a little bit about your background and how you got into doing tech stuff with privacy?
Kee Jeffreys: When I was growing up, I was always interested in technology. I would take apart computers and stuff like that. When I got out of high school, I decided that I wanted to go study computer science, so I went to university for that — and that was great.
At the time, one of my lecturers casually mentioned Bitcoin and cryptocurrencies. This was probably in 2016, so I really started to dive down that hole a little bit. Before that point, I was already interested in decentralised technology as well.
We’d seen some bigger decentralised networks at that point, like Tor and also BitTorrent, which were showing how you could organise computers all over the world to be able to share information and also provide privacy. It was an emergence of multiple different factors around that time, which led me to start going to some meet-ups. I met a bunch of the other co-founders of Session. We thought about how we could mix some of the emerging technologies that were out there. Tor provides this private network, which is hosted on thousands of people’s computers around the world.
Cryptocurrency is doing really interesting things on the incentive side of things, where you can incentivise people to run nodes. And we were also quite interested in messaging at the time as well, because we thought it would be an interesting use case for a combination of all of these technologies.
In 2018, we started up the company and started putting together the decentralised network, and that was really the start of the journey.
// Tor and decentralisation
CQ: What was it specifically about the decentralisation aspect that interested you?
KJ: We had always been interested in networks which were governed by a protocol. So you can think about something like the internet — there are several different protocols which run on top of it, and it’s really a scenario where the code decides what the rules for the protocol will be.
With cryptocurrencies, you can really see people sticking to their code because they have a financial interest in the network as well. We saw what Tor was doing on the decentralised side, where you could split up all of these nodes all around the network and provide privacy that way.
We really thought that if we had a network where you had incentives to act according to the rules of the protocol, but also had the privacy aspect of splitting up data across different nodes and encrypting that data — that would be powerful. That’s more from a technical perspective, but from a philosophical perspective as well, privacy was starting to, or even by 2018 had started to, be eroded. It was being eroded by various rules and regulations that were being passed, and we wanted to create networks that would provide privacy according to a protocol — not according to some rule or regulation that had been passed in a despotic country.
Say, for example, Iran — they’ve done some big crackdowns on internet privacy. We wanted to be able to provide privacy to areas like that, regardless of whatever rule had been passed in that specific jurisdiction. We thought it would be really important for human rights activists, journalists, and protesters to be able to access those tools and use them to foment change and all of that good stuff.
CQ: You’ve mentioned Tor a couple of times. Can you give a little bit of background as to what Tor is?
KJ: People might be familiar with the concept of a VPN. Essentially, with a VPN you set it up or pay a company, and they give you access to a bunch of different servers in various locations. You can then connect to their server, and that server relays your traffic.
So, for example, you can appear as if you’re accessing Netflix from the US or Canada, even though you live in Australia — and the same is true for lots of different countries. Tor is similar to a VPN in that you’re tunnelling your traffic through external servers, but it actually does it in a trustless way.
If you look at a VPN, you’re typically paying a company, and that company can essentially see a lot of the metadata associated with your traffic when you use their servers. But the idea with Tor is: what if we have a community of server operators all around the world, and we use special encryption techniques to bounce your connection through multiple different servers? In Tor, it’s three different nodes that you hop through, and that provides a much higher level of privacy and anonymity. It also reduces the amount of trust required in the network, because you don’t have to trust a single operator — instead, you’re trusting that globally there are enough non-colluding nodes to safely get your traffic to its destination.
So it’s combining some of the same technologies that VPNs use, but in a more trustless manner.
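To make the layering concrete, here is a minimal Python sketch of the idea Kee describes: the client wraps a message in one encryption layer per hop, and each relay can peel off only its own layer. The three Fernet keys stand in for the per-hop keys a real onion router negotiates; Tor and Session use different primitives and key exchanges.

```python
# A minimal sketch of layered ("onion") encryption, not Tor's actual construction.
from cryptography.fernet import Fernet

# Pretend each of the three hops has already shared a key with the client.
hop_keys = [Fernet.generate_key() for _ in range(3)]
hops = [Fernet(key) for key in hop_keys]

message = b"hello over the onion route"

# Client side: encrypt the innermost layer first (hop 3), the outermost last (hop 1).
onion = message
for hop in reversed(hops):
    onion = hop.encrypt(onion)

# Each hop strips exactly one layer; only after the final hop is the plaintext visible,
# and no single hop learns both who sent the message and what it says.
for number, hop in enumerate(hops, start=1):
    onion = hop.decrypt(onion)
    print(f"hop {number} peeled its layer")

assert onion == message
```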
CQ: And colloquially, when people talk about things “on Tor”, that’s essentially what they mean when they talk about the dark web, right?
KJ: I think there’s a bit of a distinction here, because there are the internal “onion” addresses in Tor, which are what we would typically call the dark web — and then there are people using Tor simply to access the normal internet. They might be using Tor as a tool to bypass a censorship regime, like in Iran, for example.
So there are really two different aspects of Tor: there are the internal onion sites, and then there are people using Tor to access the regular internet via exit nodes, which are a special type of node in Tor. Those are the two main distinctions.
CQ: I always find it interesting, because I think the onion side of it just looks like the really early internet still. It’s janky as hell, some of it.
KJ: Yeah, there have been a lot of trade-offs made on the onion side of things, mostly because there are more hops when you’re accessing an onion site. It’s essentially about six different hops before you reach an onion site, instead of the three used when you’re accessing the regular internet — so it feels a lot slower.
You’re also limited by the browser, which is how you access these onion websites. The browser is designed to remove fingerprinting technologies — those are techniques that try to assign a unique identifier to your browser so it can be used to track you around the web. Tor has built this special browser to protect people against that fingerprinting, but that’s also meant that many of the technologies we use on the modern web to provide a smoother user experience — like detecting country of origin, or using cookies from other sites to personalise the experience — aren’t available in the Tor browser.
That often leads to a clunkier user experience, but one with very high privacy. There are also many limitations on the JavaScript code that can run inside the Tor browser, again to protect users, which means onion sites probably don’t have the best user experience.
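For readers new to fingerprinting, here is a toy sketch of the idea: hash together enough individually harmless browser attributes and you get a near-unique tracking identifier. The attribute values below are invented.

```python
# Conceptual sketch of browser fingerprinting: many small attributes,
# combined and hashed, can act as a stable tracking identifier.
import hashlib
import json

browser_attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440",
    "timezone": "Europe/London",
    "fonts": ["Arial", "DejaVu Sans", "Noto Color Emoji"],
    "canvas_hash": "a91f...",  # placeholder for a canvas-rendering hash
}

fingerprint = hashlib.sha256(
    json.dumps(browser_attributes, sort_keys=True).encode()
).hexdigest()
print("tracking identifier:", fingerprint[:16])
# Tor Browser counters this by making these attributes look identical across
# users, at the cost of the conveniences mentioned above.
```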
// Encrypted apps
CQ: To go back to Session, can you explain a little bit about the landscape for privacy or encrypted apps, and what you think Session brings to the table that doesn’t exist at the moment — or that exists, but isn’t Session?
KJ: When we’re talking about Session compared to other messaging applications, whether that be private messaging applications or not, I typically break it down into three main categories of things that Session does differently. The first, and probably the biggest thing for a lot of users, is that Session doesn’t require a phone number or an email address or any real-world identifier to sign up.
That means your privacy is protected much better than with those messaging applications where you’re required to sign up with a phone number. You can think about how we use our phone numbers so extensively across multiple different accounts. You might have it linked to your Facebook or your Google account, or even have it publicly listed for your business.
If a service gets hacked and personal information about you is retrieved in that hack, that often contains information like your phone number, address, or name. So there are a lot of these kinds of leaks out there that almost connect a user’s phone number to their real-world identity. As soon as you sign up for a messaging application with your phone number, you’re giving away a lot of your anonymity in the service.
Session doesn’t require a phone number — it just generates a random identifier, which becomes your identity on the network. Then you can start sending messages back and forth.
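As a rough sketch of what “just generates a random identifier” can look like, the snippet below derives an ID from a freshly generated keypair using PyNaCl. Session’s actual derivation and formatting differ in detail; the point is that nothing tied to your real-world identity is involved.

```python
# A sketch of a phone-number-free identity: generate a keypair locally and use
# the public key as your identifier. (Session's real ID format differs.)
from nacl.signing import SigningKey
from nacl.encoding import HexEncoder

signing_key = SigningKey.generate()  # private key, never leaves the device
account_id = signing_key.verify_key.encode(encoder=HexEncoder).decode()

print("Share this ID to receive messages:", account_id)
```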
So that’s the first thing. The second thing is that Session is decentralised. There isn’t a single centrally run server, or a set of them, relaying all the messages. If you’re using, say, Telegram, WhatsApp, or Signal, when you send a message it goes to their centralised servers — and often you’re interacting directly with their servers via your IP address.
You may also have signed up with your phone number. So even though the contents of your messages may be end-to-end encrypted, there’s still a lot of metadata that the service actually gets when you send a message. That’s metadata like your phone number or IP address.
That can be used to create fairly strong social graphs about what a user is doing. If two users with certain phone numbers or two users with the same IP address are connected and communicating regularly, you can start drawing out, “Okay, well, this might be this user.” If that phone number was connected to a name in a previous leak, then that’s this user talking to another user in another country. You can start to draw connections.
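Here is a toy example of how contents-free metadata alone starts to form a social graph; the records below are invented.

```python
# Sketch: message *metadata* (no contents) is enough to see who talks to whom,
# and how often. Every value here is made up.
from collections import Counter

metadata_log = [
    {"from": "+44 7700 900001", "to": "+91 98765 43210", "ip": "81.2.69.142"},
    {"from": "+44 7700 900001", "to": "+91 98765 43210", "ip": "81.2.69.142"},
    {"from": "+44 7700 900001", "to": "+1 202 555 0147", "ip": "81.2.69.142"},
]

edges = Counter((record["from"], record["to"]) for record in metadata_log)
for (sender, recipient), count in edges.most_common():
    print(f"{sender} -> {recipient}: {count} messages")

# Frequent pairs, cross-referenced with leaked phone-number/name databases,
# are enough to start attaching real identities to the graph.
```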
Session doesn’t use a centralised server, so there’s no central point where people — or the operators of the network — can see who’s talking to whom.
The third thing is that we use onion routing by default as well. I was talking about Tor earlier — Tor is an onion router, that’s what that technology is called. When you download and install Session, you’re immediately using onion routing to connect to the decentralised network. That means your IP address isn’t being leaked when you send a message; it’s hidden by the three hops that are created in the network.
So, yeah — we hide your IP address, you don’t require a phone number, and it’s decentralised. There’s a lot more I could get into there, but those are the main advantages of using Session over other messaging applications.
CQ: Yeah, it had the easiest sign-up I think I’ve ever had for an app.
KJ: No SMS verification pin thing?
CQ: Yeah. It drives me crazy because I feel like I’m getting paranoid about how much data I have to give away for literally everything.
I don’t know what it’s like in Australia, but here [in the UK], for example, if you go to the supermarket and you want discounts, now you have to give away your life story. I went to buy pizzas the other day, and they had an offer — a couple of pizzas and some dips and stuff for ten quid — but you can’t get the offer unless you sign up for their loyalty scheme.
The scheme saved me — I had a £30 shop, and it would’ve cost £45 if I hadn’t used the loyalty scheme. But I had to give away my phone number, my email address, my date of birth, my full name, and my home address.
KJ: Yeah, it’s happening pretty much all over the world. We have a similar system as well. When you shop at the supermarket, you can get this rewards card, which is essentially just a data-tracking system. And I guess it makes sense for the supermarket, because now they have every single thing you’ve purchased, your home address, and your email address — so they can send you spam like, “Oh, you bought this thing and now it’s on discount.” Now you can receive a little email update about a product you can buy.
I think it’s very valuable data for them to have. But everything going into these systems means there’s just more and more potential for data breaches. Then it’s like connecting your phone number with your home address, with your name — and if any of these databases ever get breached, well, it seems like every year we hear about a new major data breach affecting 20 or 30 million people. Eventually you’re going to get caught up in one of these breaches.
CQ: Yeah, it’s like… I just use fake names now, but I don’t think it makes much of a difference. My theory on it was that if you’re not paying for something, then you’re the product. But I am paying for it, so I’m paying for my pizzas and I’m still being harvested for my data.
But in terms of what you mentioned about how these companies can draw connections based on metadata from messages — are they actually, is there a room at Facebook or wherever where they’re sitting there doing this? Because it just seems Machiavellian to be tracking people like this. Is it an intentional thing, or is it a byproduct of the service they provide, do you reckon?
KJ: Look, I think it depends on what service you’re talking about. When it comes to Signal, for example, they do have a centralised server, and most people using Signal aren’t using a VPN or Tor. You can use it with a VPN or Tor, but most people are just loading it up on their phone, opening the app, and sending messages.
That means their IP address is connecting to the Signal servers, the server knows the phone number of the user, and then it delivers the message to the other user. In Signal’s case, I don’t think they have any malicious intent. I think they manage users’ private data well.
It’s just that you’re creating a honeypot when it doesn’t necessarily need to exist. If someone hacked the Signal servers, they would see a lot of metadata about users. In the case of, say, Facebook, on WhatsApp and Instagram — which all have messaging functionalities — there’s more scope for that.
I would say there’s probably a bit more… I wouldn’t call it mal-intent, because they were upfront about selling data to advertisers. But I do think they are drawing connections on users’ metadata, because it’s just so profitable for them to do so.
When you have a Facebook account linked to all your other accounts, if you send a message on Messenger, they can see that your IP address is in, say, Scotland, and that you’re sending messages to someone in India, or regularly messaging people in India. They may presume that you have family in India and are sending messages back and forth. Then they might show you, on your Facebook feed, flights to India or advertisements related to India. That’s far more profitable than if they didn’t use this data. So they really do have an incentive to use it.
// Signal and AWS going down
CQ: You mentioned Signal. My impression was that Signal was private and sort of secure, but it was caught up in the AWS outage last week. Does that tell people who use it anything about how secure it is, or is that being misunderstood? What did that actually entail?
KJ: I don’t think it really pertains to Signal’s security, apart from the fact that Signal uses centralised servers, mostly run via AWS. But Signal is designed so that messages are end-to-end encrypted. The servers running out of AWS Virginia, or wherever, shouldn’t be able to read any of the users’ messages because of how the encryption is designed.
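A minimal sketch of that property, using a simple NaCl box rather than Signal’s actual protocol: the relay only ever handles ciphertext.

```python
# Sketch: an end-to-end encrypted relay cannot read message contents.
# (Toy NaCl box, not Signal's ratcheting protocol.)
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key before anything leaves her device.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6pm")

# The relay sees only this opaque blob, plus metadata such as IP address
# and account identifier, which is the separate concern discussed next.
relayed = bytes(ciphertext)

# Only Bob, holding his private key, can open it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(relayed)
assert plaintext == b"meet at 6pm"
```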
That doesn’t say anything about the metadata created when you use Signal. They do have some protections around metadata, but they’re not hiding users’ IP addresses, and they still require you to sign up with a phone number. Those are two massive pieces of metadata that they have access to.
The fact that AWS went down doesn’t necessarily say anything about security. It’s more about the censorship resistance of Signal — it does have what’s close to a centralised point of failure. If AWS fails in a particular region, it impacts all Signal users in that area.
In the case of Session, the servers are run all over the world. We didn’t have any disruption when AWS went down last week, as far as I’m aware, because the network is decentralised and nodes are in different data centres, run by different people.
It doesn’t affect us as much, because the network load balances across different nodes. If some nodes go down, there are still other nodes to serve requests.
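A toy sketch of that failover behaviour, with placeholder node addresses; a real client would also verify nodes and route over onion paths.

```python
# Sketch: with a decentralised node list, the client simply tries another node
# when one region's data centre is down. Node names are placeholders.
import random

nodes = ["node-eu-1.example", "node-us-1.example", "node-ap-1.example"]
down = {"node-us-1.example"}  # pretend one region is offline


def send(request: str) -> str:
    for node in random.sample(nodes, k=len(nodes)):
        if node in down:
            continue  # in practice: a failed or timed-out connection
        return f"{request!r} served by {node}"
    raise RuntimeError("all nodes unreachable")


print(send("fetch messages"))
```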
CQ: In terms of users for Session, how many do you have at the moment?
KJ: Monthly active users is over 1 million — I think it’s been closer to 1.3 or 1.4 million monthly active users currently. So yeah, I would say it’s still small compared to, say, Telegram, which has a billion users, or WhatsApp, which has a billion-plus users.
But we’re building that growth, and there are a lot of people using Session now, so it’s great to see that uptick.
CQ: I suppose that’s one of the biggest challenges, right? Because if you want people to use a different messaging app, they also have to get all the people they message to switch as well, right?
KJ: Yeah. There’s a network effect associated with it. Network effect can really help you grow, but it can also make it harder to grow initially, because you need a critical mass of people coming onto Session.
The way we’ve seen Session used in the last couple of years is that people will still use other messaging applications. This is the case with messaging apps anyway — you probably have five different messaging apps installed right now, whether that’s WhatsApp that you use to talk to your grandma, or something else. A lot of people find WhatsApp to be very easy to use.
It’s often the less tech-literate people who use it, and then you’ve got Signal, which people use for more private messaging, and then Facebook Messenger or Instagram. It’s all these different messaging applications for different purposes and target markets.
I’d say Session is a messaging app that you install when you want the most privacy. That’s when we see users using it a lot. They may install it and use it for private conversations — for example, sending passwords, credit card information, or data they don’t want to be leaked — but they’re still using other messaging applications as well.
That makes the switch slightly easier, because you don’t need people to completely move over to Session, even though that would be great because they’d get more privacy. They can use it in select scenarios.
CQ: Gotcha. Yeah, it drives me crazy how many messaging apps I need. I would really like to just have one, you know?
KJ: When we first started building Session, I thought messaging applications were really simple. We don’t actually need a million messaging apps. But as we’ve built it over the past couple of years, I’ve realised that messaging apps are actually incredibly complex pieces of software. People have very high expectations about what messaging apps should be able to do and the guarantees they need to provide.
It’s not the worst thing in the world if you go to a website and it doesn’t load — you just come back an hour later. But if you send a message to someone and it’s not delivered, that can be really detrimental, just in general.
People expect messaging apps to have all the features that other apps have, be super reliable, send fast notifications, and allow media sharing. That has split the messaging market into different applications focusing on different user experiences.
WhatsApp, for example, is one of the more polished, easy-to-understand apps. Then there are messaging apps that are part of other applications or social networks, like Facebook Messenger or Instagram. They all provide slightly different use cases. Slack, for example, is work-based — each app has its own niche.
CQ: Yeah, there’s sort of aggregator software now. I don’t know if you’ve tried Beeper, which tries to put them all onto one app, but it can be a little slow to update [with new messages].
In my head, I think, if I’m with one mobile service provider, I can call someone on a different provider. But on a tech level, what would it take for me to send a message via LinkedIn to someone on Facebook? It’s just not technically feasible?
KJ: One of the major barriers is the different encryption methods these applications use. I think Facebook Messenger has now implemented the Signal protocol, at least for one-to-one messages. LinkedIn wouldn’t have that — at least as far as I’m aware.
If you sent a message from LinkedIn to Facebook, it would probably be okay because the LinkedIn message is probably not encrypted by default. But Facebook could see the contents of that message as they deliver it to the Facebook user.
Going the other way, a Facebook user sending a message to LinkedIn, there’s a UX complication. The Facebook user may expect end-to-end encryption, but that wouldn’t be possible on LinkedIn. That creates a bit of a complication.
The account systems also don’t match. How do you recognise that you want to send a message to a LinkedIn account from Facebook? You’d need to grab the user’s identifier from LinkedIn and import it into Facebook. They all have different account systems, which makes it complicated.
CQ: So I’m just going to have to deal with ten different accounts for the rest of my life?
KJ: Yeah, at least until everyone gets on the same standard potentially. There’s been some work to standardise cross-platform messaging. XMTP is a protocol designed to let multiple apps communicate using the same protocol.
But it’s hard from a tech perspective to tell an app that built its entire tech stack under a certain encryption protocol to move to a new one. It could take years of development and they might lose features developed specifically for their protocol. It’s a tricky lift.
CQ: And I guess it doesn’t make business sense.
KJ: Not particularly. If you’re not getting people onto your platform, it’s not as great a business proposition if everyone just stays in their own messaging apps. It would be great for Session if there was an open standard for cross-platform communications, giving our users access to other apps. But I’m not sure it makes sense for larger operators unless they’re forced to.
// Digital privacy around the world
(Note: The Chat Control legislation proposal has been withdrawn since the recording of this episode.)
CQ: You brought up encryption a few times. I don’t know — this is a big issue in Australia, but in Europe at least, there’s a lot of debate at the moment about law enforcement being able to access encrypted messages through something called chat control. If you’re familiar with it, do you want to go over a little bit about what that is and why messaging apps might be affected if it comes in?
KJ: Yeah. This isn’t just an EU thing — the EU has been talking about introducing this chat control legislation. I believe the UK has passed previous legislation, and Australia has similar measures as well. Session isn’t actually based in Australia. The steward of the Session project is the Session Technology Foundation, which is a Swiss entity.
It’s based in Switzerland, so it has to comply with Swiss law. Switzerland’s not in the EU, obviously, and it’s not in Australia or the UK. There’s a bit more constitutional protection in Switzerland for privacy, and that’s always been part of Swiss culture — providing privacy for individuals.
In terms of the legislation, most of the tools being proposed involve client-side scanning of images, content, or messages that may violate rules on drugs, terrorism, or CSAM. From all the legislation I’ve seen so far, it’s not technically feasible to have end-to-end encryption and client-side scanning at the same time.
If you have client-side scanning, that essentially creates a backdoor in the application. Either the content is sent to another server to be analysed, even if it’s end-to-end encrypted, or the scanning happens on the device itself. You’re not necessarily aware of what is being scanned for, and technically it doesn’t really work.
When scanning on the client side, you need lists of things to scan against. Those lists are constantly updated, and you’re not aware of what’s on them. False positive rates can be very high. For example, consider AI scanning for CSAM. If a father is sending a picture to his wife of their child playing in the bathtub, that’s not abuse, but AI might misclassify it.
Cases like that become extremely difficult for AI to handle. People can be flagged for sending normal images to family when they haven’t done anything wrong. There are a lot of issues with the proposed legislation.
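As a rough sketch of the mechanism being criticised: the device checks content against an opaque list before it is encrypted and sent. The toy below does exact hash matching; real proposals rely on perceptual hashes or machine-learning classifiers, which is exactly where false positives creep in.

```python
# Sketch of client-side scanning: compare local content against a hash list
# supplied by an external authority, before encryption. Purely illustrative.
import hashlib

blocked_hashes = {
    # The user cannot inspect what is actually on this list.
    hashlib.sha256(b"example banned content").hexdigest(),
}


def scan_before_sending(payload: bytes) -> bool:
    """Return True if the payload would be flagged and reported."""
    return hashlib.sha256(payload).hexdigest() in blocked_hashes


print(scan_before_sending(b"family photo bytes"))      # False
print(scan_before_sending(b"example banned content"))  # True (flagged)
```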
CQ: Are there any scenarios where this could actually be done effectively, or is it flawed from the beginning?
KJ: I don’t see how it could be done effectively without creating a backdoor into people’s encrypted messages. With current technology, it seems ill-advised.
There are problems like false positives and inconsistent definitions of offences. For example, what counts as a drug offence? If cannabis is legal in one US state, is it still a federal offence? Or in the EU, if one country says a drug is illegal and another doesn’t, who decides?
The same applies to terrorism content. If a state decides a protest group is a terrorist group, are users messaging about that protest now terrorists when other states would call them protesters? It creates huge moral, ethical, and legal issues.
We don’t install security cameras in people’s houses to monitor everything they do. Messaging apps are the most private way people communicate, and scanning them assumes wrongdoing, criminalising all users, not just malicious ones.
CQ: I find it fascinating. There’s been a shift towards more supervision and moderation online. But if someone is part of a terrorist cell or selling CSAM, they’ll likely know how to evade restrictions. It ends up affecting innocent users.
I’ve covered cases where Facebook’s moderation is bizarre. There was one where the Taiwanese police posted some advice on job scams, and Facebook flagged it as promoting human exploitation. They had to appeal to Facebook’s independent oversight board to get it overturned. Meanwhile, horrific content often goes unflagged. It shows that current moderation isn’t effective.
Enshrining this into law in the EU seems like a really bad idea.
KJ: I think there does need to be some content moderation. I’m not sure it should be put into law. Maybe users should pressure platforms to improve moderation.
Social media spreads information to millions of users via public feeds. Messaging apps are different — one user talking to another, or a few people in a group of a couple of hundred. The spread of information is much lower, and the public accessibility is far lower too.
The negative impacts of content, or the negative impacts of not moderating, are far less significant. It’s like the difference between giving a public speech to 10,000 people versus talking to your friends. Different standards should apply. A one-size-fits-all approach — making all messages accessible to law enforcement or platforms for moderation — doesn’t make sense for private messaging.
CQ: The UK had its Online Safety Act come in a while back. It’s preventing me from joining groups about mushroom cultivation on Reddit without showing my passport. I think Australia is planning something similar with banning social media for under sixteens. What’s going on there, and how is that going?
KJ: Well, the legislation’s passed. It will activate in December, as far as I understand. It’s essentially designed to stop under sixteens from accessing social media at all. With their parents’ permission, they might get past it, but as you said before, the people being blocked — under sixteens — are probably some of the more tech-savvy users. They’re super online and very aware of how to get around some of these bans.
I think we’re going to see a proliferation of VPN use in Australia. Teenagers will change their location to, say, New Zealand or another nearby country that hasn’t introduced these social media bans. Teenagers aren’t going to want to be cut off from their friends. They’re already very used to using social media, so I don’t think it’s going to have a massive impact. It will likely be widely circumvented. That begs the question: why are they even doing this?
I think they believe teenagers won’t circumvent it, which is not true. Then they’ll have to ask themselves: what are we going to do about VPN companies? That’s a much harder question from a government perspective. With platforms like Facebook, YouTube, and what they call social media websites, they do have some sort of physical presence in Australia — servers, staff, financial operations.
VPN companies, on the other hand, are often registered in the US or the EU and don’t have any physical presence in Australia. It’s hard to see how an Australian law would apply to them. Australia might claim global jurisdiction, but if a US VPN provider receives a notice that a user under sixteen is bypassing the social media ban, they’ll probably ignore it. Ultimately, they’re subject to US, UK, or EU law, and won’t be massively concerned about the reach of the Australian government.
CQ: Yeah, this is exactly what’s happened in the UK with the Online Safety Act. The UK government tried to fine 4chan, and they were basically like, “We’re not based in the UK, piss off”. The UK’s argument is that these sites are being used by people in the UK, but it’s a weird situation.
KJ: It still seems like maybe only the US government can really enforce extrajudicial laws globally. They can prosecute people outside the US for violating US law even if it’s not present in that host country. If the US passed something like this, that would be concerning. But the UK or Australia doesn’t have the same ability to enforce these laws overseas.
CQ: In the UK, one effect of requiring passports to access certain websites is that people are using alternatives that don’t comply. The Wall Street Journal published an article about people who used to go on PornHub now going to other sites where they don’t need to provide their information. That puts people more at risk of malware or other threats.
I struggle to understand the motivation. Politicians might not understand how the internet works, or if they do, their approach seems very naïve. I understand the desire for better online safety for kids, but this approach seems clueless.
KJ: Yeah, it’s very haphazard in Australia too. Part of it is political theatre. They see issues like online bullying and social media addiction, and they feel they need to respond. They want to appear proactive. If people say, “You didn’t do anything,” they can point to this legislation and say, “We acted.” They cherry-pick stats to show it fixed the issue, and then they move on to the next election.
A lot of people don’t realise the social media ban affects everyone, not just under sixteens. How do you determine if someone is over sixteen? You need identification or proof. People assume it only applies to under sixteens, but when it activates in December, many will be surprised: they’ll have to upload their national ID or driver’s licence to access YouTube or send messages on Facebook. That’s going to feel ridiculous.
CQ: Oh, completely. I’ve run into the “please upload your passport” wall a few times. It’s fine as I have a VPN, so I just live in Malaysia as far as they’re concerned. But for most people, it’s too much hassle. I’m usually not invested enough to provide my details. The security aside, I just don’t want to upload my passport to Reddit or similar sites. It’s too complicated.
KJ: From a user experience perspective, this is one of the nice things about Session — not requiring a phone number. It makes the signup process really easy. You don’t need to deal with SMS verification or email links.
When you design truly privacy-preserving technologies, the user experience often improves. Requiring passports adds unnecessary friction — taking a picture of your passport, scanning your face, taking a selfie with the passport. It’s just annoying, even without considering privacy and security.