How private is your private Discord server?

(Image credit: Discord)

It's slightly misleading to refer to Discord groups as 'servers.' The word was carried over from a time when gaming clans would rent out private voice servers for software such as TeamSpeak and Ventrilo, and while some still do, many have migrated to Discord for its convenience. But unlike a rented TeamSpeak server, a Discord 'server' is a subsection of a platform with universal rules of conduct and an official moderation team. That got me wondering: Just how private are our private Discord servers?

The introduction of the Go Live feature, which allows users to stream games to up to 10 viewers in a voice channel, prompted my questioning. I noticed that I could stream any program I'm running, including browsers and video players. I wondered if Discord was peeking in on private streams to catch copyrighted films, the way public platforms like YouTube and Twitch would. 

The answer is no. There's no one poring over a giant wall of Go Live streams checking to make sure I'm not streaming Highlander 2: The Quickening to my friends. Unless you invited a bunch of Disney and Warner Bros lawyers to your server, you're in the clear.

What Discord does and doesn't do

Even if no one's snooping on your private streams, Discord shouldn't be treated like a secure line. Your private messages are not end-to-end encrypted, and data breaches are a possibility on any online platform (Discord has a bounty out on vulnerabilities). Furthermore, Discord's trust and safety team does have the ability to read private messages and messages sent in private servers when investigating user reports. 

That doesn't mean Discord is spying on you or your server—it's not doing that at all, according to multiple company reps and director of trust and safety Sean Li, who all tell me that proactive monitoring is both logistically impossible and not done by policy.

Broadly speaking, Discord's moderation policy is reactive rather than proactive. Every chat message has a unique ID that users can report. If someone reports, say, a threatening DM, a member of the trust and safety team may view the reported message and decide what action to take. When a pattern of behavior is reported, they may view more than a single message. All access to message logs is itself logged, so abuse of the system would be discoverable. The ability to view messages rests solely with the trust and safety team, all of whose members are Discord employees; none of the work is outsourced.

Discord also notes on its support page that deleted messages are truly deleted. The company can't view them once they're gone, which on one hand allows abusers to remove evidence, but on the other gives normal users the ability to erase benign chat messages that they simply don't want sitting on a server.

Li says the team views itself as analogous to law enforcement, in that cops don't hang out in your house waiting for you to commit a crime, but will come to your house if someone reports one—the 'crime' in this case being a TOS violation, or, sometimes, an actual crime. Discord also won't intervene in petty personal disputes which can be solved by one user blocking another, or leaving a server where they aren't getting along. It's only when something serious has happened that it intervenes: targeted harassment, hate speech, threats to others or of self-harm (Discord may contact law enforcement in these cases), raiding, and so on.

In a blog post, the company said it received 52,400 user reports during the first three months of 2019. That comes to an average of 582 reports per day. According to Discord, if you don't count spam bots, only 0.0003 percent of Discord's userbase was banned after being reported in that three-month period, a statistic it presents as evidence that abuse is not widespread.
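If you want to sanity-check those figures, the arithmetic is simple. Here's a back-of-envelope sketch; the 250 million user figure is the rough estimate mentioned later in this piece, not an exact count:

```python
# Back-of-envelope check of Discord's Q1 2019 figures. The user count is an
# approximate public estimate; Discord hasn't published an exact number.
reports_q1_2019 = 52_400
days_in_quarter = 31 + 28 + 31            # January through March 2019
print(reports_q1_2019 / days_in_quarter)  # ~582 reports per day

users_estimate = 250_000_000
banned_fraction = 0.0003 / 100            # 0.0003 percent, excluding spam bots
print(users_estimate * banned_fraction)   # roughly 750 accounts
```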

"As the platform has scaled, Discord’s trust and safety technologies have scaled and evolved with it," says a company representative. "As a result, violations have remained very low as a percentage of total messages."

(Image credit: Discord)

Walking the line

Discord isn't entirely reactive, however. As Slate reported in 2018, the relative privacy afforded to Discord groups meant that hate speech and violent rhetoric could go unnoticed. It was somewhat big news at the time, and the servers pointed out by Slate and other publications in 2017 and 2018 have since been shut down. Discord also worked with the Southern Poverty Law Center to shut down other white nationalist servers and ban users.

Though Discord's search function doesn't return any results for words like 'Nazi' by design, unofficial server lists such as disboard.org still do—but that doesn't mean Discord has stopped kicking them out. I attempted to join around 20 servers with white supremacist messaging in their names or descriptions, and only one of them still existed. It was brand new. Clearly, they are being spotted and deleted. 

It is conceivable that some of the banned hate groups have reappeared with benign-sounding names, staying out of the spotlight this time. Without being tipped off, that would be hard to detect on a platform with somewhere around 250 million users. But if Discord announced a plan to start actively snooping around in small private servers just in case, I can't imagine anyone would be happy about that, either. 

By letting private groups manage themselves, Discord is able to keep something close to server culture alive, where the only mods are those chosen by the server owner. I've never been in a server that Discord intervened in, and in my experience, users are comfortable enough with the level of privacy to chat openly.

(Image credit: Microsoft/PhotoDNA)

Combating illegal material

There is one more special exception to the reactive approach. Discord uses PhotoDNA, a piece of software created by Microsoft and Dartmouth and donated to the National Center for Missing and Exploited Children (NCMEC). The software runs images through a hashing function that generates a 'hash' unique to each image. Think of hashing as one-way encryption. The same image will always generate the same hash (PhotoDNA's hash is designed so that resized or re-saved copies of an image still match), but the hash cannot be decoded back into the image. 
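PhotoDNA's actual algorithm is proprietary, but the basic property is easy to illustrate. Here's a minimal sketch using an ordinary cryptographic hash (SHA-256) as a stand-in; unlike PhotoDNA's robust hash, it won't match edited copies, but it shows the one-way, deterministic idea. The file name is hypothetical:

```python
# Minimal sketch of hashing, using SHA-256 as a stand-in for PhotoDNA's
# proprietary robust hash. "photo.jpg" is a hypothetical local file.
import hashlib

def fingerprint(path: str) -> str:
    """Return a fixed-length hash of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The same file always produces the same hash...
assert fingerprint("photo.jpg") == fingerprint("photo.jpg")
# ...but nothing about the image can be reconstructed from the hash alone.
print(fingerprint("photo.jpg"))
```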

When you upload an image to Discord, it's scanned by PhotoDNA, encoded into a hash, and then that hash is checked against a database of hashes which correspond to known illegal photos. If a photo is flagged by PhotoDNA, Discord reviews it, removes it, and contacts NCMEC. 
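In practice, the server-side check described above amounts to a set lookup. This is a hypothetical sketch of that flow; the hash values, function name, and set of known hashes are illustrative stand-ins, since Discord hasn't published its implementation:

```python
# Hypothetical sketch of the upload check described above. The hash values and
# the set of known hashes are illustrative stand-ins, not real data.
known_bad_hashes = {"9f2c41d0ab37", "4e881c0da55f"}  # hashes of known illegal images

def check_upload(image_hash: str) -> str:
    """Decide what happens to an upload, given its PhotoDNA-style hash."""
    if image_hash in known_bad_hashes:
        # A match is reviewed by a human, the image is removed, and NCMEC is contacted.
        return "flag for review"
    return "allow"

print(check_upload("0123456789ab"))  # "allow" -- no match against known hashes
```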

Because this isn't a 'machine learning' program like Discord's optional DM filter, which effectively guesses at whether or not an image sent to you is explicit, the chance of a false positive is infinitesimal. According to NCMEC vice president John Shehan, it's "one in ten billion."

Take that estimate with a grain of salt (I've also seen 'less than one in one billion'), but it's true that two different images resulting in the same hash is extremely unlikely. "[PhotoDNA] has scanned over two billion images without a single false positive," said Dartmouth computer science professor Hany Farid in a different report from 2011.

PhotoDNA is also used by Facebook, Google, Twitter, and other companies which host user-uploaded images and videos.

Discord did have a problem with users sending unsolicited child pornography through DMs, which Gizmodo pointed out in 2017. The company hasn't disclosed how many images PhotoDNA has flagged since its implementation, but I haven't seen similar reports in 2018 or 2019.

(Image credit: Discord)

The limits of your privacy

Returning to legal Discord usage, my advice is generic: be careful about how much you say about yourself. Apps like Signal, which offer end-to-end encryption, can be considered private and secure. Wherever messages are logged on a server, however, there is always the risk of a data breach, or the possibility that someone will report your messages to the platform owner.

Large Discord servers are also more likely to include people who may spread chat logs beyond the server walls, whether by reporting them or sharing screenshots. When there's a big stack of avatars, there's no way to know who's behind all of them. One of them might be there to stir shit. One of them might be a cop. One of them might be me. (I am pretty easy to spot.)

Building large networks of online acquaintances and lasting friends is easier than ever, in part thanks to Discord. But a consequence of these platforms is that details about our lives are increasingly being stored, transmitted, and aggregated. 

I believe Discord when it says it has no interest in proactively checking in on servers that haven't been reported. Unless it really wants to see my opinion on Patrick Marleau signing with the San Jose Sharks, it would be a massive waste of time on a platform so large. As a general rule, though, don't share sensitive information on big platforms. Your private Discord server is reasonably private, but it isn't encased in concrete.

Tyler Wilde
Executive Editor

Tyler grew up in Silicon Valley during the '80s and '90s, playing games like Zork and Arkanoid on early PCs. He was later captivated by Myst, SimCity, Civilization, Command & Conquer, all the shooters they call "boomer shooters" now, and PS1 classic Bushido Blade (that's right: he had Bleem!). Tyler joined PC Gamer in 2011, and today he's focused on the site's news coverage. His hobbies include amateur boxing and adding to his 1,200-plus hours in Rocket League.