As Meta removes privacy controls, TikTok explains why it never had any


On May 8, Instagram will be able to read your DMs again. Meta is ending support for end-to-end encrypted direct messages — reversing a feature it introduced just two years ago — and reopening the door to automated content scanning, AI-powered moderation, and easier compliance with law enforcement requests. TikTok, meanwhile, confirmed it never offered the protection at all.

In the span of two weeks, two of the world’s largest social media platforms have signaled they are done treating privacy as an unconditional promise. Together, the moves mark a decisive reckoning with what private messaging on social media actually costs—and who pays the price.

A TikTok spokesperson told Fortune that the company’s approach to messaging has not changed. “Direct messages on TikTok are secured using industry-standard encryption in transit and at rest,” the spokesperson said, comparing the technology to what Gmail uses. “People’s messages are private and protected. Access to message content is strictly limited, subject to internal authorization controls, and only available to trained personnel with a demonstrated need to review the information as part of safety investigations, legal compliance, or other limited circumstances.” In other words: not end-to-end encrypted, but far from an open book.

The distinction matters. The TikTok spokesperson said the design is deliberate — and that the lack of end-to-end encryption is itself a safety feature. “Messaging on TikTok is not end-to-end encrypted,” they said. “This helps make our platform undesirable for those who would attempt to share illegal material.” Meta had not responded to requests for comment by press time.
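The technical difference between the two models can be made concrete. The following is a minimal sketch (a toy XOR stand-in, not real cryptography — the function names and keys are illustrative, not any platform's actual implementation): with encryption in transit and at rest, the platform holds the key and can decrypt messages for moderation; with end-to-end encryption, only the sender and recipient share the key, and the server relays bytes it cannot read.

```python
# Toy illustration only: XOR is NOT a secure cipher, and these key names
# are hypothetical. Real systems use protocols such as the Signal protocol.

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR each byte of plaintext with a repeating key (self-inverting)."""
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

toy_decrypt = toy_encrypt  # XOR-ing twice with the same key restores the text

message = b"meet at noon"

# Encryption in transit / at rest (TikTok's described model):
# the platform holds the key, so its safety teams can decrypt on demand.
server_key = b"platform-held-key"
stored = toy_encrypt(server_key, message)
assert toy_decrypt(server_key, stored) == message  # server CAN read it

# End-to-end encryption (the feature Instagram is retiring):
# only the two endpoints share the key; the server sees opaque bytes.
endpoint_key = b"shared-by-endpoints-only"
ciphertext = toy_encrypt(endpoint_key, message)
assert toy_decrypt(server_key, ciphertext) != message   # server cannot read
assert toy_decrypt(endpoint_key, ciphertext) == message  # recipient can
```

This is why the two designs trade off differently: the first lets the platform scan for scams and illegal material, while the second makes that scanning technically impossible for anyone but the endpoints.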

When Instagram’s encryption sunsets in two months, Meta will regain the technical ability to scan and act on the content of users’ DMs. Right now, under the opt-in encrypted system, even Meta’s own servers cannot see message content. That changes May 8, reopening the door to automated content moderation, AI-powered scam detection, and easier compliance with law enforcement requests.

End-to-end encryption isn’t keeping people safe

Brian Long, CEO and co-founder of Adaptive Security, a firm that trains organizations to defend against AI-powered attacks, including deepfakes and voice cloning, says the calculus both companies are making reflects a necessary course correction. “It’s a challenging place, because on the one side, I think a lot of these companies have leaned into privacy,” Long told Fortune. “But on the other hand, it’s also led bad actors to do anything from run scams in the background to attack consumers. What they’re recognizing is that as great as it sounds for everything to be encrypted, it’s giving a lot of runway to bad actors.”

The regulatory pressure is accelerating that shift. The Take It Down Act, signed into law last year, requires platforms to remove non-consensual intimate imagery—including AI-generated deepfakes—within 48 hours of a valid request, with enforcement beginning May 19, just eleven days after Instagram’s encryption cutoff. Long said that end-to-end encryption had made that kind of compliance nearly impossible. “If it’s all encrypted and they can’t see the messages, it gets harder for them to actually police those actions,” he said. “They’re going to be accountable under the law.”

Beyond legal deadlines, Long argues that internal safety teams, not law enforcement, are the first and most important line of defense — and that encryption had effectively neutralized them. “The safety team can jump in and flag messages to the consumer before they fall for a scam,” he said. “When everything is protected by encryption, the safety team really can’t do anything. A lot of this stuff should be handled by the company before it hits law enforcement. Otherwise, law enforcement would just be completely overwhelmed.”

Last year, over a million seniors fell victim to fraud, costing them more than $81 billion in estimated losses, according to an FTC report. AI-powered attacks — from deepfakes and voice cloning to year-long romance scams — are growing an estimated 17-fold year over year. “The scale of the attacks, especially on alternate messaging channels, is something we’re hearing consistently from customers,” Long said. “Those channels where you had encryption historically were particularly ripe for this issue.”

For privacy advocates, lifting encryption is still a serious concession, and one that opens user data to platform surveillance alongside the safety benefits. But for scam prevention professionals, it’s the right call. “I think companies are recognizing there are some potential serious downsides to privacy,” Long said. “At the end of the day, this correction is probably needed in order to stop more of the bad actors. And if privacy is the biggest priority, there are applications available that people can go use.”
