Instagram Removes Encryption: The Child Safety Argument Has Limits

by admin477351

One of the primary justifications offered for removing end-to-end encryption from Instagram direct messages has been child safety. Law enforcement agencies and child safety organizations have argued consistently that encryption on Instagram creates environments where child exploitation can occur without detection. Meta’s confirmation that encryption will be removed by May 8, 2026, has been welcomed by some as a child safety win. But the child safety argument has important limits that deserve examination.

The core of the argument is that without encryption, Instagram can scan DM content for child sexual abuse material (CSAM) using automated tools, and can report detected content to law enforcement and child protection agencies. This is a real capability. Before encryption was introduced, Instagram used these tools. During the period of opt-in encryption, the tools were less effective for users who had enabled the feature. After the removal of encryption, these capabilities will apply universally to Instagram DMs.

The limitation of this argument is that it treats Instagram in isolation. Child sexual abuse does not occur only — or even primarily — on Instagram. Determined abusers who know that Instagram DMs are being scanned will migrate to other platforms that offer encryption. The removal of encryption from Instagram may therefore shift the location of harmful activity rather than reduce it, and the net benefit to child safety is potentially much smaller than the argument implies.

A more targeted and proportionate approach would be to develop harm detection tools that can operate on encrypted platforms — a technically challenging but not impossible goal. Some researchers have proposed approaches that allow detection of specific harm indicators without requiring the platform to access the full content of all messages. These approaches could potentially deliver child safety benefits without the privacy cost imposed on all users.

The child safety argument is not dishonest — the harms it describes are real. But when it is used as the primary justification for removing privacy protections from hundreds of millions of people, without examination of alternatives or acknowledgment of the limits of the proposed solution, it deserves critical scrutiny rather than uncritical acceptance.
