June 24, 2025
2 Minute News & Briefings

Meta AI Privacy Self-Share Sparks Backlash

Meta’s standalone AI chatbot has ignited privacy concerns after users discovered their private conversations appearing in the app’s public “Discover” feed. Because the conversations touched on sensitive topics like mental health struggles, relationship difficulties, sexuality, and political opinions, many users mistakenly believed their content was private. Meta emphasizes that sharing is off by default and that only users who actively opt in have their chats published. Critics, however, argue that the UI subtly nudges users toward making content public without clear warnings.

Privacy advocates argue that the combination of ambiguous prompts and social pressure to share can mislead users, especially older adults and minors, increasing the potential for harm. Digital rights organizations and consumer watchdogs are calling on Meta to revise the platform’s design, recommending clearer consent processes, explicit privacy-by-design features, and contextual warnings before sensitive prompts are shared. Meanwhile, users are being urged to review their privacy settings and avoid sharing personal or identifiable information until transparency improves.

Meta’s CTO has acknowledged the issue and said the company is evaluating changes to the platform. Without firm timelines or detailed plans, however, trust in the brand continues to suffer. Some digital policy experts are urging regulators to investigate the incident under European privacy law and the laws of U.S. states with robust data protection rules, citing Meta’s obligations under the GDPR and state privacy acts.

Washington Post - Meta AI Privacy Users Chatbot
https://www.washingtonpost.com/technology/2025/06/13/meta-ai-privacy-users-chatbot/
