
Signal President Meredith Whittaker calls out agentic AI as having ‘profound’ security and privacy issues

Signal President Meredith Whittaker warned Friday that agentic AI could come with a risk to user privacy.

Speaking onstage at the SXSW conference in Austin, Texas, the advocate for secure communications referred to the use of AI agents as “putting your brain in a jar,” and cautioned that this new paradigm of computing — where AI performs tasks on users’ behalf — has a “profound issue” with both privacy and security.

Whittaker explained that AI agents are being marketed as a way to add value to your life by handling various online tasks on your behalf. For instance, an AI agent could look up concerts, book tickets, schedule the event on your calendar, and message your friends that it’s booked.

“So we can just put our brain in a jar because the thing is doing that and we don’t have to touch it, right?” Whittaker mused.

She then explained the type of access an AI agent would need to perform these tasks: access to your web browser and a way to drive it, access to your credit card information to pay for the tickets, to your calendar, and to your messaging app to text your friends.

“It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases — probably in the clear, because there’s no model to do that encrypted,” Whittaker warned.

“And if we’re talking about a sufficiently powerful … AI model that’s powering that, there’s no way that’s happening on device,” she continued. “That’s almost certainly being sent to a cloud server where it’s being processed and sent back. So there’s a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data,” Whittaker concluded.
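To make the breadth of that access concrete, here is a minimal, purely hypothetical sketch of the permission scopes such an agent would need for the concert example above. None of the names below come from any real agent framework; they are illustrative stand-ins for the kind of system-wide, root-like access Whittaker describes.

```typescript
// Hypothetical sketch (not any real product's API): the scopes a
// general-purpose agent would plausibly request for the concert example.

type AgentScope =
  | "browser:drive"     // open pages, click, and fill forms on the user's behalf
  | "payments:charge"   // read stored card details to buy the tickets
  | "calendar:write"    // add the event to the user's calendar
  | "messages:send"     // text friends through the messaging app
  | "messages:read";    // read replies back in order to summarize them

interface AgentTaskRequest {
  task: string;             // the user's natural-language request
  scopes: AgentScope[];     // effectively cuts across every app on the system
  // Whittaker's point: a model this capable won't run on device, so the task
  // context, and the data those scopes expose, leaves the machine.
  processing: "cloud";
}

const bookConcert: AgentTaskRequest = {
  task: "find a concert, buy tickets, add it to my calendar, message my friends",
  scopes: ["browser:drive", "payments:charge", "calendar:write", "messages:send"],
  processing: "cloud",
};

console.log(`Requested scopes: ${bookConcert.scopes.join(", ")}`);
```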

If a messaging app like Signal were to integrate with AI agents, it would undermine the privacy of your messages, she said. The agent would have to access the app to text your friends and also pull data back to summarize those texts.
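A rough sketch of that data flow, with an invented endpoint standing in for whatever cloud model the agent would call, shows where end-to-end encryption stops protecting the conversation. This is an illustration of the concern, not how Signal or any agent product actually works.

```typescript
// Hypothetical sketch: end-to-end encryption protects messages in transit
// between devices, but once an agent reads the decrypted text and ships it
// to a cloud model for summarization, the plaintext has left that protected
// path. Every name and URL here is a stand-in, not a real service.

interface DecryptedMessage {
  sender: string;
  plaintext: string; // already decrypted on the user's device
}

// Inside the app, the E2EE guarantee holds: only the endpoints see plaintext.
function readInbox(): DecryptedMessage[] {
  return [{ sender: "alice", plaintext: "See you at the show?" }];
}

// The step that undermines the guarantee: plaintext is posted to a remote
// model endpoint so it can be summarized.
async function summarizeWithCloudAgent(messages: DecryptedMessage[]): Promise<string> {
  const response = await fetch("https://agent.example.com/summarize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(messages), // plaintext now sits on someone else's server
  });
  return response.text();
}

async function main() {
  const summary = await summarizeWithCloudAgent(readInbox());
  console.log(summary);
}

main().catch(console.error);
```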

Her comments followed remarks she made earlier during the panel on how the AI industry had been built on a surveillance model with mass data collection. She said that the “bigger is better AI paradigm” — meaning the more data, the better — had potential consequences that she didn’t think were good.

With agentic AI, Whittaker concluded, we’d further undermine privacy and security in the name of a “magic genie bot that’s going to take care of the exigencies of life.”


