Always Intercepted

I'll keep this one short and sweet: anyone who cares that Microsoft gave the FBI BitLocker encryption keys to unlock suspects' laptops, but still feeds their private data to cloud-based AI models, is a hypocrite.

If you use any hosted AI model, the data you send it can easily be intercepted by the US government, or any other government. The same goes for any app with an AI integration that is on by default or that you never manually invoked: that data is at risk too.

In the past, if you wanted access to someone's notes or journal, you had to go to Notion or some other provider directly. It was at least a game of recon to figure out which service to target. But now? If you know the target, you can simply serve the AI model providers with a warrant and intercept all of that data in one place.

Hosted AI models centralize everything: they create massive single points of failure *and* single points of interception. Sure, you trust the government now, but will you always? What if the government is overthrown? What if it's a foreign government instead?

And what if an AI service provider gets hacked, or has a security vulnerability?