Apple Intelligence is launching later this month, bringing the first wave of AI features to your iPhone, iPad, and Mac. But as with any AI technology, privacy is a key concern. How does Apple Intelligence handle user privacy? Here’s what you should know.
Apple’s privacy approach: on-device first, Private Cloud Compute second
For years, Apple has been a leader in on-device processing for all sorts of powerful features. The advantages of prioritizing on-device processing are twofold:
- Processes run faster when they’re not dependent on an external server
- User data can stay safely localized for maximum privacy
It should come as no surprise, then, that Apple Intelligence will lean heavily into an ‘on-device first’ approach.
Apple has built its AI features so that, the vast majority of the time, everything runs entirely on device. No data is sent off to the cloud; it stays with you on your physical device.
There will be times, though, when Apple Intelligence needs to tap into external servers for additional processing.
For those situations, Apple has built Private Cloud Compute with the goal of providing just as much security off device as on it.
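To make that division of labor concrete, here’s a minimal Swift sketch of the general ‘on-device first, cloud fallback’ pattern Apple describes. Every type, name, and threshold below is hypothetical, invented purely for illustration; none of it is a real Apple API.

```swift
import Foundation

// Hypothetical illustration of an "on-device first" routing policy.
// These types are not real Apple APIs; they only sketch the decision
// Apple describes: run locally whenever possible, and fall back to
// Private Cloud Compute only when a request exceeds local capacity.
enum ExecutionTarget {
    case onDevice              // data never leaves the device
    case privateCloudCompute   // larger server-side models, PCC guarantees
}

struct AIRequest {
    let prompt: String
    let estimatedComplexity: Int  // stand-in for model/context requirements
}

func route(_ request: AIRequest, onDeviceLimit: Int = 1_000) -> ExecutionTarget {
    // Prefer local execution for both privacy and latency.
    request.estimatedComplexity <= onDeviceLimit ? .onDevice : .privateCloudCompute
}

let summary = AIRequest(prompt: "Summarize this note", estimatedComplexity: 300)
print(route(summary))  // onDevice — nothing is sent to a server
```

The real routing logic is of course far more involved, but the priority order is the point: the cloud is a fallback, not the default.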
The promises of Private Cloud Compute
The day that Apple Intelligence was first announced, Apple published a detailed security research paper on Private Cloud Compute.
From that paper’s introduction:
“we believe PCC is the most advanced security architecture ever deployed for cloud AI compute at scale.”
There are five core requirements serving as the foundation of Private Cloud Compute:
- Stateless computation on personal user data, meaning data can’t be used for anything but the purpose it was sent for
- Enforceable guarantees, meaning those privacy promises are designed to be ‘entirely technically enforceable’ rather than mere policy
- No privileged runtime access, meaning Apple doesn’t have a security bypass mechanism for itself
- Non-targetability, so an attacker can’t go after a specific user’s data without attempting a broad compromise of the entire system
- Verifiable transparency, which allows third-party security researchers to analyze and verify the claims of Apple’s system
There’s significantly more detail available in the full paper.
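That last requirement, verifiable transparency, is the most unusual of the five: devices are only supposed to hand data to PCC servers whose software has been publicly logged for researchers to inspect. Here’s a heavily simplified Swift sketch of that idea. The types, the log contents, and the check itself are all invented stand-ins, not Apple’s actual attestation protocol.

```swift
import Foundation

// Simplified, hypothetical model of "verifiable transparency": refuse to
// send personal data unless the server presents an attestation whose
// software measurement appears in a public, auditable log.
struct Attestation {
    let softwareMeasurement: String  // hash of the software image the server runs
}

// Stand-in for the public log that third-party researchers can audit.
let transparencyLog: Set<String> = [
    "sha256:aaaa1111",
    "sha256:bbbb2222",
]

func shouldSendData(given attestation: Attestation) -> Bool {
    // Only servers running publicly logged software ever receive user data.
    transparencyLog.contains(attestation.softwareMeasurement)
}

let node = Attestation(softwareMeasurement: "sha256:aaaa1111")
print(shouldSendData(given: node))  // true — this software is publicly logged
```

The design choice worth noticing is that the check happens on the client: your device decides whether the server is trustworthy, rather than taking the server’s word for it.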
Some AI tasks simply can’t be run efficiently using on-device models. The flexibility to tap into larger models backed by the combined power of many Apple Silicon servers opens new opportunities for Apple Intelligence. This could be especially valuable as more features are added in the future.
Making Private Cloud Compute just as safe as on-device processing is a lofty goal. Time will tell whether Apple succeeds, but the transparency and built-in verifiability are encouraging signs.
Wildcard: ChatGPT integration and more
Later this year in iOS 18.2, Apple Intelligence will integrate the smarts of ChatGPT in two key places: Siri and Writing Tools.
That means users will have the option of tapping into ChatGPT’s expansive knowledge, but only when needed.
The integration asks your permission before it’s ever used. If you make a request that Siri can’t answer on its own, it may recommend handing it to ChatGPT, and you’ll then have the option to say yes or no.
Once you do authorize ChatGPT, your data will be sent to OpenAI’s servers and covered by OpenAI’s privacy policy, not Apple’s.
Apple has said it may bring additional partners on board in the future, too, such as Google Gemini. With all of these third-party integrations, Apple Intelligence will ask you first before sharing your data, but if you give authorization, the privacy promises of other Apple Intelligence features won’t apply to those requests.
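To illustrate that per-request consent gate, here’s a short hypothetical Swift sketch. The names and prompt text are made up for this example; the point is simply that nothing is shared until the user explicitly agrees, and a ‘no’ keeps the request local.

```swift
import Foundation

// Hypothetical sketch of the per-request consent gate: data is only
// handed to a third-party model after the user explicitly approves.
enum UserChoice {
    case allow, deny
}

func handleUnanswerableRequest(
    _ prompt: String,
    askUser: (String) -> UserChoice,
    sendToThirdParty: (String) -> String
) -> String? {
    switch askUser("Siri doesn’t have an answer. Use ChatGPT for this request?") {
    case .allow:
        // From here on, the partner’s privacy policy applies, not Apple’s.
        return sendToThirdParty(prompt)
    case .deny:
        // The request never leaves the Apple Intelligence boundary.
        return nil
    }
}

// Example: a user who declines triggers no third-party handoff at all.
let answer = handleUnanswerableRequest(
    "Plan a five-course dinner menu",
    askUser: { _ in .deny },
    sendToThirdParty: { _ in "response from external model" }
)
print(answer ?? "Handled locally, or not at all")  // no data was shared
```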
Apple Intelligence privacy: wrap-up
Privacy has been a core part of Apple’s products for years. Just about every year, Apple rolls out new software and hardware features that better protect users’ privacy. Apple Intelligence seems designed to continue that trend.
What are your views on Apple Intelligence and privacy? Let us know in the comments.