The latest version of ChatGPT has been described as “a data hoover on steroids” as a result of its new capabilities (like seeing everything happening on your screen) and its extremely loose privacy policy.
While Apple Intelligence will use ChatGPT as a fallback option for queries that cannot be answered by the new Siri, Apple has put in place additional safeguards that will likely make it the safest way to use the chatbot …
ChatGPT is ‘a data hoover on steroids’
Wired reports that a number of AI experts have expressed concern about the privacy of personal data when using OpenAI’s latest model, ChatGPT-4o. The company’s casual attitude to privacy was highlighted when it was revealed that its Mac app stored chat logs in plain text.
The current model lets you ask questions verbally and give the app access to your device’s camera so it can see what you’re seeing, and the company’s privacy policy appears to make both your voice and your images fair game for training.
AI consultant Angus Allan says the privacy policy gives the company permission to use all of the personal data exposed to it.
“Their privacy policy explicitly states they collect all user input and reserve the right to train their models on this.”
The catch-all “user content” clause likely covers images and voice data too, says Allan. “It’s a data hoover on steroids, and it’s all there in black and white. The policy hasn’t changed significantly with ChatGPT-4o, but given its expanded capabilities, the scope of what constitutes ‘user content’ has broadened dramatically.”
Another consultant, Jules Love, agrees.
“It uses everything from prompts and responses to email addresses, phone numbers, geolocation data, network activity, and what device you’re using.”
Apple Intelligence’s use of ChatGPT is more private
Apple’s own AI offers an “extraordinary” level of privacy, and even when it falls back to ChatGPT, Apple’s deal with OpenAI means that privacy protections remain strong.
Apple anonymizes all ChatGPT handoffs, so OpenAI’s servers have no idea who made a particular request or who receives the response. Apple’s agreement with OpenAI also ensures that data from these sessions will not be used as training material for ChatGPT models.
9to5Mac’s Take
There are still potential privacy risks, but it seems pretty clear that once Apple Intelligence is fully live, it will be by far the safest way to use ChatGPT.
Image: 9to5Mac collage of Apple icons