Apple has full access to Google's Gemini model and can customize it for Siri and other AI features, reports The Information. Google gave Apple "complete access" to Gemini in its own data centers, and Apple can use that access for distillation, the process of creating smaller models for specific tasks. This lets Apple design models built to run on Apple devices without needing to connect to the internet.

The Information explains that Apple can ask the main Gemini model to perform a series of tasks that produce high-quality results, along with a rundown of the model's reasoning process. Apple can then use the answers and reasoning it gets from Gemini to train smaller, cheaper models. Through this process, the smaller models learn to approximate the computations Gemini performs internally, producing efficient models that offer Gemini-like performance while requiring less computing power.
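As a rough illustration of the distillation idea described above, the toy sketch below trains a tiny "student" to match a "teacher" model's softened output distribution by gradient descent. Everything here (the logits, the temperature, the single-query setup) is a made-up stand-in for illustration, not Apple's or Google's actual training pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert logits to probabilities; a higher temperature softens the
    # distribution, exposing more of the teacher's relative preferences.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical teacher logits for one query (a stand-in for a large model's output).
teacher_logits = [4.0, 1.0, 0.2]
T = 2.0  # distillation temperature
soft_targets = softmax(teacher_logits, T)

# Tiny "student": its own logits, trained to match the teacher's soft targets
# by minimizing cross-entropy via gradient descent.
student_logits = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(500):
    p = softmax(student_logits, T)
    # Gradient of the cross-entropy w.r.t. the student logits is (p - target) / T.
    student_logits = [l - lr * (pi - ti) / T
                      for l, pi, ti in zip(student_logits, p, soft_targets)]

# After training, the student's distribution closely tracks the teacher's.
```

In practice, distillation at this scale also matches intermediate reasoning traces and runs over millions of queries, but the core loop is the same: the student is optimized to reproduce the teacher's outputs.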
Apple can also edit Gemini as needed to make sure it responds to queries the way Apple wants. However, Apple has run into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always match Apple's needs.
Apple is relying on Google’s Gemini models for the smarter, chatbot version of Siri that’s planned for iOS 27, but the Apple Foundation Models team is still working on Apple AI models that are distinct from the Gemini models.
Siri will be able to do many of the same things that Gemini and other chatbots are able to do, such as answering questions, summarizing information, scanning and understanding uploaded documents, telling stories, providing emotional support, and completing tasks like booking travel.
This article, “Apple Can Create Smaller On-Device AI Models From Google’s Gemini,” first appeared on MacRumors.com.