
Apple Intelligence Foundation Language Models Tech Report 2025

perfmode

> We believe in training our models using diverse and high-quality data. This includes data that we’ve licensed from publishers, curated from publicly available or open-sourced datasets, and publicly available information crawled by our web-crawler, Applebot.

> We do not use our users’ private personal data or user interactions when training our foundation models. Additionally, we take steps to apply filters to remove certain categories of personally identifiable information and to exclude profanity and unsafe material.

> Further, we continue to follow best practices for ethical web crawling, including following widely-adopted robots.txt protocols to allow web publishers to opt out of their content being used to train Apple’s generative foundation models. Web publishers have fine-grained controls over which pages Applebot can see and how they are used while still appearing in search results within Siri and Spotlight.

Respect.

bitpush

When Apple inevitably partners with OpenAI or Anthropic, which by their own definition aren't doing "ethical crawling", I wonder how I should be reading that.

brookst

I mean they also buy from companies with less ethical supply chain practices than their own. I don’t know that I need to feel anything about that beyond recognizing there’s a big difference between exercising good practices and refusing to deal with anyone who does less.

jhickok

They already partnered with OpenAI, right?

DSingularity

To use their APIs at a discount, so what?

wmf

In theory Apple could provide their training data to be used by OpenAI/Anthropic.

bitpush

It isn't "apple proprietary" data to give it to OpenAI.

Also the bigger problem is, you can't train a good model with smaller data. The model would be subpar.

fridder

Same way as the other parts of their supply chain I suppose.

bigyabai

"Good artists copy; great artists steal"

- Famous Dead Person

simonw

One problem with Apple's approach here is that they were scraping the web for training data long before they published the details of their activities and told people how to exclude them using robots.txt.

dijit

Uncharitable.

Robots.txt is already the understood mechanism for getting robots to avoid scraping a website.

simonw

People often use specific user agents in there, which is hard if you don't know what the user agents are in advance!
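(Apple has since documented the agent strings: the regular Applebot agent covers search crawling, and a separate Applebot-Extended agent controls use in model training. So a site that wants to stay in Siri/Spotlight search but opt out of training can do roughly the following; check Apple's Applebot documentation for the exact strings.)

    # Stay crawlable for Siri/Spotlight search results
    User-agent: Applebot
    Allow: /

    # Opt out of Apple's generative model training
    User-agent: Applebot-Extended
    Disallow: /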

astrange

> Using our web crawling strategy, we sourced pairs of images with corresponding alt-texts.

An issue for anti-AI people, as seen on Bluesky, is that they're often "insisting you write alt text for all images" people as well. But this is probably the main use for alt text at this point, so they're essentially doing annotation work for free.

simonw

I think it is entirely morally consistent to provide alt text for accessibility even if you personally dislike it being used to train AI models.

astrange

It's fine if you want to, but I think they should consider that basically nobody is reading it. If it was important for society, photo apps would prompt you to embed it in the image like EXIF.

Computer vision is getting good enough to generate it; it has to be, because real-world objects don't have alt text.

bigyabai

Gotta polish that fig-leaf to hide Apple's real stance towards user privacy: arstechnica.com/tech-policy/2023/12/apple-admits-to-secretly-giving-governments-push-notification-data/

> Apple has since confirmed in a statement provided to Ars that the US federal government "prohibited" the company "from sharing any information,"

brookst

I mean if you throw out all contrary examples, I suppose you are left with the simple lack of nuance you want to believe

ApolloVonZ

Despite all the “Apple is evil” or “Apple is behind” (because they don’t do evil) takes, what they’ve made with the Foundation Models framework is great. They built a system within the Swift language that lets you specify structured data models (structs), use them like any other model in a modern programming language, and actually get generated data back in that format. With a lot of other AIs you might get back well-formatted JSON after a carefully crafted request, but you can never be sure and need to implement a bunch of safeguards.

Obviously it’s still the beginning, and other tools might do something similar, but as an iOS developer that makes using AI so much simpler. Especially with the bridge to external AIs that still lets you map back to the type-safe structured Swift models. I try not to be a hater; every bit of progress, even if slow or underwhelming at first, might lead to improvements everywhere else.
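Roughly, the pattern looks like this (a minimal sketch based on the WWDC material; the TripIdea struct is my own example, and exact API details may differ in the shipping beta):

    import FoundationModels

    // The struct the model fills in; @Generable drives guided generation.
    @Generable
    struct TripIdea {
        @Guide(description: "A short, catchy title")
        var title: String
        var activities: [String]
    }

    func suggestTrip() async throws {
        let session = LanguageModelSession()
        let response = try await session.respond(
            to: "Suggest a weekend trip to Kyoto",
            generating: TripIdea.self
        )
        // response.content is already a typed TripIdea;
        // no JSON parsing or safeguards needed.
        print(response.content.title, response.content.activities)
    }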

0x457

Guided generation is called "Structured Output" by other providers?

The partially-generated-content streaming thing is great, though, and I haven't seen it anywhere else.

ApolloVonZ

Sorry if I didn’t use the correct terms; I haven’t caught up on all the terminology, coming from my native language. ;) But yes, I agree: the fact that the different properties of the generated model can be completed asynchronously by streaming the model’s output is quite unique. Apple/Swift was late with async/await, but putting it all together, it probably plays well with asynchronous and reactive coding.
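If I remember the WWDC session right, it looks roughly like this (a sketch only, reusing the TripIdea struct from the earlier example; the PartiallyGenerated naming may differ in the beta):

    // Each element of the stream is a TripIdea.PartiallyGenerated:
    // all properties are optional and fill in as tokens arrive,
    // so the UI can render each field as soon as it completes.
    func streamTrip() async throws {
        let session = LanguageModelSession()
        let stream = session.streamResponse(
            to: "Suggest a weekend trip to Kyoto",
            generating: TripIdea.self
        )
        for try await partial in stream {
            if let title = partial.title { print("title so far:", title) }
        }
    }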

astrange

An issue with this is that model quality can get a lot lower when you force it into a structured form, because it's out of distribution for the model.

(I'm pretty sure this is actually what drove Microsoft Sydney insane.)

Reasoning models can do better at this, because they can write out a good freeform output and then do another pass to transform it.

mittermayr

All I can say is, I asked Siri today (verbatim): What is 75 degrees fahrenheit in celsius, and what is 85 degrees in fahrenheit — and it offered a web search about fahrenheit. The "and" completely disabled its most basic ability to do metric conversions.

So, it's nice to see Apple is doing research and talking about it, but we're out here waiting, still waiting, for anything useful to make of it all on our thousand-dollar devices that literally connect us to the world and contain our entire life data. It's what I would've expected from one of the most valuable companies in the world.

al_borland

You asked 2 questions in a system made for 1 question at a time. Split these up and Siri answers them fine. You’re holding it wrong.

manoweb

Your usage of Siri today (probably on an old version of iOS) frankly has nothing to do with the article we are discussing. Sorry to say this, but it is going to take time. Comparing the performance of a ChatGPT running in a big data center with a model running locally on a phone... give it a few years.

ninkendo

People have been giving Siri a few years for a decade now. Siri used to run in a data center (and still does for older hardware and things like HomePods) and it has never supported compound queries.

Siri needs to be taken out back and shot. The problem with “upgrading” it is the pull to maintain backwards compatibility for every little thing Siri did, which leads them to try to incorporate existing Siri functionality (and existing Siri engineers) alongside any LLM. That leads to disaster: none of it works, and it all just got slower. They’ve been trying to do an LLM-assisted Siri for years now, and it’s the most public-facing disaster the company has had in a while. Time to start over.

losvedir

> What is 75 degrees fahrenheit in celsius, and what is 85 degrees in fahrenheit

Err, what? As a native English speaker human that's a pretty confusing question to me, too!

hu3

First, most of the English-speaking world is not made up of native speakers.

"As of 2022, there were about 400 million native speakers of English. Including people who speak English as a second language, estimates of the total number of Anglophones vary from 1.5 billion to 2 billion."

Second, all the popular models I tested handled that query well, including Gemini on Android (aka "OK Google"); Apple's was the exception.

https://en.m.wikipedia.org/wiki/English-speaking_world

manoweb

I am not sure why you went off on the subject of the English-speaking world. Anyway, the models you tested with that query (which I am not sure is a good benchmark): are they local models running on a wireless device, or do they run in a datacenter and only convey the text back and forth?

basisword

>> What is 75 degrees fahrenheit in celsius, and what is 85 degrees in fahrenheit

Probably wouldn't have made a difference but the second half of that statement isn't exactly clear. 85 degrees what?

I also think when you're chaining these two separate calculations together you get a problem when it comes to displaying the results.

vosper

That exact phrase "What is 75 degrees fahrenheit in celsius, and what is 85 degrees in fahrenheit" given to ChatGPT produces the correct result (it infers that the second degrees must be Celsius) and ChatGPT gives me a nicely laid out formula for the math of the conversion.

So yeah, Apple is way behind on this stuff.

seydor

the fact is that gemini responds with this: 75 degrees Fahrenheit is 23.89 degrees Celsius, and 85 degrees Celsius is 185.00 degrees Fahrenheit.
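Both conversions check out: C = (F − 32) × 5/9 = (75 − 32) × 5/9 ≈ 23.89, and F = C × 9/5 + 32 = 85 × 9/5 + 32 = 185.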

bronco21016

Meanwhile users have been conditioned to expect a system that understands multiple queries and answers them appropriately.

JKCalhoun

True. But for most of us, only in the past year. I have a few friends/relatives who have still never conversed with an LLM.

jonplackett

Every time I see a paper from Apple I just feel like, OK so why isn’t my iPhone actually doing any of this yet?

Why give this to developers if you haven’t been able to get Siri to use it yet? Does it not work or something? I guess we’ll find out when devs start trying to make stuff

imoverclocked

> why isn’t my iPhone actually doing any of this yet?

What exactly are you referring to? Models do run on iPhone and there are features that take advantage of it, today.

jonplackett

None of those features are in any way interesting though. Image playground is a joke, Siri is a joke, that generative emoji thing is a joke.

The AI stuff with photography sure, but that’s more like machine learning.

The photo touch up thing is… useable? Sometimes?

What is it you’ve been so impressed with?

astrange

The main features are text summarization, search, and writing tools.

totetsu

Apple silicon unified memory is amazing for running things like ollama. You don’t have to wait for them to release their own applications.

bayindirh

> why isn’t my iPhone actually doing any of this yet?

Probably Apple is trying to distill the models so they can run on your phone locally. Remember, most, if not all, of Siri is running on your device. There's no round trip whatsoever for voice processing.

Also, for larger models, there will be throwaway VMs per request, so building that infra takes time.

jonplackett

It says there’s 2 models - one local. It’s already released to app developers to use locally I think (it was in the keynote for WWDC).

geoffpado

The model now available to developers (in beta, not in released versions of iOS) is the same model that powers stuff like the much-maligned notification summaries from iOS 18. So your phone does have features that are powered by this stuff… you may just not be particularly overwhelmed by those features.

mensetmanusman

Apple can’t afford to run models; there are too many iPhones and not enough data centers.

Running on device is also risky because cycle limitations will make it seem dumb in comparison.

chevman

Siri is literally a joke!

My son (he's 11 years old now and fairly skilled with all the main AI tools, eg chatgpt, gemini, etc) and I retry her every month or so, and this past time we just laughed. Can't handle basic questions - hears the question wrong, starts, stops, takes us to some random ass webpage, etc, etc.

"She's so jacked up!" he said.

Apple needs to get this under control and figured out, stat!

frankfrank13

AFAICT this is the first commercial model trying to be marketed as responsibly sourced. Love it, but it also seems like the noise around this issue has died down. Is this for legal cover? Or more Apple privacy marketing?

Daedren

Stockholders are suing them over Apple Intelligence. Definitely legal cover.

woah

"Sorry we are hilariously far behind everyone else in the industry after having made a huge amount of fanfare about 'Apple Intelligence' for years. It's just that we have shot ourselves in the knee to satisfy Bluesky posters and the NY Time's lawyers"

msgodel

Do people have an issue with the smollm datasets? I guess it isn't really commercial.

bitpush

The more I think about Apple, the more I realize that Apple is so far behind. While other companies are pushing the envelope (OpenAI, Anthropic, Google ..) Apple's ambitions seem much much smaller.

And this is after they made very big claims with Apple Intelligence last year, when they had everyone fooled.

This is like watching a train-wreck in slow motion.

halJordan

Apple's ambitions are actually bigger than OpenAI's or Anthropic's. Only Google's ambition (surprise, surprise) is similar. Apple fundamentally wants the LLM to be a tool. It doesn't want the LLM to be the product.

dialup_sounds

Apple is only "behind" if you think they're in the same race. They haven't shown any interest in developing frontier models or taking on the enormous costs of doing so.

outworlder

Only if you think they _must_ compete with large models on the internet.

Uehreka

I wouldn’t go as far as GP, but yes, absolutely, they must compete with large models on the internet. Customers are now used to being able to ask a computer a question and get something better than “I just ran a web search for what you said, here are the uncurated, unsummarized results”.

Yes, this is in fact what people want. Apple is the biggest company in the world (don’t quibble this y’all, you know what I mean) and should be able to deliver this experience. And sure, if they could do it on device that would be aces, but that’s not an item on the menu, and customers seem fine with web-based things like ChatGPT for now. To act like Apple is doing anything other than fumbling right now is cope.

avianlyric

Erm, have you heard of these things called apps? It’s this magical concept where other companies can run code on your iPhone and deliver all the features you just talked about.

I don’t really understand why Apple has to provide a ChatGPT product baked directly into their software. Why on earth would Apple want to get involved in the race to the bottom for the cheapest LLMs? Apple doesn’t produce commodity products; they package commodities into something much more unique that gives them a real competitive advantage, so people are willing to pay a premium for Apple’s product rather than just buying the cheapest commodity equivalent.

There is no point in Apple just delivering an LLM. OpenAI, Anthropic, Google etc. already do that, and Apple is never going to get into the pay-per-call API service they all offer. Delivering AI experiences using on-device-only compute is something OpenAI, Anthropic and Google can’t build, which means Apple can easily charge a premium for it, assuming they build it.

const_cast

This use case is run of the mill for someone like Google, who used to store and show you your location forever, but it's not Apple's style.

It's hard to be like "uhhh privacy" when you send all requests to a remote server where they're stored in clear text for god knows how long.

As of right now, there is no way to run big LLMs in a privacy preserving manner. It just doesn't exist. You can't E2EE encrypt these services, because the compute is done on the server, so it has to decrypt it.

There are some services which will randomize your instance and things like that, but that kind of defeats a big part of what makes LLMs useful: context. Until we can run these models locally, there's no way to get around the privacy-nightmare aspects of it.

GeekyBear

> I wouldn’t go as far as GP, but yes, absolutely, they must compete with large models on the internet

The people running large models want to charge a monthly fee for that.

I'm fine with having a free model that runs on device without slurping up my data.

specialist

I'm fine with Apple chilling on the sidelines for a bit.

JKCalhoun

I see it as the opposite. Apple is absolutely positioned to own "chat". I am not worried; they'll sort things out soon, and eventually we'll have an LLM integrated into the iPhone, call it Siri or otherwise.

With my history encrypted in the cloud, and the trust that Apple has built around privacy ... I think they're going to come out alright.

martinald

But they have de facto admitted the failure of most of that strategy, if the rumours are true that they are switching much harder to OpenAI/Anthropic for upcoming LLM products.

This is the first time in 10+ years I've seen Apple so far on the back foot. They usually launch category-defining products that are so far ahead of the competition that even by the time they work through the 'drawbacks' in the first versions, they are still far ahead. OS X, the iPhone and the iPad were all like that. They are still way ahead of the competition on Apple Silicon as well.

I am not very confident in their on-device strategy, at least in the short to medium term. Nearly all their devices do not have enough RAM, and even if they did, SLMs are very far behind what users "know" as AI; even the free ChatGPT plan is light years ahead of the best 3B-param on-device model. Maybe there will be huge efficiency gains.

Private cloud is used AFAIK for virtually zero use cases so far. Perhaps it will be more interesting longer term, but it is not very useful at the moment given the lack of a suitable (i.e. non-Chinese), large (>500B param) model. They would also struggle to scale it if they roll it out to billions of iOS devices, especially if they ship features that use a lot of tokens.

Then they've got OpenAI/Gemini/Anthropic via API. But this completely goes against all their private cloud messaging and gives those providers enormous potential control over Apple, which is not a position Apple usually finds itself in. It will also be extremely expensive to pay someone per token for OS-level features across billions of iOS/Mac devices, and unless they can recoup this via some sort of subscription, it will hit services margins badly.

To me it's clear the future of the "OS" is going to involve a lot of agentic tool calling. That requires good models with large context windows and a lot of tokens; this will definitely not work on device. Indeed, this is exactly what the Siri vapourware demo was.

I'm sure they can potentially get to a great UX (though these missteps are making me question this). But having such a core feature outsourced does not leave them in a good position.

alwillis

> Private cloud is used AFAIK for virtually zero use cases so far.

Applications using Apple's foundation models can seamlessly switch from on-device models to Private Cloud Compute.

Research is already showing the use of LLMs for people's most intimate relationship and medical issues. The usual suspects will try to monetize that, which is why Private Cloud Compute is a thing from the jump.

> Then they've got OpenAI/Gemini/Anthropic via API. But this completely goes against all their private cloud messaging

Using ChatGPT via Siri today, no personally identifying information is shared with OpenAI and those prompts aren't used for training. I suspect Apple would want something similar for Google, Anthropic, etc.

At some point, there will be the inevitable enshittification of AI platforms to recoup the billions VCs have invested, which means ads, which won't happen to Apple users using foundation-model-based apps.

> Nearly all their devices do not have enough RAM and

Every Apple Silicon Mac (going back to the M1 in 2020) can run Apple Intelligence. 8 GB RAM is all they need. Every iPhone 15 Pro, Pro Max and the entire 16 line can all run Apple Intelligence.

Flagship iPhone 17 models are expected to come with 12 GB of RAM and all current Mac models come with at least 16 GB.

Apple sells over 200 million iPhones in a given year.

There's no doubt Apple stumbled out of the gate regarding AI; these are early days. They can't be counted out.

robotresearcher

When the Blackberry ruled the Earth, people asked 'Why doesn't Apple do a smartphone?'.

visarga

The paper was a very nice read, and they did many creative things. It's a pity this model won't be directly accessible, only integrated in some apps.

alwillis

> It's a pity this model won't be directly accessible, only integrated in some apps.

It's already accessible using Shortcuts, even to non-developers: see "iOS 26 Shortcuts + Apple Intelligence is POWERFUL" (YouTube) [1].

[1]: https://youtu.be/Msde-lZwOxg?si=KJqTgtWjpdNDxneh

jiehong

Looks nice. I just wish they’d improve the models behind dictation on both iPhone and Mac to have better accuracy and on-the-fly multi-language transcription.

JacobJack

I'd really like to be able to use this 3B model on my little 4GB GPU card! It looks very capable for a reasonable weight. Maybe one day on Hugging Face.

sneilan1

I feel like this is the most exciting AI news on HN today. I really hope Apple shows that small models can be just as capable as the bigger ones. Maybe they have the Perplexity people working on these small models.

JKCalhoun

I wonder if we'll see these models running on the phone (aiPhone) hardware in the future.

alwillis

As someone mentioned, this model is available in the beta version of iOS 26; it's also part of macOS 26, iPadOS 26 and visionOS 26. Anyone with a free developer account can install the developer betas; the public beta is expected next week.

There's a WWDC video "Meet the Foundation Models Framework" [1].

[1]: https://developer.apple.com/videos/play/wwdc2025/286

floam

It does. You can use it directly on iOS 26 beta: without writing a line of code, I can toy with the on-device model through Shortcuts on my 16 Pro. It’s not meant to be a general-purpose chatbot… but it can work as a general-purpose chatbot in airplane mode, which is a novel experience.

https://share.icloud.com/photos/018AYAPEm06ALXciiJAsLGyuA

https://share.icloud.com/photos/0f9IzuYQwmhLIcUIhIuDiudFw

The above took like 3 seconds to generate. That little box that says On-device can be flipped between On-device, Private Cloud Compute, and ChatGPT.

Their LLM uses the ANE, sipping battery, and leaves the GPU available.

JKCalhoun

Wild to see what improvements might come if there is additional hardware support in future Apple Silicon chips.

ivape

What’s the cost of pointing it to Private Cloud Compute? It can’t be free, can it?

bigyabai

It would be interesting to see the tok/s comparison between the ANE and GPU for inference. I bet these small models are a lot friendlier than the 7B/12B models that technically fit on a phone but won't accelerate well without a GPU.

gleenn

I thought the big difference between the GPU and ANE was that you couldn't use the ANE to train. Does the GPU actually perform faster during inference as well? Is that because the ANE are designed more for efficiency or is there another bigger reason?

kingnothing

> The new Foundation Models framework gives access to developers to start creating their own reliable, production-quality generative AI features with the approximately 3B parameter on-device language model. The ∼3B language foundation model at the core of Apple Intelligence excels at a diverse range of text tasks like summarization, entity extraction, text understanding, refinement, short dialog, generating creative content, and more. While we have specialized our on-device model for these tasks, it is not designed to be a chatbot for general world knowledge. We encourage app developers to use this framework to design helpful features tailored to their apps
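Entity extraction, for example, maps naturally onto guided generation. A hypothetical sketch (the Receipt struct and instructions are my own; the API shape follows Apple's WWDC examples, but treat the details as illustrative):

    import FoundationModels

    // Hypothetical: pull structured fields out of free-form text.
    @Generable
    struct Receipt {
        var merchant: String
        var total: Double
        var date: String
    }

    func extractReceipt(from text: String) async throws -> Receipt {
        let session = LanguageModelSession(
            instructions: "Extract the receipt details from the user's text."
        )
        let response = try await session.respond(to: text, generating: Receipt.self)
        return response.content
    }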

Zee2

> a ∼3B-parameter on-device model

ThomasBb

There are even already some local AFM-to-OpenAI API bridge projects on GitHub that let you point basically any OpenAI-compatible client at the local models. Super nice for basic summarisation and completions.
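Which means any plain HTTP client can talk to it; something like this (the port, path, and model name here are made up and depend entirely on the bridge project used):

    import Foundation

    func askLocalModel(_ prompt: String) async throws -> String {
        // Hypothetical local bridge endpoint exposing the OpenAI chat API shape.
        var request = URLRequest(url: URL(string: "http://localhost:11435/v1/chat/completions")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONSerialization.data(withJSONObject: [
            "model": "apple-on-device",
            "messages": [["role": "user", "content": prompt]]
        ] as [String: Any])
        let (data, _) = try await URLSession.shared.data(for: request)
        return String(data: data, encoding: .utf8) ?? ""
    }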

JKCalhoun

I was worried "device" was a Mac mini, not an iPhone. (I already have been running models on my MacBook Pro.)