There's a different thread if you want to wax about Liquid Glass etc. [1], but there are some really interesting new improvements here for Apple developers in Xcode 26.
The new foundation frameworks around generative language models look very Swift-y and nice for Apple developers. And it's local and on-device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
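For anyone curious what "Swift-y" means here: the on-device model is exposed through a session-style async API. A minimal sketch, assuming the type names shown in the session videos (LanguageModelSession, SystemLanguageModel); the travel prompt is just an illustration:

```swift
import FoundationModels

// Sketch: generate a short itinerary with the on-device model.
// Requires an Apple Intelligence-capable device on the new OS.
func suggestItinerary(for city: String) async throws -> String {
    // Check availability first; older devices report unavailable.
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model unavailable; fall back to a server model."
    }

    // A session holds instructions and conversation state.
    let session = LanguageModelSession(
        instructions: "You are a travel assistant. Keep answers short."
    )

    // The call runs entirely on device; no network round trip.
    let response = try await session.respond(
        to: "Suggest a one-day itinerary for \(city).")
    return response.content
}
```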
The other big thing is vibe-coding coming natively to Xcode through ChatGPT (and other) model integration. Two things that make this look like a nice quality-of-life improvement for Apple developers are the way it tracks iterative changes with the model so you can roll back easily, and the way it gives the model context on your codebase. It seems to be a big improvement on the previous, very limited GPT integration with Xcode, and the first time Apple developers have had a native version of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done which is to not be first into a space, but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!
I hoped for a moment that "Containerization Framework" meant that macOS itself would be getting containers. Running Linux containers and VMs on macOS via virtualization is already pretty easy and has many good options. If you're willing to use proprietary applications to do this, OrbStack is the slickest, but Lima/Colima is fine, and Podman Desktop and Rancher Desktop work well, too.
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers: macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.
Some 15 years ago, a friend of mine said to me, "Mark my words: Apple will eventually merge OS X with iOS on the iPad." With every passing keynote since then, Apple has seemed to inch toward that prophecy, and today the iPad has become practically a MacBook Air with a touch screen. Unless you're a video editor, a programmer who needs resources to compile, or a 3D artist, I don't see why you'd need anything other than an iPad.
I'm still a little disappointed. It seems those models are only available on the iPhone 16 series and the iPhone 15 Pro. According to Mixpanel, that's only 25% of all iOS devices, and even fewer when taking iPadOS into account. You will still have to use some other online model if you want to cover all iOS 26 users, because I doubt Apple will approve your app if it only works on those Apple Intelligence devices.
Why should I bother then as a third-party developer? Sure, it's nice not having API costs for 25% of users, but those models are very small, roughly the equivalent of Qwen2.5 4B or so, and their online models are supposedly equivalent to Llama Scout. Those models are already very cheap online, so why maintain a more complicated codebase? Maybe in two years, once more iOS users have replaced their phones, but I'm unlikely to use this for iOS development in the next year.
This would be more interesting if all iOS 26 devices at least had access to their server models.
The video on Containerization.framework, and the Container tool, is live [0].
It looks like each container will run in its own VM, that will boot into a custom, lightweight init called vminitd that is written in Swift. No information on what Linux kernel they're using, or whether these VMs are going to be ARM only or also Intel, but I haven't really dug in yet [1].
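For context on the one-VM-per-container design: Apple already ships Virtualization.framework, which can boot a Linux kernel directly into a custom init without firmware or a disk image, which is roughly the layer something like vminitd sits on top of. A minimal sketch, with placeholder kernel/initrd paths and a hypothetical init path:

```swift
import Virtualization

// Sketch: boot a minimal Linux guest the way a per-container VM might.
// The kernel and initramfs paths here are placeholders.
func makeLinuxVM() throws -> VZVirtualMachine {
    let config = VZVirtualMachineConfiguration()
    config.cpuCount = 1
    config.memorySize = 512 * 1024 * 1024  // keep the guest lightweight

    // Boot a kernel directly; the command line can point PID 1
    // at a custom init binary inside the initramfs.
    let bootLoader = VZLinuxBootLoader(
        kernelURL: URL(fileURLWithPath: "/path/to/vmlinux"))
    bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
    bootLoader.commandLine = "console=hvc0 init=/sbin/vminitd"
    config.bootLoader = bootLoader

    // A virtio console is enough for a minimal guest.
    config.serialPorts = [VZVirtioConsoleDeviceSerialPortConfiguration()]

    try config.validate()
    return VZVirtualMachine(configuration: config)
}
```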
Looks like there isn't much to take away from this; here are a few bullet points:
- Apple Intelligence models primarily run on-device, potentially reducing app bundle sizes and the need for trivial API calls.
- Apple's new containerization framework is based on virtual machines (VMs), not a true 'native' kernel-level integration like WSL1.
- Spotlight on macOS is widely perceived as slow, unreliable, and in significant need of improvement for basic search functionality.
- iPadOS and macOS are converging in user experience and features (e.g., windowing), but a complete merger is unlikely due to Apple's business model, particularly App Store control and sales strategy.
- The new 'Liquid Glass' UI design evokes older aesthetics like Windows Aero and earlier Aqua/skeuomorphism, indicating a shift away from flat design.
The iPad update is going to encourage a new wave of folks trying to use iPads for general programming. I'm curious how it goes this time around. I'm cautiously optimistic.
I watched the video and it seems they are statically linking against musl to build their lightweight VM layer. I guess the container app itself might use glibc, but will the musl build for the VM itself cause downstream performance issues? I'm no expert in virtualization, so I can't tell whether this should be a concern.
I'm cautious. Apple's history with developer tools is hit or miss. And while Xcode integrating ChatGPT sounds helpful in theory, I wonder how smooth that experience really is.
> Every Apple Developer Program membership includes 200GB of Apple hosting capacity for the App Store. Apple-Hosted Background Assets can be submitted separately from an app build.
Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
Is it a way to give developers a reason to limit distribution to only the official App Store, or will this be offered regardless of what store the app is downloaded from?
Excited to try these out and see benchmarks. Expectations for a small on-device local model should be pretty low, but let's see if Apple cooked up any magic here.
I like that there's support for locally run models in Xcode.
I wish I thought that the Game Porting Toolkit 3 would make a difference, but I think Apple's going to have to incentivize game studios to use it. And they should; Apple Silicon is good enough to run a lot of games.
... when are they going to have the courage to release MacOS Bakersfield? C'mon. Do it. You're gonna tell me California's all zingers? Nah. We know better.
I sure hope they provide an accessibility option to turn down translucency and improve contrast, or this UI is a non-starter for me. Without having used it, this new UI looks like it may favor design over usability. Why don't they do something more novel and let users tweak the interface to their liking?
I hope they don't turn Liquid Glass into Aqua... which I hated. The only time I started to like the iOS interface was iOS 7, with its flat design. I hope they don't turn this into an old, skeuomorphic, Aqua-like UI over time.
Does the privacy preserving aspect of this mean that Apple Intelligence can be invoked within an app, but the results provided by Apple Intelligence are not accessible to the app to be transmitted to their server or utilized in other ways? Or is the privacy preservation handled in a different way?
After reading the book "Apple in China", it’s hilarious to observe the contrast between Apple as a ruthless, amoral capitalist corporation behind the scenes and these WWDC presentations...
Looks like software UI design – just like fashion, film, architecture and many other fields I'm sure – has now officially entered the "nothing new under the sun" / "let's recycle ideas from xx years ago" stage.
To be clear, this is just an observation, not a judgment of that change or the quality of the design by itself. I was getting similar vibes from the recent announcement of design changes in Android.
HN should have a conference-findings thread for something like WWDC, with priority impact rankings
P4: Foundation models will get newbies involved, but aren't ready to displace other model providers.
P4: New containers are ergonomic when sub-second init is required, but otherwise no virtualization news.
P2: Concurrency is now visible in Instruments and debuggable, and high-performance tracing avoids sampling errors; are we finally done with our 4+ years of black-box guesswork? (Not to mention concurrency backtracking to main-thread-by-default as a solution.)
P5: UI look-and-feel changes across all platforms conceal the fact that there are very few new APIs.
Low content overall: scan the platforms and you see only L&F, App Intents, and widgets. Is that really all (thus far)? It's quite concerning.
Also low quality: online links point nowhere, and half-baked technologies are filling presentation slots: Swift+Java interop is nowhere near usable, other topics just point to API documentation, and "code-along" sessions restate other sessions.
Beware the new upgrade forcing function: adding to the memory requirements of AI, the new concurrency tracing seems to require M4+ level device support.
> This year, App Intents gains support for visual intelligence. This enables apps to provide visual search results within the visual intelligence experience, allowing users to go directly into the app from those results.
How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like installed apps when searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name.
Spotlight is unbelievably bad and has been unbelievably bad for quite a few years. It seems to return things slowly, in erratic order (the same search does not consistently give the same results) and unreliably (items that are definitely there regularly fail to appear in search results).
Apple announces Foundation Models and Containerization frameworks, etc
(apple.com) | 843 points by thm | 9 June 2025 | 488 comments
1: https://news.ycombinator.com/item?id=44226612
--
1: https://macoscontainers.org/
2: https://github.com/macOScontainers
3: https://github.com/Okerew/osxiec
"Foundation Models" is an Apple product name for a framework that taps into a bunch of Apple's on-device AI models.
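Beyond plain text prompts, the sessions also showed guided generation: the model can be constrained to fill in a Swift type rather than returning free-form prose. A hedged sketch, assuming the @Generable/@Guide macros and respond(to:generating:) overload shown at WWDC; the Landmark type is just an illustration:

```swift
import FoundationModels

// Sketch of guided generation: the model is constrained to
// produce this struct instead of free-form text.
@Generable
struct Landmark {
    @Guide(description: "Name of the landmark")
    var name: String
    @Guide(description: "One-sentence description")
    var summary: String
}

func topLandmark(in city: String) async throws -> Landmark {
    let session = LanguageModelSession()
    // The framework handles constraining and decoding the output.
    let response = try await session.respond(
        to: "Pick the most famous landmark in \(city).",
        generating: Landmark.self)
    return response.content
}
```

The appeal is that you get a typed value back instead of parsing model output yourself, which is a big part of why the API reads as "Swift-y."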
[0] https://developer.apple.com/videos/play/wwdc2025/346
[1] https://github.com/apple/containerization
Full summary (https://extraakt.com/extraakts/apple-intelligence-macos-ui-o...)
This doesn’t sound impressive, it sounds insane.
Containerization is a Swift package for running Linux containers on macOS - https://news.ycombinator.com/item?id=44229348 - June 2025 (158 comments)
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)
See also:
- https://edu.chainguard.dev/chainguard/chainguard-images/abou...
- https://andygrove.io/2020/05/why-musl-extremely-slow/
Ultimately, UI widgets are rooted in reality (switches, knobs, doohickeys), and liquid glass is Salvador Dalí-esque.
Imagine driving a car and the gear shifter was made of liquid glass… people would hit more grannies than a self-driving Tesla.
I don't use macOS, but I had just kinda assumed it would, by virtue of the shared Unix-y background with Linux.
I'm confused.
https://katacontainers.io/
https://developer.apple.com/documentation/hypervisor
Edit: surprised Apple is dumping resources into gaming; maybe they're playing the long game here?
https://en.wikipedia.org/wiki/Aqua_%28user_interface%29