Machine learning
Core ML
Updates to Core ML help you optimize and run advanced generative machine learning and AI models on device faster and more efficiently. Core ML Tools offers more granular and composable weight-compression techniques to help you bring your large language models and diffusion models to Apple silicon. Models can now hold multiple functions and efficiently manage state, enabling more flexible and efficient execution of large language models and adapters. The Core ML framework also adds a new MLTensor type that provides an efficient, simple, and familiar API for expressing operations on multi-dimensional arrays. And Core ML performance reports in Xcode have been updated to provide more insight into the support status and estimated cost of each operation in your model.
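As a rough illustration of the MLTensor idea, the sketch below builds two small tensors and combines them with familiar array-style operations. It is a minimal sketch, not the definitive API surface: the initializer, `matmul`, and `shapedArray(of:)` names are assumptions based on the framework as described, and availability is assumed to require the OS releases that ship this Core ML update.

```swift
import CoreML

// Hypothetical sketch: multiply two 2x2 tensors with MLTensor.
// Assumes MLTensor(shape:scalars:scalarType:), matmul(_:), and the
// async shapedArray(of:) accessor are available on current OS releases.
func tensorExample() async {
    let a = MLTensor(shape: [2, 2], scalars: [1, 2, 3, 4], scalarType: Float.self)
    let b = MLTensor(shape: [2, 2], scalars: [5, 6, 7, 8], scalarType: Float.self)

    // Express the computation with a familiar, NumPy-like style.
    let c = a.matmul(b)

    // Materialize the result on the CPU to read the scalars back.
    let result = await c.shapedArray(of: Float.self)
    print(result.scalars)
}
```

The appeal is that operations on multi-dimensional arrays can be written directly in Swift, rather than being expressed as a separate compiled model.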
Create ML
Object tracking, the first spatial computing template, is designed to help you track real-world objects in your visionOS app. You can also enhance your customized model-training workflow with the new data preview functionality in the Create ML app, and use new Swift APIs from Create ML Components to create time-series models directly within your app.
Machine learning APIs
The new Translation framework allows you to translate text across different languages in your app. The Vision framework API has been redesigned to leverage modern Swift features, and also supports two new features: image aesthetics and holistic body pose. And the Natural Language framework offers extended language support with multilingual contextual embedding.
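To make the Translation framework's role concrete, here is a hedged SwiftUI sketch that translates a string from German to English. The general shape (a session configured with source and target languages, driven from a view modifier) follows the framework's described design, but the exact modifier and type names used here, such as `translationTask` and `TranslationSession.Configuration`, should be checked against the current documentation before use.

```swift
import SwiftUI
import Translation

// Sketch only: translate a fixed German string into English.
// Assumes the translationTask(_:action:) modifier and
// TranslationSession.Configuration are available as described.
struct TranslatedGreeting: View {
    @State private var translated = ""
    @State private var configuration: TranslationSession.Configuration?

    var body: some View {
        Text(translated.isEmpty ? "Hallo Welt" : translated)
            .translationTask(configuration) { session in
                // Runs when the configuration is set; performs the translation.
                if let response = try? await session.translate("Hallo Welt") {
                    translated = response.targetText
                }
            }
            .onAppear {
                configuration = TranslationSession.Configuration(
                    source: Locale.Language(identifier: "de"),
                    target: Locale.Language(identifier: "en"))
            }
    }
}
```

Because translation runs through a system-provided session, apps get on-device language support without bundling their own models.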
Learn about machine learning
Watch the latest videos