Fatbobman’s Swift Weekly #031 | Apple Uses M4 to Showcase Commitment to Embracing AI

fatbobman (东坡肘子)
6 min read · May 13, 2024

--

Weekly Comment

On May 7, Apple finally updated the iPad series after a year and a half, with the highlight being the new iPad Pro equipped with the latest M4 chip. According to leaked benchmark data online, the M4 significantly outperforms the M2 and even M3 chips.

Apple claims that the M4 chip delivers a significant leap in machine learning performance, particularly in its Neural Engine (Apple's NPU). Fully showcasing these AI capabilities will likely have to wait for the new system and APIs unveiled at WWDC 2024. By debuting the latest M-series chip on the iPad Pro rather than the Mac, Apple breaks with tradition and clearly signals its determination to outpace other manufacturers in the AI era.

With the introduction of the M4 chip, I am full of anticipation for Apple’s potential Mac product line this year. All signs point to Apple unveiling several AI-related updates, new features, and services at WWDC 2024. As a developer in the Apple ecosystem, I not only look forward to experiencing the convenience brought by AI during development but also hope Apple will introduce more secure and user-friendly APIs to help developers provide excellent AI services in their apps.

Given Apple’s consistent emphasis on privacy, it is expected that most AI functionalities will run locally on devices. This not only poses higher demands on the device’s AI capabilities but also presents a significant challenge in terms of energy consumption. After all, users do not want to see a significant reduction in battery life after updating to a new system. I am eager to see how Apple balances AI performance, energy consumption, privacy, development convenience, and user experience.

Although generative AI is currently experiencing a surge in popularity, and there are continuous reports of Apple’s collaborations with top generative AI service providers, I firmly believe that everyday AI functions should primarily operate on local devices, using smaller models to serve users in an almost imperceptible manner. In the age of AI, energy-efficient hardware is crucial.

The iPad Pro equipped with the M4 chip will be more focused on scenarios that highlight its “Pro” level positioning. For most users, the new iPad Air, powered by the M2 chip and offering decent AI capabilities with a higher cost-effectiveness, may be a more suitable choice.

Whether or not you focus on AI, it is undeniable that AI will spark a new wave of device upgrades and application-experience innovation (at least at the marketing level). As developers, we must be prepared for this: even if we do not immediately offer or apply AI services, we should have a grasp of the basic workflows and application scenarios of AI development.

Don’t miss out on the latest updates and excellent articles about Swift, SwiftUI, Core Data, and SwiftData. Subscribe to Fatbobman’s Swift Weekly and receive weekly insights and valuable content directly to your inbox.

Originals

Mastering the containerRelativeFrame Modifier in SwiftUI

Fatbobman

The containerRelativeFrame modifier starts from the view it is applied to and searches up the view hierarchy for the nearest ancestor that appears in its list of supported containers. Based on the transformation rules set by the developer, it calculates the size provided by that container and uses the result as the proposed size for the view. In a sense, it can be seen as a special version of the frame modifier with customizable transformation rules. It simplifies some layout tasks that were previously difficult to achieve through conventional means.

This article will delve into the containerRelativeFrame modifier, covering its definition, layout rules, use cases, and relevant considerations. At the end of the article, we will also create a backward-compatible replica of containerRelativeFrame for older versions of SwiftUI, further enhancing our understanding of its functionalities.
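Apple does not publish the exact arithmetic, but for the count/span/spacing variant the sizing rule can plausibly be reconstructed as a grid calculation. The helper below is a hypothetical sketch of that rule, in the spirit of the backward-compatible replica the article builds (the function and parameter names are invented):

```swift
import Foundation

// Hypothetical reconstruction of the size rule behind
// containerRelativeFrame(_:count:span:spacing:): the container length
// is split into `count` cells separated by `spacing`, and the view is
// proposed a size covering `span` consecutive cells.
func relativeLength(container: Double,
                    count: Int,
                    span: Int = 1,
                    spacing: Double = 0) -> Double {
    let cell = (container - spacing * Double(count - 1)) / Double(count)
    return cell * Double(span) + spacing * Double(span - 1)
}
```

For a 330-point container divided into 3 columns with 15 points of spacing, each cell gets 100 points, and a view spanning 2 columns is proposed 215 points.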

Recent Selections

Swift’s native Clocks are very inefficient

Wade Tregaskis

In Swift concurrency programming, ContinuousClock and SuspendingClock are used to manage time and delay tasks. ContinuousClock keeps running regardless of system sleep or other factors, whereas SuspendingClock stops while the system is suspended, such as during sleep mode. Through testing, the author, Wade Tregaskis, found that although the absolute cost of these clocks is low (mostly sub-microsecond), they are inefficient relative to older timing APIs, and that overhead can become a serious performance bottleneck in code that reads the clock or measures time frequently.
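As a minimal illustration of the API under discussion, a ContinuousClock can be read before and after a piece of work to obtain a Duration (the workload here is a placeholder):

```swift
import Foundation

// ContinuousClock keeps advancing even across system sleep;
// SuspendingClock offers the same API surface but pauses while the
// machine is suspended.
let clock = ContinuousClock()
let start = clock.now

// Placeholder workload being timed.
var sum = 0
for i in 1...1_000 { sum += i }

let elapsed = clock.now - start   // a Swift `Duration`
print("work took \(elapsed), sum = \(sum)")
```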

This article sparked widespread discussion in the developer community, with many developers sharing their views and suggestions on Hacker News.

How to train your first machine learning model and run it inside your iOS app via CoreML

Felix Krause

In this article, Felix Krause meticulously explains how to implement your first machine learning model inside an iOS app using CoreML. The text thoroughly details the key stages of the entire process: data collection, data preparation and model training, model export, model integration, and the execution of the model on the device. Besides describing the specific technical steps for deploying a machine learning model within an app, the article also delves into relevant best practices and potential challenges encountered.

Turning AirPods into a Fitness Tracker to Fight Cancer

Richard Das

In this article, Richard Das explains how to utilize the motion sensor features of AirPods, combined with Core Motion, SwiftUI, and a bit of artificial intelligence technology, to develop an application that counts the number of push-ups performed. This project not only demonstrates the potential of technology to solve real-world problems but also reflects the personal satisfaction and fun involved in creating meaningful things.
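The article builds on Core Motion's headphone motion data (CMHeadphoneMotionManager reports the wearer's head attitude). As a rough sketch of the counting idea only, with thresholds and type names invented for illustration, a rep can be registered whenever the reported pitch dips below a "down" threshold and then recovers past an "up" one:

```swift
import Foundation

// Hypothetical rep counter in the spirit of the article: a push-up
// registers each time the head pitch (radians) dips below a low
// threshold and then rises back above a high one. The hysteresis gap
// between the two thresholds avoids double-counting jittery samples.
struct PushUpCounter {
    private(set) var reps = 0
    private var isDown = false

    mutating func ingest(pitch: Double) {
        if !isDown, pitch < -0.6 {       // head near the floor
            isDown = true
        } else if isDown, pitch > -0.2 { // pushed back up
            isDown = false
            reps += 1
        }
    }
}
```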

This article is a response by the author to the 100 Push-Ups a Day Challenge launched by Cancer Research UK in April 2024, an initiative aimed at raising public awareness about cancer.

New Tutorial of TCA — Building SyncUps

Point-Free

The Composable Architecture (TCA) is a powerful framework, and its latest version, 1.10, introduces efficient state-sharing tools. These tools enable seamless state sharing across an application's feature modules while also supporting persistence of that state, for example to user defaults or the file system, all while keeping features 100% testable. This tutorial provides a detailed guide to building a complex SwiftUI application named "SyncUps" from scratch, covering core principles such as modeling domains with value types, state-driven navigation, simplifying domain models, controlling dependencies, and thoroughly testing application logic.
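As a hedged sketch of what those tools look like in use, based on the tutorial's description (property names and the file URL are invented, and the snippet requires the ComposableArchitecture package, so it is illustrative rather than standalone), `@Shared` pairs a shared value with a persistence strategy:

```swift
import ComposableArchitecture

struct State: Equatable {
    // Persisted in user defaults under "syncUpCount"; every feature
    // holding the same @Shared reference observes updates.
    @Shared(.appStorage("syncUpCount")) var syncUpCount = 0

    // Persisted to a JSON file on disk (the URL is illustrative).
    @Shared(.fileStorage(.documentsDirectory.appending(component: "sync-ups.json")))
    var syncUpNames: [String] = []
}
```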

Migrating from CocoaPods to Tuist at Playtomic

Mohammadreza Koohkan

As the Playtomic project scaled up, the existing CocoaPods dependency management tool started to fall short. The team faced major issues including compatibility problems with SwiftUI and modern Swift packages, interruptions in the Xcode SwiftUI preview feature, slow storyboard loading, and increased complexity and maintenance difficulties with the Podfile. To address these issues, Playtomic decided to migrate to Tuist, a tool that optimizes project structure and enhances build efficiency.

In this article, Mohammadreza Koohkan thoroughly explains the challenges encountered during the migration process and the solutions implemented. The results of the migration show that Tuist not only resolved issues related to CocoaPods but also significantly improved the app’s startup time and reduced the size of the binary files. Moreover, compared to CocoaPods, Tuist offers shorter compilation times.

Tuist is an open-source tool designed to help developers manage the configuration and dependencies of Xcode projects and workspaces. It simplifies project configuration and automates repetitive tasks, enhancing the development experience for large projects and teams.
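For readers unfamiliar with Tuist, a project is described declaratively in a `Project.swift` manifest, which `tuist generate` turns into an Xcode project. The following minimal sketch uses invented names and assumes Tuist 4's manifest API:

```swift
import ProjectDescription

// Declarative replacement for a checked-in .xcodeproj plus Podfile.
let project = Project(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            destinations: .iOS,
            product: .app,
            bundleId: "com.example.myapp",
            sources: ["Sources/**"],
            dependencies: [
                // External Swift packages stand in for CocoaPods deps.
                .external(name: "Alamofire")
            ]
        )
    ]
)
```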

Converting Local LLMs to Core ML Models — How to Use 🤗 Exporters

Shuichi Tsutsumi

As generative AI technology continues to evolve and spread, an increasing number of developers want to run AI services directly on local devices, including mobile ones. In this article, Shuichi Tsutsumi provides a detailed explanation of how to use the Exporters tool released by Hugging Face to convert local large language models (LLMs) into Core ML models. The article explores the tool's efficiency and effectiveness through several conversion examples, including attempts at customized conversions of smaller models. Despite some challenges along the way, the author notes that the validation errors that appeared do not necessarily indicate problems with the models: the comparisons are based on absolute differences, which are sometimes within acceptable ranges.

Exporters is a tool that wraps around coremltools, designed to simplify the process of converting Transformer models into Core ML models and to address various issues encountered during the conversion.
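Usage follows the pattern shown in Exporters' documentation; the model name and output directory below are examples:

```shell
# Install the tool (it wraps coremltools under the hood).
pip install git+https://github.com/huggingface/exporters.git

# Convert a Hugging Face checkpoint to a Core ML package in exported/.
python -m exporters.coreml --model=distilbert-base-uncased exported/
```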

If you found this weekly helpful or enjoyed reading it, consider making a donation to support my writing. Your contribution will help me continue creating valuable content for you.
Donate via Patreon, Buy Me a Coffee, or PayPal.

Want to Connect?

@fatbobman on Twitter


fatbobman (东坡肘子)

Blogger | Sharing articles at https://fatbobman.com | Publisher of a weekly newsletter on Swift at https://weekly.fatbobman.com