
The Rise of AI in Mobile Development: CoreML, TensorFlow Lite, and Beyond

As we stand on the brink of a new technological era, artificial intelligence (AI) is increasingly making its way into our daily lives, and mobile devices are no exception. In this article, we will explore the integration of artificial intelligence in mobile apps, the tools Apple and Google provide for developers, and the native solutions they create alongside their operating systems. We will also cover the technical steps to introduce machine learning models in iOS development using CoreML, along with their main limitations.

Content authored by David Duarte, iOS Developer of The Cactai Team.

A Brief History of AI in Mobile

The first milestone in this story came with the release of the first iPhone in 2007. The iPhone redefined what a smartphone could be, with an impact that was immediate and profound. It set a new standard for mobile devices and sparked a wave of innovation that continues to this day. Then, in 2011, artificial intelligence in mobile apps started making its mark on Apple products with the introduction of Siri, which brought natural language processing to mobile interfaces. Using NLP, Siri could understand and respond to user queries, setting a new standard for mobile interactivity. This was followed by Google Now, which offered AI-driven recommendation systems and predictive information based on user habits and preferences. These early implementations showcased the potential of AI to enhance user experience through personalised and intelligent interactions.

As AI continued to evolve, the development of AI-specific hardware played a crucial role in its integration into mobile devices. Apple’s A11 Bionic chip, introduced in 2017, shipped the first Neural Engine, enabling more complex and efficient AI computations directly on mobile devices. Fast forward to 2022, and the Apple A16 chip introduced even more advancements, making it a strong foundation for CoreML integration and efficient AI-powered app features.

Alongside hardware advances, software tools and libraries were also introduced. CoreML and TensorFlow Lite, released in 2017 and 2018 respectively, give mobile developers the tools to create machine learning models or import models already developed in Python. This hardware-software symbiosis makes the most of the device, allowing workloads to run on the CPU, GPU, or Neural Engine for efficient performance.

The journey of AI in mobile is marked by continuous innovation and integration. From the early days with simple virtual assistants to sophisticated large language models (LLMs) like Apple Intelligence, AI is becoming deeply integrated into our phones.

The Necessity of AI in Our Mobile Devices

The pursuit of better, more personalised user experiences is what drives the adoption of AI in modern mobile solutions. Each year, Apple devices receive neural engine hardware improvements, alongside increased memory and computing capacity, to support more powerful solutions. For instance, while writing this from my phone, I can get predictive analytics in mobile apps, unlock the iPhone through FaceID, or use Duolingo for personalised lessons, all of which leverage AI. Similarly, the Apple Watch relies on advanced AI-powered features such as crash detection, AFib monitoring, and fall detection.

In summary, the necessity of AI in mobile devices is driven by the demand for more intelligent, efficient, and personalised user experiences.

Let’s Get Down to Business: CoreML Integration

CoreML is Apple’s machine learning framework, designed to make it easy for developers to integrate AI into iOS apps. Introduced in 2017, CoreML allows for efficient and optimized execution of machine learning models on Apple devices. CoreML has evolved significantly across its versions and now supports powerful stateful and transformer models, such as converted versions of Stable Diffusion XL or large language models like Mistral 7B running on device. For a deeper dive into the capabilities of this framework, I highly recommend watching this video from the last WWDC: CoreML at WWDC 2024.

CoreML has grown steadily since its first release:

  • 2017: support for popular ML libraries like Keras and scikit-learn.
  • 2018: model compression and custom layers.
  • 2019: on-device training and advanced neural network support.
  • 2020: improved integration with CreateML and Swift for TensorFlow.
  • 2021: unified model formats across Apple platforms and enhanced model security.
  • 2023: support for advanced Transformer models, better integration with the Vision and Natural Language frameworks, and real-time application performance improvements.
  • 2024: a faster inference engine with Async Prediction API support, BERT embeddings, multi-label image classification, a new Augmentation API, and enhanced model conversion options.

These updates ensure CoreML stays at the forefront of mobile AI capabilities.

Matching Devices with CoreML Versions

Aside from the CoreML version we can support in our app (which depends on the iOS version), we must consider the device’s computational power. Mobile devices have less powerful CPUs and GPUs compared to desktop computers and servers, which limits their ability to process large and complex machine learning models in iOS development.

Device Category | Examples | CPU/GPU | RAM | Suitable for
High-End | iPhone 15 Pro, iPhone 14 Pro, iPad Pro | A17 Pro, A16 Bionic, M1/M2 | 8-16 GB | Complex AI tasks and models like Stable Diffusion
Mid-Range | iPhone 15, iPhone 14, iPad Air (2020+) | A16 Bionic, A15 Bionic, A14 | 4-6 GB | Capable of handling most AI applications efficiently
Entry-Level | iPhone SE (2022), iPad (9th gen) | A15 Bionic, A13 Bionic | 3-4 GB | Simple AI apps like image classification and speech-to-text
Older Models | iPhone X, iPhone 8 | A11 Bionic | 2-3 GB | Limited capabilities for running complex AI models

Understanding the capabilities and limitations of the device is crucial when developing AI applications. While CoreML provides a powerful framework for integrating machine learning models into iOS apps, the device’s hardware capabilities will ultimately determine the complexity and performance of these models.
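As a small illustration, CoreML lets you hint at which compute units a model may use when you load it. This is a minimal sketch; MyModel stands in for whatever Xcode-generated model class you are using, and the best setting depends on the specific model and device.

```swift
import CoreML

// Hint where CoreML may execute the model. `.all` lets CoreML use the
// Neural Engine when the device has one; `.cpuOnly` or `.cpuAndGPU` can be
// safer fallbacks for heavy models on older hardware.
let configuration = MLModelConfiguration()
configuration.computeUnits = .all

// `MyModel` is a placeholder for the class Xcode generates from your .mlmodel file:
// let model = try MyModel(configuration: configuration)
```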

Working with CoreML: Creating and Integrating ML Models in iOS

CoreML supports a variety of model types and integrates seamlessly with other Apple frameworks, such as Vision for image analysis and computer vision capabilities; Natural Language for processing and analyzing text; Speech for transcribing audio input to text and generating spoken output from text; and Sound Analysis for identifying sounds such as applause, laughter, or music genres.

To make predictions or use any of the machine learning frameworks, you need a CoreML model. Developers can either use pre-trained models available from Apple or the community, or they can create custom models using tools like CreateML. Alternatively, developers can import models created in Python using CoreML integration, which is ideal for deploying cross-platform solutions with AI-powered app features.

Getting a model is not a difficult task. If this is your starting point, you can download any of Apple’s official models from Apple’s Machine Learning Models page. Then, you can add this model to your app, instantiate it, and make predictions. This is the easiest route if you don’t have ML knowledge: you only need to understand the model’s inputs and outputs, which are documented in the model description.
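As a rough sketch of that flow, the snippet below assumes you added MobileNetV2 (one of the models downloadable from Apple’s page) to your Xcode project and uses the Vision framework to run a prediction on an image; the model name and label handling are assumptions to adapt to whichever model you picked.

```swift
import CoreML
import Vision
import UIKit

// Classify a UIImage with a bundled CoreML model (MobileNetV2 here, assumed).
// Xcode generates a Swift class named after the .mlmodel file you dragged in.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage else { completion(nil); return }
    do {
        let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            // For classifiers, the results are labels sorted by confidence.
            let best = (request.results as? [VNClassificationObservation])?.first
            completion(best.map { "\($0.identifier) (\(Int($0.confidence * 100))%)" })
        }

        // Vision takes care of resizing and cropping the image to the model's input size.
        try VNImageRequestHandler(cgImage: cgImage).perform([request])
    } catch {
        completion(nil)
    }
}
```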

Create your own model with CreateML

If using an already developed model is not possible, you can use CreateML to train your own. CreateML is a powerful tool provided by Apple that allows developers to train machine learning models using a simple and intuitive interface. CreateML abstracts much of the complexity involved in training models, making it accessible even to developers with limited machine learning experience. It provides pre-built templates and workflows for various types of machine learning tasks, including image classification, object detection, text classification, and more.

While CreateML simplifies the process of training machine learning models, having a basic understanding of AI concepts can be beneficial. Developers should be familiar with:

  • Data Preparation: Understanding how to collect, clean, and preprocess data for training.
  • Model Training: Basic concepts of training, validation, and testing machine learning models.
  • Evaluation Metrics: Knowledge of metrics used to evaluate model performance, such as accuracy, precision, recall, and F1-score.
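To give a flavour of CreateML’s programmatic side (it also ships as a framework you can call from Swift on macOS, for example in a playground), here is a minimal sketch that trains an image classifier from labelled folders; the dataset paths and class names are purely illustrative.

```swift
import CreateML
import Foundation

// Train an image classifier on macOS from a directory whose subfolders
// ("rose", "tulip", ...) act as the class labels. Paths are illustrative.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/Users/me/Datasets/Flowers/Train"))

let classifier = try MLImageClassifier(trainingData: trainingData)

// Check performance on a held-out test set before shipping the model.
let testingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/Users/me/Datasets/Flowers/Test"))
let evaluation = classifier.evaluation(on: testingData)
print("Classification error: \(evaluation.classificationError)")

// Export a .mlmodel file that can be dropped straight into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/FlowerClassifier.mlmodel"))
```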

Import a Python model with CoreML

Importing an already developed Python model into CoreML involves several steps, leveraging the coremltools library to convert the model into a format compatible with iOS. CoreML supports a wide range of models from popular frameworks like TensorFlow, Keras, PyTorch, and ONNX. This approach is particularly useful for developers who already have experience with those frameworks and want to deploy their models on Apple devices, or for large teams with dedicated developers working on a cross-platform machine learning solution.

For these solutions, the team (or the iOS developers themselves) must be familiar with Python and ML fundamentals, such as training, validating, and testing machine learning models.
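Once the converted model ships inside the app bundle, it can also be loaded generically, without the Xcode-generated class, which is handy when the model comes from a separate ML team. The model name ("SentimentClassifier") and the "text"/"label" feature names below are assumptions; they must match whatever was defined during the coremltools conversion.

```swift
import CoreML
import Foundation

// Load a converted model from the app bundle and run a prediction through the
// generic MLModel API. Feature names must match those used at conversion time.
func predictSentiment(for text: String) throws -> String? {
    // Xcode compiles a bundled .mlmodel into a .mlmodelc resource.
    guard let url = Bundle.main.url(forResource: "SentimentClassifier",
                                    withExtension: "mlmodelc") else { return nil }
    let model = try MLModel(contentsOf: url)

    let input = try MLDictionaryFeatureProvider(dictionary: ["text": text])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")?.stringValue
}
```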

ML in Android: TensorFlow Lite

TensorFlow Lite is Google’s open-source deep learning framework designed for on-device machine learning inference. It is optimized for mobile devices, enabling developers to deploy machine learning models on Android and iOS. TensorFlow Lite is a lighter version of TensorFlow, specifically tailored to perform efficiently on devices with limited computational power.

TensorFlow Lite has several strengths: it is cross-platform, it is optimized for mobile devices with low latency and efficient execution, and, like CoreML, it supports model optimization.

Of course, not all TensorFlow operations are supported in TensorFlow Lite. Some advanced features and custom layers may require adjustments or simplifications to work within the constraints of TensorFlow Lite. Large and highly complex models may not perform well on mobile devices due to limited computational resources.
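Since this article is otherwise iOS-focused, here is a minimal inference sketch using TensorFlow Lite’s Swift API (the TensorFlowLiteSwift library); the bundled model name and the flat float input/output layout are assumptions, so check your model’s metadata for the real tensor shapes and types. The Android APIs in Kotlin and Java follow the same interpreter pattern.

```swift
import TensorFlowLite  // from the TensorFlowLiteSwift library
import Foundation

// Run a bundled .tflite model. The file name and the Float32 tensor layout
// are assumptions for illustration; adapt them to your model.
func runTFLiteModel(input: [Float32]) throws -> [Float32]? {
    guard let modelPath = Bundle.main.path(forResource: "mobilenet_v1",
                                           ofType: "tflite") else { return nil }

    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()

    // Copy the input values into the first input tensor.
    let inputData = input.withUnsafeBufferPointer { Data(buffer: $0) }
    try interpreter.copy(inputData, toInputAt: 0)

    try interpreter.invoke()

    // Read the first output tensor back as an array of floats.
    let output = try interpreter.output(at: 0)
    return output.data.withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
}
```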

Wrapping up

Both CoreML and TensorFlow Lite offer powerful tools to bring artificial intelligence in mobile apps to life. Developers working with machine learning models in iOS development benefit from the tight integration of CoreML within the Apple ecosystem, while TensorFlow Lite provides a versatile framework for Android developers. Both tools support model optimization, but CoreML’s focus on device-specific performance makes it particularly advantageous for complex AI models.

The future of AI in mobile devices

The future of AI in mobile devices is promising, with advancements in technology that will revolutionise the way we interact with our smartphones and other mobile devices. The focus seems to be on understanding user behaviour and preferences in ever greater detail, while safeguarding privacy, in order to provide highly personalised experiences. In terms of solutions built into the operating systems, we will see improvements in health and fitness monitoring with more advanced health metrics.

Augmented Reality and Virtual Reality continue to improve, and their integration with AI provides more immersive and interactive experiences. This includes real-time object recognition, gesture control, and environment adaptation (it is worth keeping an eye on the advancements in the ARKit and Vision frameworks).

And we can’t forget to mention the upcoming, not yet released, improvements in virtual assistants and the integration of LLMs directly into our mobile devices (Apple Intelligence).

Conclusion

In conclusion, the future of artificial intelligence in mobile apps is bright, with both iOS and Android platforms offering robust solutions for AI-powered app features. Through CoreML integration and TensorFlow Lite usage, developers have the tools needed to create highly personalized, efficient, and intelligent mobile applications.
