How To Create Updatable Models Using Core ML 3

Re-train models on the iOS device

Core ML got a big boost this year with the Core ML 3 update during WWDC 2019. Among the many improvements, On-Device Learning stands out.

The goal of this article is to show you the process to create your own Core ML 3 models that can be updated on your iPhone or iPad.

Though training a model from scratch on the device sounds fancy, it isn’t feasible given how long full training would take on an iPhone or iPad. Instead, the recommended approach is to retrain an existing model on the device, personalizing it for the individual user.

Before we go down the implementation road, let’s admire the other updates Core ML 3 has in store for us.


What’s New in Core ML 3

  • 70 new neural network layers — New layers allow more complex neural network models to be converted to Core ML without the need to write custom layers

  • Variety of new models — Models such as the KNN classifier, ItemSimilarityRecommender, SoundAnalysisPreprocessing, WordEmbedding, and Linked Models help us solve a wider range of machine-learning problems.

  • Core ML API has more abstraction — You need not convert images to CVPixelBuffer for CNN models anymore. MLFeatureValue does that for you automatically.

  • ML Model Configuration — We can now configure the ML model as per our needs. Try setting the model to run on the CPU, GPU, and/or Neural Engine by assigning the appropriate MLComputeUnits value.

  • On-Device Learning — Now you can easily train/retrain your models on the device itself. There’s no need to retrain on your machine and redeploy the model.

Linked Models allow reusability across models: if two models depend on the same underlying model, that shared model is now loaded just once.

Let’s Dive Into the Logic Now

We’ll be creating a Cat vs. Dog image classifier CNN model in Keras, using the categorical cross-entropy loss function.

If you directly want to jump onto the Core ML part, skip the next section.

Creating a Keras Image Classifier Model

The model is compiled with the categorical cross-entropy loss function, since that is currently the only classification loss layer that is updatable in Core ML 3. Take a look at the .ipynb notebook for the full training code.
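
The notebook isn’t reproduced here, but a minimal sketch of the kind of model it describes might look like the following. The exact architecture, layer names, and training pipeline are assumptions for illustration; only the 150 x 150 image input, the two-class softmax output, and the categorical cross-entropy loss mirror what the rest of this article relies on (in the article’s model, the last two dense layers came out named dense_5 and dense_6).

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# A small CNN for 150x150 RGB images with a two-class softmax head
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(2, activation='softmax')  # one unit per class: Cat, Dog
])

# Categorical cross-entropy: the loss Core ML 3 can update for classifiers
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(...) on the cat/dog images, then save the trained model
model.save('model.h5')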

We ran the above model for five epochs and got an accuracy of around 73%, which is pretty decent.

Now that we’ve got our Keras model, all that is left is converting it into an updatable Core ML model.

Creating an Updatable Core ML Model

To create an updatable model, we need to set the isUpdatable flag on the model itself and on the neural network layers we want to train.

Besides that, we need to define the training inputs, loss function, optimizers, and other parameters like epochs and batch size on our Core ML model.

We need to update coremltools to 3.0. At the time of writing, the beta 6 release was the latest. Just use the pip command to install it:

pip install coremltools==3.0b6

Converting the Keras model to Core ML

import coremltools

output_labels = ['Cat', 'Dog']

# Convert the trained Keras model to a Core ML model with an image input
coreml_model = coremltools.converters.keras.convert(
    'model.h5',
    input_names=['image'],
    output_names=['output'],
    class_labels=output_labels,
    image_input_names='image')

# Add metadata so the model is self-describing in Xcode
coreml_model.author = 'Anupam Chugh'
coreml_model.short_description = 'Cat Dog Classifier converted from a Keras model'
coreml_model.input_description['image'] = 'Takes as input an image'
coreml_model.output_description['output'] = 'Prediction as cat or dog'
coreml_model.output_description['classLabel'] = 'Returns Cat Or Dog as class label'
coreml_model.save('catdogclassifier.mlmodel')

Inspecting the neural network layers

First, load the specs from the existing Core ML model into a NeuralNetworkBuilder.

coreml_model_path = "./catdogclassifier.mlmodel"
spec = coremltools.utils.load_spec(coreml_model_path)
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)

# Inspect the last few layers and the model's input features
builder.inspect_layers(last=3)
builder.inspect_input_features()

By inspecting the layers, we can determine which dense layers need to be made updatable.

Along with that, we need to set the input image parameters, which will also apply to the training input used for on-device training.

neuralnetwork_spec = builder.spec

# Set the expected input image size (matches the Keras model's 150x150 input)
neuralnetwork_spec.description.input[0].type.imageType.width = 150
neuralnetwork_spec.description.input[0].type.imageType.height = 150

# Update the model metadata
neuralnetwork_spec.description.metadata.author = 'Anupam Chugh'
neuralnetwork_spec.description.metadata.license = 'MIT'
neuralnetwork_spec.description.metadata.shortDescription = (
    'Cat Dog Classifier converted from a Keras model')

Specifying updatable layers, loss functions, and optimizers

Now, we need to set the last two dense layers as updatable and set the loss-function input. The loss-function input is the output value from the softmax activation layer (‘output’ in our case):

from coremltools.models.neural_network import SgdParams

model_spec = builder.spec

# Mark the last two dense layers as updatable
builder.make_updatable(['dense_5', 'dense_6'])
# The loss layer takes the softmax output ('output') as its input
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='output')
# SGD with a learning rate of 0.01, a mini-batch size of 5, and 2 epochs
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=5))
builder.set_epochs(2)

The make_updatable method marks layers as updatable. It requires a list of layer names; in our case those are dense_5 and dense_6, which we identified earlier by inspecting the layers with the NeuralNetworkBuilder.
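
Core ML 3 also supports the Adam optimizer for on-device updates. If you prefer it over SGD, a sketch along the following lines should work; note that the learning rate and batch size here are illustrative values, not tuned settings.

from coremltools.models.neural_network import AdamParams

# Swap SGD for Adam; lr and batch mirror the illustrative SGD values above
builder.set_adam_optimizer(AdamParams(lr=0.01, batch=5))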

Specifying training-input descriptions

Now we just need to mark the model spec as updatable and set the training-input descriptions, as shown below:

# Mark the whole model as updatable and set the minimum spec version for updatable models
model_spec.isUpdatable = True
model_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION

# Describe the two training inputs: the image and its true label
model_spec.description.trainingInput[0].shortDescription = 'Image for training and updating the model'
model_spec.description.trainingInput[1].shortDescription = 'Set the value as Cat or Dog and update the model'

The specificationVersion property is important since it ensures the correct specification version number is set on the mlmodel file.

Core ML 3 corresponds to specification version 4, so the above updatable model works only on specification version 4 and above.

Generating the updatable model

Now that the model specification is complete, we can generate our on-device updatable model using the following line of code:

coremltools.utils.save_spec(model_spec, "CatDogUpdatable.mlmodel")

Our Updatable Core ML model is now ready to use.
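
Before heading to Xcode, you can optionally sanity-check the saved file from Python. A quick sketch, assuming the file name used above:

# Reload the saved spec and confirm it is flagged as updatable
saved_spec = coremltools.utils.load_spec('CatDogUpdatable.mlmodel')
print(saved_spec.isUpdatable)           # expected: True
print(saved_spec.specificationVersion)  # expected: 4 (Core ML 3)

# List the training inputs and their descriptions
for training_input in saved_spec.description.trainingInput:
    print(training_input.name, ':', training_input.shortDescription)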

Let’s import it into an Xcode project. You should see a model description similar to the one in the image below:

It’s clear from the image above that two new sections, Update and Properties, have been added with Core ML 3. The Update section lists the input descriptions used for training.

Our classifier model has two outputs: the predicted class label and a dictionary with the confidence scores for both classes.

There’s More

In the next part, we’ll deploy the updatable model we’ve just built in an iOS application.