Image Classification. Image from Apple, Inc.

Over the short Christmas-New Year vacation, I dived into learning Swift, the language used to build iOS/watchOS/macOS apps. Coding in Swift is really fun, and Apple provides plenty of frameworks for all sorts of tasks. CreateML is one of them. In this blog post, we're going to train an image classification model.

What is CreateML?

Image from Apple, Inc.

CreateML is a machine learning framework built by Apple that lets us use Swift to create and train machine learning models. CreateML currently accepts images, text, and tabular data as input.

Read more in Apple's documentation about CreateML here.
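As a taste of what that looks like in code: besides the drag-and-drop UI we'll use below, CreateML also has a code-only path through MLImageClassifier. Here's a minimal sketch, assuming the labeled folders we're about to prepare (the path is a placeholder):

import CreateML
import Foundation

// Placeholder path; point this at a folder whose subfolders are your labels
let trainingDir = URL(fileURLWithPath: "/path/to/Training Data")

// The subfolder names (e.g. Dog, Cat) become the class labels
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))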

Preparing our dataset

For this mini-tutorial, we're going to build an image classifier that can differentiate a cat from a dog. Typical examples...

First we need to create two folders named Training Data and Testing Data.
[Screenshot: Training Data and Testing Data folders]

In the Training Data folder, create a folder for each of your labels (e.g. Dog, Cat, etc.). For this mini-tutorial, we're only going to train the model to classify images as a dog or a cat.

[Screenshot: Dog and Cat folders inside Training Data]

Do the same with the Testing Data folder.

Now, add images to their respective folders. The more images you put in the Training Data folder, the more knowledgeable your model will be. Also, remember the 80:20 rule: if you have a total of 100 images, 80 of them should be training data and the remaining 20 should be for testing. Don't worry about the filenames of the images.

By the end of this part, you should have something like this:
[Screenshot: final folder structure with Training Data and Testing Data]
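In plain text, the layout looks roughly like this:

Training Data/
├── Cat/
│   └── (cat images)
└── Dog/
    └── (dog images)

Testing Data/
├── Cat/
│   └── (cat images)
└── Dog/
    └── (dog images)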

Building the Image Classification Model

Open up Xcode.
[Screenshot: Xcode welcome window]

Create a new Playground by clicking on "Get started with a playground".

Switch to the macOS tab and select the Blank template. Name it anything you like.

Apple has been generous enough to provide a graphical drag-and-drop interface for training our model. To open it, we just need to run some code.

// CreateMLUI provides the drag-and-drop model builder (macOS playgrounds only)
import CreateMLUI

// Create an image classifier builder and show it in the playground's live view
let classifierBuilder = MLImageClassifierBuilder()
classifierBuilder.showInLiveView()

[Screenshot: Playground with the code above]

After running the code, you might notice that the graphical interface is still not visible. That's because we also need to open the assistant editor. To do this, hit option + command + return, or click the overlapping circles (like a Venn diagram) at the top right of the window.

[Screenshot: assistant editor showing the drag-and-drop interface]

We can now start training our model!

Drag and drop the whole Training Data folder onto the graphical interface, inside the "Drop images to begin training" placeholder.

[Screenshot: training in progress in the live view]

It's done when you see this in the console:

[Screenshot: console output showing training is complete]

Our model now knows how to classify dogs and cats, hopefully...

To verify this, let us test its intelligence by feeding it our Testing Data. Drag and drop the Testing Data folder onto the graphical interface, inside the "Drop images to begin testing" placeholder.

[Screenshot: testing results in the live view]

Looks like our model is quite knowledgeable!
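If you'd rather read the numbers from code, the MLImageClassifier sketch from earlier can be evaluated against the same labeled folders (the path is again a placeholder):

// Continuing the earlier sketch; evaluate against the labeled test folders
let testingDir = URL(fileURLWithPath: "/path/to/Testing Data")
let metrics = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Test accuracy: \((1 - metrics.classificationError) * 100)%")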

Now, let us save our model in .mlmodel format so we can use it in our app via Apple's CoreML (probably discussed in the next blog post).

To do this, click on the dropdown icon just above the Model Accuracy section. Fill out the fields, then click on Save.

[Screenshot: save dialog with model metadata fields]
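If you trained in code instead, the equivalent is write(to:metadata:). A sketch with placeholder metadata and path:

// Metadata shows up when the model is inspected in Xcode
let metadata = MLModelMetadata(author: "Your Name",
                               shortDescription: "Classifies an image as Dog or Cat",
                               version: "1.0")
try classifier.write(to: URL(fileURLWithPath: "/path/to/CatDogClassifier.mlmodel"),
                     metadata: metadata)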

We're done! We now have a model ready to be consumed by Apple's CoreML.

[Screenshot: the saved .mlmodel file]

Here's an example of a .mlmodel consumed by CoreML in action. This example is inspired by Jian-yang's Hotdog or Not Hotdog app shown in the section above.
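As a preview of that next post, here's a rough sketch of how an app might consume the model through Vision and CoreML. CatDogClassifier here stands in for the hypothetical class Xcode auto-generates when you add the .mlmodel file to a project:

import Vision
import CoreML

// CatDogClassifier is the hypothetical class generated from our .mlmodel
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: CatDogClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        // The first result is the label Vision is most confident about
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(best.identifier): \(best.confidence * 100)%")
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}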

Hope you enjoyed this mini-tutorial.

Thank you for reading! If you have any questions, you can reach out to me on Twitter anytime.