iOS Vision Animal Classifier: Cat vs. Dog
We’ve already covered building a cat vs. dog image classifier with our own Core ML model in a previous article.
With iOS 13, Vision is even more powerful: the new VNRecognizeAnimalsRequest identifies cats and dogs in images. There’s no need to create our own Core ML model, as the built-in detector is pretty accurate.
Currently, the Vision Animal Detector can only detect cats and dogs.
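If you want to confirm at runtime which labels the request can return, Vision exposes the supported identifiers per request revision. Here is a minimal sketch (API names as in the iOS 13 SDK; verify against the SDK version you build with):

```swift
import Vision

// Ask Vision which animal identifiers this request revision knows about.
// As of iOS 13 (revision 1), the list is just .cat and .dog.
let known = try VNRecognizeAnimalsRequest.knownAnimalIdentifiers(
    forRevision: VNRecognizeAnimalsRequestRevision1)
for identifier in known {
    print(identifier.rawValue)  // "Cat", "Dog"
}
```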
You can now build a cat vs. dog image classifier application in less than 10 minutes.
In the following section, we’ll jump straight into the code and build an exciting Cat vs. Dog image classifier iOS application. Let’s get started with a new Xcode project.
Our Storyboard
Code
import UIKit | |
import Vision | |
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate { | |
@IBOutlet weak var imageView: UIImageView! | |
@IBOutlet weak var textView: UITextView! | |
var animalRecognitionRequest = VNRecognizeAnimalsRequest(completionHandler: nil) | |
private let animalRecognitionWorkQueue = DispatchQueue(label: "PetClassifierRequest", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem) | |
override func viewDidLoad() { | |
super.viewDidLoad() | |
textView.isEditable = false | |
setupVision() | |
} | |
@IBAction func takePicture(_ sender: Any) { | |
let imagePicker = UIImagePickerController() | |
imagePicker.sourceType = .photoLibrary | |
imagePicker.delegate = self | |
present(imagePicker, animated: true, completion: nil) | |
} | |
private func setupVision() { | |
animalRecognitionRequest = VNRecognizeAnimalsRequest { (request, error) in | |
DispatchQueue.main.async { | |
if let results = request.results as? [VNRecognizedObjectObservation] { | |
var detectionString = "" | |
var animalCount = 0 | |
for result in results | |
{ | |
let animals = result.labels | |
for animal in animals { | |
animalCount = animalCount + 1 | |
var animalLabel = "" | |
if animal.identifier == "Cat"{ | |
animalLabel = "😸" | |
} | |
else{ | |
animalLabel = "🐶" | |
} | |
let string = "#\(animalCount) \(animal.identifier) \(animalLabel) confidence is \(animal.confidence)\n" | |
detectionString = detectionString + string | |
} | |
} | |
if detectionString.isEmpty{ | |
detectionString = "Neither cat nor dog" | |
} | |
self.textView.text = detectionString | |
} | |
} | |
} | |
} | |
private func processImage(_ image: UIImage) { | |
imageView.image = image | |
animalClassifier(image) | |
} | |
private func animalClassifier(_ image: UIImage) { | |
guard let cgImage = image.cgImage else { return } | |
textView.text = "" | |
animalRecognitionWorkQueue.async { | |
let requestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:]) | |
do { | |
try requestHandler.perform([self.animalRecognitionRequest]) | |
} catch { | |
print(error) | |
} | |
} | |
} | |
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) { | |
dismiss(animated: true) { | |
if let image = info[UIImagePickerController.InfoKey.originalImage] as? UIImage { | |
self.imageView.image = image | |
self.processImage(image) | |
} | |
} | |
} | |
} |
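One thing the listing above glosses over: VNImageRequestHandler(cgImage:options:) assumes the pixels are upright, but photos from the library often carry an orientation flag. A hedged sketch of passing the orientation through — the UIImage-to-CGImagePropertyOrientation conversion below is a standard mapping, not part of the article’s code:

```swift
import UIKit
import ImageIO

// Map UIKit's image orientation onto the EXIF-style value Vision expects.
extension CGImagePropertyOrientation {
    init(_ orientation: UIImage.Orientation) {
        switch orientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}

// In animalClassifier(_:), the handler can then honor the photo's orientation:
// let requestHandler = VNImageRequestHandler(
//     cgImage: cgImage,
//     orientation: CGImagePropertyOrientation(image.imageOrientation),
//     options: [:])
```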
That’s it! We created a VNRecognizeAnimalsRequest, which classifies the image passed to the VNImageRequestHandler as containing a cat or a dog.
In the code above, we iterate over all the VNRecognizedObjectObservation instances, check whether each label’s identifier is “Cat” or “Dog”, and display it along with the confidence level Vision returns.
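That label-to-emoji logic can be factored into small, testable helpers. The two functions below are hypothetical (they are not part of the article’s code), but they make the mapping explicit instead of tagging every non-“Cat” label with the dog emoji:

```swift
import Foundation

// Hypothetical helper: map a Vision animal identifier to an emoji.
func emoji(forAnimal identifier: String) -> String {
    switch identifier {
    case "Cat": return "😸"
    case "Dog": return "🐶"
    default:    return "❓"  // future identifiers, should Vision add any
    }
}

// Hypothetical helper: build one line of the detection string,
// mirroring the format used in the completion handler above.
func detectionLine(number: Int, identifier: String, confidence: Float) -> String {
    return "#\(number) \(identifier) \(emoji(forAnimal: identifier)) confidence is \(confidence)"
}
```

With these in place, the loop body in the completion handler reduces to appending detectionLine(number:identifier:confidence:) for each label.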
We ran the above iOS application on a few images randomly selected from Google, and here’s the output we got:
That was pretty quick to do! You can download the full source for this tutorial from here.