Apple iOS 13 has gone to the dogs. (Figuratively, of course!) Apple is launching a new feature in its Vision framework that enables iPhones and iPads to detect cats and dogs in images. 

Apple’s Vision framework combines a series of machine-learning algorithms that analyze pictures and videos, detecting the features that make our pets… pets! From facial characteristics such as eyes, mouths, and noses, to text and barcodes, to image saliency analysis (which is essentially a map highlighting the parts of an image most likely to draw attention), the framework uses context clues to quickly and accurately identify not only humans and pets, but other important objects as well.
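For developers, the pet-detection capability surfaces as a dedicated request type in the Vision framework. The sketch below shows one plausible way to use it on iOS 13 or later; `detectPets` is a hypothetical helper name chosen for illustration, not an Apple API.

```swift
import Vision
import CoreGraphics

// Hypothetical helper: detect cats and dogs in a CGImage using Vision's
// animal recognition request (available on iOS 13+ / macOS 10.15+).
func detectPets(in image: CGImage) throws -> [String] {
    // VNRecognizeAnimalsRequest is the Vision request that recognizes
    // cats and dogs in still images.
    let request = VNRecognizeAnimalsRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Each observation carries one or more labels (e.g. "Cat", "Dog")
    // with an associated confidence score.
    let observations = request.results as? [VNRecognizedObjectObservation] ?? []
    return observations.flatMap { observation in
        observation.labels.map { label in
            "\(label.identifier) (\(Int(label.confidence * 100))%)"
        }
    }
}
```

In practice you would call this from an image-processing pipeline or a photo picker callback, filtering the returned labels by a confidence threshold that suits your app.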

Building on its long history of using machine learning in the Photos app to identify and tag people, objects, and events, Apple’s computer vision advancements will now also allow the system to determine what’s happening in an image. From significant events such as weddings and birthdays, to seasons, to types of fruit, to people and pets, Apple is simplifying saving and sharing memories through its new framework, which is due to fully launch by Fall 2019.