Who Let the Dogs Out: Apple iOS 13

Apple iOS 13 has gone to the dogs. (Figuratively, of course!) Apple is launching a new feature in its Vision framework that enables iPhones and iPads to detect cats and dogs in images. 

Apple’s Vision framework combines a series of machine learning algorithms that analyze pictures and videos, detecting the features that make our pets… pets! From facial characteristics such as eyes, mouths, and noses, to text and barcodes, to image saliency analysis (essentially a heat map highlighting the parts of an image most likely to draw a viewer’s attention), the framework uses these cues to quickly and accurately identify not only humans and pets but also other important objects.
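
For developers, these capabilities surface as request types in the Vision API. Here’s a minimal sketch (the function name and error handling are illustrative, not from Apple’s documentation) that runs both the new iOS 13 animal recognizer and an attention-based saliency request on a single image:

```swift
import Vision
import UIKit

/// A minimal sketch: runs Vision's animal recognizer and attention-based
/// saliency request (both new in iOS 13) on a single image.
func analyzePetPhoto(_ image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // Detects cats and dogs; each observation carries labels
    // ("Cat" / "Dog") with a confidence score and a bounding box.
    let animalRequest = VNRecognizeAnimalsRequest()

    // Produces a heat map of the regions most likely to draw the eye.
    let saliencyRequest = VNGenerateAttentionBasedSaliencyImageRequest()

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([animalRequest, saliencyRequest])
    } catch {
        print("Vision request failed: \(error)")
        return
    }

    if let animals = animalRequest.results as? [VNRecognizedObjectObservation] {
        for animal in animals {
            for label in animal.labels {
                print("Found a \(label.identifier) (confidence: \(label.confidence))")
            }
        }
    }

    if let saliency = (saliencyRequest.results as? [VNSaliencyImageObservation])?.first {
        // salientObjects holds bounding boxes around the attention-grabbing regions.
        print("Salient regions: \(saliency.salientObjects?.count ?? 0)")
    }
}
```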

Building on its long history of using machine learning in the Photos app to identify and tag people, objects, and events, Apple’s computer vision advancements will now also allow the system to determine what’s happening in an image. From significant events such as weddings and birthdays, to seasons, to types of fruit, to people and their dogs, Apple is simplifying saving and sharing memories through its new framework, which is due to fully launch by fall 2019.
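
The Photos app’s own scene tagging isn’t a public API, but Vision’s general-purpose image classifier (also new in iOS 13) hints at the underlying capability. A hedged sketch, with an illustrative confidence threshold that is my choice rather than an Apple recommendation:

```swift
import Vision
import UIKit

/// A sketch of Vision's general-purpose classifier (iOS 13+). It returns
/// taxonomy labels (e.g. "dog", "beach") similar in spirit to the scene
/// tags Photos applies behind the scenes.
func classifyScene(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])

    do {
        try handler.perform([request])
    } catch {
        print("Classification failed: \(error)")
        return
    }

    // Keep only reasonably confident labels; the 0.3 cutoff is an
    // illustrative assumption, not a documented default.
    let labels = (request.results as? [VNClassificationObservation])?
        .filter { $0.confidence > 0.3 }
        .map { $0.identifier } ?? []
    print("This photo looks like: \(labels.joined(separator: ", "))")
}
```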

Author: Michele Currenti

Michele is a creative content intern in Educational Activities at IEEE. She is currently pursuing her master’s in Voice & Opera at the University of Maryland, College Park. She also completed a Bachelor of Science in Brain & Cognitive Science at the University of Rochester and a Bachelor of Music at the Eastman School of Music. She is interested in finding the various intersections of science and the arts to better humanity.