Apple doesn’t really like to use the words “Artificial Intelligence” or “AI,” preferring the term “machine learning” instead. But call it ML, AI, or whatever you want, it has roots throughout Apple’s products, especially the iPhone. It’s used in the Photos app, the Camera, the keyboard, Siri, the Health app, and much more.
But it’s often used in ways that feel mostly invisible. We don’t think about facial recognition when Photos groups images of people together for us, and we have no idea how much AI is at play every time we tap the shutter button to get a great photo out of such a tiny camera sensor and lens system.
If you want a really obvious example of AI on your iPhone, look no further than the Magnifier app. Specifically, its object detection mode. Point your iPhone at an object and you’ll get a description of what it is and, often, how it’s situated, all generated by entirely on-device AI. Here’s how to give it a try.
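Apple doesn’t say which models power Magnifier’s detection mode, but the same kind of on-device image analysis is available to any developer through the Vision framework. Here’s a minimal sketch of the general approach, not Apple’s actual implementation; the photo parameter is a placeholder image.

```swift
import UIKit
import Vision

// Classify a still image entirely on-device with Vision's built-in classifier.
// `photo` is a placeholder; in a real app it would come from the camera feed.
func describeObjects(in photo: UIImage) {
    guard let cgImage = photo.cgImage else { return }

    // VNClassifyImageRequest runs an on-device taxonomy classifier built into iOS.
    let request = VNClassifyImageRequest { request, _ in
        guard let observations = request.results as? [VNClassificationObservation] else { return }

        // Keep a handful of reasonably confident labels (e.g. "cat", "bed", "monitor").
        let labels = observations
            .filter { $0.confidence > 0.3 }
            .prefix(5)
            .map { "\($0.identifier) (\(Int($0.confidence * 100))%)" }

        print(labels.joined(separator: ", "))
    }

    // The request handler runs locally; no network connection is involved.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```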
Enabling detection mode in Magnifier
First, open the Magnifier app. You may have added it to Control Center or the Action button on iPhone 15 Pro, but you can also find it in the App Library or via search.
The app is designed to help people see close-up details or read fine print, and to serve as an assistive tool for people with low vision. I use it all the time to read the tiny ingredients text on food packaging.
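Reading small text on-device is also something developers can do themselves with Apple’s Vision framework (whether or not Magnifier uses the same code path under the hood). A minimal sketch, with labelPhoto standing in for a photo of the packaging:

```swift
import UIKit
import Vision

// Recognize printed text (e.g. an ingredients label) from a photo, entirely on-device.
// `labelPhoto` is a placeholder image; error handling is kept minimal for clarity.
func readFinePrint(from labelPhoto: UIImage) {
    guard let cgImage = labelPhoto.cgImage else { return }

    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the top candidate string for each detected line of text.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate // favor accuracy over speed for tiny print

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```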
From there, tap the brackets box on the far right side (on iPhones with LiDAR) and then the text bubble button on the left. If your iPhone doesn’t have LiDAR, you’ll just see the text bubble button. Don’t see the buttons? Swipe up on the zoom slider to reveal them.
You’ll now be in detection mode. Tap the settings gear to choose whether you want to see text descriptions, hear spoken descriptions, or both, for whatever you’re pointing at.
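Apple hasn’t documented how Magnifier turns detections into speech, but speaking a description aloud is straightforward with AVSpeechSynthesizer. A quick, playground-style sketch using a made-up description string:

```swift
import AVFoundation

// Speak a detection result aloud. The description string below is invented;
// Magnifier's actual pipeline for generating descriptions is not public.
let synthesizer = AVSpeechSynthesizer()

func speak(_ description: String) {
    let utterance = AVSpeechUtterance(string: description)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}

// Hypothetical detection result:
speak("A cat lying in a pet bed.")
```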
Watch the AI at work
Now, just start pointing your iPhone at things! You’ll see a text overlay describing whatever is in view: animals, objects, plants, you name it. It can sometimes take a split second to update, and it’s sometimes a little wrong (and on occasion hilariously wrong), but the image analysis on display here is impressive. You’ll get adjectives describing not just colors but positions and activities.
Check this out:
It recognized not just a cat but a sleeping cat, and not just a bed but a pet bed.
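Cats, as it happens, get special treatment in Apple’s developer tools: the Vision framework ships with a built-in, on-device animal recognizer for cats and dogs. Magnifier’s richer descriptions (“sleeping,” “pet bed”) clearly go further than this, but a sketch of the basic detection looks like the following (photo is again a placeholder image):

```swift
import UIKit
import Vision

// Detect cats and dogs in a photo, on-device, with Vision's built-in animal recognizer.
func findAnimals(in photo: UIImage) {
    guard let cgImage = photo.cgImage else { return }

    let request = VNRecognizeAnimalsRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for animal in observations {
            // Each observation carries labels such as "Cat" or "Dog" plus a bounding box.
            let label = animal.labels.first?.identifier ?? "unknown animal"
            print("Found \(label) at \(animal.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```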
In another example, notice how it was able to describe the headphones (white), their position (on a black surface), and their relation to another object it detected (next to a computer monitor).
Of course, it gets things wrong sometimes: in one case it described a glass bowl as a glass table. Perhaps the chairs in the background confused it, or it couldn’t make out the bowl’s shape through all the facets and angles.
But indoors or outdoors, I am constantly amazed at just how well this feature describes what it’s looking at. And here’s the best part: it all happens entirely on-device. Seriously, put your iPhone in Airplane Mode and it will work just the same. You’re not sending a video feed to the cloud; nobody has any idea where you are or what you’re doing.
To be clear, the precision of this object detection and description doesn’t rival AI systems running on giant cloud servers, which can take more time and throw massive processing power and huge recognition models at the problem. Big Tech has been outperforming this kind of analysis with cloud computing for years.
But this is just one relatively minor feature running easily on any modern iPhone, even one that’s several years old. It’s a great example of what Apple is capable of in the AI space, and of what our iPhones could be capable of in the future with better code and more training, all while respecting our privacy and security.