While we didn't hear much about Siri or Apple Intelligence during the 2025 Apple Event that launched new iPhones, AirPods, and Apple Watches, Apple did announce two big AI features that have largely slipped under the radar. That's mostly because they were presented as useful new features rather than wrapped in overhyped AI marketing language.
Nevertheless, I got to demo both features at Apple Park on Tuesday and my first impression was that both of them are nearly fully baked and ready to start improving daily life — and those are my favorite kinds of features to talk about.
Also: 5 new AI-powered features that flew under the radar at Apple’s launch event
Here’s what they are:
1. A selfie camera that automatically frames the best shot
Apple has implemented a new kind of front-facing camera. It uses a square sensor and increases the resolution from 12 megapixels on previous models to 24MP. Because the sensor is square and still photos are cropped to a standard rectangular aspect ratio, it actually outputs 18MP images. The real trick is that it can output those images in either vertical or horizontal orientation.
In fact, you don't even have to turn the phone to switch between vertical and horizontal shots anymore. You can keep the phone in one hand, in whichever position you prefer, tap the rotate button, and the framing flips from vertical to horizontal and back. And because the ultrawide sensor has double the megapixels, photos are equally crisp in either orientation.
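For a rough sense of the math, here's a back-of-the-envelope sketch of why a 24MP square sensor ends up producing roughly 18MP photos whichever way you crop it. This is my own arithmetic, not Apple's published spec, and it assumes the usual 4:3 (or 3:4) still-photo aspect ratio.

```swift
// Back-of-the-envelope arithmetic (my own sketch, not Apple's published spec):
// a 24MP square sensor yields roughly 18MP photos in either orientation,
// assuming a standard 4:3 / 3:4 still-photo aspect ratio.
let squarePixels = 24_000_000.0
let side = squarePixels.squareRoot()          // ≈ 4,899 px per side

let landscape = side * (side * 3.0 / 4.0)     // 4:3 crop: full width, 3/4 height
let portrait  = (side * 3.0 / 4.0) * side     // 3:4 crop: 3/4 width, full height

print(landscape / 1_000_000)                  // ≈ 18.0 MP
print(portrait  / 1_000_000)                  // ≈ 18.0 MP, same pixel count either way
```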
Now, here's where the AI comes in. You can set the front-facing camera to Auto Zoom and Auto Rotate. The camera then automatically finds the faces in your shot, widens or tightens the framing, and decides whether a vertical or horizontal orientation works best to fit everyone in the picture. Apple calls this "Center Stage," the same term it uses for the feature that keeps you centered on screen during video calls on the iPad and Mac.
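Apple hasn't detailed how Center Stage makes those decisions, but the general idea is simple: detect the faces, compute the box that contains them all, then pick the orientation and zoom that best frame that box. Here's a minimal, hypothetical sketch of that logic in Swift; the face rectangles would come from a detector such as the Vision framework's VNDetectFaceRectanglesRequest, and the function and type names are my own, not Apple's API.

```swift
import CoreGraphics

// Hypothetical sketch of auto-framing logic, not Apple's implementation.
// Input: face bounding boxes in normalized (0...1) image coordinates,
// e.g. from Vision's VNDetectFaceRectanglesRequest.

enum Orientation { case portrait, landscape }

struct Framing {
    let orientation: Orientation
    let zoom: CGFloat   // 1.0 = widest view, larger = tighter crop
}

func autoFrame(faces: [CGRect]) -> Framing {
    // No faces detected: default to a wide portrait selfie.
    guard let first = faces.first else {
        return Framing(orientation: .portrait, zoom: 1.0)
    }

    // Smallest rectangle containing every detected face.
    let union = faces.dropFirst().reduce(first) { $0.union($1) }

    // A group spread out horizontally fits a landscape frame better;
    // a tall or single-face composition fits portrait.
    let orientation: Orientation = union.width > union.height ? .landscape : .portrait

    // Zoom in until the face box (plus ~10% headroom) nearly fills the frame,
    // capped so the crop never exceeds what the sensor can resolve.
    let padded = union.insetBy(dx: -0.1 * union.width, dy: -0.1 * union.height)
    let fill = max(padded.width, padded.height)
    let zoom = min(max(1.0 / max(fill, 0.01), 1.0), 2.0)

    return Framing(orientation: orientation, zoom: zoom)
}
```

With Auto Zoom and Auto Rotate switched on, a decision like this would presumably be re-evaluated continuously on the live preview, so the framing adjusts as people step in and out of the shot.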
Also: Every iPhone 17 model compared
The feature technically uses machine learning, but it's still fair to call it an AI feature. The Center Stage branding is a little confusing, though, because the selfie camera on the iPhone 17 uses it for photos while the iPad and Mac versions are for video calls. The selfie camera feature is also aimed at photos with multiple people, while the iPad/Mac feature is primarily used with just you in the frame.
Still, after trying it on various demo iPhones at Apple Park following Tuesday's keynote, I'm comfortable calling this the smartest and best selfie camera I've seen. I have no doubt that other phone makers will start copying this feature in 2026. And the best part is that Apple didn't limit it to this year's high-end iPhone 17 Pro and Pro Max, but put it on the standard iPhone 17 and the iPhone Air as well. That's great news for consumers, who took 500 billion selfies on iPhones last year, according to Apple.
2. Live Translation in AirPods Pro 3
I've said this many times before, but language translation is one of the best and most accurate uses for large language models, and one of the most genuinely useful things you can do with generative AI.
That has enabled companies like Apple, whose translation app lags far behind Google's, to take big strides in building translation features into key products. Google Translate supports 249 languages and has been around since 2006; Apple's Translate app supports 20 and launched in 2020.
Also: AirPods Pro 3 vs. AirPods Pro 2: Here’s who should upgrade
Nevertheless, while Google has been doing demos for years showing real-time translation in its phones and earbuds, none of the features have ever really worked very well in the real world. Google again made a big deal about real-time translation during its Pixel 10 launch event in August, but even during the onstage demo the feature hiccuped a bit.
Enter Apple. Alongside the AirPods Pro 3, it launched its own live translation feature. And while its language support isn't as broad, the implementation is a lot smoother and more user-friendly. I got an in-person demo with a group of other journalists at Apple Park on Tuesday after the keynote.
A Spanish speaker came into the room and started talking while we had AirPods Pro 3 in our ears and an iPhone 17 Pro in our hands with Live Translation turned on. The AirPods Pro 3 immediately went into ANC mode and began translating the spoken Spanish into English, so we could look at the speaker's face during the conversation and hear their words in our own language without the distraction of hearing both languages at once. And I know enough Spanish to tell that the translation was pretty accurate, with no hiccups in this case.
It was only one brief demo, but it traded the tricks of Google's Pixel 10 demo (which tries to render the translated words and intonation in an AI-cloned version of the speaker's voice) for more practical usability. Apple's version is a beta feature that will be limited to English, Spanish, French, German, and Portuguese at launch.
And to be clear, the iPhone does most of the processing work, while the AirPods make the experience better by automatically invoking noise cancellation. Because of that, the feature isn't limited to AirPods Pro 3. It will also work with AirPods Pro 2 and AirPods 4, as long as they're connected to an iPhone that can run Apple Intelligence (iPhone 15 Pro or newer). That's another win for consumers.
Since I'm planning to test both the Pixel 10 Pro XL with Pixel Buds Pro 2 and the iPhone 17 Pro Max with AirPods Pro 3, I'll put their new translation features head-to-head to see how well they perform. Every week, I'm in contact with friends and community members who speak one or more of the supported languages, so I'll have ample opportunities to put the feature to work in the real world and share what I learn.