There are credible rumors that Apple is working on AR glasses. Of course, that's not to say they'll decide the glasses are good enough to launch.
If and when they do appear, though, I think one of the first attention-grabbing features will be realtime language translation: the combination of quality directional microphones, voice recognition, and natural language processing.
I believe this will become a primary feature early on for the following reasons:
Eliminating the barrier to instant communication between people from different places and cultures would provide a massive advantage to humanity as a whole.
Many of the required technologies are reaching maturity right now.
It would create a significant market opportunity in big cities, where many would love to know what the people around them are saying in languages they don't understand.
The products—from any vendor, not just Apple—could leverage multiple technologies to accomplish this overall goal.
High-quality microphones that target speakers and use voice recognition and natural language processing to isolate one conversation from among many.
Machine learning to assist with every part of that.
Speaking a translated version into your ear while blocking out the speaker's original speech.
Printing a text version on the screen to augment or replace the audio translation.
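To make the flow concrete, here is a minimal sketch of the stages above as a pipeline: separated utterances come in tagged by speaker, get translated, and come out as captions ready to be spoken or displayed. Everything here is hypothetical; the `Utterance` type, the phrase table standing in for a translation model, and the function names are illustrative, not any real Apple API.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker_id: int  # which conversation/speaker this audio was extracted from
    text: str        # recognized speech (stand-in for ASR output)
    lang: str        # detected source language

# Toy phrase table standing in for a neural machine-translation model.
PHRASES = {
    ("es", "¿Dónde está la estación?"): "Where is the station?",
    ("fr", "Bonjour tout le monde"): "Hello everyone",
}

def translate(utterance: Utterance, target_lang: str = "en") -> str:
    """Translate one recognized utterance into the listener's language."""
    if utterance.lang == target_lang:
        return utterance.text
    return PHRASES.get((utterance.lang, utterance.text), "[untranslated]")

def render(utterances: list[Utterance]) -> list[str]:
    """Produce the captions the glasses would speak or display."""
    return [f"Speaker {u.speaker_id}: {translate(u)}" for u in utterances]

captions = render([
    Utterance(1, "¿Dónde está la estación?", "es"),
    Utterance(2, "Bonjour tout le monde", "fr"),
])
for line in captions:
    print(line)
```

In a real product each stage would be a machine-learning model running continuously on live audio; the point of the sketch is only that the stages compose into one stream of per-speaker captions.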
Now imagine multiple conversations being captured and displayed in this way, oriented in the display according to where the voices are coming from. Perhaps with avatars for each.
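One standard way to know where a voice is coming from is time difference of arrival (TDOA) between two microphones: the delay gives an angle, and the angle maps to a horizontal position in the display. The sketch below assumes a two-mic array with a made-up spacing and field of view; it is an illustration of the technique, not a claim about how any vendor would implement it.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
MIC_SPACING = 0.14      # m, assumed distance between the two mics on the frame

def azimuth_from_tdoa(delay_s: float) -> float:
    """Angle of arrival in degrees (0 = straight ahead) from inter-mic delay.

    Far-field approximation: sin(theta) = c * tau / d.
    """
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / MIC_SPACING))
    return math.degrees(math.asin(ratio))

def screen_x(angle_deg: float, width_px: int = 1920, fov_deg: float = 90.0) -> int:
    """Map an azimuth within the field of view to a horizontal pixel column."""
    clamped = max(-fov_deg / 2, min(fov_deg / 2, angle_deg))
    return round((clamped / fov_deg + 0.5) * (width_px - 1))

angle = azimuth_from_tdoa(0.0002)  # a 0.2 ms delay between the mics
print(round(angle, 1), screen_x(angle))
```

Anchoring each conversation's captions (or avatar) at `screen_x` of its estimated azimuth is what lets the display lay out multiple conversations according to where the voices actually are.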
This is the type of thing that will make AR so compelling once it lands successfully—practical applications that improve the user’s chances of success in the world.
As I wrote about in The Real Internet of Things, realtime language translation will be a milestone achievement in precisely this way, and I think we should expect to see it as one of the early feature sets in personal AR interfaces.