Google’s AR translation glasses are feel-good vaporware

At the end of its I/O presentation on Wednesday, Google pulled out a “one more thing” surprise. In a short video, it showed off a pair of augmented reality glasses with a single purpose – displaying translations of spoken language right in front of your eyes. In the video, Google product manager Max Spear calls the prototype’s capability “world captions,” and we see family members communicating with each other for the first time.

Now hold on. Like many people, we’ve used Google Translate before and mostly think of it as a very impressive tool that happens to make a lot of embarrassing blunders. While we might trust it to get us directions to the bus, that’s nowhere near the same as trusting it to correctly interpret and relay our parents’ childhood stories. And hasn’t Google said before that it had finally broken the language barrier?

In 2017, Google marketed real-time translation as a feature of the original Pixel Buds. Our former colleague Sean O’Kane described the experience as “a laudable idea but a sad execution” and reported that some of the people he tried it with said it sounded like he was a five-year-old. That’s not quite what Google showed off in its video.

Also, we don’t want to gloss over the fact that Google is promising this translation will happen inside a pair of AR glasses. Not to poke at a sore spot, but augmented reality hasn’t really even caught up with the concept video Google made a decade ago. You know, the one that acted as a predecessor to the much-maligned and awkward-to-wear Google Glass?

To be fair, Google’s AR translation glasses do seem much more focused than what Glass tried to accomplish. From what Google showed, they’re designed to do one thing – display translated text – rather than serve as an ambient computing experience that could replace a smartphone. But even so, making AR glasses isn’t easy. Even a moderate amount of ambient light can make text on see-through screens very difficult to read. It’s challenging enough to read subtitles on a TV with sun glare coming through the window; now imagine that experience, but strapped to your face (and with the added pressure of holding a conversation with someone you can’t understand on your own).

But hey, technology moves fast, and Google may well clear hurdles that have held back its rivals. That wouldn’t change the fact that Google Translate is not a magic bullet for cross-language conversation. If you’ve ever tried to have an actual conversation through a translation app, you know that you have to speak slowly. And methodically. And clearly. Unless you want to risk a garbled translation. One slip of the tongue, and you might just be done.

People don’t speak in a vacuum the way machines do. Just as we switch registers when talking to voice assistants like Alexa, Siri, or the Google Assistant, we know we have to use much simpler sentences when dealing with machine translation. And even when we speak correctly, the translation can still come out awkward and get misconstrued. Some of our Verge colleagues who are fluent in Korean pointed out that Google’s own pre-I/O countdown displayed an honorific version of “welcome” in Korean that nobody actually uses.

That slightly embarrassing blunder pales in comparison to the fact that, according to tweets from Rami Ismail and Sam Ettinger, Google displayed more than half a dozen backwards, broken, or otherwise incorrect scripts on a single slide during its translation demo. (Android Police notes that a Google employee has acknowledged the mistake and that it was corrected in the YouTube version of the keynote.) To be clear, it’s not that we expect perfection – it’s that Google is trying to tell us it’s close to cracking real-time translation, and these kinds of errors make that seem implausible.

Google is trying to solve an immensely complicated problem. Translating words is easy; figuring out grammar is hard but possible. But language and communication are far more complex than those two things alone. As a relatively simple example, Antonio’s mother speaks three languages (Italian, Spanish, and English). She sometimes borrows words from different languages mid-sentence – including her regional Italian dialect, which is like a fourth language. That sort of thing is relatively easy for a human to parse, but could Google’s prototype glasses handle it? And never mind the messier parts of conversation, like unclear references, incomplete thoughts, or innuendo.

It’s not that Google’s goals aren’t admirable. We absolutely want to live in a world where everyone gets to experience what the people in the video do, staring in amazement as a loved one’s words appear in front of them. Breaking down language barriers and understanding each other in ways we couldn’t before is something the world needs more than ever; it’s just that there’s still a long way to go before we reach that future. Machine translation is here, and it has been for a long time. But for all the languages it can handle, it doesn’t speak human yet.

