Useful information
Prime News delivers timely, accurate news and insights on global events, politics, business, and technology
I tested Android XR headsets and glasses from Google and Samsung: Gemini AI can see my life now
Android XR, coming in 2025, will arrive alongside a Samsung mixed reality headset, with glasses after that. I tried a bunch of demos, and the always-on AI that can see what I see is amazing, and full of unanswered questions.
Google has announced Android XR. Samsung has built a mixed reality headset for it, and there are also glasses in the works. This is a lot to digest, and I just got out of demoing this stuff, so let me tell you about it.

Google has been promising a collaboration with Samsung and Qualcomm for a year. Now we're talking about this future of mixed reality, with Apple already on the horizon, Meta already on the horizon, and other players in the mix. Google, which had been out of VR and AR for the most part, is back with Android XR: a new platform that will work with phones and other devices, and that is powered by Gemini. Google says Gemini AI will run not just on mixed reality headsets but also on AI glasses, AR glasses and all kinds of emerging wearable devices for your face.

What I got to try, because at the moment the platform is only for developers, was a mixed reality headset made by Samsung and a pair of smart glasses running the assistant Google calls Project Astra. These have been in the works for a while and will soon begin field testing. Bear with me, because I'm going to talk a lot, and we don't have any photos or videos of my demo because they weren't allowed; in my experience, that's pretty standard for the early stages of VR and AR.

First, Project Moohan, the working name of the mixed reality headset Samsung will release next year, looks a lot like a Vision Pro or a Meta Quest Pro. If you remember those headsets, it's a visor-style headset that fits around your head, with light blockers you can use to shut out your surroundings if you want. It has a nice, sharp screen, plus hand tracking and eye tracking. That's not the exciting part; a lot of that felt very familiar. But when I tried demos on this headset, 2D apps appeared in a very Android-like interface.
I was able to open familiar Google apps, navigate with my hands and move different panes around. But I could also turn to Gemini as a companion throughout the experience. You already have things like Siri working in Apple Vision Pro, and there are some voice AI features on Meta Quest. But this is something that can also see what you're doing, and that's where it becomes fascinating: I could point at something and say, "Hey, what is that?" or "Tell me more about this," move my hand there, and Gemini would tell me. And what I discovered I could do is browse the web. I asked it a question about something, and asked it about where I live, and it pulled up a map.

Google has immersive apps for this too. At the moment, YouTube and Maps were the most notable; Apple doesn't even have an immersive mapping app for Vision Pro. Maps let me expand a great 3D immersive environment into my home, and I could also point at things in the landscape and ask about them, and Gemini was able to tell me what they were. That combination became really interesting; I felt like I was starting to get lost in apps and browsing.

Gemini doesn't simply recognize what you're doing; it can be your memory. When I was done with a bunch of demos, I asked Gemini to summarize what I had been doing, and it listed all the different things. It has tricks I hadn't seen before, too: it played a YouTube video and improvised subtitles for it.

Google has a few other tricks up its sleeve with Android XR. One of them is automatic conversion of 2D photos to 3D, which is something Apple also does now with Vision Pro, but here it works with videos as well. I watched a video clip on the headset that had already been pre-converted, Google said, but I also saw a YouTube clip that had been converted. I didn't get to see the conversion happen live, but it was pretty awesome to see that the capability will be there.
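Google hasn't described how its 2D-to-3D conversion works, but the general technique behind this kind of feature is well known: estimate a depth map for the image, then shift pixels horizontally to synthesize a left-eye and right-eye view (binocular parallax). Here is a toy sketch of that idea, assuming a depth map is already available; real pipelines use learned depth estimation and inpaint the disocclusion holes this naive version leaves behind.

```python
import numpy as np

def depth_to_stereo(image, depth, max_shift=8):
    """Create left/right views from a 2D image plus a depth map by
    shifting pixels horizontally: nearer pixels (depth closer to 1)
    shift more, producing parallax when each eye sees its own view."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    # Per-pixel horizontal shift, proportional to nearness.
    shift = (depth * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            s = shift[y, x]
            lx = min(w - 1, x + s)   # near pixels move right in the left view
            rx = max(0, x - s)       # and left in the right view
            left[y, lx] = image[y, x]
            right[y, rx] = image[y, x]
    return left, right

# Tiny demo: a 1x4 grayscale "image" where pixel 1 is nearest the viewer.
img = np.array([[10, 20, 30, 40]])
dep = np.array([[0.0, 1.0, 0.0, 0.0]])
l, r = depth_to_stereo(img, dep, max_shift=1)
```

The gap the shifted pixel leaves behind (a zero in the left view here) is the disocclusion problem: the synthesized eye can "see around" an object into a region the original photo never captured, which production systems fill with inpainting.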
So that's all the mixed reality stuff that's going to be in these developer headsets, which are meant to give people a feel for the platform and a starting point for everything else. But then there are the glasses I put on, which look like Meta's Ray-Bans: a little thick, but pretty normal-looking, and wireless. These glasses have a display, a small area etched with waveguides and a micro-LED projector that could show me a heads-up screen.

What could I do with it? A lot. You could activate Gemini with a button on the side and put it into a sort of always-on mode. It identified works of art on the wall of a mock living room set and answered questions about objects; I had questions about books, and again I could point at things and have it tell me about them. There was translation, too: you could translate text and then ask to have it translated back. What I found most fascinating is that they also do live interactive translation. Someone in the room came up and spoke to me, and subtitles for what they were saying instantly appeared in English, and they stayed in English even when the speaker switched into Spanish.

All of this assumes you'll have an always-active Gemini assistant in your life. With conversational responses, that may seem very intrusive, but on the glasses you can pause it by simply tapping the side, essentially suspending it. That's very different from something like Meta's Ray-Bans, which assume the assistant stays idle until you invoke it; here, it's assumed to be on until you want to pause it. The same goes for virtual and mixed reality: am I going to turn on Gemini and suddenly run it all the time, which could really change the experience of using apps? It's fascinating for things like, say, when I'm playing a game and want a tutorial, or have questions about something.
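The interaction model described here, an assistant that is active by default and suspended with a tap rather than idle until invoked, is simple enough to sketch as a tiny state machine. This is purely an illustrative toy (the class and method names are mine, not real Android XR or Gemini APIs), but it shows how "on until paused" also enables the session-summary trick mentioned earlier: the assistant accumulates context whenever it isn't paused.

```python
from dataclasses import dataclass, field

@dataclass
class AlwaysOnAssistant:
    """Toy model of an always-on assistant: active by default,
    and a tap on the glasses' temple toggles suspend/resume."""
    paused: bool = False
    log: list = field(default_factory=list)

    def tap(self):
        # A tap on the side of the glasses toggles suspend/resume.
        self.paused = not self.paused

    def observe(self, event: str):
        # While active, keep a running context log; this is what
        # would let the assistant summarize your session afterwards.
        if not self.paused:
            self.log.append(event)

    def summarize(self) -> str:
        return "; ".join(self.log)

a = AlwaysOnAssistant()
a.observe("looked at a painting")
a.tap()                        # pause: nothing is observed
a.observe("private moment")    # dropped while paused
a.tap()                        # resume
a.observe("asked about a book")
```

The inversion relative to invoke-on-demand assistants is just the default value of `paused`: flip it to `True` and you get the Ray-Ban-style model where nothing is observed until you ask.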
It sort of breaks the fourth wall of virtual and augmented reality, and it could change the way applications are developed. But let's take a step back. This is all for developers right now, and Android XR will be introduced in a more complete state next year. That's when, according to Google and Samsung, they'll announce and go into more detail on the mixed reality headset, which is potentially very expensive; it's expected to be something like a Vision Pro, and many people may not necessarily want to buy it, but it will be there.

The other really interesting question is where all the glasses go, and that could be the year after, when these kinds of assistive glasses start arriving. There are already other partners in the works, like Xreal; we just saw some glasses made by them. Google will have many different partners making AI and AR glasses that will work with phones.

Now the ball is in Apple's court. When is Apple introducing generative AI on Vision Pro? And when will Apple start allowing iPhone connections for vision products? Right now that's not happening, but I think it's really key to what comes next. It looks like this is going to make the idea of everyday glasses feel a lot more imminent, though we don't really know about battery life yet; that's a wild card that could be quite significant.

There's a lot to digest, and I'm still digesting it in my head. If you have questions, I'm sure you'll leave them below, and I'll follow up as we learn more about all this. But I was glad to get the demo, and I'm asking more questions than ever before about what AI will be in VR and AR. Thanks for watching.