Opinion

VR and AR Need Brain-Computer Interfaces to Achieve Their Full Potential

What does the future hold for personal technology? In my vision, our tools - currently clunky, radiation-emitting, time-consuming devices - will become intuitive and unobtrusive. There is no better example of this transformation than the communications terminal that doubles as a media consumption device - the smartphone. Having already replaced portable media players and point-and-shoot cameras, the phone is also slowly replacing books (or ebook readers, if you will) and tablets.

This isn't the end-point of progress, though. The future is hands-free - a future where users no longer need to learn rituals of input to use their devices, and where holding up a display to type in commands is obsolete. I believe there will come a point when what we actually carry around is a device without a built-in display; the phone will become just a computing and communications brick, tied either to eyewear in the style of Microsoft HoloLens or Google Glass, or, more futuristically, to contact lenses or implants that throw light (whether of the real or the virtual) directly onto your retina.

Despite the swiftly growing interest in augmented reality (AR) and virtual reality (VR), I believe companies are still looking only at the short-term picture. Current implementations succeed only in limited spheres, such as media consumption (as with the Gear VR) and heads-up display navigation (as seen on Google Glass). Some of the biggest names in tech are involved with AR/VR - Google, Sony, Microsoft, Facebook (with Oculus VR), Samsung, Valve, and HTC.

For now though, none appear to be trying to solve what is popularly referred to as the 'input problem of VR' - in other words, creating innovative and intuitive input methods for the medium. Instead, users either have to rely on gestures on an inbuilt touch panel to control the device, or resort to inputting commands via keyboards, controllers, sensor gloves, or eye-tracking. Learning these gestures and input commands could stop AR/VR products from becoming as widely prevalent as the smartphone, which requires only a bare minimum of inputs and next to no time to learn.

We can all manage to push pixels and press keys on our smartphones, even the dinosaurs among us. These are usually directly translatable actions limited to a few buttons and gestures: a left swipe brings up the right panel, pressing an icon opens the app, long-pressing brings up extended options, and so forth. Learning to properly manipulate virtual worlds, on the other hand, would require the user to master a much wider set of gestures. There are some cases - such as manipulating a 3D model to find design flaws - where hand gestures are appropriate enough, but in others, such as selecting option number two in a dropdown menu, we would just be wasting time on something very basic and simple.

One would also look quite foolish walking down the road swiping at thin air. You could say eye-tracking negates these flaws - others would not really notice you manipulating AR/VR with eye gestures, nor would you sacrifice the use of a hand. But let us be both realistic and idealistic here. Just how reasonable is it to expect users to control a complex series of inputs with their eyes? We can't even stop ourselves blinking, yet we are expected to direct a machine to do our bidding, all while not going cross-eyed and still managing to see what's actually in front of us - which brings us to the primary flaw of eye-tracking: it sacrifices concentration and vision for machine manipulation.

Voice commands could be used, but leaving aside the limiting factor of the maximum speed at which you can speak intelligibly, you would of course be audible to others around you. Using the inner voice, or subvocal speech, would be better on this front, and is in fact an interesting starting point for text input in a brain-computer interface.

Though some users will be fine with learning a new set of rituals to perform a few actions, a wide range of tasks would leave most people exasperated trying to learn the actions by rote. The willingness to learn a wide range of gestures to perform a multitude of tasks is something we cannot take for granted. It is also what separates a casual gamer from a 'real' gamer, and we know not everyone will sit down and play a proper game. If there ever was an audience receptive to learning complex input rituals, or happy with controllers as an input mechanism, it would be the gaming audience. To them, it's just another combo - a way to achieve a desired output through a long series of inputs.

This is why, of the four or five players in the game, Sony is perhaps best suited to make its VR headset a satisfying product, with a large audience already anticipating using its controllers to play games while immersed in 3D environments produced by the Morpheus headset. You could say Valve would enjoy the same benefit, given the large number of hardcore gamers on its platform; however, it will have to convince PC gamers to purchase lots of expensive new hardware.


I feel complex, unintuitive gestures and input mechanisms will be an obstacle to AR and VR systems achieving their full potential, and the only solution is a brain-computer interface. Imagine looking around a virtual world without moving your head in real life, or running around without seeming somnambulant. Of course, once developed, such technology would not be limited to these display mechanisms, and would herald a paradigm shift in the way we interact with our machines.

Unfortunately, the first generation of AR and VR wearables seems clumsy to operate, with the 'immersive' aspect its only claim to being truly revolutionary.

While the future, to me, is ideally a brain-computer interface (BCI), no tech giant appears to have invested much in the technology in the recent past - at least not as visibly as through reported acquisitions and granted patents. Even a product as secretively developed as the Microsoft HoloLens was leaked several times over the past couple of years before being announced earlier this year. That nothing has been tipped yet about major companies working on mind control suggests these firms are concentrating on other fields, like artificial intelligence, behavioural prediction, cognitive and quantum computing, and natural language interfaces.

Of course, a multidisciplinary approach will be required to push brain-computer interfaces forward. Filtering the noise of random thought may be one of the hardest signal processing problems yet - separating the wheat of intended commands from the chaff of background mental activity - and AI, behavioural prediction, and quantum computing would certainly help speed things up on this front.
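
To give a sense of even the first, most basic step of that pipeline, here is a minimal sketch of isolating a single EEG frequency band from a noisy trace. The 256Hz sampling rate and the 8-12Hz (alpha) band edges are illustrative assumptions, not values from any particular headset.

```python
# A minimal, illustrative sketch (not any vendor's real pipeline): isolate
# the 8-12Hz alpha band from a noisy raw EEG trace with a band-pass filter.
# The 256Hz sampling rate and band edges are assumptions for the example.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz

def bandpass(raw, low_hz=8.0, high_hz=12.0, order=4):
    """Zero-phase Butterworth band-pass filter over a raw EEG trace."""
    nyquist = FS / 2.0
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return filtfilt(b, a, raw)

# Synthetic one-second trace: a 10Hz rhythm buried in heavy random noise.
t = np.arange(FS) / FS
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.random.randn(FS)
alpha = bandpass(raw)
print("raw RMS: %.2f, alpha-band RMS: %.2f" % (np.std(raw), np.std(alpha)))
```

Even this toy example hints at the scale of the problem: the 'signal' of intent is far messier than a clean sine wave, and everything downstream - classification, prediction, error correction - only gets harder.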

It seems the big money in brain-computer interface research is being spent through government-funded academic research or by small startups, with large tech firms appearing to wait for breakthroughs rather than investing their own money in the technology. Notably, BCI research has relied on both invasive and non-invasive methods of reading brain signals, and arguably this split has been one of the reasons for slow progress. Invasive interfaces have in the past been tested on volunteers with intractable epilepsy and other severely disabled patients, who consent to piggybacking BCI research on the intracranial electrodes already implanted in their brains for potential surgical treatment. Modern non-invasive techniques aim to provide deep-brain resolution similar to the older invasive ones, with the added benefit of keeping the subject on the move.

Both types of research have yielded fruits that have been used to help disabled and paralysed patients, as well as those suffering from neurodegenerative diseases, relearn how to move paralysed limbs or control prostheses.

Some consumer and developer-facing headsets have already made their way to market, though with very limited abilities: Mattel's MindFlex Duel, Uncle Milton's upcoming Force Trainer II, the NeuroSky MindWave, and the Emotiv EPOC. The last is perhaps the most popular right now, providing a wide range of readings gleaned from the brain, along with head-positioning data, that researchers and developers could use as input signals in future applications.


Real-world mind control applications are just as limited as the above headsets though. The MindRDR app for Google Glass, for instance, enables users wearing both the eyewear and the NeuroSky MindWave headset to take pictures with a thought. There have been other examples of mind control applications in the recent past, with game demos such as Throw Trucks With Your Mind and MindMaze requiring users to wear EEG sensors, relax their minds, and then try to control in-game objects. Work on this front is accelerating, with the annual NeuroGaming Conference and Expo just one of the places where scientists and developers working in the field can share ideas. The hope, of course, is to see the fruits of these labours extend beyond just gamers, to the rest of us as well.
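
A hypothetical sketch of how such a trigger could work: poll a headset's computed 'attention' score and fire the camera once the score stays high for a sustained stretch. The read_attention() and take_picture() helpers below are placeholders for illustration - not part of NeuroSky's or anyone else's real SDK - and the 0-100 scale and threshold are assumptions.

```python
# Hypothetical MindRDR-style trigger loop; read_attention and take_picture
# are placeholder callables, not a real SDK. Attention is assumed to be on
# a 0-100 scale, as on consumer focus meters.
import time
from collections import deque

ATTENTION_THRESHOLD = 80  # assumed focus level needed to trigger the camera
SUSTAIN_SAMPLES = 5       # consecutive readings that must clear the threshold

def run_trigger_loop(read_attention, take_picture, poll_hz=2.0):
    """Fire take_picture() once attention stays high for SUSTAIN_SAMPLES reads."""
    window = deque(maxlen=SUSTAIN_SAMPLES)
    while True:
        window.append(read_attention())
        if len(window) == SUSTAIN_SAMPLES and min(window) >= ATTENTION_THRESHOLD:
            take_picture()
            window.clear()  # demand a fresh stretch of focus for the next shot
        time.sleep(1.0 / poll_hz)
```

Requiring the threshold to hold over several readings, rather than firing on a single spike, is the same debouncing trick that would stop a stray blink from registering as a click in an eye-tracking system.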

While major tech companies are publicly working on basic implementations of AR and VR, science fiction fans are hoping for something more. After all, a working, non-invasive, and intuitive BCI would herald a paradigm shift in personal technology. Admittedly, research on this front is hard, but that shouldn't stop tech giants from throwing a lot of money at it. It is the next big thing, right up there with driverless cars solving the traffic problem and tissue regeneration via stem cells.

Of course, there's always the possibility that some of these tech giants are already secretly working on mind control. I hope so, because Google Glass and Microsoft HoloLens are just short-term goals.