Good Vibes with VIVE
Mind unlocked: building a world without limitations through neurotech with Neurable
Join us for a conversation about brain-computer interfaces (BCI) with Dr. Ramses Alcaide of Neurable. He tells us about his mission to create a world without limitations by bringing neurotech to everyone. Dr. Alcaide shares how he developed an interest in using neurotech to allow everybody to participate equally, the differences between invasive and non-invasive methods of BCI, and the importance of understanding a user’s intent. We also touch on the fears around BCI, its current shortcomings, and where the technology is headed. We hope you join us to hear all this and more today.
- An introduction to Dr. Ramses Alcaide, his work at Neurable, and why it is personal to him.
- Dr. Alcaide’s studies in control systems and electrical engineering for prosthetics at the University of Washington.
- The key gap between the brain-mind connection and devices that inspired his PhD at the University of Michigan.
- Why he created his company: to bring BCI and neurotech to everybody, remove limitations and allow everyone to participate equally.
- Two key buckets: invasive and non-invasive Brain Computer Interface systems.
- How brain data is gathered non-invasively: through electroencephalography (EEG), near-infrared optical methods, and more.
- Muscle activations as an additional way to gather information from the brain through EMG.
- Eye trackers, accelerometers, and other tools that can be built if you understand user intent.
- Enten headphones and the soft fabric electrodes embedded into them.
- Dr. Alcaide addresses concerns that BCI is outlandish.
- How human advancement happens: through communicating better, or understanding yourself better.
- Key aspects of BCI: passive controls and active controls.
- The goal to create a seamless and invisible interface that’s able to be predictive and help you before you need to reach out for help.
- Neurable’s aim to integrate neurotech into everyday devices.
- Five areas that Brain Computer Interfaces have had shortcomings in: function, cost, societal fit, comfort, and user experience.
- Roadblocks to user experience: calibration, positioning, and response rate.
- Dr. Alcaide’s thoughts on the future of BCI in education and entertainment.
- How BCI could be integrated into the future of game design.
- What businesses should know about leveraging BCI: identifying employees’ best times to work and creating customized learning for individual learners.
“My uncle got into a trucking accident and lost both his legs. I saw him work and struggle through the unnaturalness of his prosthetic systems and that inspired me to devote my time to creating technology that would help those that are differently abled.” — @RamsesAlcaide [0:02:03]
“The short answer is, the people are right, it is crazy, it is outlandish, it is out there, but that’s why we should do it. You need to be able to push those frontiers.” — @RamsesAlcaide [0:12:30]
“What we’re doing here at Neurable is helping everybody right now. How do we find really key, critical, non-invasive solutions that can really help an individual either communicate now in their lives or understand themselves better in their lives.” — @RamsesAlcaide [0:13:05]
Links Mentioned in Today’s Episode:
Neurable
Enten Headphones
Dr. Ramses Alcaide on LinkedIn
Dr. Ramses Alcaide on Twitter
Pearly Chen on Twitter
VIVE
[INTRODUCTION]
Pearly Chen: Welcome to Good Vibes with VIVE. I’m your host, Pearly Chen. I’m an executive with global technology company HTC. As a mother of three young girls, I’ve loved building and investing in profound immersive technologies that make a positive difference in people’s lives. Each week, I speak with founders at the forefront of VR, AR, and the metaverse. All of them inspire me, and some I’ve been lucky enough to back as an investor. Tune in every week to hear some of the most inspiring closed-door conversations, and walk away informed, inspired, and full of good vibes.
Today, we’re going to talk about brain computer interface, BCI, with Dr. Ramses Alcaide from Neurable, who is on a mission to create a world without limitations by bringing neurotech to everyone. What exactly is BCI anyway? Our brain, as the command center of our nervous system, has three very important functions: collecting sensory inputs, processing and integrating that information, and then deciding what to do and issuing an output. Those are the nervous system’s main functions. A brain computer interface essentially provides an alternative, augmentative control and communication system that does not depend on the brain’s normal output pathway of peripheral nerves and muscles. Imagine what that can do for people with severe motor disabilities like paralysis. Imagine having a much more intimate and immersive relationship between us humans and the machines we interact with. Imagine augmented humans.
[INTERVIEW]
[00:01:53] PC: Now, before we dive deeper into this extremely fascinating rabbit hole, I would like to introduce Ramses to join us for this conversation. Ramses started and runs Neurable, this really exciting startup, which is also a Vive X company that I’m personally very excited about and have been very proud to have had the opportunity to back since 2017. Welcome, Ramses. How and what exactly inspired you to dedicate basically your entire life to pursuing neuroscience and building brain computer interface systems?
[00:02:27] Ramses Alcaide: Well, it’s a pleasure to be here, Pearly. It’s always great to connect with you and share. Neurable to me is a very personal journey. It really started when I was eight years old, when my uncle got into a trucking accident and lost both his legs. As I was growing up, I saw him work and struggle through a lot of the unnaturalness of his prosthetic systems, and that really inspired me to devote my time to creating technology to help those that were differently abled. I went to the University of Washington and studied control systems and electrical engineering for prosthetics. I realized that the brain-mind connection with devices, for example prosthetic limbs, was where there was a key gap. I did my PhD at the University of Michigan. That’s where I was exposed to an even greater diversity of individuals, people who had ALS, children with severe cerebral palsy, and I began to understand the bigger picture.
At the University of Michigan, we developed a signal processing pipeline that enabled us to understand brain signals 100 times more cleanly than methods that existed at the time. I decided, “Hey! This is a perfect opportunity to try to create a company where we can bring brain computer interfaces and the benefits of neurotech to everybody, to create an everyday brain computer interface, to really create that world without limitations that enables anybody to participate equally.”
[00:03:47] PC: Thank you so much for sharing that very intimate personal story that inspired you to pursue the science and develop real-world, everyday technologies that people can benefit from. Let’s help our audience get a little smarter when it comes to brain computer interfaces. We’ve heard about these different techniques and brain signals that can be collected to make brain computer interface systems, like EEG and EMG and others. The two buckets that relate to brain computer interface systems are the noninvasive part, which is collecting these electrical signals through the skin, through the muscular system, processing those signals, and then issuing a command to control the digital devices around you. A lot of that is for medical applications, which is where you started as well. Then there’s also the invasive side of things, which involves neurosurgery to implant an electrode to get a better-fidelity signal directly at the neuron level. That’s the rather superficial understanding that I have about brain computer interface technologies. I would love for you to give us a little 101 lecture on all these different signals, how they differ, and what we really need to know about them.
[00:05:00] RA: Yeah, no. I mean, you did a really great job of summarizing it. There are two key buckets: invasive and noninvasive. Invasive is similar to what you think of when you think of a company like Neuralink, where it requires surgery. You typically don’t get these types of systems implanted unless something goes wrong. For example, you have severe epilepsy or you’re a quadriplegic. From that perspective, it’s very high risk, but at the same time, you get really high-quality electrical signals from the brain.
Then from the noninvasive perspective, there are a lot of ways of gathering brain data as well. There is electroencephalography, or EEG, which is the most popular kind. There’s also infrared or near-infrared, which uses optics. There are a lot of ways of gathering data directly from the brain, but noninvasive does not require any type of surgery. Now, the downside is that the signals you pick up are far less clear and less precise, so you can really only understand the highest-level pieces of information the brain is providing. Then within noninvasive, there are also groups that pick up data directly from the brain, like EEG, versus others that pick it up from the muscle activations of, say, you thinking about moving your arm. That would be something like electromyography, or EMG. They pick up the brain signal from the muscle activations, and from there, you can turn it into some sort of command. There are companies like CTRL-labs that do that, or [inaudible 00:06:28] that do that, et cetera.
[00:06:30] PC: Basically, that wristband, right, from companies like CTRL-labs that you mentioned, that you wear on your wrist, can detect these micro muscular movements, which the technology then translates into intentions to control your digital devices. That’s the EMG.
[00:06:47] RA: Exactly. Really, from that perspective, since it’s a secondary method of collecting brain data, there are a lot of other modalities at work too, such as eye trackers, since your eye directly connects to the brain, so you get a lot of really great information from that, or accelerometers, or other types of systems. As long as you can understand a user’s intent, you can build a brain computer interface on top of it.
[00:07:10] PC: Right. Even cochlear implants, right, and many other devices and systems that collect brain signals from different parts of our bodies, really, as long as there’s that input, the signal processing, which is the heavy-lifting part, usually powered by machine learning, and then the output to communicate intention and activate the actual action, on digital devices in many cases. That’s really, really fascinating. On the noninvasive side, which is generally understood as a lot more approachable, since it doesn’t require neurosurgery, the EEG applications are usually understood as those that help us meditate better, or understand our emotional state, or how focused or stressed we are, et cetera, outside of the medical applications, right? But even in the EEG realm, I understand that the technology has evolved over a few decades, from the wet, gel-based kind of multichannel electrode systems used to collect higher-fidelity signals, to today, where I see that Neurable, your team, is building a system that collects these signals from just a pair of earbuds. Please walk us through this technology evolution, on the software and hardware systems side of things, and where this is all going. So exciting.
[00:08:32] RA: Yeah, absolutely. There are two main areas where there has been a ton of development over the last 10 years. The first one is in the sensors, the hardware itself. Sensors have gone from being something that required gel to, one of the biggest innovations, picking the signal up dry, without any type of gel whatsoever. The biggest issue with that is that you get a significant loss in signal, so a lot of the innovations have been in amplification and types of materials, in order to overcome the signal loss you get when you don’t put gel between the electrode and the person. Then there have been a lot of innovations on the software side, and this is where Neurable’s secret sauce is at. We’re a software company that develops AIs. It’s based off of some of my research work at the University of Michigan, where we were able to take brain-based signals from EEG and amplify the brain data so that we’re able to pick it up, even though we have fewer sensors.
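To make the signal-processing discussion concrete, here is a minimal, hedged sketch of the kind of EEG preprocessing such a pipeline might start from: band-pass filtering a raw channel and computing per-band spectral power, the classic features behind focus and relaxation estimates. This is not Neurable’s actual method; the sampling rate, filter settings, and band edges are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256  # assumed sampling rate (Hz); real hardware varies

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase band-pass filter for one EEG channel."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, x)

def band_power(x, low, high, fs=FS):
    """Mean spectral power in [low, high] Hz via Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].mean())

# Stand-in for 10 seconds of one dry-electrode channel (random noise here).
raw = np.random.randn(FS * 10)
clean = bandpass(raw, 1.0, 40.0)  # drop slow drift and high-frequency noise

features = {
    "theta": band_power(clean, 4, 8),   # bands commonly cited in the literature
    "alpha": band_power(clean, 8, 12),
    "beta": band_power(clean, 12, 30),
}
print(features)
```

Features like these would then feed a classifier; the “secret sauce” Dr. Alcaide describes lies in doing far better than this baseline with very few dry sensors.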
Traditionally, you would sometimes require up to 64 to 128 sensors on a person’s head. Then in 2017, we were able to bring that down to just six dry sensors. That was the first big thing we did with HTC. We were able to do telekinetic control with a VR headset, grab objects, throw them, do a bunch of really cool stuff just using brain data. Fast forward to today: we’ve collected enough data and improved our machine learning pipeline, and now we’ve introduced these headphones called Enten, from the word entender, which in Spanish means to understand. They have soft fabric electrodes woven into the ear cups. What’s really impressive is the fact that, up until this day, most consumer brain computer interfaces just didn’t work very well. But using our signal processing pipeline, we’ve been very methodical, and with Enten we’ve created a product that I truly believe consumers are going to put on and love to use.
Actually, we have a challenge, which is: come to our office in Boston if you want to check them out and use them yourself. We want you to post videos. We want to show demos. We’ve always been a company that loves demos, because we always want to make sure we build a product that works.
[00:10:36] PC: Let’s make sure we get a demo into this episode as well, so our audience who cannot fly to Boston today can see it for themselves. But I love that explanation: hardware has evolved for sure, but the software part is where improvements can really grow exponentially. The signal processing capability of the machine learning algorithms can grow and learn at an exponential rate, and will continue to do so, finally bringing some of these technologies that used to belong in labs and in strictly medical settings to more everyday use cases. That’s truly exciting. First, it was really inspired by this medical use. I think there was also a military component, but we’ll get to that later.
But the medical uses are basically allowing people with very severe motor disabilities, who cannot control even their prosthetic limbs, to be able to control devices and communicate in a way that is otherwise not possible at all. Those kinds of use cases of brain computer interfaces are just so inspiring; they really showcase human ingenuity and what advancing technology can do to improve lives and bring us forward. And to think that this has been in development for maybe two or three decades. Now, there’s this consumer technology angle of bringing this technology into everyday people’s hands, letting them understand how their brain is doing and apply those insights to something. To me, that is just fascinating.
Let’s for a moment talk about the invasive side, since we’re still talking about the technology part of things. Clearly, Neuralink is one very famous company pursuing the ambition of neuron-level signal collection by implanting electrodes in the human brain. They’ve been making advances in material science and robotic surgery to be able to insert such fine micro threads and electrodes into the human brain, and of course on the software and technology side of things as well. But to most people who are watching this, I think this all sounds incredibly outlandish, sci-fi, so far away. Who would do that? But clearly, it’s being built by someone who has incredibly strong optimism and a vision for using this not just for medical applications. Right?
I’d love to hear your thoughts on that, and what we need to know about the development of the invasive side of BCI.
[00:13:00] RA: Yeah, for sure. I mean, the short answer is, the people are right, it is crazy, it is outlandish, it is out there. But that’s why we should do it. You need to be able to push those frontiers. It’s going to take a very long time, far more than Musk suggests it’s going to take, to get to the point where it becomes an everyday device. It’s going to take so long that I’m not even going to make a prediction about it. But we need to push those barriers. We need to continue building those types of technologies. We’re going to find individuals who are in the most severe cases, where hopefully we’re going to be able to apply invasive techniques to really increase their quality of life.
Then what we’re doing here at Neurable is asking, what about helping everybody right now? How do we find really key, critical, noninvasive solutions that can really help an individual either communicate now in their lives or understand themselves better in their lives? Both ends of that candle need to be burned, because that’s how human advancement happens.
[00:14:01] PC: I totally agree with that. Just like going to Mars, right? Probably none of us need to go to Mars today or right now. But there are going to be people crazy and ambitious enough to have those ideas, to be able to push those boundaries and advance humanity to a different level. I totally agree with burning both ends, advancing that envelope, as well as making sure there’s a practical application of this technology to help people today. That’s what we’re going to focus on now: EEG. What can we do with EEG today?
[00:14:30] RA: One thing I do want to touch on is that both advance humanity. For example, think of Uber, think of the iPhone, right? If somebody was trying to make brain-based computers and nobody was trying to make the iPhone, we would be in a completely different society than we are now, so we need to tackle both.
[00:14:47] PC: No doubt, and it could be the same technology, with near-term applications as well as that more drastic level of pushing and thinking about a completely different realm. That’s what we’re talking about here today: invasive methods and noninvasive methods. Let’s focus now, for the rest of the program, on what people can feel, touch, experience, and reap the benefits from today. What can people do with everyday EEG devices, and what kind of information can they collect to understand themselves better and change their everyday life now?
[00:15:20] RA: I feel like everything is going into buckets of two.
[00:15:24] PC: Great. Easy to understand.
[00:15:26] RA: With a brain computer interface, the key thing is that there’s some aspect of controllability. We like to bucket that controllability into two separate areas. One is passive controls, and one is active controls. For example, with the headphones that we’re building right now, the key value proposition is that we’re able to understand when you are focused and when you’ve been impacted by a distractor, or a distraction that’s happening. So imagine, in today’s world, you get Slack notifications, you get iMessages, you get FaceTime calls. All these messages and notifications create distractors that decrease your productivity.
With our current headphones, we can identify attention and distractors, and then, passively, we can turn on noise cancellation for you, which we know increases your ability to focus, and we can turn on Do Not Disturb on your computer, which reduces those notifications at the pivotal time when you’re starting to get focused. Then on top of that, we can create small chimes to help you snap back into focus faster. That’s what I would call the bucket of passive control systems. Then you have active ones, where you’re volitionally or consciously making actions. In this case, the Enten headphones can be used, as I’ll show you later in the demo, to control slides, right? I use them right now in my VC meetings; it’s a good way to get people excited about it.
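The passive-control behavior just described maps naturally onto a simple control loop. The sketch below is purely illustrative, under assumed interfaces: estimate_focus, set_noise_cancellation, set_do_not_disturb, and play_chime are hypothetical placeholders, not a real Enten or operating-system API, and the thresholds are invented.

```python
import time

# Assumed thresholds; the gap between them (hysteresis) keeps the
# system from flapping on a noisy focus score.
FOCUS_ON = 0.7
FOCUS_OFF = 0.4

def passive_control_loop(headset, os_ctl):
    """Toggle noise cancellation and Do Not Disturb from a focus estimate."""
    focused = False
    while True:
        score = headset.estimate_focus()  # hypothetical: 0.0..1.0 from EEG features
        if not focused and score >= FOCUS_ON:
            focused = True
            headset.set_noise_cancellation(True)  # protect the emerging focus block
            os_ctl.set_do_not_disturb(True)       # suppress notifications
        elif focused and score <= FOCUS_OFF:
            focused = False
            os_ctl.set_do_not_disturb(False)
            headset.play_chime()  # gentle nudge to snap back into focus
        time.sleep(1.0)  # re-evaluate roughly once per second
```

The two thresholds capture the “pivotal time” idea: the system acts only once focus is clearly established, and releases only once it has clearly lapsed.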
Then in the future, there are even more things we can do with it. You can imagine a future where some of the technology, at least that we’re building inside the lab right now, which I can’t fully disclose, enables you to send messages using the headphones that you’re wearing, right? So you can imagine a future where a lot of the actions that you do with your phone, you can start to replace with your hearable or your wearable device. Then you can imagine, in the future, wearing a pair of eyeglasses that have sensors on the rims, and you’re able to do those types of actions. That’s where I would see EEG being really valuable, especially as you go into the future of control systems, of mobility, et cetera. Because, you know, although there are going to be use cases where you want extra controllers on your wrist or your hands, for an everyday device, you just want to put on one device and have that be your mouse and your keyboard, along with your visual screen.
[00:17:38] PC: Totally, and that’s how I got so excited about Neurable in the first place, with that mind-control game that I went through. Basically, without any controllers, or hands, or any kind of other conventional input, you’re able to use your thoughts and only your thoughts to navigate through a game and control things, typing passwords, and all these kinds of very sci-fi types of applications. That was really exciting. We know that when spatial computing and AR and VR become our everyday personal computing platform, there has got to be more than a mouse, and a keyboard, and this touchscreen and swipes. There have got to be more natural, bio-based inputs and interactions, right? We see eye tracking already being commercialized. We see different hand tracking technologies being developed.
The next step really is about understanding more of our biometrics too, not just heart rate, but also how users are feeling in these different experiences, how focused they are, how stressed, or how they’re learning, as well as, why not, using our brain signals as a form of input and control for your devices. Perhaps, as you mentioned, with the compromises you need to make in collecting EEG signals, it’s not as high fidelity as what you would collect from invasive methods. But maybe, when that is used with a lot of other modalities of interfaces, that would really make our spatial computing device one that is super powerful and can truly augment our everyday experience, and we can all become augmented humans. To me, that’s not a creepy scenario, but a pretty exciting one. What do you think?
[00:19:20] RA: Yeah. I see that as being such a valuable step forward in human interaction. We went from a computer which required a mouse and an operating system, where the way you interacted with it was you moved your hand, which moved the mouse, which sent a signal to the computer, which moved the cursor, and then you clicked on the mouse; you can see all these different steps of translation. When we got to the touchscreen, you’re directly interacting with the technology, right? But now, as we start to go even deeper into different biometrics, for example the eyes, the brain, accelerometers, heart rate monitors, et cetera, you can really start to use those pieces of information to create a really seamless interface. A seamless, invisible interface that’s able to be predictive and help you before you necessarily need to reach out for help. Imagine how we could tackle mental health problems with that. Imagine how we could longitudinally understand different developmental diseases or cognitive diseases. There’s just so much potential there.
[00:20:20] PC: Yeah. That’s really fascinating. In this area of using EEG for these different applications, there have been a few pioneers, like Emotiv and NeuroSky, and consumer device companies like Muse, et cetera. And of course, there’s the OpenBCI initiative that’s tied to gaming now, for different topics. Perhaps you can help the audience understand a little bit how all these different players contribute to the evolution of this technology, and what users need to understand about their respective technologies and applications today.
[00:20:52] RA: Yeah. We’re so fortunate to be with so many incredible leaders in neurotech. Emotiv was, what I would say, the first big consumer brain computer interface company. Muse was the first one to really target a group and be successful in the consumer space, for meditation. Then Conor Russomanno’s company, OpenBCI, has been really focused on how you make BCI available to everyone at a really accessible price. Their company mission is very similar to ours when it comes to what our beliefs are, and that’s why Conor and I connect really well. We believe in accessibility, in how we create this everyday system. Theirs is more about opening it up to individuals to test with and build with. Ours is more about opening it up to the consumer to understand themselves, to integrate neurotech into everyday devices, essentially. But there are a lot of incredible pioneers in the field. Each one has their own separate direction, but we’re so early on in the tech that all boats will rise together with the tide. That’s just a really powerful place to be.
[00:21:57] PC: Absolutely agree. What do you see as some of the existing challenges for these different applications and their commercialization? With Muse, that was a hugely innovative idea of how a wearable, lightweight device can help you understand how your brain works and how you could meditate better. And of course, Emotiv’s devices are being used by a lot of researchers and labs developing all kinds of brain computer interface applications out there that are already helping people today. What do you think are some of the hurdles, if any, that are preventing this adoption from going broader, preventing people from understanding these devices better, and limiting engagement and retention outside of research and lab settings, in consumers’ hands?
[00:22:41] RA: That’s an amazing question and it’s something that we think a ton about here at Neurable. We’ve actually identified five key areas where we believe consumer brain computer interfaces have had a significant amount of shortcomings. The first one is function. Typically, when people buy these devices, they just don’t work, and that’s been one of the biggest complaints that we’ve seen from consumers. They buy it, they try it, and then it ends up on a shelf, because it’s not something that gives them a consistent and reliable signal. The second one is cost. This is a lot of what OpenBCI is focused on. Current brain computer interfaces, at least if you want really high-quality sensors, are very, very expensive. They can cost upwards of $20,000. Next is societal fit. When you’re talking about systems like brain computer interfaces, they tend to be either really giant nets with gel and sensors everywhere, or headbands.
But the problem is that, in both of those cases, it’s not something that somebody would wear every single day. The next one is comfort, and I mean comfort from two perspectives. It has to look comfortable, but it also has to feel comfortable. Those really big gel systems are not comfortable. You can’t wear them throughout the entire day. At the same time, we’ve done a ton of consumer research, and we found that even if you provide people soft electrodes, as long as they look spiky, consumers just don’t want them. So they have to also look comfortable. Then on top of that is the user experience. A brain computer interface has to be seamless, at least for a consumer. It has to be something they can just put on and use. It can’t be something that they have to spend five minutes fidgeting with, trying to figure out why it doesn’t work, et cetera. Those are the five key areas that we’ve been focusing on at Neurable, trying to solve them with this product, Enten, that we’re bringing to market.
[00:24:31] PC: Right. Thank you for breaking that down. The data training is a big part of the user experience onboarding, to train the algorithm to understand how your signals translate into your intention. Can you walk us through a little bit of that part, the onboarding and data training, and how this can truly be a seamless experience, versus “this doesn’t work and I have to train it again, and it takes too long,” et cetera? What’s your effort in this area, in making sure that people onboard easily and want to stay?
[00:24:59] RA: There are three roadblocks in that area of user experience. Roadblock number one is the calibration. You don’t want people having to spend a significant amount of time calibrating these, or, if they take them off, having to recalibrate them, et cetera. The second one is positioning. There’s hair in different areas, right? If you’re using spiky electrodes, you have to move them through the hair. Our VR system actually had these knobs; we had to move the electrodes through the hair, and that can’t take too long. Then the third one, once you actually have it working, is the response rate: is the way that you connect with it via software an enjoyable experience as well? These have been, I would say, the greatest challenges to overcome. Part of the reason we’re doing these ear cups is that they actually connect to areas behind the hairline, where your ear is, so that there’s no hair. You get a really good, clear signal, and all you have to do is push the hair away a little bit. That’s been a key area.
The other thing, too, is that we’ve now collected, if you consider our laboratory work, thousands of people’s worth of data, and if you consider just these last two years for this product, we’ve collected data from 800 people. Through that, we’re able to create a system that requires no calibration, which is a huge step forward. Then on the software side, well, that’s what our bread and butter is. We’ve always tried to create software that is really intuitive for people to use, and that’s something we’re bringing to this product as well.
[00:26:31] PC: Right. Absolutely. Talking about brain data, right, as with any machine learning algorithm, the more data there is, the better advancement you will make. And perhaps brain data is the next frontier of big tech competition. I think that’s where a lot of people shut down immediately, saying, “BCI sounds ultra-scary. How far can you go in invading personal privacy and harvesting personal data?” What are your thoughts around navigating that, and making sure that we build a future where BCI systems are duly designed in a way that personal data and privacy are protected, so that people can alleviate some of those fears and concerns in thinking about this future?
[00:27:14] RA: Yeah, absolutely. What I would say is, in this situation, the people are also right. You should be worried, right? But that’s primarily for invasive systems. When you’re talking about noninvasive systems, you can’t really pick up very fine information. Really, what you’re picking up is changes in electrical activity. That’s just one state versus another. Is this person focused right now or distracted, right? Does this person want to click on something or not click on something? It’s very, very high-level pieces of information. But I do think that as a company, and I hope other companies pioneering this area feel the same, we really understand that there’s a social responsibility to it. The rules and systems that we create now will also impact companies in the future, both noninvasive and invasive.
At least for us, we don’t make money off of people’s data. We don’t sell data. We believe that the person owns the data. And on top of that, we have to deidentify everything. I believe that by setting those standards now, when consumers look at other products, if those aren’t also the standards, they’re not going to want to buy them. That’s really how change happens, unfortunately. You can’t just write something down and expect people to follow it. That’s already been done on the neuroethics side numerous times, and companies don’t necessarily follow that pathway. It’s about what consumers are willing to vote for with their dollars. We see it as an incredible responsibility of ours to make sure that we create a standard that we believe others will have to follow in order to also be able to sell products.
[00:28:44] PC: Right, and that’s what’s so exciting about being at the frontier, creating new technologies even with all this uncertainty, looking to the future. While we have a seat at the table in defining how these new technologies are supposed to be used, there is this role, this bigger impact, for you to have in developing this technology and deciding where it might go. Personally, that’s what really resonates with me: having a seat at the table, defining some of these exponential future technologies, and making sure that social responsibility and a lot of these concepts and ideas are ingrained at the design level, so that things advance in the right direction versus creating detrimental effects. Because ultimately, all technology can be a double-edged sword, and people are always fearful of new technologies.
Look at any vivid depiction of a dystopian future of VR, or of any technology, that makes people extremely scared of what it might become. I think brain computer interfaces take that to the most extreme level of what people can imagine. I think most people’s response today would be, “No, thank you. I very much don’t need this.” But understanding all these different potential applications that can be constructive and productive in people’s everyday lives, not to mention improving lives at a very tangible level, is extremely exciting.
Now, onto the entertainment side of things. We also see Gabe Newell of Valve talking about how BCI is a very important part of entertainment and gaming’s future, and how they are experimenting with BCI systems with OpenBCI and other things. What are your thoughts around that area, using what we’ve talked about today in the future of entertainment and gaming?
[00:30:25] RA: Yeah. No, I think Gabe has a good point when he speaks about how brain computer interface is an important part. I think we’re just starting to see the tip of the iceberg of how brain computer interfaces can be applied. Really, I think some of the earlier use cases are going to be for learning and education, understanding whether a person is focused or not focused. Also, for the entertainment industry, understanding what sections of the game create what type of response, whether the person is engaged or not. People think it’s exciting that you can move stuff with your brain, but for most people, you can also just click a button, right? That’s going to become really exciting and cool and useful once VR and AR become so mobile that you want your controller and everything to be in one headset. But in the short term, it’s: how do we provide value in understanding an individual, so we can create better products for them, so we can understand them better, so that we can have empathy, right? Empathy training could be something incredibly useful for brain computer interfaces. Those are the shorter-term value propositions that exist with the technology.
[00:30:25] PC: Some of this really inspires the imagination: how brain data, or brain computer interfaces used during a user’s gaming session, could even provide real-time changes to the content in response to the user’s feelings and mental state. How real is that, and how far do you think we are from actually making something like that, real-time feedback to content, possible?
[00:31:51] RA: Yeah, I mean, at least here at Neurable HQ, I feel we’re pretty close. If we can understand whether you’re focused on something, then we’ll know when to trigger a jump scare. If we know that you’re distracted, that’s probably not the best time to present useful information to the player. If we know that the person is really engaged with the task they’re doing right now, they’re being attacked by hordes of enemies, or they’re solving a puzzle, and they’re really engaged, now is not the time to interrupt them. Now is not the time to give them the answer. Or if they’re frustrated in a specific area, maybe this is the time to introduce the solution, or to introduce an alternative way to solve it, and to make games more accessible, et cetera.
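Translated into code, the adaptation logic Dr. Alcaide sketches here is essentially a polling loop over estimated mental states. The following is an illustrative sketch only: bci.read_state() and the game hooks (scene_allows_scare, trigger_jump_scare, defer_messages, offer_hint) are hypothetical names, and the thresholds are invented for the example.

```python
def adapt_content(bci, game):
    """One tick of BCI-driven content adaptation (illustrative only)."""
    state = bci.read_state()  # hypothetical, e.g. {"focus": 0.8, "frustration": 0.2}

    if state["focus"] > 0.75 and game.scene_allows_scare():
        game.trigger_jump_scare()  # land the scare while attention peaks
    elif state["focus"] < 0.3:
        game.defer_messages()      # don't teach or prompt a distracted player

    if state["frustration"] > 0.6:
        game.offer_hint()          # introduce the solution when the player is stuck
```

In an engine, something like this would run each frame or on a slower timer; as he notes next, the hard part is the everyday form factor rather than the logic itself.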
It sounds like science fiction to do all these things. What I would say is, the hard part is doing it in a form factor that can be used every day. The types of systems I’m describing right now have been demonstrated in laboratory settings for quite a long time. It’s a matter of how we build applications on top of them, so that they integrate with our software and our entertainment. Then, for that to be useful, you also need it to work in a form factor that people will actually use.
[00:33:00] PC: That makes so much sense now. It’s easier for me to imagine what that could look like, real-time content feedback adjusting your environment and game design. No wonder Gabe said it would be a mistake for game developers to ignore the developments in this area. That’s really exciting. What about the commercial and productivity enterprise area? What do you think businesses of all sizes should know about leveraging BCI today to help their employees?
[00:33:31] RA: Yeah. I would say there are a few different directions to that. The first is helping empower your employees. This is one of the first use cases for the technology that we’re building. Imagine giving this technology to your workforce and having them understand: this is when I’m the most effective. Especially now that we’re working so asynchronously, being able to understand your own habits, how distractors impact you, that information becomes empowering for the individual, so that they can make changes in their schedule, in their day, or in how they do work. Maybe this is the moment I take meetings or calls, and then I know that this is the best time during the day for my creative work, so I’m going to put Do Not Disturb on right here and just crank things out.
Another is in education. Imagine, and this is actually some of the work that we’re rolling forward fairly soon, I can’t talk too much about it, but a potential partnership that we’re closing soon enables us to understand learning. Attention is directly correlated to retention of information. How do we understand a person’s attention through a VR simulation, or as they’re going through a classroom setting, and then be able to say, “Okay. This is the student, or in the case of an adult learner, the adult, who retained this task the best. This is the one that had the most trouble retaining it. Perhaps we should have them go through the course again, or talk to a teacher, in case they were confused about a specific area.” That way, we can really start to create customized learning for an individual. Those are two key areas where we really see a significant benefit.
[00:35:03] PC: That’s so cool. Brain computer interfaces: today, we really talked through a lot of the basics at the science level, the original inspiration for applications from mostly medical fields, and how all these recent efforts, including the one that you’re spearheading, are looking to bring this into the everyday life of everyday people. We’ve looked at different applications in entertainment and gaming, as well as productivity, empowering people to make more of their time through understanding their brain data. We also talked a little bit about the fears around what this technology can do and where it might go, and really shared some optimism, that techno-optimism, for the future. I really appreciate you spending the time with me today to talk about this topic, which I’m super fascinated by. I hope that our audience today has learned something as well. Thank you, Ramses. We look forward to getting together again soon.
[00:36:02] RA: Sounds good. Take care.
[00:36:03] PC: Thank you.
[OUTRO]
[00:36:03] PC: Thank you for listening. Please subscribe and share this podcast with a colleague or friend that you think could use some good vibes. Learn more at vive.com and follow HTC Vive on social media. See you next week.
[END]