"Learning by doing" is perhaps the oldest model for effective learning experiences, but we are only beginning to find out how it works. There is an expectation that optimizing learning experiences by using virtual reality technology will facilitate transfer from one training modality to another (games and simulations) and to the job.

Michael Casale, chief science officer at Strivr, is a cognitive neuroscientist. His research and his work at Strivr aim to expand our knowledge of the biological underpinnings of learning and memory, and of the processes and pathways of learning transfer. I recently spoke to Dr. Casale about the role of neuroscience and VR in immersive learning.

Bill Brandon: What have we learned from neuroscience in the last two years that is changing the way that we use virtual reality in learning experiences?

Michael Casale: The really interesting things are on the bleeding edge of the technology itself, and a lot of that comes from academic environments. We're seeing more and more interest there in using virtual reality, not for its own sake, but as a medium for better understanding behavior.

As a general statement, virtual reality is now a research tool in the behavioral sciences, and we're seeing more neuroscience research using it as well. For researchers who are trying to understand what's going on in the brain, it's the same principle: they can do that work in a more valid, more relevant way. What we're finding is that very similar types of brain networks are activated in these virtual environments.

I don't know that we're learning anything new, necessarily. But we're validating these concepts that were just starting to trickle out five or 10 years ago, and that work continues to grow. The simulations we create in virtual environments, if they're created correctly, are able to produce very similar types of brain activation to what we see in real-world environments. That's a critical point because having the biology to back that up is really important. Obviously, it's a little bit more difficult to do that work in applied industry settings.

I would love to be able to run more of these experiments with our customers. Obviously, we have some hard constraints on the kind of experimentation we can actually do, but the good news is that we have some natural experiments. Maybe not from a neuroscience perspective, but from a behavioral perspective, we can look at the outward behavior someone shows in a VR environment, compare it to what happens when they're not in the VR environment, and reach similar conclusions. Maybe we don't know it exactly from the brain science, but we know it from the behaviors we can observe. The way people react to these virtual environments is much more consistent with what is going to happen in the real world than if they're just sitting there passively in front of PowerPoint, looking at multiple-choice questions, and so on.

So that's one big thing. I'd say the other big thing is maybe not learning per se, but the beginnings of what we hope to learn. We're seeing new technologies, and I think of the VR device that came out recently, the HP Reverb G2 Omnicept Edition headset. This is a really cool device. HP geared this headset toward researchers because it can capture physiological data that reflects what's happening in the brain. We can look at physiological outputs: what's happening when you are excited, when you're engaged, when you're reacting. We can see what someone's doing with their eyes, which is one kind of output of these brain activations. The eyes do certain things, and those things are directly related to what's happening in the brain. We can make really strong inferences that when someone's eyes are doing certain things, they're anxious, or they're enjoying something, or they're learning, or they don't know something, and so on. So that's pretty powerful.

The Omnicept also has the ability to look at other types of physiological signals, again, what we would call direct correlates of what's happening in the brain. We can look at heart rate. We can also look at what's happening with your face. It turns out there's a lot of really interesting research showing that when your facial muscles are activated in certain ways, you're in certain emotional states. So now we can start to say someone's in a particular emotional state, not just by looking at their eyes but by looking at their heart rate [and] at the very small movements in their face.
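To make the idea concrete, here is a minimal sketch in Python of how signals like these might be fused into a coarse learner-state estimate. The sample fields, thresholds, and state labels are illustrative assumptions for this article, not Strivr's pipeline or the Omnicept SDK's actual API.

```python
from dataclasses import dataclass

@dataclass
class PhysioSample:
    """One time-slice of headset sensor readings (fields are illustrative)."""
    pupil_diameter_mm: float  # eye tracking: pupil dilation tracks arousal
    fixation_ms: float        # how long gaze has rested on the current target
    heart_rate_bpm: float     # heart-rate signal
    brow_furrow: float        # 0..1 estimate of facial-muscle activation

def estimate_state(s: PhysioSample, resting_hr: float = 65.0) -> str:
    """Map raw physiological readings to a coarse learner state.

    A real system would use calibrated, per-person models; these
    hand-picked thresholds only show the shape of the inference.
    """
    aroused = s.heart_rate_bpm > resting_hr + 15 or s.pupil_diameter_mm > 5.0
    if aroused and s.brow_furrow > 0.6:
        return "anxious"       # high arousal plus a negative facial cue
    if aroused and s.fixation_ms > 800:
        return "engaged"       # high arousal plus sustained attention
    if not aroused and s.fixation_ms < 200:
        return "disengaged"    # low arousal plus wandering gaze
    return "neutral"

print(estimate_state(PhysioSample(5.4, 950.0, 72.0, 0.2)))  # -> engaged
```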

Then there's this whole other world that people are starting to explore, which is looking more directly at EEG (electroencephalogram) signals. These measure electrical activity in the brain through sensors placed on the outside of the head, very simple sensors that are quite unobtrusive. The combination of those sensors allows us to start to get inside someone's head, almost literally. In some moments we can see how someone feels about something. From a learning perspective we can see: are they experiencing a moment of uncertainty? Do they feel confident that they know the material? It really allows us to better understand, from a knowledge perspective, whether someone has a good grasp of this material and feels comfortable in this situation.
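One widely cited example of turning raw EEG into a learning-relevant signal is the beta / (alpha + theta) "engagement index" from the human-factors literature. The sketch below computes it from a synthetic one-second trace; the sampling rate, band edges, and test signal are illustrative choices, not a prescription.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Total spectral power of `signal` between lo and hi Hz (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(2 * fs)))
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

def engagement_index(eeg: np.ndarray, fs: float = 256.0) -> float:
    """Beta / (alpha + theta) ratio: higher values suggest more engagement."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta)

# Synthetic one-second trace: strong 10 Hz alpha (relaxed) plus faint 20 Hz beta.
fs = 256.0
t = np.arange(0, 1, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
print(engagement_index(eeg, fs))  # small value: a relatively disengaged trace
```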

All of this is particularly important as we start to look at training that involves communication in the workplace. How do you assess and understand whether individuals feel comfortable and confident in soft skills and social skills, or whether they need more training? This is something that we're starting to explore more heavily now. That's incredibly exciting, and the really cool thing is how readily companies have embraced VR-based immersive learning for soft skills in the last two years. At Strivr, we've been working more and more with customers, such as Bank of America, Sprouts Farmers Market, and others, to incorporate more soft skills training, including scenarios in DE&I training and more empathetic customer interactions, into how they onboard and train employees at scale.

BB: What I'm hearing is that you're getting more confirmation, and maybe some contradiction, of what you thought would be the case for VR. In other words, nothing is being upset; it's just confirmation.

MC: I wouldn't call it a contradiction, but I wouldn't call it confirmation either. It's really nice in the research world when all of the people in a scientific study have the same kind of data output, the same reaction—but that almost never happens. And what's interesting is that you start to understand the diversity among individuals when it comes to learning behavior.

The data we're collecting from the behaviors we see in VR is something you can't easily obtain through any other medium. We can start to better distinguish individuals who really know the material and feel comfortable, from those who may not feel as comfortable, from those who don't know anything yet. It's really about understanding these slight nuances to create more adaptive or customized training, so that you're gearing the right training to the right people at the right time. And that can only happen if you know what's going on with someone's performance in the way the data now lets us, which we really couldn't before. That's getting us a lot closer to the ground truth of how someone's going to perform in the workplace following training.

BB: Do the new devices give you a way to provide feedback to the VR system? From the physiological reactions the new devices can detect, is there a way to close that loop in adaptive training? We normally do that with questions and responses: if you know that 90% of the people respond correctly, then things are going the way you hoped; if only 10% respond that way, then maybe you have some work to do. Is there a feedback loop yet?

MC: Yes, you're thinking about this correctly. It's the same principle. Once you capture the behavior, the response, and so on, you can take that information in, hypothetically in real time, and use it to decide what comes next. For example, if someone's struggling with a piece of material and not doing so well, what comes next is something a little bit easier. If they're doing really well, it's too easy, and they're almost to the point of boredom and disengagement, we know what to do next: make it a little more difficult, put them in a new situation.
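A minimal sketch of that closed loop might look like the staircase rule below. The score window, thresholds, and 1-to-10 difficulty scale are illustrative assumptions, not Strivr's adaptation logic.

```python
def next_difficulty(current: int, recent_scores: list[float],
                    low: float = 0.6, high: float = 0.9) -> int:
    """One step of a simple adaptive-difficulty ("staircase") rule.

    recent_scores holds the learner's last few scenario scores in 0..1.
    Below `low` we ease off; above `high` (boredom territory) we push
    harder; in between is productive struggle, so we hold steady.
    """
    avg = sum(recent_scores) / len(recent_scores)
    if avg < low:
        return max(1, current - 1)   # struggling: step down
    if avg > high:
        return min(10, current + 1)  # coasting: step up, add novelty
    return current

level = 5
for scores in ([0.4, 0.5], [0.95, 1.0], [0.7, 0.8]):
    level = next_difficulty(level, scores)
    print(level)  # prints 4, then 5, then 5
```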

The idea is that the data we make those decisions on, I would offer, is much better: much more valid, much more representative of what that person is going to do in the real world. So what we adapt for that person is going to be much more useful. The decisions are only as good as the data you have, and if we have better, more insightful data telling us what's closer to the ground truth, we'll have a much more effective system for creating adaptive training modules.

To date I haven't seen this implemented at scale, except for some incredibly interesting work in the field of mindfulness. It's not a stretch to think it could easily be implemented in the VR training space as well. We haven't done that quite yet, but it's a solvable problem. This is something we could see a year or so from now, and at Strivr we're always looking for innovative features that learning and development professionals could roll out in the near future.