Computer CPUs and GPUs are getting faster. Game engines are growing more powerful. Virtual reality (VR) keeps improving. Technology has progressed to the point where serious game designers and eLearning professionals can make incredibly realistic games. But should we? Do the quality of the graphics and the behavioral effects matter when it comes to serious game design? Or is the quality of the learning experience enough?
As Marshall McLuhan, author of Understanding Media and The Medium Is the Message, noted, “Anyone who tries to make a distinction between education and entertainment doesn’t know the first thing about either.”
Why simpler games are better
The main argument for designing basic serious games (as opposed to elaborate and visually complex serious games) is simple yet strong. While serious games need to be entertaining, entertainment isn't the primary objective. Learning is. Thus, we should keep the graphics, gameplay, and game design simple and instead focus our limited resources (i.e., money, time, effort) on our learning objectives, such as developing job-specific, decision-making skills. If we do have bigger budgets, instead of building one gorgeous game, we can develop five simple games, available in multiple formats, platforms, and operating systems.
A case study discussed in this article explores the balance between providing too much realism and not enough in military medical simulations. The conclusion: “Don’t deliver [a visual or behavioral effect] because you can … deliver it because it’s needed.” That focus allowed the game designers to concentrate on the minute-by-minute decisions required of emergency medical professionals in battlefield environments, which resulted in a more valuable training experience.
Why fancier games are better
If it doesn’t take much extra effort to make a game look more realistic, why not do it? As long as we avoid the uncanny valley (i.e., the unsettling effect of avatars that look almost, but not quite, realistic), why not go to town and make our games as elaborate and visually complex as we want? In other words, if we’re making a game anyway, let’s make it beautiful.
The argument for doing so is that presence (feeling like you’re really there) enhances the player’s experience. Visual and behavioral effects often help create presence. Presence helps players lose themselves completely in a game, engage fully on an intellectual level, and invest emotionally in the characters, actions, and educational content. All of this will ostensibly accelerate learning and increase retention; however, little formal research has been done to prove or disprove this theory.
In this earlier Learning Solutions article, I discussed seven factors for creating visual presence in VR: locomotion, tracking, latency, persistence, resolution, field of view (FOV), and comfortable, cordless headsets and controllers. Most serious games are not made in VR, but the first six factors also apply to regular video games (i.e., those viewed on flat, 2-D screens instead of in VR headsets). None of them requires realistic avatars, elaborate scenery, or stunning graphics. However, without some degree of presence, any video game is more likely to fail.
In that same article, I also referenced the four illusions of VR, as conceived by Jason Jerald in The VR Book: Human-Centered Design for Virtual Reality. They include the illusion of being in a stable spatial place; the illusion of self-embodiment; the illusion of physical interaction; and the illusion of social communication. The four illusions suggest that more attention should be paid to detail and realism, but that assumes the game is in VR and is played primarily for entertainment value. Experiencing these illusions is partly why people play VR games. The four illusions often aren’t as critical to the experience or the educational objectives of a 2-D serious game; however, they still matter. Serious video games must provide at least some degree of stable space, self-embodiment, physical interaction, or social communication in order to be fun, engaging, and effective.
If your organization has legacy training games, i.e., serious games it created many years ago, you may want to consider updating them to keep pace with modern games. Young employees who regularly play fast-paced, visually stimulating, highly social Xbox and PlayStation games at home don’t want to play slow-loading, text-based console games at work. Those employees are likely to become bored, learn less than they could, and perceive the organization as behind the times. This is the video game equivalent of training a department of literature graduate students with children’s books. It might do the job, but there are likely much better solutions.
For example, the article about military medical simulations cited above also explores how modern commercial first-person shooter (FPS) games are more realistic, more engaging, and potentially more instructional than the FPS games the U.S. military adopted and developed many years ago. Some of these legacy games are still used to train soldiers in combat, communication, and leadership, even though many recruits are already adept at playing far more advanced commercial FPS games.
Focus on learning objectives
Ultimately, we must carefully consider how much realism to include in our serious game designs. While we shouldn’t make our games too basic, it’s better to focus on our intended learning objectives than on the quality of the graphics and the complexity of the gameplay. If we get the game mechanics right and the game succeeds at its intended purpose, we can always reskin it and make it more visually appealing later.
From the editor
Want more on best practices for developing serious games? Be sure to attend Andrew Hughes’ session at Learning Solutions Conference and Expo 2019, “Best Practices for Developing, Implementing, and Supporting Serious Games.”