I was recently asked what I thought would be the most important technology for the next decade. Now, I think technology is secondary to us; what will have the biggest impact is whether we pay proper attention to how we think, work, and learn. However, if we’re talking technology, I think semantics, that is, the meaning of content, will be the most important area.
Overall, predictions are a rather silly endeavor. For one, I’ve heard it said that predictions get undone when a trend takes an unexpected turn owing to some unforeseen event. Also, things change more slowly than you think. I recently quoted Alan Kay’s line that “the best way to predict the future is to invent it” to make the point that I’d rather work on what I want to happen.
And I’ve written about this before, but it’s worth reiterating: we need to stop thinking about ‘content’ as a monolithic entity and start thinking about it as a system of multiple parts. Then we can put those parts together with context to start doing smart things.
Granularity
It starts by stepping away from our traditional approach to content. We currently develop content as a wired-together set of pages, videos, etc. We might even combine models and examples on one screen! Yet we can and should work differently.
Yes, our tools right now do require us to develop elements separately and combine them into one file. We can use content management systems to track all the elements, but the result is delivered as one monolithic chunk.
Yet when we look at what serious content folks do, e.g., web marketing, it’s very different. They have finely granulated content, which is pulled together by rules. For instance, when you visit a shopping site, there are likely to be recommendations of what other folks have looked at, similar types of items, and more. Those aren’t preconceived; they’re calculated and served on the fly. The same goes for web ads: they’re pulled together based upon your previous actions.
The key here is carving up content and tagging it as to its characteristics. Then, rules can pull whatever content has characteristics relevant to what’s happening. The rules encapsulate a mapping from content to situation.
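As a minimal sketch of that mapping, assuming a few made-up tags and a deliberately simple matching rule (this is an illustration, not a real system), the mechanics might look like this in Python:

# Hypothetical tagged content and a rule that matches tags to the situation.
content_library = [
    {"id": "c1", "type": "example", "topic": "pricing", "audience": "sales"},
    {"id": "c2", "type": "job_aid", "topic": "pricing", "audience": "support"},
    {"id": "c3", "type": "concept", "topic": "pricing", "audience": "sales"},
]

def select_content(situation, library):
    # Keep items whose tags match every characteristic of the current situation.
    return [
        item for item in library
        if all(item.get(key) == value for key, value in situation.items())
    ]

# A salesperson working on pricing: pull the sales-facing pricing content.
print(select_content({"topic": "pricing", "audience": "sales"}, content_library))
# -> the example (c1) and the concept (c3)

Real rule engines are more sophisticated, but the principle is the same: the rules, not the author, decide what gets assembled for a given situation.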
The key element is creating a content model where you identify the content chunks you want. It includes all the different types of content about things, plus a suitably full, yet as minimal as possible, set of descriptors. We’re talking both depictions of all the (types of) content needed and careful descriptors to distinguish each piece of content uniquely.
One issue is granularity, and my heuristic is to ask: what’s the smallest thing you’d give to one individual versus another? On a principled basis, I argue that the right scope is a single unit that meets a learning or performance role. For formal learning content, that means separating things like introductions from concepts, examples, and practice. For performance support, you’re talking different job aids. And for formal learning it can also mean full articles or episodes in a series.
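To make the idea concrete, here’s one purely illustrative way to express such a content model as structured chunks; the roles and descriptor fields are assumptions for the sketch, not a standard:

# Each chunk is the smallest unit you'd serve to one person versus another,
# carrying descriptors that distinguish it uniquely.
from dataclasses import dataclass

@dataclass
class ContentChunk:
    chunk_id: str
    role: str      # learning/performance role: "introduction", "concept", "example", "practice", "job_aid"
    topic: str     # what the chunk is about
    audience: str  # who it is for
    medium: str    # "text", "video", "audio", ...

chunks = [
    ContentChunk("pricing-intro", "introduction", "pricing", "sales", "text"),
    ContentChunk("pricing-example-1", "example", "pricing", "sales", "video"),
    ContentChunk("pricing-job-aid", "job_aid", "pricing", "support", "text"),
]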
Context
We can meet a variety of needs with this content. We can start breaking up the content and streaming it out over time. We can develop a bit of redundancy and start adapting what’s seen based upon learner performance. And we can start doing ‘smart’ push, by context.
Increasingly, our contexts aren’t black boxes. With application programming interfaces (APIs) we can get information about what a person is doing in a particular application and deliver specific help. We can know where a person is, and what they should be doing.
If we create context models as well as content models, and models of our users, we can start putting these together. Knowing a sales person is visiting a site suggests different content than if it were a support technician. The richer the model, the more power is on tap.
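Here’s a rough sketch of how user, context, and content models might come together; the fields and the matching rule are invented for illustration only:

# Hypothetical user and context models feeding a simple recommendation rule.
user = {"role": "sales", "experience": "novice"}
context = {"app": "crm", "location": "customer_site", "topic": "pricing"}

catalog = [
    {"id": "pricing-intro",   "role": "introduction", "topic": "pricing", "audience": "sales"},
    {"id": "pricing-job-aid", "role": "job_aid",      "topic": "pricing", "audience": "sales"},
]

def recommend(user, context, catalog):
    # Novices get introductory material; experienced folks get the job aid.
    wanted = {"introduction", "example"} if user["experience"] == "novice" else {"job_aid"}
    return [
        item for item in catalog
        if item["topic"] == context["topic"]
        and item["audience"] == user["role"]
        and item["role"] in wanted
    ]

print(recommend(user, context, catalog))  # -> the pricing introduction

A richer model simply means more descriptors on each side, and more discriminating rules in the middle.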
When people ask for help, we obviously can try to provide just the right thing. But now we also have the opportunity to proactively offer support based upon the task or role and the learner’s experience. That’s powerful.
Meaning
A further advance with semantics is the ability to parse what’s going on. There’s lots of talk about data analytics, which is basically all about numbers. But Tom Reamy, in Deep Text, makes the case for mining information from text. There are a number of opportunities.
For one, we can auto-parse content (and this includes converting speech to text, whether from audio or video files). This means we can automatically do some ‘tagging’ of content as to what it’s about.
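As a toy illustration of auto-tagging, here’s a keyword-vocabulary approach with an entirely made-up vocabulary; real text analytics would add stemming, entity extraction, classifiers, and so on:

import re

# Hypothetical controlled vocabulary mapping topics to trigger terms.
VOCABULARY = {
    "pricing": {"price", "pricing", "discount", "quote"},
    "returns": {"return", "refund", "exchange"},
}

def auto_tag(text):
    # Tag the text with every topic whose trigger terms appear in it.
    words = set(re.findall(r"[a-z]+", text.lower()))
    return [topic for topic, terms in VOCABULARY.items() if words & terms]

print(auto_tag("How do I quote a discount for a new customer?"))  # ['pricing']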
We can similarly parse queries and go beyond just search to create mixed initiative dialogs where the user and the system work together to match content to need. It’s the old query-by-example, refined.
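A skeletal version of such a mixed-initiative exchange might look like the following; the items, facets, and prompts are invented for the sketch:

# The system proposes a distinguishing facet, the user narrows, the match improves.
items = [
    {"id": "a", "topic": "pricing", "medium": "job_aid"},
    {"id": "b", "topic": "pricing", "medium": "video"},
    {"id": "c", "topic": "returns", "medium": "job_aid"},
]

def refine(candidates, facet, value):
    # One refinement step: keep only the candidates matching the user's choice.
    return [item for item in candidates if item[facet] == value]

# System: "Several items cover pricing. Would you like a job aid or a video?"
step1 = refine(items, "topic", "pricing")   # user asked about pricing
step2 = refine(step1, "medium", "job_aid")  # user chose "job aid"
print(step2)  # [{'id': 'a', 'topic': 'pricing', 'medium': 'job_aid'}]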
And we can start mining user actions and content generation to build the models, or at least flesh them out. Just as we can have a mixed-initiative dialog to find content, we can use mixed initiatives to build our models and populate them.
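For instance, and again purely illustratively, usage logs could suggest which audience a chunk actually serves, fleshing out descriptors we never authored by hand:

from collections import Counter, defaultdict

# Hypothetical access log: which roles actually open which chunk.
access_log = [
    {"chunk": "pricing-job-aid", "role": "support"},
    {"chunk": "pricing-job-aid", "role": "support"},
    {"chunk": "pricing-job-aid", "role": "sales"},
]

def inferred_audience(log):
    counts = defaultdict(Counter)
    for record in log:
        counts[record["chunk"]][record["role"]] += 1
    # The most frequent role becomes a candidate 'audience' descriptor.
    return {chunk: roles.most_common(1)[0][0] for chunk, roles in counts.items()}

print(inferred_audience(access_log))  # {'pricing-job-aid': 'support'}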
We’re looking to meet needs ranging from ‘just in time’ help to accomplish one thing, through slow development over time toward specific goals, to on-demand self-development resources. We can match the type of content (job aid, article) to the need (just in time, self-development) and to the individual (level of skill, role, or task). We’re moving toward a hybrid system that develops a performance ecosystem of resources organized around the learner, not the business silo.
It’s too easy to think ‘tactics’, e.g., ‘develop this content for this need’. It takes more work to conceive a ‘content suite’ for a variety of needs. But this is a system, a platform approach, instead of a one-off. It’s more upfront work, but the payoff is in flexibility.
There are entailments. It takes a systems approach to content. You need a content strategy covering what needs it must meet and how it will deliver. You need content engineering to develop the model and the approaches to populating and delivering it. And you need content management to govern the lifecycle. But this is the future.
Ultimately, we’re taking an intelligence augmentation approach. We’re matching the increasing power of technology with human meaning and needs. We’re aligning technology with how we really think, work, and learn. And that’s the right direction. Further, that’s the change I’m eager to precipitate because I think we’re on the edge of being able to make a significant shift in how computers can help us.