Using feedback to create high-quality learning content

As the saying goes, few things are certain except “death and taxes.” Uncertainty makes many things hard to predict. This is especially true for predictions relying heavily on a combination of people’s knowledge, skills, and intentions. Predicting what content will be successful for learning and skill acquisition falls squarely in this category. When things are difficult to predict, feedback is essential.

One of the three core principles of Emergn’s approach to improving the way people and companies work is to discover quality with fast feedback.
 
This article sets out to explain how we apply the principle to deliberately seek feedback from learners and subject matter experts to produce high-quality content for great learning experiences.

Valid feedback or opinionated views

Feedback is technically “information that is returned to a machine, system, or process.” The information coming back can be used in two different ways. The first is to close the gap between expected and actual outcomes, that is, to “do things right” in pursuit of a set goal. This is single-loop learning. The second is to change the goal itself once we learn how attractive it is; this is double-loop learning. You can learn more about double-loop learning from Professor Chris Argyris’ famous thermostat explanation.
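Argyris’ thermostat analogy can be sketched in code: a single-loop controller only adjusts its behaviour to hit a fixed setpoint, while a double-loop controller also questions whether the setpoint itself is right. A minimal illustrative sketch, with class names and logic of our own invention rather than anything from Argyris:

```python
class Thermostat:
    """Toy model of single-loop vs double-loop learning."""

    def __init__(self, setpoint):
        self.setpoint = setpoint  # the goal

    def single_loop(self, room_temp):
        # Single-loop learning: correct behaviour against a fixed goal
        # ("do things right").
        if room_temp < self.setpoint:
            return "heat on"
        return "heat off"

    def double_loop(self, room_temp, occupants_too_cold):
        # Double-loop learning: first question and revise the goal itself
        # ("is this the right goal?"), then act on it as before.
        if occupants_too_cold:
            self.setpoint += 1
        return self.single_loop(room_temp)


t = Thermostat(setpoint=20)
print(t.single_loop(18))                            # heat on
print(t.double_loop(20, occupants_too_cold=True))   # goal raised to 21, so: heat on
```

The single loop never changes `setpoint`; only the double loop does, which is the essence of the distinction.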

Collecting feedback can be as quick and easy as simply asking: “What do you think?” Ensuring that the feedback is valid, on the other hand, is much more complex. It is a field that engages many psychologists and economists in researching our cognitive biases. We use everyday expressions such as “Turkeys voting for Christmas” and anecdotes like “The Mom Test” to point out the challenge of obtaining valid feedback.

Many people fail to provide valid feedback, disguising their opinions as feedback. Feedback is information provided to someone about an experience with the aim of improvement, whereas opinions are views or judgments about a particular subject, issue, or situation. For us, this challenge is best summarized in this comic from XKCD.

XKCD’s TornadoGuard comic

Feedback from educational experiences sometimes looks as arbitrary as the comic above suggests, as learners come with different expectations and objectives.

Experimentation with test groups and early adopters is the best way to close the double learning loop, helping us answer the question: “Is this the right thing for meeting the learning outcomes?”

While effective in getting feedback quickly, experimentation is a technique for information gathering, commonly using ad-hoc processes and temporary artifacts. To produce high-quality learning content at scale, experimentation results need to be sustained via a working setup with feedback loops deliberately built in throughout the production process.

Challenges with instructional content

Different people focus on and appreciate different properties. Failing to account for these differences in feedback can easily lead to false conclusions about the quality of the learning content. Prior knowledge is also a major determining factor in the value of content; it even has its own bias in “the curse of knowledge.”

The success of corporate learning lies in its ability to transfer skills and knowledge to practitioners effectively. For that to work at scale, the content needs to be easy to find and engaging enough to encourage more learning.

Any cursory internet search will return a plethora of best practices for instructional design and learning content creation. Great examples include 5 from LearnDash, 9 from WorkRamp, 12 from ELM Learning, and 15 from the Instructional Design Company.

Our best practices are very similar: we aim to create attractive learning content with:

  • Syllabi based on micro-learning objectives and modules
  • Personalized learning journeys based on a skills evaluation
  • Storytelling that keeps integrity with concepts and taxonomy
  • Consistent use of images, models, and key messages

All while increasing the throughput and managing the costs of production.

The main challenges faced by our learning design studio were with the consistency and integrity of the content. We apply feedback from our users by continuously adapting the learning content so that the experience is personalized and engaging, and the content is memorable and applicable.

In video content production, we find late, frequent changes particularly costly. Between our forum moderators, instructors, and consultants, there were frequent disagreements about the most appropriate content. But what we all fully agree on is that the best practice is getting feedback. Or, as ELM Learning succinctly calls it, the practice of Test, Test, Test.

Constructing feedback loops for learning content

This section describes how our learning designers found a way to balance the feedback of experts and users while keeping up the pace of development and managing costs. To ensure fast and valid feedback, we based our process on our experience with feedback loops in software development. We leveraged a formula for the value of documentation from the Agile pioneer Scott Ambler and applied the important distinction UX experts make between pragmatic and hedonic usability.

Leveraging feedback loops from software development

In product management, quality is building the right thing and building it right. We need feedback to discover the qualities of products as they are developed and operated. The best feedback is the kind that provides double-loop learning: it not only gives insight into how development is progressing but also lets us learn how the product is used.

Many things can be learned from the management of software development – how to structure processes for fast feedback is one.

Feedback from multiple sources for multiple reasons

For feedback to be valuable, it needs to be valid. But it also needs to be obtained in a timely and efficient manner. If it comes too late, adaptations become costly or near impossible. If there is too little of it, it might not be representative. If obtaining feedback takes too much effort or money, it might lead to not listening to feedback at all, which increases risk.

For a long time, the development of software has been a very complex and expensive undertaking. The cost of correcting defects discovered late in the process has led to the elaborate design of feedback loops to ensure predictability and cost control.

The main feedback loop is closed when information about the software comes back from the user, i.e., when an idea or concept has turned into cash. That’s when double-loop learning happens: we ensure the software is built right and that the right thing was built.

An effective setup of feedback in modern software development includes:

  • End-user involvement in early testing cycles
  • Nested loops, so that a defect slipping through one loop can still be caught by a later one
  • Continuous (or at least daily) integration of the software engineers’ code
  • Progress visualized by demonstrating the work to stakeholders on a regular cadence, at least every two weeks
  • Frequent releases of new versions to minimize planning and management overhead
Figure: Illustration of common feedback loops in software development
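The value of nesting several imperfect loops can be illustrated with a small back-of-the-envelope model: if each loop catches only a fraction of the defects that reach it, the share escaping all loops shrinks multiplicatively. The catch rates below are invented purely for illustration:

```python
def escape_rate(catch_rates):
    """Fraction of defects that slip through every nested feedback loop."""
    escaped = 1.0
    for rate in catch_rates:
        escaped *= (1.0 - rate)  # only the survivors of this loop continue
    return escaped

# Hypothetical catch rates: code review, CI tests, fortnightly demo, release checks.
loops = [0.5, 0.6, 0.4, 0.3]
print(f"{escape_rate(loops):.1%} of defects reach users")  # 8.4% of defects reach users
```

No single loop here is better than 60% effective, yet together they stop more than 90% of defects, which is why a defect that slips one loop can still be caught by a later one.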

Feedback from users

Ultimately, it’s the feedback from users that matters: for a product or service to be successful, it needs to be used. Customers and users who can choose between products and services must also find yours attractive compared to the alternatives.

Getting early feedback from users during the development phase, e.g., by using demo versions, has proven extremely valuable in reducing the risk of building the ‘wrong’ thing.

Similarly, for learning experiences, there are two broad categories of users determining the success of learning: learners and instructors. Taking their feedback into account during content creation is equally valuable.

Feedback from experts

As evident from the TornadoGuard comic above, user feedback on functionality is important but rarely sufficient. Without feedback from experts in security and maintenance, you risk developing a solution that is insecure and costly, or even impossible, to maintain.

For learning content, the experts providing feedback would be subject matter experts in the topic and instructional designers.

Cruft in documentation

Software development used to be a very document-driven process. A lot of time and effort was wasted writing documentation and manuals which were never used.

One source of inspiration for our solution comes from Scott Ambler and his strategy for documentation. In his work on Agile Modeling from 2002, he identified communication, not the contents, as the fundamental challenge of documentation. Ambler captured the value of documentation in a formula known by the acronym CRUFT.

Figure: Scott Ambler’s CRUFT formula for the value of a document

He reasoned that if any one of the probabilities were very low, the document’s content would be near worthless. On the flip side, he suggested testing a document with readers for trust and comprehension before spending too much effort completing it.
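Ambler’s point that a single low factor collapses the value can be made concrete. In his formulation, as we understand it, a document’s effective value is the product of five probabilities: that it is Correct, Read, Understood, Followed, and Trusted. The factor values below are invented for illustration:

```python
def document_value(correct, read, understood, followed, trusted):
    """Ambler-style estimate: a document's value is the product of five probabilities."""
    return correct * read * understood * followed * trusted

# A well-written, trustworthy document that almost nobody reads is still near worthless.
value = document_value(0.9, 0.1, 0.8, 0.8, 0.9)
print(round(value, 3))  # 0.052
```

Because the factors multiply, the weakest one dominates: raising readership from 10% to 50% here would do far more for the document’s value than polishing its correctness further.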

Applying this concept to our learning material and content for skills acquisition, we explicitly test our content to be understandable and trusted. To be trusted, it also needs to be both valid and relevant to the user’s learning outcomes.

Pragmatic and hedonic usability

Hedonic usability refers to the emotional qualities: the feelings a product or service evokes in the user. At Emergn, we strive for our content to provide new perspectives; our ambition is for learners to experience our material as thought-leading. We also want to make our learning and brand attractive. There is a lot more science behind attraction than we can cover here; AttrakDiff presents a good model based on Marc Hassenzahl’s research on this topic.

Pragmatic usability refers to the task-oriented qualities of a product or service: how well it does the job. In addition to being understandable, valid, and relevant, we explicitly design our learning content to be coherent so that it works at scale.

Figure: Key hedonic and pragmatic qualities we aspire to for learning content

For more in-depth descriptions of the difference between hedonic and pragmatic usability, this article from MeasuringU is a good starting point.

How we apply feedback loops to our learning content

Our learning design studio develops learning content based on product management practices. We build learning experiences incrementally, managing the flow of work from an initial idea until the learning content is live and ready to meet our learners’ objectives.

Figure: High-level process flow for learning content development

Schematically, the learning designers mature and refine ideas into concepts, derive outlines from the concepts, and from the outlines produce mockups and prototypes. We use the prototypes to test with early adopters.

Testing outlines, prototypes, and final content

For our studio to close the feedback loops as fast as possible, we ask both users and subject matter experts (SMEs) for feedback on the content outlines. We expect our users to provide insights into the pragmatic relevance of the content. And we expect our SMEs to provide feedback on the thought-leading qualities of the content.

We have found that the fidelity of our prototyped learning content is sufficient to provide valid feedback on the pragmatic qualities of validity, coherence, and comprehension (understandable). We trust our SMEs to give feedback on the first two, but there’s no substitute for feedback from end-users when it comes to validating that the content is understandable.

Feedback loops from SMEs and Users

For validating the hedonic quality of attractiveness, the content needs to be of very high fidelity and production ready. There are many more details and intricacies of attractiveness that we won’t cover here. 

The table below provides an overview of the key questions we pose to our users and SMEs.

Key-questions-we-pose-to-validate-learning-contents
Key questions we pose to validate learning contents

To ensure the quality of learning content, feedback loops for both end-users and SMEs are important. Both play crucial parts, but their feedback needs to be interpreted and used differently to be valid. Our recommendation is to solicit feedback on pragmatic and hedonic properties in separate loops.
 
What feedback loops do you have in place to get fast feedback on learning content?

For more articles on the principles of Value, Flow, and Quality, and Emergn Learning services, head to our Insights page. Or download a copy of Emergn’s Survey Report ‘The Pursuit of Effective Workplace Training’ here.