1.09.2007

The politics of quality

The January question on the Learning Circuits Blog asks:
What are the trade-offs between quality learning programs and rapid e-learning, and how do you decide?

The decision to make a big investment in quality learning programs depends on achieving a consensus. The delivery side may favor quality because it meets their professional standards, builds their stature in the organization and keeps them busy. This side opposes rapid approaches for the same reasons.

The receiving side may oppose quality approaches because they are too expensive, miss the window of opportunity or become obsolete too quickly. This side favors rapid development for the same reasons. These opposing interests typically result in a stalemate. Neither side wants to sacrifice its "right answer" or sees a way to gain from the other side winning.

Imagine this stalemate is an epistemic game. The delivery side includes SMEs, instructional designers, trainers, and outside vendors/contractors/consultants. The receiving side includes trainees, immediate supervisors, coworkers and senior executives who all pay the costs and experience the outcomes. There are four stances to take and three tradeoffs to make between them. Consensus is achieved by both sides making the same tradeoff. Moving to a higher, consensual tradeoff wins.

In a worst-case scenario (0,0), it's easy to reach an agreement on no instructional design at all. This is agreeing to the tradeoff between rapid eLearning and no instruction, where nothing ends up being better than something. "Let's not and say we did some training." This makes sense in situations of extremely turbulent change, no established SMEs, new expertise getting formulated weekly and rampant informal learning occurring naturally.

Rapid eLearning can be agreed to (1,1) when large numbers need the information immediately, the instruction has a short shelf life and the competencies require hands-on practice to work out the details. This occurs where SMEs have emerged, the expertise has stabilized, and informal learning runs the risk of dangerous errors or costly mistakes.

It's easy to agree to quality learning (2,2) when the instruction has less to do with expert information and more to do with taking insightful action, solving recurring problems and addressing situations resourcefully. This occurs when SMEs lose their importance and the learned expertise emerges from interactions, simulations and coaching. The role of instructional designers rises to preeminence here.

Usability can be agreed to (3,3) when quality learning programs are too formal, structured and controlled. This occurs when individuals need to think for themselves, be trusted with increased responsibility and get their upper management to function as servant leaders, mentors or listeners. Learners teach themselves what they need and experience the tools provided as a support system for their self-directed efforts.

The most common stalemate involves opposing tradeoffs. The delivery side is debating between usability and quality (3 vs 2). The receiving side is making a tradeoff between no training and rapid eLearning (0 vs 1). The two sides have nothing in common and much distance between their stances, like the first stage of any new collaboration.
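
To make this game concrete, here is a minimal sketch in Python, assuming the stance ladder behaves like a simple coordination game: each side independently picks a stance from 0 to 3, a payoff is earned only when both pick the same stance, and a higher matched stance pays more. The stance labels follow the post; the numeric payoffs are illustrative assumptions, not part of the Learning Circuits question.

    # A minimal sketch of the stance ladder as a coordination game.
    # The stances come from the post; the payoff values are assumed.
    STANCES = {
        0: "no instruction",
        1: "rapid eLearning",
        2: "quality learning",
        3: "usability",
    }

    def payoff(delivery: int, receiving: int) -> int:
        """Both sides score only when they settle on the same stance;
        a higher consensual stance is worth more."""
        return delivery if delivery == receiving else 0

    # The common stalemate: delivery weighs 3 vs 2, receiving weighs 0 vs 1.
    # There is no overlap, so every pairing pays nothing.
    for d in (3, 2):
        for r in (0, 1):
            print(f"{STANCES[d]} vs {STANCES[r]} -> {payoff(d, r)}")

    # A shared move up the ladder pays best: payoff(3, 3) == 3.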

What usually occurs is a search for the lowest common denominator. It appears too costly to reach a higher consensus, and selling quality looks like an uphill battle. The phenomenal growth of rapid eLearning partly reflects this failure to deliver more value for the receiving side's investment of time and money in quality learning programs. Game over. Return to (0,0)!

A consensus in favor of usability (3,3) is a very different story. The approach sells itself. The value is intrinsic to the receiving side and idiosyncratic to each person. The learning program solves their problems, meets their needs and serves their interests. The quality is in the eye of the beholder, rather than defined by the professionals, evaluation schemas or design models. The strategy is learner-centered and customer-driven. It avoids the problems of too much sophistication, feature creep and making enemies of the slow learners. It works with the receiving side instead of against them. Development is collaborative. The result is rapid quality, the best of both -- instead of a tradeoff between them. Winner!

2 comments:

  1. This comment has been removed by the author.

  2. tom:
    Great analysis and nifty charts. As I read your post, I kept coming back to the key concept I think of when I've worked with Johari Window analysis for interpersonal, team or organizational relations. That concept is that for optimal performance, when more than one individual or one group is involved, you have to have as much openness and trust as possible.

    For organizational learning this means we need to be open about what we can and can't do, and transparent in our work. It also means we need to be knowledgeable about our business partners' work and ways of working, and open to their needs and expectations.

    I think this jibes with what you're saying.
