Bloom’s for eLearning

Like so many learning professionals, I reach for Bloom’s Taxonomy when writing learning objectives. However, these days I only design asynchronous eLearning. That makes using Bloom’s trickier. 

I mostly need to use assessment verbs that a computer can score consistently and accurately. Any assessment method that elicits a range of correct answers from a learner is difficult to use.*

This is true even for very short answers. Think about a fill-in-the-blank question, for example. Let's say a learner's answer matches the spirit of the correct answers listed, but because of the way it is phrased, the computer marks it as incorrect. This is very frustrating for our learners, most of whom are very competitive salespeople. They want to get every answer right, even on formative assessments.
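To make that concrete, here is a minimal Python sketch of exact-match scoring with light normalization. This is my own illustration, not how any particular authoring tool works. Even with normalization, an answer that matches the correct answer in spirit but not in wording still gets marked wrong, which is exactly the frustration described above.

```python
# Illustrative sketch of fill-in-the-blank scoring (not from any real tool).
# Light normalization catches case, punctuation, and spacing differences,
# but it cannot catch answers that are phrased differently.

def normalize(answer: str) -> str:
    """Lowercase, strip punctuation, and collapse extra whitespace."""
    cleaned = "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def score(learner_answer: str, accepted: list[str]) -> bool:
    """True if the normalized answer matches any accepted answer exactly."""
    return normalize(learner_answer) in {normalize(a) for a in accepted}

accepted = ["the coherence principle"]
print(score("THE Coherence Principle.", accepted))  # True: normalization catches this
print(score("coherence principle", accepted))       # False: right idea, different phrasing
```

This is why open-ended verbs are so hard to assess automatically: the scoring logic can only compare strings, not meaning.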

This brings us back to Bloom’s. How can the computer accurately assess if a learner can discuss a topic? Relate, tell, and sketch are similarly problematic. You might be able to use them in the classroom, or even in an instructor-led webinar, but in an asynchronous eLearning? Not so much. 

I realized I had to come up with a revised Bloom’s when all of my learning objectives started with identify, select, or choose. I sat down with my team of curriculum developers and a list of Bloom’s verbs. We brainstormed all the ways we could possibly assess in an asynchronous eLearning course.

Here is the list we came up with. Each verb is paired with a type of eLearning assessment. The number next to each verb is its level in the Revised Bloom's Taxonomy.

I hope you find it useful. If you have any more that we didn’t think of, please add them in the comments.

Happy designing!

*Obviously, I can provide a space to type in an answer, and the learner can then compare the answer with an expert’s, but with my audience it is best to use that method sparingly. 

Special thanks to Nick Calabrese, Jamie Baker, and Abby Waterman for allowing me to share our list. 

Type of Assessment / Verb

Multiple Choice (Choose one) or Multiple Select (Choose many)
1. Recall
1. Locate
1. Choose
2. Identify
3. Select
3. Solve
4. Distinguish
4. Diagnose
5. Assess
5. Evaluate
5. Rate
5. Score

Matching
1. Define
2. Sequence
2. Arrange
2. Put steps in order
2. Compare
2. Organize
2. Match
3. Apply
4. Distinguish

Fill in the blank
1. Define
1. Name
1. Complete
4. Relate

Card sort
2. Compare
2. Differentiate
2. Organize
4. Categorize
4. Classify

Conversation scenario
1. Identify
1. Choose
3. Select
3. Solve
3. Respond

Software Simulation in Storyline
2. Demonstrate
2. Navigate
2. Organize (drag and drop)

Choose correct icon/image (from labeled image or simulation in Storyline)
1. Select
1. Choose
1. Identify
1. Recognize
2. Demonstrate

The Seduction of Technology

In the field of eLearning, technology is continuously changing, and in many cases, improving. It is easier and easier to create a polished, interactive course that appeals to learners. And that’s great, right?

But there’s also a danger here. 

It’s easy to get so excited about all the new bells and whistles that we look for ways to shape the learning around those, rather than focusing on instructional methods. As we learned from the work of Ruth Clark, the engine that drives effective learning is the instructional method used – and that is true regardless of the technology or format. 

For example, concepts require a definition, examples, practice with feedback, and a test to see if the objectives were achieved. The same instructional methods work to teach a concept whether that be in a textbook, a handout in an instructor-led class, or a cutting-edge eLearning course.

Recently, my team moved from building most of our courses in Captivate to working primarily in Rise (with the occasional custom interaction built in Storyline). I hadn't done much with Rise before this, but there are many things to like about it. It's quick and easy to use. It has significantly sped up development time. It has a clean look our learners like, and it is automatically optimized for the mobile experience.

For these first few weeks, I’ve been playing with the different types of blocks and figuring out which ones are best for the different kinds of content I am teaching. It’s pretty exciting to find a new, attractive, and interactive way to present something – especially if the content is on the drier, more technical side.

But last Thursday, I had a realization. 

I was talking with some friends who are also Instructional Designers, though they work for a large non-profit. A lot of their work revolves around compliance and most of mine is technical and sales-focused. 

Still, training is training, and we are all trying to design engaging, effective learning experiences, and then figuring out how to check the real-world impact of that training (which is a topic for another day).

We were sharing useful books we've read, and I was talking about Richard Mayer's Multimedia Learning.

In particular, I referenced his coherence principle, which states that people learn more when extraneous words, pictures, and sounds are excluded. For example, you shouldn’t include pictures that don’t further the content simply because they are attractive.

That pulled me up short. Obviously, I already knew this principle. But then I got onto Rise, with its lovely embedded graphics and amazing 5 million+ image library. Had I violated this best-practice principle because I was excited to make things pretty? I had to go home and check my course.

It was a useful reminder. Yes, let’s enjoy the benefits of new learning technology. But let’s also not forget what makes a course effective. Regardless of format, I need to build my courses on a solid foundation of effective instructional design – and not get distracted by the shiny and the new. 

Notes from the Land of Non-Font People

People who have an eye for graphic design (AKA not me) have very strong feelings about fonts. They seem to have the sort of strong feelings I reserve for things like presidential elections, new books from a favorite author, and the correct things to add to chocolate-chip cookies. (Raisins are an abomination. Just saying.)

If you want to test if a friend or acquaintance is in the graphic design camp, just mention Comic Sans and see how they react. (Go on, try it! It’s fun!)

When it comes to fonts, I’m aware that they exist, and that some are easier to read than others. That’s about it. But as an Instructional Designer, visual choices such as fonts can have a big impact on the presentation and effectiveness of your module. If you, like me, are clueless about fonts, here are some helpful tips I’ve learned. 

Make sure your font choices are:

1. Consistent: Your project should have one or two fonts at most. More than that will make your module seem fragmented and unprofessional.

For consistency, use one type family. For example, you could use Minion and create emphasis through variations (e.g., italics or bold), type weights, widths, and even small caps. Some type families have more variations available than others; those with many variations are called extended type families. Examples include Bodoni, Helvetica, Lucida, and Myriad Pro.

2. Appropriate: Different fonts have different personalities. If you think about it, this is obvious. You probably wouldn’t use the same curling, scripted font that would be appropriate on a wedding invitation for an engineering manual.

But the personality differences are not always evident to us non-font people. Here’s a chart to get you started. The point is, choose a font that suits your content and audience.

Font / Personality
Georgia: formal, practical
Times New Roman: professional, traditional
Courier Plain: plain, nerdy
Arial: stable, conformist
Tahoma: young, plain
Century Gothic: happy, elegant *

3. Legible: In eLearning, the most important element of your font choice should be legibility, especially if your audience is going to be viewing your module on mobile devices. Here are some elements to consider. 

a. Serif vs sans serif: When I hired a graphic designer to help me design my website, his first question was, “Serif or sans serif?” and I think my response was a polished, suave, “What or huh, now?”

This decision is less intimidating than it first appeared to me. A serif font has little feet, and sans serif doesn’t. For example, Baskerville and Courier have serifs, and Futura and Gill Sans do not.

When working with print on a page, the conventional wisdom is to use sans serif fonts for the big words (headings and captions) and serif fonts for paragraphs. The serifs' little feet make it easier for the eye to track along multiple lines of text.

b. Superfamilies: Of course, if you are combining two fonts – one for headings and the other for body text, you want to make sure they go together. Think about their personality and how their shapes go together.

If you, like me, have a hard time seeing or even noticing the shapes and personality of different fonts, you can find lists of fonts that go well together, like Caslon and Myriad, or Palatino and Tahoma.

If you want to make your life even easier than that, you can look at superfamilies of text. These include both serif and sans serif fonts that are designed to be used together. For example: Museo and Museo Sans, Fontin and Fontin Sans. 

c. Reading on a screen: Many people agree that sans serif typefaces are easiest for reading on the screen. Or you could go with Verdana (sans serif) and Georgia (serif), which were both designed for the screen. 

The important thing is that you are choosing your fonts thoughtfully, with your end result in mind, just like you design everything else in your module. Just because you come from the land of non-font people, you are not doomed to make poor font choices that will reduce the effectiveness of your eLearning module. 

Sources:

Malamed, C. (2015). Visual design solutions: principles and creative inspiration for learning professionals. Hoboken: Wiley.

*Font personality table from: Duarte, N. (2008). Slide:ology: The art and science of presentation design. Beijing: O'Reilly Media, p. 143.

The View from the Learner’s Chair

It’s easy to get caught up in what we are trying to communicate, especially if our content is complex. We get focused on the essential information, how we are going to structure it, what’s the best way to chunk it to avoid cognitive overload, how we are planning to assess learning, and so forth. Yet this rather myopic view is focused on the teaching, and not on the learning.

That may sound like a small distinction, but it makes all the difference. Being focused on the view from the learner’s chair can transform an ordinary course into one that is engaging and effective.  

Focusing on the learner’s point of view is just another expression of what should be central to our design process – knowing our target audience and trying to get into their mental frame. 

To design an effective learning experience, we need to know (amongst other things) what our audience knows and does not know, how they will react, what they need to be able to do, and what their common errors or misconceptions are likely to be. 

Learner-centered design takes this farther, thinking about the learning interface and what is going to make for a smooth, enticing, and enjoyable learning experience. It removes barriers to learning. It makes explicit how this course will benefit the learner, and what the learner will be able to do as a result of the course.

I’ve had a personal experience as a learner these past two weeks which has brought home to me the importance of this way of thinking, as a designer and developer. Last year I bought one of the big authoring tools. I won’t name it, but you’ll know it. Let’s call it Program A. It has an excellent on-boarding process, an intuitive interface based on PowerPoint, and an involved and helpful learning community. Program A is simple to use but has lots of power and capability for customization. 

Two weeks ago I decided to expand my skills and I bought the other major authoring software. Let’s call this one Program B. Program B can do everything Program A does, and it certainly has lots of power and space to customize what you are creating. But learning how to use Program B is a completely different experience. Why? Because it wasn’t designed with the user in mind. 

I’ll give you an example. I’d created a sample project, and I wanted to publish it as a SCORM-compliant file. This is a basic functionality that most, if not all, eLearning developers need to be able to do. In Program A, you go to the big Publish button on the task bar, hit LMS, and it automatically outputs a file ready to upload to your LMS.

Not so with Program B! You have to go to Quiz > Quiz Preferences > Quiz > Reporting, and select the box to turn on reporting in order to get a SCORM-compliant file. Then you have to go to Publish. Is it able to create a file for an LMS? Yes. Is it obvious how to do so for a new user? No. Well, not for me, at least.

I can see the logic of the way Program B does it. Program B's designers are creating a very complex program, and trying to be original, not based on something like PowerPoint. Just as we are trying to take complicated content and chunk and sequence it to make sense, they are figuring out their own organizational systems, based on logic. Why would you want to load it to an LMS? Why, to track people's quiz scores. Therefore, it should be a quiz setting.

But as a user, I’m not thinking like that. I’m thinking I want to publish it, and I’d like my settings under the big publish button. 

So that made me wonder: when I am designing eLearning, am I doing it in a way that is logical to me but might not make much sense to my learner? Am I working like Program A, or like Program B? Which one is more likely to provide an effective learning experience? 

What do you do in your learning design to take your learner’s point of view into account? Please share in the comments. 

How good is your judgment, really?

Imagine a doctor, let’s call him Dr. Bob, has made a medical error that harmed a patient.

Normally, the hospital might have a meeting where they discuss the error and think about ways it could have been prevented. If you are a doctor listening to this, you'd probably be thinking, "Well, I wouldn't have made that error."

But what if, instead, we could give all the doctors a game based on the facts of the case, and each doctor had to play from the perspective of Dr. Bob? The new doctor would be given the medical history, be able to order tests, and decide on a diagnosis.

Would she, in fact, avoid that error? Or might the new doctor make the same assumptions and mistakes as Dr. Bob? And how powerful a learning experience would that be?

That’s what we’re offering here at Brain Spark Learning. Serious games, based on real situations, that allow people to explore different paths and make life or death decisions – without actually killing anyone.

Engaging. Emotional. Memorable. The way learning should be.