eLearning: Become a Pedagogical Agent

by Jennie Ruby

If you've taken any of our Adobe Captivate, Adobe Presenter, or Articulate Storyline classes, you are probably aware that these programs provide a selection of screen characters: cut-out pictures of professional actors in business, medical, or business-casual clothing, posed as if they are talking to you. They are intended for use as a kind of avatar of the trainer.

There is research showing that using a screen character as a pedagogical agent, or learning coach, who speaks informally and appears to be giving the lesson, increases learning. (My reference for this is Ruth Colvin Clark and Richard E. Mayer's e-Learning and the Science of Instruction.)

Over the past few weeks, I've had multiple students ask how hard it would be to use themselves as the learning coach. Believe it or not, becoming a pedagogical agent is easier than you think.

 
Put Your Picture into the Lesson. Place a professional headshot of yourself, your trainer, or your expert on the introductory slide (including job title, credentials, etc.), and then have that individual record the audio narration for the project.
 
Create your own screen characters. Photograph your expert against a green screen background to build a full set of screen characters in various poses. The IconLogic Blog has a whole series of articles on how to do this.
 

Create cartoons of yourself or your in-house experts. You can use the images over and over in ongoing training videos. Here is one article to get you started: Using Bitstrips Characters.

If you don't have specific, known individuals in your company to act as your learning coaches, you are not stuck with the same four or five actors that come with your software. You can purchase additional screen characters from The eLearning Brothers. Or you can just make good use of some inexpensive clip art. By trimming out the background in ordinary office photographs, you can get some nice effects.
 
Whether you use generic actors or your own home-grown experts, screen characters are an excellent way to add the personalization, engagement, and local feel that will bring your eLearning to the next level.
 
Once you have your screen characters, how do you know what to make them say? Join me for an afternoon mini course on writing voiceovers to find out.

Adobe RoboHelp: Create Merged Help

by Willam van Weelden

Merged help combines the output from multiple RoboHelp projects into a single help system. Although the content is created in multiple projects, your users see a single, integrated help system.

Over the next couple of weeks, I will teach you how to create merged help for several output formats. Since each of RoboHelp's layouts handles merging differently, I will go over each layout in turn.

Why Merge Help?
Generally speaking, if any of the following items are true in your environment, merging may be for you:
  • You have a very large project (thousands of topics). Splitting the project into smaller projects may make maintenance easier.
  • Multiple writers work on separate parts of the documentation and you don't have source control. Without source control, only a single author can work in a project at the same time. Having multiple smaller projects makes collaboration without source control easier.
  • You need to update parts of the help separately from other parts. If you have a single project, you create an output for the entire project. You can publish only changed files, but you can't update only a single part. With merged help, you can.
  • You have modules that are reused in different products. With merged help, you maintain a single version of each module and reuse it.
Which Outputs Can I Merge?
You can merge the following outputs:
  • Adobe AIR
  • Microsoft HTML Help (CHM)
  • FlashHelp
  • Multiscreen HTML5
  • Responsive HTML5
  • WebHelp
Master Project and Child Projects
When you merge help, you always have one master project and any number of child projects. The master project is the glue that holds everything together. When you generate your output, the master project makes sure that your help system is shown as an integrated whole.

Your master project is a regular RoboHelp project. You can use any features you want in the master project.

Merged Microsoft HTML Help

Generate a CHM file for every child project. (Using the Single Source Layouts pod, generate Microsoft HTML Help.) Then open the project that is to be the master project.

Open the layout's table of contents and click New Merged Project.

Adobe RoboHelp: New Merged Project tool. 

On the HTML Help tab, click the browse button (the yellow folder) and open the CHM file of the child project you published.

Adobe RoboHelp: Merged Project dialog box. 

Click Yes when prompted.

Adobe RoboHelp: Click Yes to the alert dialog box. 

Click the OK button to merge the CHM file.

Adobe RoboHelp: Child project ready to add to a master project. 

The child project will appear in the master project's TOC.

Adobe RoboHelp: Child added to the TOC.

Save your project and generate the layout. 

Adobe RoboHelp: Merged projects  
All that's left to do is deliver both CHM files, in the same folder, as your help system. Whenever the child project changes, generate the CHM from the child project, replace the CHM in the master project's directory, and generate your master project. You can also replace the child project's CHM directly in the output.
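
In other words, the deliverable is nothing more than the two files sitting side by side in one folder, along these lines (the file names below are placeholders for whatever your layouts generate):

  Help\
     Master.chm
     Child.chm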

 

***

Looking to learn RoboHelp? We offer a live, two-day online RoboHelp class once a month. Feel free to contact us to learn other ways to meet your RoboHelp training requirements.

Adobe Captivate: Six Ways to Use Voiceover Scripts

by Jennie Ruby
 
I often point out in my classes on writing eLearning voiceover scripts that a script is necessary so that when you record the audio you don't skip anything, don't stumble, and don't say "um." However, using a voiceover script for eLearning is way more useful than just that.

Let's say, for example, that your eLearning project will be developed in Adobe Captivate. Captivate allows you to type (or copy and paste) the script into Slide Notes, similar to the slide notes you might be familiar with in PowerPoint. From there, you can use the notes in several different ways.

Adobe Captivate: Slide Notes 

First, just as in PowerPoint, you can create handouts that print the Slide Notes along with an image of each slide, like this:

PowerPoint notes. 

Second, if you are going to record the voiceover yourself, you can display the notes in the recording dialog box, like a miniature teleprompter, for your ease in recording the audio. At the bottom of the recording window, click the Captions & Slide Notes button to display the notes.

Captions & Slide Notes 
The slide notes. 

Third, if you are hiring voiceover talent to record the audio, you can provide the script to that professional, slide by slide, so that he or she can record the audio for each slide separately.

Voiceover scripts 

Then, fourth, once you have either recorded the audio yourself or imported the recordings from your voiceover talent, you may need closed captioning. Because you have already pasted the voiceover script, phrase by phrase, into the Slide Notes pane, you can create the closed captions just by clicking a check box.

Adobe Captivate: Closed captions.

And if you have accurately divided the script into phrases as shown above, it will automatically be synchronized with the audio. Below, you can see the yellow markers indicating the closed caption that goes with each audio segment.

Closed captions synchronized 

Fifth, suppose instead of hiring voiceover talent and instead of recording the audio yourself, you decide to go with Text to Speech. Since Captivate comes with several high-quality computerized voices from NeoSpeech, this is a viable option. Just as with the closed captioning, creating the Text to Speech from the Slide Notes is very easy. In the Slide Notes pane, you click the TTS check box.

Text to Speech

Then you open the Speech Management dialog box, where the Slide Notes are automatically imported, click the Generate Audio button at the bottom, and you've got your voiceover audio.

Speech Management dialog box 

And as before, to get closed captions with that, you just click the Audio CC check box.

Sixth and finally, if you are creating accessible eLearning that is Section 508 compliant, you can automatically import the Slide Notes into the Slide Accessibility dialog box. This dialog box contains the text to be read by screen readers for those accessing the training through audio only.

Slide Accessibility 

So, let me count them up: yep, that would be six (6) ways to use a voiceover script to help in the development of eLearning with Adobe Captivate. By starting with a good voiceover script, you not only create clear and well-planned audio, but you also save tons of work by using the script to automatically generate any or all of these aspects of your eLearning project.
Are your scripts up to the task? Join me for my afternoon mini course on how to write a good voiceover script.
***
Need more help with your script? Look for our hourly consulting service. We'll help you evaluate, substantively edit, or rewrite your voiceover script to make sure it is up to par.

Adobe Captivate: Slow Down Your Speech Agent

by Kevin Siegel

Using Captivate's Text to Speech feature allows you to quickly convert written text to voiceover audio. It's an awesome feature. However, we recently had a client who felt that Paul (the Speech Agent we used for the project) spoke too fast. The client wanted to know if we could slow him down a bit.

While you might think that the cadence used by the Speech Agent is beyond your control, it's actually really easy to change. Prior to converting a slide note to speech, just add a bit of code (known as Voice Text Markup Language, or VTML) to the text.

 
For example, if you want a Speech Agent to say "I am an awesome person," all you would normally have to do is write the text in the Slide Notes pane, click the TTS check box, and then click Text to Speech.
 
 
 
 
In the Speech Management dialog box, select a Speech Agent and then click Generate Audio.
 
 
 
If you feel that the resulting voiceover audio is too fast or too slow, you can change the speed. In the Slide Notes pane, add the following code in front of the text: <vtml_speed value="50">. At the end of the text, type </vtml_speed>.
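
For example, assuming the same sample sentence as before, the complete slide note would read as follows (just a sketch of the markup; everything between the opening and closing tags is spoken at the adjusted speed):

  <vtml_speed value="50">I am an awesome person</vtml_speed>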
 
 
Click the Text to Speech button and regenerate the audio (the existing audio will be replaced with the new audio file). You'll find that the agent's speed has been cut in half (thanks to the 50 you added as part of the VTML code). You can experiment with the speed values until you find a speed that works best for you and/or your client.
 
If you'd like to learn more about VTML or see more tags, review the user's guide for the VTML Tag Set.
 

***

If you'd like to learn more about eLearning, come hang out in my next eLearning basics mini course. And if you'd like to learn more Captivate, Presenter, or Storyline, we've got a great collection of live, online classes for you.

Adobe Captivate: The Cure for Blurry Zoom Destinations

by Kevin Siegel
 
Zoom Areas are typically used to emphasize an important area of a slide background. They are especially useful when you want to automatically give your learner a closer look at a specific area of the screen.

To insert a Zoom Area, click Objects on the Main Toolbar and choose Zoom Area.

 

Zoom Areas consist of two parts: the area of a background that you want to highlight (Zoom Source) and where the zoomed area of the background will appear (Zoom Destination).

 

In the image below, I have positioned and resized the Zoom Source over the area of the slide background that I want to get larger.

 

Then I positioned and resized the Zoom Destination on the slide. Remember, the Zoom Source won't move or resize when the lesson is viewed by the learner… that's the job of the Zoom Destination.

Right away you can see that there is a problem with the image in the Zoom Destination. Because a Zoom Area simply enlarges the Zoom Source, and I've resized the Zoom Destination quite a bit, the image in the Zoom Destination is blurry.

To fix the problem, you'll need a larger version of the image shown within the Zoom Source. In this case, I have the original photo of the handsome male model shown on the screen (in addition to being much larger, it has also been cropped similarly to the image in the Zoom Source).

To swap out the blurry image in the Zoom Destination with the better image, double-click the Zoom Destination to open the Properties Inspector. On the Properties Inspector, click Add new image.

 

Click the Import button and open the larger version of the photo.

Compare the Zoom Destination below with the version above. The quality of the Zoom Destination image is much better.

If you would like to see a free demonstration of this concept, check out the IconLogic YouTube channel.

***

If you'd like to learn more about eLearning, come hang out in my next eLearning basics mini course. And if you'd like to learn more Captivate, Presenter, or Storyline, we've got a great collection of live, online classes for you.

Adobe RoboHelp: List Images

by Willam van Weelden
 
When you create a list, there are several list styles you can use in RoboHelp: bullets, squares, lowercase letters, and numbers. But RoboHelp offers one more great option: you can use images instead of bullets.

Create a List Style with Custom Images

  1. On the Project Manager pod, double-click your style sheet to open the Styles dialog box.
  2. Right-click List and choose New.
  3. Enter a name for the list style and press [enter].
  4. Click the Create a bulleted list button.
  5. Select the list images option.
  6. Click the browse button to open the Image dialog box.
  7. Select the image you want to use as a bullet and click OK.
  8. Click the OK button to save your changes (see the CSS sketch after these steps).
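
If you like to peek under the hood, the list style you just created ends up as a rule in your project's style sheet. In standard CSS terms it boils down to a list-style-image rule along these lines (the style name and image path below are placeholders, and the exact markup RoboHelp writes may differ):

  /* Bulleted list that uses a custom image instead of a standard bullet */
  ul.ImageBullets {
     list-style-image: url('images/bullet.png');
  }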

Apply an Image List Style

  1. Open or create a topic, and then create a regular bulleted list.
  2. Select the list.
  3. Right-click and choose Bullets and Numbering.
  4. Go to the Custom tab and select your list style in the left section.
  5. Click the OK button to apply your list style.

***

Looking to learn RoboHelp? We offer a live, two-day online RoboHelp class once a month. Feel free to contact us to learn other ways to meet your RoboHelp training requirements.

eLearning: More Reflection

by Kevin Siegel

Last week I wrote about how you can use Reflector to create software simulations and demonstrations from your mobile device.

I received emails from several people who, having read the article, downloaded Reflector and attempted to create a simulation using Adobe Captivate or Articulate Storyline. While most folks were successful in reflecting the mobile device onto the computer, several people reported that none of the actions they took on the mobile device were captured by the eLearning tool. 

As I mentioned in last week's article, the reflection of the mobile device you see on the computer is passive; you cannot control the reflection with the computer. Instead, you can simply see what's happening on the mobile device through your computer.

If you're creating a software simulation, programs like Captivate and Storyline capture the screen only when you click your mouse (or when you manually create a screen capture by pressing the appropriate keyboard shortcut). Since you're not clicking anything on your computer (remember, the computer is simply showing you a reflection of the mobile device), neither Captivate nor Storyline will capture anything. Unless…

On your computer, start the recording process using Captivate or Storyline. Just prior to performing an action on your mobile device, click the reflection on your computer to create a screen capture. Next, on the mobile device, perform the action. Back on the computer, click the reflection again to create a second screen shot that shows the result of whatever you did on the mobile device. Continue this process until you have captured all of the steps you wish to simulate on the mobile device.

If clicking over and over again to create a simulation sounds too tedious, keep in mind that all three of the top eLearning development tools (Camtasia Studio, Captivate, and Storyline) excel at creating videos. If you record a video of the reflected mobile device, everything you do on the device is captured, because a video isn't built from individual screen shots; it captures everything you do without discrimination.

***
If you'd like to learn more about eLearning, come hang out in my next eLearning basics mini course. And if you'd like to learn more Captivate, Presenter, or Storyline, we've got a great collection of live, online classes for you.

Adobe Captivate: System Time Variables

by Lori Smith
 
Last week I showed you Captivate's date variables. This week, let's take a look at the Time variables, the System Information variables, and a handy little variable called cpInfoMobileOS.

 
Specifically, I am going to address the variables listed in the table below using 2:15 PM and 32 seconds as my example. 

Adobe Captivate: Time Variables

Perhaps you do not like military (24-hour) time and instead want to use 12-hour time. A little Advanced Action can easily convert military time to 12-hour time. I have created two user variables to help out: am_or_pm and myHour.

Adobe Captivate: Variables in use. 
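
In outline form, the logic boils down to something like this (a sketch only, using cpInfoCurrentHour, the system variable that holds the current hour in 24-hour format, and skipping the noon and midnight edge cases):

  If cpInfoCurrentHour is greater than 12
     Expression: myHour = cpInfoCurrentHour - 12
     Assign: am_or_pm with PM
  Else
     Assign: myHour with cpInfoCurrentHour
     Assign: am_or_pm with AM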
 
When added to a text caption as shown below, the variables will display 2 PM.

Adobe Captivate: Variables added to caption. 
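
For reference, variables inserted into a text caption appear wrapped in double dollar signs, so the caption above contains something along these lines (Captivate substitutes the current values at runtime, displaying 2 PM in this example):

  $$myHour$$ $$am_or_pm$$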
 
Now let's talk about the cpInfoEpochMS variable. It can be used to determine the play time for a lesson (or part of a lesson). By subtracting the value of the variable at the beginning of the lesson from its value at the end of the lesson, you can calculate the lesson's play time down to the millisecond. In the images below, I have created a couple of Advanced Actions that make use of cpInfoEpochMS and a few user variables that I created: startTime, endTime, and timeElapsed.

First, you need to capture the lesson's start time using this Advanced Action:

Adobe Captivate: Lesson start time Advanced Action
 
At the desired point in your project, capture the end time and calculate timeElapsed with this Advanced Action:

Adobe Captivate: Time elapsed Advanced Action.
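
In outline form, the two Advanced Actions amount to something like this (a sketch based on the screenshots above; divide timeElapsed by 1,000 if you prefer seconds to milliseconds):

  On entering the first slide:
     Assign: startTime with cpInfoEpochMS

  At the measuring point:
     Assign: endTime with cpInfoEpochMS
     Expression: timeElapsed = endTime - startTime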
 
Last but not least, let's cover the cpInfoMobileOS variable. Its only job is to indicate whether the learner is using a desktop computer or a mobile device (iPhone, iPad, etc.). If you have certain elements or slides in your project that you want to behave differently depending upon the learner's device, you can use this variable in a conditional Advanced Action to create the desired behavior.
 

***

Looking to learn Adobe Captivate? We offer several Captivate classes. Feel free to contact us to learn other ways to meet your training requirements.

Adobe RoboHelp: Embed a YouTube Video

by Willam van Weelden
 
Last week, Kevin taught you how to embed YouTube videos in your eLearning projects. This week I'm going to keep with the YouTube theme and show you how you can embed YouTube content into your RoboHelp project.

  1. Go to YouTube and locate the video you'd like to use.
  2. Click the Share button below the video.
    Adobe RoboHelp: Share button.
  3. Go to the Embed tab and copy the embed code.
    Adobe RoboHelp: Embed Code.
  4. In RoboHelp, open a topic and switch to HTML mode.
  5. Paste the embed code in the topic.
    Adobe RoboHelp: Pasted Code.
  6. In the src attribute, type http: in front of the URL (see the sketch of the finished code after these steps).
    Adobe RoboHelp: HTTP typed in the topic.
  7. Save your topic and generate your output.
    Adobe RoboHelp: Video embedded.
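
For reference, after step 6 the embed code in your topic will look something like this (the video ID, width, and height are placeholders supplied by YouTube's Share dialog; the http: prefix is the part you typed):

  <iframe width="560" height="315" src="http://www.youtube.com/embed/VIDEO_ID" frameborder="0" allowfullscreen></iframe>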

See also: Embedding Captivate HTML5 output in a RoboHelp project.

***

Looking to learn RoboHelp? We offer a live, two-day online RoboHelp class once a month. Feel free to contact us to learn other ways to meet your RoboHelp training requirements.

eLearning: The Origins of a Training Methodology

by Kevin Siegel
 
I've been creating eLearning since the early 1990s. Back then, eLearning (or E-Learning, pick your style) was known as CBT (Computer Based Training).

My first attempt at creating software video training was with a program called CameraMan. That program was ahead of its time, allowing you to capture your mouse actions, add captions and audio, and then publish into a video format that could be viewed on most computers. It was awesome software for the times, but it crashed a lot and had very few options. It was pretty much a record-pray-publish kind of tool.

When TechSmith released Camtasia I gave CameraMan the heave-ho and began producing slicker content in half the time. Then RoboDemo came out (RoboDemo later became Adobe Captivate) and I was quick to add that to my toolbox.

Today you have plenty of options when it comes to developing eLearning, including Camtasia, Captivate, Storyline, and two Presenters (one from Adobe, the other from Articulate).

While developing content for my Getting Started with eLearning mini course, I became curious about the origins of eLearning. As I mentioned above, I began developing eLearning 20 years ago. Surely that makes me one of the more senior eLearning developers around. It turns out I am just a young pup when it comes to eLearning. In fact, at a recent conference I met a person who said she started developing eLearning 30 years ago. 30 years ago? Wait, wouldn't that be the 1980s? Sure, computers were around in the 80s… I was an early Mac adopter, and I remember PCs with early versions of Windows (heck, I used DOS and floppy disks when they were still floppy). Those early computers struggled to do just about anything beyond word processing. How could anyone have developed eLearning on those early systems?

At the same conference, I met another person who said he was creating eLearning in the late 1970s. And that got me thinking… just how far back does eLearning go? And who was the first person or company to provide eLearning?

It turns out that eLearning really got going in 1953 when the University of Houston offered televised college credit classes. A few years later, the first adaptive teaching system (named SAKI) went into commercial production. Basically, with this system, the course got more challenging as learners improved.

When I was creating eLearning in the 1990s, the eLearning content I published consisted of video files that were huge. The files wouldn't fit on a floppy disk and computer hard drives weren't very big. Thankfully, DVDs were available (expensive, but available). The content I published was burned to a DVD, and then I hired a DVD replicating service to mass produce my content.

With the ever-growing popularity and power of the Internet and cloud computing, the need for DVDs went the way of the dinosaur. While many people think the Internet got started in the late 1990s, it came along far earlier. In 1969 the U.S. Department of Defense commissioned the Advanced Research Projects Agency Network (ARPANET), which became the Internet as we know it today.

In the 1970s, a company started delivering live training over corporate networks in what they dubbed "virtual classrooms." And in the 1980s, the first CBTs were rolled out. In my discussion with the conference attendees I mentioned earlier, they revealed that those first CBTs were little more than teaching machines. And while they were limited in scope, they were nevertheless CBTs.

What's your earliest memory of eLearning? What tools did you use back then? And when did you first notice eLearning replacing the term CBT? Feel free to post your experience below as a comment.

***
If you'd like to learn more about eLearning, come hang out in my next eLearning basics mini course. And if you'd like to learn more about the history of eLearning, the infographic below is a great place to start.

eLearning: History Infographic 
Source: Roberta Gogos, Social Media & Content Marketing Consultant.