I’ve been seeing all kinds of posts on social media lately about ChatGPT and what it means for learning experience designers. I finally decided to check it out for myself. If you’re wondering how to use ChatGPT for LXD, or even whether you should, read on! I put it to the test and share my results in this post.
(And no, I didn’t use ChatGPT to write this post.)
What the Heck Is ChatGPT?
ChatGPT is a chatbot from a company called OpenAI. It was launched at the end of November 2022. As of this writing, it’s available as a “free research preview” so the company can collect feedback and improve the tool. (In case you’re wondering, GPT stands for “generative pre-trained transformer.”)
Will it always be free? I doubt it. (I tried asking the bot this question, and it told me, “I’m an AI model and I don’t have the ability to predict the future.” It has just a hint of Siri’s sass.) OpenAI has expressed interest in monetizing the app, but we don’t yet know what that will look like. The company has opened a wait list for a paid version called ChatGPT Professional, so maybe that means they’ll keep a free version around. Time will tell.
The tool is super simple to use. Just type a question or set of instructions into a prompt field, and ChatGPT will take it from there. Because it’s a chatbot, you can improve your results by continuing the conversation and giving the AI more information.
In my tests, ChatGPT produced learning objectives, test questions (including scenario-based test questions), summaries, a poem, a gay love song (per my daughter’s request), and even an outline and ideas for a novel I’ve been kicking around.
There’s no question that this tool can do a lot. But just how good is ChatGPT for LXD? Should we be afraid that it will replace learning experience designers?
(TL;DR: It’s pretty good, but not good enough to replace us.)
Let’s look at a few examples.
ChatGPT Test Results
Because we in L&D must often create compliance training that’s based on federal regulations, I decided to feed ChatGPT some regs and see how it fared.
Test 1: Creating a Plain Language Summary
First, I copied and pasted some wording from federal regulations related to safety training requirements and asked ChatGPT to summarize the text in plain language at an 8th grade reading level. Here are the results:
You can compare this summary to the original language of the regulations here.
I think ChatGPT did a great job of summarizing the text in a format that’s easier to read. But it falls short of what I requested. When I plug this paragraph into Word to get the readability statistics, it tells me that it’s at a college reading level. (You can learn about using Word’s readability statistics in this post about how to use plain language.)
The Hemingway Editor puts this paragraph at a 12th grade reading level and flags the last two sentences as “very hard to read.”
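If you're curious how Word and Hemingway arrive at those scores, they use readability formulas such as Flesch-Kincaid grade level, which you can compute yourself. Here's a rough Python sketch; the syllable counter is a crude approximation, so expect scores to differ somewhat from Word's:

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith("le"):
        count -= 1
    return max(count, 1)

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

sample = ("Employers must train their employees in safety and health. "
          "This includes how to recognize and avoid dangerous conditions at work.")
print(round(fk_grade(sample), 1))  # about 7.8, roughly an 8th grade level
```

Shorter sentences and shorter words both push the score down, which is exactly why asking ChatGPT to break up long sentences lowered the grade level.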
One thing I learned while testing ChatGPT is that if you don’t get the results you want on your first try, revise the request. So, I asked it to revise the summary at an 8th grade reading level and to break up the long sentences so they’re easier to read. This time, it generated a more readable summary. It was still at an 11th grade level according to Word, but it gave me a useful starting point to revise further.
ChatGPT's Revised Summary
(11th Grade Level)
OSHA regulations 1926.21(b) states that employers must make sure their employees are trained in safety and health.
- Training on how to recognize and avoid dangerous conditions at work.
- Instructions on how to handle harmful substances safely.
- Information on how to protect themselves from injuries or illnesses on the job.
- Additionally, employers must provide training on how to handle flammable liquids, gases, and toxic materials safely and follow specific guidelines.
My Additional Revision
(8th Grade Level)
OSHA regulation 1926.21(b) states that employers must train their employees in safety and health.
This includes training and information on how to:
- recognize and avoid dangerous conditions at work;
- handle harmful substances safely;
- protect themselves from injuries or illnesses on the job;
- handle flammable liquids, gases, and toxic materials safely; and
- follow specific guidelines.
Test 2: Writing Learning Objectives
Continuing the conversation, I asked the bot to write two learning objectives based on the OSHA regulations. Again, it generated what I’d consider a starting point. The first attempt was clumsy and way too wordy. The second attempt, after I asked it to use only one verb per objective, was better, but not great. The first revised objective isn’t bad, but the second still needs some work.
ChatGPT's Learning Objectives
Upon completion of training on OSHA regulations 1926.21(b), employees will be able to identify and avoid unsafe conditions in their work environment and understand the regulations applicable to their specific work environment in order to control or eliminate any hazards or other exposure to illness or injury.
Upon completion of training on OSHA regulations 1926.21(b), employees will be able to handle and use poisons, caustics, flammable liquids, gases, or toxic materials safely, understand the potential hazards associated with these materials and how to protect themselves from injuries or illnesses on the job, as well as be familiar with the specific requirements contained in subparts D, F, and other applicable subparts of this part.
ChatGPT's Revised Learning Objectives
- Explain the employer’s responsibility according to OSHA regulations 1926.21(b)
- Recognize and avoid unsafe conditions and the regulations applicable to their work environment as outlined in OSHA regulations 1926.21(b)
Test 3: Writing Test Questions
Next, I asked ChatGPT to write multiple-choice test questions about the OSHA regulations. Here is one of the questions it wrote.
Test Question #1
According to OSHA regulations 1926.21(b)(5), what should employees required to handle or use flammable liquids, gases, or toxic materials be instructed on?
a) How to handle and use these materials safely
b) How to make coffee
c) How to use office equipment
d) How to make fire.
Clearly, there are some throwaway answers here, and the longest answer is predictably the correct one. But the question itself could be a useful starting point. I do wish it were better at writing distracters, because those are the hardest part of writing multiple-choice questions!
As part of a different conversation, I also asked ChatGPT to “write five scenario-based multiple-choice questions and an answer key for the objective, ‘Apply effective mentoring strategies to common workplace challenges.’”
Test Question #2
You are a mentor to a new employee who is struggling with time management. What is the most effective strategy to help this employee improve their time management skills?
a) Ignoring the problem and hoping it will go away
b) Pointing out their mistakes and criticizing them for their lack of organization
c) Providing specific and actionable feedback, offering resources, and setting clear expectations for goals and deadlines
d) Micro-managing their every move
It generated five different questions with five different scenarios, one of which is shown above. As with the previous example, the answer choices left a lot to be desired. And even though ChatGPT ignored my request for an answer key, I clearly don’t need one. (Is there any doubt which answer is correct?) The questions wouldn’t be usable as written, but the short scenarios the tool generated were pretty good. So again, it provides a useful starting point.
Test 4: Writing Scripts
I asked ChatGPT to write several scripts, and it came through every time, with everything from eLearning scenarios to training videos and even a TED Talk. Were they perfect? No. But they were surprisingly good.
As one example, I asked, “Write a conversation between a nonbinary person and a coworker who misgenders them. The nonbinary person corrects him, but the coworker argues.”
ChatGPT generated the following conversation. It didn’t give the characters names, but it would have if I had specifically requested that. (More on that later.)
ChatGPT Script Example
Nonbinary person: Hey, can you grab that report off my desk for me?
Coworker: Sure thing, ma’am.
Nonbinary person: Actually, I go by they/them pronouns.
Coworker: Oh, I’m sorry. I didn’t know.
Nonbinary person: It’s alright, it happens. But it’s important to use the correct pronouns for everyone.
Coworker: I understand, but it’s just hard for me to remember. Why can’t you just pick a gender?
Nonbinary person: I am nonbinary, which means I don’t identify as either male or female. Gender identity is not a choice, it’s a part of who we are.
Coworker: But it’s confusing, I don’t know how to refer to you.
Nonbinary person: You can refer to me as they/them, like I mentioned before. It’s a simple and respectful way to address me.
Coworker: I’ll try, but I can’t make any promises.
Nonbinary person: I understand that it may take some time to get used to, but it’s important to make an effort to respect everyone’s identities. Using the correct pronouns is a small but crucial way to do that.
Coworker: I see your point, I’ll try my best to use the correct pronouns from now on.
Testing ChatGPT's Creativity
At this point, I was feeling amazed at everything this tool could do, so I decided to specifically test its creativity. This was a concern that Dr. Indi Williams brought up in a conversation with Dr. Luke Hobson. That conversation happened about a month ago, so it’s possible the AI has improved since then.
The Great American Novel
I’ve been toying with an idea for a novel for a couple years now, so I decided to see if ChatGPT could help me out. I gave it the basic premise and asked it to write an outline for the novel. It generated a six-part outline with an ending I hadn’t considered. Then, I asked for ideas about a certain plot point that has been troubling me, and it generated six different scenarios my characters could face.
Here’s the opening line it generated (which I won’t be using). I didn’t tell it my characters’ names, so it chose “Karen.” (Not her name.)
“The door creaked open with a reluctant groan, revealing a dark and musty basement. Karen hesitated, her heart pounding in her chest as she wondered what secrets lay hidden within.”
A Gay Love Song
I asked my daughter for an idea of what to ask ChatGPT to write, and their request was for a gay love song. Here is a sample of the lyrics it came up with. The last line of the chorus made us giggle.
In a world that’s not always kind
We’ve had to hide our hearts and minds
But when I’m with you, all is right
In your arms, I feel like I’m home tonight
Love is love, no matter who you are
I’ll shout it from the rooftops, near and far
I’m yours and you’re mine, forevermore
In a love that’s pure, true, and gay
An LXD Poem
The last test I’ll leave you with is the one in which I asked ChatGPT to write a “funny poem about learning experience design.” I’m not sure it hit the mark on “funny,” but let me know what you think.
Once upon a time, in a land far away,
A team of designers wanted to make learning play.
They sat in a room, with their laptops aglow,
Brainstorming ways to make knowledge flow.
They thought and they thought, till their brains were a-frazzle,
But nothing seemed right, just a bunch of drivel.
Then one designer had an idea so grand,
“Let’s make it a game, with a treasure in hand!”
So they built and they built, till their fingers were sore,
A game so fun, you couldn’t ask for more.
With puzzles and quests, and rewards galore,
Learning was now something you couldn’t ignore.
But as the game launched, and the users played on,
The designers soon realized, they’d made a mistake upon.
For the treasure at the end, was just a small gold coin,
And the users, they laughed, and said “That was no fun!”
The moral of this story, as you can see,
Is to make sure the reward, is something that’s key.
For without a good prize, your game will be lame,
And your users will leave, just like a quick flame.
Uses and Limitations of ChatGPT for LXD
As I’ve mentioned, ChatGPT could be very useful as a starting point in all kinds of writing. As one more example, I asked ChatGPT to write a job aid to teach people how to use the Styles pane in Microsoft Word. It wrote out 10 steps that explained how to open the Styles pane, apply styles, create new styles, modify existing styles, and delete styles.
ChatGPT can also generate code, help with troubleshooting, perform calculations, and answer questions on a variety of topics.
As you’ve seen in the tests, ChatGPT is powerful, but it does have limitations. Here are some I noticed during my tests.
- The information is not always accurate. In some cases, the inaccurate information could be harmful. It should always be reviewed and verified by a human.
- It’s not currently connected to the internet. That is, it doesn’t do an internet search to find information, and it can’t find information from links. If you need information that’s more recent than 2021, it won’t be able to give it to you. That may change after it’s no longer in beta.
- Your questions/requests are limited to 2,048 characters, so you can’t paste in whole articles or chapters for it to summarize or write test questions from. The answers it generates are also generally limited to 2,048 characters.
- It’s terrible at writing distracters for multiple-choice questions and not great at learning objectives either.
- Like other AI, it seems to be biased in favor of North American whiteness. When I asked it to write scenarios, every time, it gave the characters names like Karen, John, Emily, and Michael. Only when I specifically asked for ethnically diverse names did I get names like Ahmed and Ravi.
- As Dr. Ray Pastore pointed out in his review, it doesn’t always generate unique ideas. You’ll notice if you give it similar prompts, the paragraphs it writes might be very similar. It’s a safe assumption that other people asking the same kinds of questions would get the same kinds of results.
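One practical workaround for the character limit noted above is to split long source material into prompt-sized pieces and summarize each piece in a separate request. Here's a minimal Python sketch; the 2,048-character figure is just what I observed during testing and may change:

```python
import re

MAX_PROMPT_CHARS = 2048  # observed limit at the time of writing; may change

def chunk_text(text, limit=MAX_PROMPT_CHARS):
    """Greedily pack whole sentences into chunks no longer than `limit`."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        candidate = (current + " " + sentence).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single sentence longer than the limit gets truncated.
            current = sentence[:limit]
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be pasted into the prompt field with the same instruction (for example, "Summarize this at an 8th grade reading level"), and the per-chunk summaries stitched together by hand.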
So, what does this mean for learning experience designers? Should we be concerned that we’ll all be replaced by chatbots?
Personally, I don’t think so. (See the fourth limitation above.) In the end, ChatGPT is a tool we can use to make our jobs easier, but it can’t completely do our jobs for us.
If you’re wondering about whether we can legally use ChatGPT for training, I asked that question. Here’s the answer I received:
“Yes, ChatGPT can be used by companies for training purposes. The OpenAI API allows companies to access the model and use it for various applications, including natural language processing, language translation, text generation, and more. Companies can use ChatGPT to train their own models, generate text, and create interactive experiences such as chatbots and virtual assistants. Some companies have used ChatGPT for creating automated customer service, product documentation and other business-related tasks. However, it’s always good to check with the company regarding their terms of service and usage policy before using the model for commercial purposes.”
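As the response above notes, programmatic access for companies runs through the OpenAI API. As a hedged illustration only, here's roughly what a summarization request could look like using the official `openai` Python client; the model name and system prompt are my own illustrative assumptions, not something from the post:

```python
# Hypothetical sketch of calling the model programmatically.
# The model name and prompts below are illustrative assumptions.

def build_summary_request(regulation_text, grade_level=8):
    """Assemble a chat-style request asking for a plain-language summary."""
    return {
        "model": "gpt-3.5-turbo",  # assumed; check OpenAI's current model list
        "messages": [
            {"role": "system",
             "content": ("You are a learning experience designer who writes "
                         "plain-language training material.")},
            {"role": "user",
             "content": (f"Summarize the following regulation at a grade "
                         f"{grade_level} reading level, using short "
                         f"sentences:\n\n{regulation_text}")},
        ],
    }

# Sending it would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**build_summary_request(reg_text))
#   print(response.choices[0].message.content)
payload = build_summary_request("1926.21(b) Employer responsibility...")
```

Whatever the integration looks like, the advice in the quoted answer stands: check OpenAI's terms of service before using the output commercially.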
Have you tried ChatGPT? Share your experiences in the comments!
Resources for Learning More
- Try ChatGPT
- ChatGPT for Instructional Design and Learning video with Dr. Ray Pastore
- ChatGPT Is the Instructional Tool We Have Been Waiting For! article by Laura J. Murphy
- The 10 Commandments of Instructional Design (According to ChatGPT) by Dr. Luke Hobson