On a Wednesday at the end of 2022, when we were drowsily floating along in that liminal space between Thanksgiving and the December holidays, OpenAI dropped a new, free, public tool that would change the world.
Known as ChatGPT, this artificial intelligence (AI)-driven tool allows users to enter a prompt into the chatbot interface. Using OpenAI’s natural language processing model, the chatbot generates a text response. Like magic, an unseen hand neatly types out copy that precisely reflects the prompt’s tone and content.
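If you're curious what that looks like under the hood, the same prompt-and-response exchange is available to developers through OpenAI's API. The short Python sketch below is illustrative only: the model name and prompt are placeholders, and you'd supply your own API key.

# A minimal sketch of the prompt-in, copy-out loop described above,
# using OpenAI's Python SDK. The model name and prompt are illustrative;
# the client reads your API key from the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; swap in whichever model you have access to
    messages=[
        {"role": "user",
         "content": "Draft a friendly two-sentence course announcement."},
    ],
)

# The generated copy comes back as plain text, ready to review and edit.
print(response.choices[0].message.content)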
The response to ChatGPT seemed split into two main camps: those who welcomed this tireless source of guaranteed content…and those who started a countdown to the robot takeover.
Although AI might feel like bleeding-edge tech, it’s based on concepts and models that date back to the first half of the 20th century. In 1936, all-around brilliant dude Alan Turing introduced a mathematical model for distilling logic into code, which he followed up in 1950 with a test for machine intelligence. In 1956, researchers Allen Newell, Cliff Shaw, and Herbert Simon demonstrated that, yep, machines could “think.”
Time marched on, and so did work in the field. In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov, and we got the first publicly available speech recognition software. Fast forward to today, and we use AI tech to help us drive cars, design PowerPoint decks, and decide what we’re going to watch on our streaming services. AI is everywhere, but your average consumer might be hard-pressed to explain just what it is.
The AI of today is a constellation of many different technologies working together to enable machines to discern patterns and make logic-based decisions. This allows machines to sense, comprehend, act, and learn with human-like levels of intelligence. There are two classifications: narrow (weak) AI and general (strong) AI. Narrow AI can only perform specific tasks or functions. General AI aims to replicate human intelligence, using deep learning models built on mountains of raw data the AI parses to identify patterns.
According to Authority Hacker, 50% of customers view AI optimistically, two-thirds of modern consumers are open to AI being used to enhance customer engagements, and 57% of us would rather deal with chatbots than other humans.
But we get nervous when the machines start creating new stuff: 63% of individuals cite worries about bias or inaccuracies in AI-generated content, and only 7% of us trust chatbots for making claims.
The widespread adoption of AI induces absolute terror for some. Of the 6,000 global consumers surveyed in software company Pega’s 2019 AI and Empathy study, 27% said they were concerned about “the rise of robots and enslavement of humanity.” Even if we’re not fearing robot conquest, AI nudges at a practical worry: In the same Pega study, 35% of respondents feared their jobs would be filled by AI. Learning pros are not immune to this fear.
Setting aside the machines, there’s the simple point that we don’t trust each other. We don’t know who’s manipulating the machines, how they’re doing it, or why.
Accessible generative AI applications have pushed some of us into moral panic territory. “[New technologies] often lead to excessive zeal amongst the advocates and excessive pessimism amongst the critics,” said Nick Clegg, president of global affairs at Meta, speaking at a recent global summit on AI safety.
But there’s one trait all moral panics share: exaggeration. The bicycle, the radio, the phonograph record, the internet: each was met with prophecies that it would be the downfall of humanity, but somehow, we’re still here.
It’s true: AI has unknowns, but at least we’re talking about it. In the year since OpenAI launched ChatGPT, AI has moved to the forefront of public discourse. On Oct. 30, 2023, the President of the United States of America issued an executive order establishing new standards for AI safety and security. It won’t resolve all the fear, but it’s a clear step toward making AI more transparent and regulated.
For learning pros, it’s imperative that we take advantage of the opportunities AI affords us: the smorgasbord of new tools and techniques that need ID expertise to guide them. If you’re worried generative AI will take over your job, consider this: ChatGPT helped write the headline for this piece, but it took 30 attempts and a human brain to get us there.
For more on AI and how it can help your learning solutions, contact us today.