How AI Can Hurt or Help in the Classroom

By Mark Miller

Educators and students may be able to turn the AI threat into an AI opportunity.

The artificial intelligence (AI) tool ChatGPT was released free of charge at the end of November 2022 by the research company OpenAI. The article “Think twice before using ChatGPT for help with homework” by Kathryn Hulick, published in Science News Explores, states that it attracted over a million users within a week. A key reason for this rapid popularity is that using ChatGPT requires no programming knowledge; it’s an advanced chatbot you converse with in plain language to generate useful content about virtually anything. Simply enter a request like “write me a brief essay about zebras” and it returns around 500 words of clearly written prose. Here’s an example:

Zebras: The Striped Wonders of the Animal Kingdom

Zebras, with their distinctive black-and-white stripes, captivate the imagination of both nature enthusiasts and casual observers alike. These charismatic creatures, members of the equid family, are native to the grasslands and savannas of Africa.
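
For readers curious about what sits behind the chat window, the same kind of request can also be sent through code. The short sketch below is illustrative only: it assumes the openai Python package (version 1.x) is installed and that an API key is stored in the OPENAI_API_KEY environment variable, and the model name is just one example.

    from openai import OpenAI

    # The client reads the API key from the OPENAI_API_KEY environment variable.
    client = OpenAI()

    # Ask for the same essay the article describes typing into the chat window.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": "Write me a brief essay about zebras."}],
    )

    print(response.choices[0].message.content)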

Cheaters Beware

It didn’t take long for students to realize that an online tool that provides humanlike answers, essays, reports, and more could not only help with their schoolwork but actually do it for them. According to a report from Insider, a Study.com survey found that more than one in four K-12 teachers said they had caught at least one student cheating with ChatGPT.

And ChatGPT isn’t the only option. Google’s Bard, Jasper.ai, Claude, and other AI tools offer similar capabilities, along with similar opportunities for cheating. However, ChatGPT appears to be the example most educators use to call out AI’s potential perils. New York City’s education department blocked access to ChatGPT on its devices and networks, as did school districts in Baltimore and Los Angeles, according to the educational reporting resource Chalkbeat.

Cheating isn’t the only concern. ChatGPT’s content can be unsafe, biased, and inaccurate. “Confidently wrong” is how Casey Fiesler, an ethics and technology expert at the University of Colorado Boulder, characterizes misinformation from ChatGPT in Hulick’s piece. In one example cited, ChatGPT claimed that the peregrine falcon was the fastest marine mammal. Fiesler’s work also highlights a troubling fact: ChatGPT cites no sources. When asked to provide them, it responded with fabricated references and presented them as valid.

If ChatGPT provides no sources, where and how does it get its information?

How It Works

Tools like ChatGPT are natural language processing applications. In other words, they communicate in everyday language. They’re built using machine learning, a branch of AI that enables computers to teach themselves to perform tasks. Traditionally, computers required detailed, step-by-step instructions to perform a task, but with machine learning, they can develop skills on their own by analyzing large amounts of data.
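
A small sketch can make that contrast concrete. The example below uses the scikit-learn package (it is not how ChatGPT itself is built): the program is never given rules about language; it infers them from a handful of labeled examples.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # A handful of labeled training examples.
    texts = ["great lesson", "I loved this class", "boring homework", "terrible quiz"]
    labels = ["positive", "positive", "negative", "negative"]

    # Turn the text into word counts, then let the model learn from the data.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    model = MultinomialNB().fit(X, labels)

    # The program was never told which words are "positive" or "negative";
    # it picked up that association from the examples above.
    print(model.predict(vectorizer.transform(["this class was great"])))  # -> ['positive']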

Hulick explains that ChatGPT uses a type of machine learning known as a large language model to analyze vast amounts of text (around 300 billion words and other language examples) until it learns to predict which words are likely to come next in a phrase or sentence. It does this by creating a 3D map of words that relate to each other, like “salt and pepper” or “dead or alive.” Based on that model and on reinforcement learning from human feedback, ChatGPT responds with an amazing array of constructions and styles, and it improves as it’s given more data.
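
The prediction idea itself can be sketched in a few lines of ordinary Python. The toy model below merely counts which word follows each pair of words in a scrap of text; ChatGPT’s model is trained on billions of words and learns far richer relationships, but the underlying principle of predicting the next word from patterns in data is the same.

    from collections import defaultdict, Counter

    sample_text = (
        "pass the salt and pepper please "
        "wanted dead or alive "
        "she added salt and pepper to the soup"
    )

    words = sample_text.split()
    next_word_counts = defaultdict(Counter)

    # Record how often each word follows a given pair of words.
    for a, b, c in zip(words, words[1:], words[2:]):
        next_word_counts[(a, b)][c] += 1

    def predict_next(a, b):
        """Return the most frequent word seen after the pair (a, b)."""
        counts = next_word_counts[(a, b)]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("salt", "and"))  # prints "pepper"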

The results are very convincing. A briefing from the science journal Nature reports that abstracts for research papers created with ChatGPT fooled a team of professional reviewers, who failed to identify over 30 percent of the AI-generated abstracts in a test.

Here to Help and Here to Stay

Despite some dangers and disadvantages, there are ways in which AI applications like ChatGPT can aid learning. Rather than simply copying and pasting AI-generated material into their work, students can use these tools to generate ideas, jump-start research, and build writing and editing skills.

Leveraging AI in education may require changing how it is viewed. Treating it as a tutor or coach rather than a cheating device could help students grasp difficult subjects more quickly. They could also study AI-generated examples to learn how to compose and structure their own writing more clearly and concisely. And it’s not just students who stand to benefit: teachers and administrators can use AI to explore topics more efficiently and thoroughly to improve curricula and lesson plans.

These approaches should always include the important step of verifying the information that AI provides, because, as we’ve seen with ChatGPT, it cites no sources and some of its responses may be incorrect.

As Hulick points out, while AI tools may be new and exciting, this isn’t the first time technology has invaded the classroom. Teachers and students alike have adapted to calculators for math and to Google for finding answers, even though the search engine was once viewed as harmful to knowledge retention because it delivers volumes of facts with a single search. Ultimately, educators and students may be able to turn the AI threat into an AI opportunity by developing innovative ways to apply its capabilities to an evolving education landscape.


Discussion Questions

  • Provide an example of AI from everyday life.
  • Have you used ChatGPT or another AI tool as part of your schoolwork? Describe how and whether you think it helped you.
  • Define and discuss three different types of machine learning.

Vocabulary