Attribution
This content was adapted from “Coursework and GenAI: A Practical Guide for Students” by the University of Toronto, licensed under CC BY-NC 4.0. Copilot was used to reformat this content into a video script. The script was then reviewed and updated, and the video was created by University of Guelph Library employees. Some of the images in this video were created using Sora.
Time commitment
Less than 2 minutes
Description
The purpose of this video is to explain what large language models are, how tools like ChatGPT generate text by predicting words one at a time, and why their responses can be useful but not always accurate.
Video
Transcript
Have you ever wondered how tools like ChatGPT or other AI assistants can write essays, answer questions, or even help brainstorm ideas? The secret behind many of these systems is something called a Large Language Model—or LLM. Let’s break down what that actually means.
A Large Language Model is a type of Generative AI trained on vast amounts of text—from books, articles, webpages, and more. “Large” doesn’t just mean big; these models require enormous computing power and massive training datasets to learn patterns in language.
So, what do LLMs actually do? Their main job is surprisingly simple: predict the next word. If the model sees the beginning of a sentence like “the dog…,” it knows that “slept” is more likely to follow than “green,” because it has learned what natural language usually looks like.
But modern LLMs do more than predict from a few nearby words: they consider much larger contexts, like whole paragraphs, conversations, or topics. That’s how they produce responses that feel relevant and sometimes even insightful.
Here’s how text generation works:
Imagine the model has produced the phrase “the dog jumped over the…”
At this moment, it looks at every possible word in its vocabulary and assigns a probability to each one—like “fence,” “gate,” or “hurdle.” It then picks one based on those probabilities.
This process happens one word at a time, over and over, until the model decides the response is complete.
Because each word is chosen with a bit of randomness, the same prompt can lead to slightly different outputs every time.
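The loop described above can be sketched in a few lines of Python. This is a toy illustration, not how a real LLM works internally: the vocabulary and the probability table are invented for the example, whereas an actual model scores every word in a huge vocabulary with a neural network. The weighted random choice at each step is what makes repeated runs of the same prompt come out slightly different.

```python
import random

# Toy "model": for each most-recent word, a hand-made probability
# table over possible next words. All values here are illustrative.
NEXT_WORD_PROBS = {
    "the":    {"dog": 0.5, "fence": 0.3, "gate": 0.2},
    "dog":    {"jumped": 0.6, "slept": 0.4},
    "jumped": {"over": 1.0},
    "over":   {"the": 1.0},
    "fence":  {"<end>": 1.0},
    "gate":   {"<end>": 1.0},
    "slept":  {"<end>": 1.0},
}

def generate(start, max_words=10):
    """Generate text one word at a time, sampling by probability."""
    words = [start]
    while len(words) < max_words:
        probs = NEXT_WORD_PROBS.get(words[-1], {"<end>": 1.0})
        # Pick the next word at random, weighted by its probability --
        # this sampling step is why identical prompts can yield
        # slightly different outputs each time.
        choice = random.choices(list(probs), weights=list(probs.values()))[0]
        if choice == "<end>":  # the model decides the response is complete
            break
        words.append(choice)
    return " ".join(words)

print(generate("the"))
```

Running this a few times produces phrases such as “the dog jumped over the fence” or “the dog slept”, showing the one-word-at-a-time process and its built-in randomness.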
It’s important to remember: AI-generated content can be incorrect.
LLMs don’t “understand” the world the way humans do—they’re making their best guesses based on patterns in the data they were trained on. That means they can sound confident even when they’re wrong.
So, in short: LLMs are powerful tools that learn from huge amounts of text, predict words one at a time, and can generate impressive—but not perfect—responses. Understanding how they work helps you use them wisely to support your own learning.
License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.