1.1 What Is Generative AI?
Generative AI refers to a class of artificial intelligence systems designed to generate new content—such as text, images, music, or code—in response to prompts written in natural human language.
Unlike traditional programming, where users must write in formal languages like Java, Python, or SQL, generative AI allows users to interact using everyday language. This shift democratizes access to powerful AI tools, empowering non-technical users to accomplish tasks once limited to developers and data scientists.
🧠 Key Insight: Instead of explicitly instructing a machine step-by-step, users now describe their goal, and the AI figures out how to achieve it.
1.2 Evolution of Generative AI (Timeline Overview)
Here's a brief journey through the development of generative AI technologies:
- 1960s – Early Experiments
- Chatbots like ELIZA used simple rule-based systems and keyword matching.
- These systems lacked understanding and flexibility.
- 1990s – Rise of Machine Learning
- Shift from rules to statistical methods.
- Early forms of machine learning (ML) enabled systems to learn from data patterns.
- 2000s – Neural Networks & Virtual Assistants
- Introduction of deep learning and neural networks.
- Launch of Siri, Alexa, and Google Assistant, marking the start of voice-based AI interaction.
- 2017–Present – Transformers & LLMs
- The Transformer architecture introduced by Google researchers in the 2017 paper “Attention Is All You Need”.
- Birth of Large Language Models (LLMs) such as GPT, Claude, and Gemini, and chat products built on them such as ChatGPT and Bing Chat.
- These models can understand context and generate coherent, creative, and useful outputs across domains.
1.3 How Large Language Models (LLMs) Work
LLMs like GPT (Generative Pre-trained Transformer) follow a 3-step process:
1. Tokenization
- Input text is split into tokens—small units such as words, subwords, or individual characters.
- Each token is mapped to an integer ID, which is then converted to a numeric vector (embedding) for processing.
2. Auto-Regressive Token Prediction
- The model predicts the next token in a sequence using previous context.
- This is done iteratively: one token at a time until the full response is formed.
3. Probability Distribution
- For each step, the model computes a probability distribution over all possible next tokens.
- The temperature setting controls randomness:
- Low temperature → more deterministic
- High temperature → more creative or diverse outputs
🔁 This cycle repeats, one token at a time, until the model emits a stop token or reaches a length limit—producing a complete sentence, paragraph, or document.
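The loop above can be sketched in a few lines of Python. This is a minimal, illustrative toy—the vocabulary, bigram logit table, and function names are invented for this example and bear no resemblance to a real LLM's scale—but it shows all three steps: tokens as IDs in a vocabulary, an auto-regressive sampling loop, and a temperature-scaled probability distribution.

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw scores (logits) into a probability distribution.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more diverse outputs)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model": a hypothetical bigram table mapping the previous token
# to logits over a five-token vocabulary (purely illustrative).
VOCAB = ["the", "cat", "sat", "mat", "<end>"]
BIGRAM_LOGITS = {
    "<start>": [2.0, 0.1, 0.1, 0.1, 0.0],
    "the":     [0.0, 2.0, 0.1, 1.5, 0.1],
    "cat":     [0.1, 0.0, 2.0, 0.1, 0.5],
    "sat":     [0.1, 0.1, 0.0, 0.3, 2.0],
    "mat":     [0.0, 0.0, 0.0, 0.0, 3.0],
}

def generate(temperature=0.5, max_tokens=10, seed=0):
    """Auto-regressively sample one token at a time until <end>."""
    rng = random.Random(seed)
    prev, output = "<start>", []
    for _ in range(max_tokens):
        probs = softmax_with_temperature(BIGRAM_LOGITS[prev], temperature)
        token = rng.choices(VOCAB, weights=probs, k=1)[0]
        if token == "<end>":
            break
        output.append(token)
        prev = token
    return " ".join(output)
```

Running `generate` with a low temperature tends to reproduce the most likely path through the table, while a high temperature produces more varied sequences—mirroring the deterministic-versus-creative trade-off described above.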
1.4 Real-World Capabilities of Generative AI in Education
Generative AI is already transforming how students, educators, and researchers work. Here are key applications:
| Capability | Use Case Example |
| --- | --- |
| Summarization | Condensing long research papers into bite-sized summaries |
| Creative Writing | Generating poems, essays, or storytelling prompts |
| Conversational Agent | AI tutors for Q&A sessions or exam prep |
| Text Completion | Helping students draft essays or complete assignments |
| Code Generation | Writing and explaining code snippets in Python, SQL, etc. |
🗣️ Each of these use cases starts with a prompt (instruction or question) and ends with a completion (the AI's output).
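The prompt-to-completion pattern can be made concrete with a small sketch. The template strings and function names below are hypothetical, written for illustration only—they are not the API of any real AI product—but they show how each capability in the table reduces to filling a prompt template, which is then sent to an LLM to produce the completion.

```python
# Hypothetical prompt templates for the capabilities in the table above.
# These are illustrative strings, not part of any official AI service.
PROMPT_TEMPLATES = {
    "summarization": "Summarize the following text in three sentences:\n\n{content}",
    "creative_writing": "Write a short poem about: {content}",
    "code_generation": "Write a commented Python function that does the following:\n{content}",
}

def build_prompt(capability: str, content: str) -> str:
    """Fill in the template for a capability; the LLM's reply is the completion."""
    template = PROMPT_TEMPLATES[capability]
    return template.format(content=content)
```

For example, `build_prompt("summarization", "<paper text>")` produces the instruction a student would send to an AI tutor, and the model's generated response is the completion.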