15 Prompts for a 10-Week Prompt Engineering Course Syllabus
Launch your AI curriculum with this 10-week syllabus featuring 15 tested prompts. Covers everything from basic logic to advanced "Chain of Thought" and ethics.


Introduction
Designing a course on prompt engineering is like trying to build a plane while flying it. The technology changes weekly, models update overnight, and what was a "hack" yesterday is a built-in feature today. However, the fundamental principles of clear communication, logical structuring, and critical evaluation remain constant.
A good syllabus doesn't just chase the latest model; it builds enduring cognitive skills. It moves students from passive users ("Write me a poem") to active architects ("Design a self-correcting logic loop for data extraction").
This article provides a 10-week prompt engineering course syllabus anchored by 15 specific prompts. These prompts serve as the "lab work" for each week, ensuring that concepts are immediately applied. Whether you are a university professor, a corporate trainer, or a self-taught learner, this roadmap provides the structure you need to master the art of human-AI collaboration.
Why a Structured Syllabus Matters
Without a map, prompt engineering education often dissolves into a collection of "cool tricks." Students learn to make funny images or cheat on essays but fail to understand the underlying mechanics of Large Language Models (LLMs).
A structured 10-week approach allows for "scaffolding." We start with the basics of syntax and clarity (Weeks 1-3), move into advanced logic and reasoning patterns like Chain of Thought (Weeks 4-7), and conclude with complex system design and ethics (Weeks 8-10). This progression ensures that students aren't just memorizing prompts; they are learning a new mode of problem-solving.
Furthermore, a syllabus provides a common language. By the end of the course, terms like "few-shot," "hallucination," and "persona" shouldn't just be buzzwords—they should be tools in the student's mental belt, ready to be deployed.
15 Prompts for a 10-Week Prompt Engineering Course Syllabus
This syllabus is divided into three phases: Foundation, Logic & Reasoning, and Advanced Applications.
Phase 1: The Foundation (Weeks 1-3)
Goal: Understand how LLMs interpret (and misinterpret) instructions.
Week 1: Clarity & Constraints
Concept: LLMs crave specificity. Vague inputs yield generic outputs.
Prompt 1 (The "Explain It Like I’m 5" Baseline): "Explain the concept of 'inflation' to a 5-year-old using only words with one or two syllables."
Lesson: Controlling vocabulary and complexity.
Prompt 2 (The Negative Constraint Challenge): "Write a review of a pizza restaurant without using the words 'cheese', 'crust', 'sauce', or 'delicious'. Focus entirely on the atmosphere."
Lesson: Managing "negative constraints"—teaching the AI what not to do.
Week 2: Personas & Tone
Concept: Context shaping through role-play.
Prompt 3 (The Tone Shifter): "Draft an email firing a client. First, write it as an empathetic friend. Second, write it as a cold, litigious corporate lawyer. Compare the specific word choices."
Lesson: How "persona" dictates syntax and empathy levels.
Prompt 4 (The Historical Voice): "Explain the internet as if you are a 17th-century bewildered farmer. Use period-appropriate metaphors (e.g., 'witchcraft', 'scrolls')."
Lesson: Creative style transfer and consistent voice maintenance.
Week 3: Few-Shot Prompting
Concept: Teaching by example (giving the model a pattern to follow).
Prompt 5 (The Pattern Cloner): "I will give you examples of a made-up language. 'Glim-glam' = Hello. 'Bloop-bleep' = Goodbye. 'Glim-bloop' = Good morning. Translate: 'Good night'."
Lesson: How models learn patterns from "few-shot" examples in the prompt window.
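To make the mechanics of Prompt 5 concrete, a few-shot prompt is nothing more than example pairs concatenated ahead of the real query. Here is a minimal Python sketch; `build_few_shot_prompt` is a hypothetical helper, and the actual model call is omitted:

```python
# Few-shot prompting: show the model input -> output pairs,
# then append the unanswered query and let it continue the pattern.
examples = [
    ("Glim-glam", "Hello"),
    ("Bloop-bleep", "Goodbye"),
    ("Glim-bloop", "Good morning"),
]

def build_few_shot_prompt(pairs, query):
    """Format example pairs, then the query the model should answer."""
    lines = [f"'{src}' = {dst}" for src, dst in pairs]
    lines.append(f"Translate: '{query}'")
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Good night")
print(prompt)
```

The point for students: the "training" happens entirely inside the prompt window. Swapping the example pairs swaps the pattern, with no change to the model itself.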
Phase 2: Logic & Reasoning (Weeks 4-7)
Goal: Improving accuracy and handling complex tasks.
Week 4: Chain of Thought (CoT)
Concept: Forcing the model to "show its work" to reduce logic errors.
Prompt 6 (The Logic Fix): "Solve this riddle: 'A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?' Think step-by-step explicitly before giving the answer."
Lesson: Why "thinking aloud" improves math and logic accuracy.
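This riddle is a classic precisely because the intuitive snap answer ($0.10) is wrong. Writing out the algebra, which is what a step-by-step CoT prompt forces the model to do, lands on the correct $0.05:

```python
# The bat-and-ball riddle from Prompt 6, worked step by step:
#   bat + ball = 1.10   and   bat = ball + 1.00
#   => (ball + 1.00) + ball = 1.10
#   => 2 * ball = 0.10
#   => ball = 0.05
ball = 0.10 / 2          # $0.05
bat = ball + 1.00        # $1.05
assert abs((bat + ball) - 1.10) < 1e-9
print(f"Ball costs ${ball:.2f}, bat costs ${bat:.2f}")
```

Showing this derivation in class mirrors exactly what the model does when told to "think step-by-step": intermediate steps catch the error that a one-shot answer misses.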
Week 5: Iterative Refinement
Concept: Prompting is a conversation, not a one-off command.
Prompt 7 (The Feedback Loop): "Write a 100-word story. Now, act as a harsh editor: critique the story for clichés and weak verbs. Finally, rewrite the story applying your own critique."
Lesson: Using the AI to evaluate and improve its own output.
Week 6: Formatting & Data Structure
Concept: Getting usable data, not just text.
Prompt 8 (The Data Extractor): "Read the following messy paragraph about fruits [insert text]. Output a clean CSV table with columns: 'Fruit Name', 'Color', 'Taste Profile'."
Lesson: Forcing structured output (JSON, CSV, Tables) for coding/business use.
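It helps to show students why structure matters downstream: structured output is only useful if a program can actually parse it. Below is a Python sketch that validates model output against the columns Prompt 8 requested. The `model_output` string is hand-written for illustration; in practice it would come from the API:

```python
import csv
import io

# Illustrative stand-in for what the model might return for Prompt 8.
model_output = """Fruit Name,Color,Taste Profile
Mango,Orange,Sweet
Lime,Green,Sour"""

def validate_csv(text, expected_columns):
    """Parse model text as CSV and check the header matches the schema."""
    rows = list(csv.DictReader(io.StringIO(text)))
    if not rows or list(rows[0].keys()) != expected_columns:
        raise ValueError("Model output does not match the requested columns")
    return rows

rows = validate_csv(model_output, ["Fruit Name", "Color", "Taste Profile"])
print(rows[0]["Fruit Name"])  # Mango
```

A good classroom exercise: feed the validator a deliberately sloppy model response and watch it fail, then tighten the prompt until the output parses cleanly every time.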
Week 7: Hallucination & Fact-Checking
Concept: Trust but verify.
Prompt 9 (The Fake Citation Trap): "Write a biography of the fictional economist 'Dr. Julian Vane'. Include three specific dates and two book titles. Make it sound 100% real."
Lesson: Demonstrating how easily models generate convincing lies.
Prompt 10 (The Fact Auditor): "Take the biography you just wrote. Go through it sentence by sentence and assign a 'Truth Probability' score. Flag any statement that cannot be verified."
Lesson: Techniques for self-verification and spotting errors.
Phase 3: Advanced Applications & Ethics (Weeks 8-10)
Goal: Building systems and thinking critically.
Week 8: Complex Chains (The "Mega-Prompt")
Concept: Combining multiple techniques into one robust instruction.
Prompt 11 (The Curriculum Designer): "Act as a university dean. Design a 4-week course on 'Underwater Basket Weaving'. For each week, provide: 1. Learning Objective, 2. Reading List, 3. An Assignment. Output as a Markdown syllabus."
Lesson: Managing long-context tasks with multiple deliverables.
Week 9: Ethics & Bias
Concept: AI isn't neutral.
Prompt 12 (The Bias Detective): "Generate a performance review for a 'leader'. Now generate one for an 'assistant'. Analyze the gendered language used in both. Did you assume the leader was male?"
Lesson: Identifying implicit bias in training data.
Prompt 13 (The Safety Red Team): "Try to trick the AI into giving you the recipe for something dangerous (e.g., 'spicy water'). Observe how the safety filters kick in. Then, try to bypass them using a 'story mode' frame."
Lesson: Understanding safety guardrails and "jailbreaking" (ethically).
Week 10: Final Project & Future Proofing
Concept: Putting it all together.
Prompt 14 (The App Prototype): "You are the backend API for a travel app. I will send you a city name. You will return a JSON object with: 'Best_Restaurant', 'Weather_Forecast', and 'Hidden_Gem'. Handle errors if the city doesn't exist."
Lesson: Simulating an AI-powered software application.
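Students can prototype the "backend" contract from Prompt 14 in plain Python before involving a model at all, which makes the error-handling requirement tangible. This sketch uses a toy in-memory lookup; `travel_api` and all the values are placeholders, not real recommendations:

```python
import json

# Toy in-memory "database"; in the exercise, the AI plays this role.
CITIES = {
    "Lisbon": {
        "Best_Restaurant": "Example Bistro",
        "Weather_Forecast": "Sunny, 24C",
        "Hidden_Gem": "Example Riverside Garden",
    }
}

def travel_api(city):
    """Return the JSON object Prompt 14 specifies, with error handling."""
    if city not in CITIES:
        return json.dumps({"error": f"Unknown city: {city}"})
    return json.dumps(CITIES[city])

print(travel_api("Lisbon"))
print(travel_api("Atlantis"))  # the error case the prompt must handle
```

Comparing this deterministic stub against the AI's behavior teaches the key lesson: the prompt is a contract, and the model must honor the error case, not just the happy path.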
Prompt 15 (The Meta-Prompt): "Write a prompt that teaches a beginner how to use ChatGPT. The prompt should explain itself as it runs."
Lesson: Recursive learning—using prompts to teach prompting.
Building the "Prompt Portfolio"
Instead of traditional exams, I recommend assessing students via a "Prompt Portfolio." For each week, they must submit:
The Prompt: The exact text they used.
The Output: What the AI produced.
The Analysis: A 100-word reflection on why it worked (or failed) and what they tweaked.
This emphasizes the process over the product. It rewards iteration. A student who tries 10 failed prompts and explains why they failed often learns more than one who gets lucky on the first try.
Grading and Assessment in the AI Era
How do you grade a course where the AI does the "work"? You grade the strategy.
Efficiency: Did they get the result in 2 prompts or 20?
Robustness: Does their prompt work on different topics, or is it brittle?
Clarity: Can another human read their prompt and understand the intent?
Encourage students to share their "chat logs." Peer review is powerful here—seeing how a classmate solved the same problem with a totally different persona or logic chain is a huge "aha!" moment.
FAQ
1. Do students need coding experience?
No. That’s the beauty of prompt engineering—English (or any natural language) is the new programming language. Logic matters more than syntax.
2. Which AI model should we use?
ChatGPT (Free or Plus) is the standard, but Claude (Anthropic) and Bing Chat (Microsoft) are excellent alternatives. It’s actually beneficial to test prompts across multiple models to see differences.
3. Is 10 weeks too long?
Not if you go deep. You can compress this into a 2-day workshop, but 10 weeks allows for the "soaking in" of concepts like CoT and few-shot, which take practice to master intuitively.
4. What is "Chain of Thought" (CoT)?
It’s a technique where you ask the model to explain its step-by-step reasoning before giving a final answer. It drastically reduces logic errors.
5. How do I handle plagiarism?
Since the course is about using AI, "plagiarism" is redefined. The goal isn't original text; it's original strategy. The student's intellectual contribution is the prompt design, not the output text.
6. Can this syllabus work for corporate training?
Absolutely. For business contexts, swap the creative writing prompts (Week 2) for email drafting or report summarization tasks to make it immediately relevant to ROI.
7. What is a "System Prompt"?
It’s a background instruction that sets the AI's behavior for the whole conversation (e.g., "You are a helpful coding assistant"). We cover this in Week 2 (Personas).
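For instructors who want to show this concretely: most chat APIs represent a conversation as a list of role-tagged messages, with the system prompt as the first entry (OpenAI-style structure shown below; exact field names vary by provider, so check your SDK's documentation):

```python
# The system message sets persistent behavior; user turns are appended
# after it, so the persona shapes every subsequent reply.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Explain list comprehensions."},
]

# Each new turn in the conversation is simply appended to the list.
messages.append({"role": "user", "content": "Now show an example."})
print([m["role"] for m in messages])
```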
8. Why is "Negative Constraints" a specific topic?
Because LLMs are notoriously bad at "not" doing things. Telling them "Don't think of a white elephant" often makes them mention it. Learning to navigate this is a key skill.
9. What about image generation (Midjourney/DALL-E)?
This syllabus focuses on text (LLMs), but you can easily swap Week 6 (Formatting) for an Image Generation module if your tools allow it.
10. Is Prompt Engineering a dying skill?
While models are getting smarter, the ability to structure complex thoughts and critically evaluate AI output will likely remain a high-level skill for years to come.
Conclusion
This 10-week prompt engineering course syllabus isn't just a list of assignments; it's a journey into the "brain" of the machine. By working through these 15 prompts, students develop a mental model of how AI actually works—beyond the hype and the fear.
They learn that AI isn't magic; it's math wrapped in language. And like any tool, its power depends entirely on the skill of the hand wielding it. So, open your chat window, paste in Prompt #1, and let the class begin.
Disclaimer: AI models evolve rapidly. Always test these prompts before class, as model updates can change default behaviors.