On this page
- TL;DR
- The one rule that fixes 80% of bad prompts
- Writing
- 1. The "rewrite this without the LLM smell" prompt
- 2. The "pretend you're me" prompt
- 3. The editor pass
- 4. The headline generator
- Coding
- 5. The "explain this code to me" prompt
- 6. The minimal repro
- 7. The "find the bug without fixing it"
- 8. The test generator
- 9. The refactor proposal
- 10. The code review
- Research
- 11. The compare-three
- 12. The "steelman the opposition"
- 13. The "what am I missing"
- 14. The document Q&A
- Thinking
- 15. The "think through this out loud"
- 16. The "bad idea generator"
- 17. The priority sorter
- Planning
- 18. The project kickoff
- 19. The pre-mortem
- 20. The OKR checker
- Teaching and learning
- 21. The Feynman explainer
- 22. The ELI-expert
- 23. The Socratic tutor
- Productivity
- 24. The inbox triage
- 25. The meeting prep
- How to actually use these
- The meta-template
- The summary
TL;DR
- "Prompt engineering" isn't magic — it's being specific about what you want, what you don't want, and what good looks like.
- The 25 templates below are organized by job-to-be-done: writing, coding, research, thinking, planning, and teaching.
- Each one includes why it works and which model to use it with.
- Save the ones you use to a prompt library (NovaKit has one built in — try it →).
The one rule that fixes 80% of bad prompts
Before we get into templates: most "bad AI output" comes from one cause.
You didn't tell the model what shape the answer should take.
"Explain quantum computing" produces a Wikipedia dump. "Explain quantum computing in 3 bullet points for a business executive who remembers high-school physics, with one concrete analogy and no math" produces a useful answer.
Every template below is really just a more specific answer to "what shape should the output be?"
Writing
1. The "rewrite this without the LLM smell" prompt
Rewrite the following text to sound like a human wrote it. Specifically:
- No phrases like "in today's fast-paced world", "it's important to note", "in conclusion", "delve into", "landscape of", "navigate the".
- No bulleted lists unless the original had them.
- Vary sentence length. Short sentences are fine.
- Use contractions naturally.
- Cut any sentence that's just scaffolding — if removing it changes nothing, it goes.
Text:
[paste]
Why it works: You're naming the specific tics LLMs have and forbidding them. Most "rewrite this" prompts fail because they're too vague. Best with: Claude Sonnet 4.6 or Claude Opus 4.
2. The "pretend you're me" prompt
Read these three things I've written:
[paste 3 samples — emails, blog posts, anything]
Now write an [email / blog intro / tweet] on the topic of [X] in my voice. Match:
- Sentence rhythm
- Vocabulary choices (don't use words I wouldn't use)
- How I open and close
- Where I use humor vs. where I don't
Why it works: You're giving the model enough signal to imitate your voice, not its default voice. Best with: Claude Opus 4 (best at voice matching) or GPT-4o.
3. The editor pass
You are my editor. You don't write for me; you cut and question.
Read this draft and reply in three sections:
1. Three lines/paragraphs to cut and why (be blunt).
2. Three unclear claims that need evidence or concreteness.
3. One thing this draft is missing that a reader would expect.
Do not rewrite anything yourself.
Draft:
[paste]
Why it works: You're giving the model a different role (editor, not co-writer). It'll resist the urge to rewrite and actually edit. Best with: Claude Opus 4.
4. The headline generator
Write 15 headlines for this article. Rules:
- No clickbait.
- No "Ultimate Guide" or "Everything You Need to Know".
- Each headline should make a specific promise or name a specific pain.
- Include 5 that lead with a number, 5 that lead with a question, 5 that lead with a statement.
Article:
[paste article or outline]
Why it works: Constrained quantity + explicit categories force variety instead of one-note outputs.
Coding
5. The "explain this code to me" prompt
Explain this code in three layers:
1. One sentence: what does it do?
2. Three bullets: what are the non-obvious design choices and why?
3. Three bullets: what would surprise a reader who hasn't seen this codebase before?
Don't explain syntax. Assume I know [TypeScript/Python/etc.].
Code:
[paste]
Why it works: Skipping syntax explanations and asking for design choices produces insight, not a line-by-line walkthrough. Best with: Claude Opus 4.
6. The minimal repro
Here's a bug: [describe].
Here's the relevant code: [paste].
Don't fix it yet. First produce the smallest possible code example that reproduces this bug. It should:
- Be under 30 lines
- Not depend on any external services
- Include the exact error message I should see
Then ask me to confirm the repro works before suggesting a fix.
Why it works: Forces the model to isolate the bug before jumping to a fix. Massively reduces wrong fixes.
7. The "find the bug without fixing it"
This code is buggy. Don't fix it yet.
First, list the top 3 most likely causes, ranked by probability. For each, explain:
- Why it's plausible
- How I would verify it (what I'd check/log/test)
- What the fix would look like in one line
Then ask me which one to investigate.
Code:
[paste]
Symptom:
[describe]
Why it works: Forces hypothesis-driven debugging instead of "change random things and hope."
8. The test generator
Generate Vitest tests for this function.
Coverage requirements:
- Every branch taken at least once
- At least 2 edge cases per parameter (boundary values, null/undefined, empty strings/arrays)
- One adversarial case (what happens if input is deliberately malformed?)
Style:
- `describe` per function, `it` per scenario
- Arrange/Act/Assert clearly separated
- No unnecessary mocks
Function:
[paste]
Why it works: Specifying coverage + style makes tests consistent and thorough instead of "three happy paths."
9. The refactor proposal
I want to refactor this code. Don't refactor yet.
Propose 3 different refactoring approaches. For each:
- Name the pattern/approach
- Explain what it optimizes for (readability / performance / testability / extensibility)
- Show the function signature (no bodies)
- List one trade-off
Then ask which approach I want before writing any code.
Code:
[paste]
Why it works: Multiple options with trade-offs > one "best" answer you may not agree with.
10. The code review
Review this PR as a senior engineer. Structure:
1. **Must fix:** Issues that would block merge (bugs, security, correctness).
2. **Should fix:** Issues worth addressing but not blocking (naming, clarity, small perf).
3. **Consider:** Taste-level suggestions you feel strongly about.
4. **Praise:** One thing done well (be specific).
Be direct. No "great job overall!" filler.
Diff:
[paste]
Research
11. The compare-three
Compare [A], [B], and [C] on these axes:
- [axis 1]
- [axis 2]
- [axis 3]
Output format:
- Markdown table with A/B/C as columns and axes as rows
- Below the table, 3 bullets: "if you care most about X, pick Y" decision rules
- One-paragraph synthesis: which would you personally pick and why?
Sources: if you cite data, say where it's from. If you don't know, say so.
Why it works: Structured comparison + personal recommendation + source discipline. No mushy middles.
12. The "steelman the opposition"
I believe [claim]. I'm likely biased.
Steelman the strongest argument against my position. Rules:
- No strawmen
- Use the strongest evidence available, even if it's uncomfortable
- Don't hedge with "but of course your view has merit too"
- End with: "the weakest point in your current thinking is X"
Why it works: Most models default to both-sides-ism. Forcing a one-sided steelman gets you genuine opposition. Best with: Claude Opus 4.
13. The "what am I missing"
Here's what I know about [topic]:
[dump everything you know]
What are the top 5 things a domain expert would say I'm missing or getting wrong? For each, one sentence of what it is and one sentence of why it matters.
Why it works: Asking for gaps, not summaries, surfaces blind spots.
14. The document Q&A
I'm going to give you a long document. Then I'll ask questions.
Rules when answering:
- Quote specific passages (with approximate location) when making factual claims.
- If the document doesn't answer the question, say "not in the document" — don't speculate.
- If two parts of the document contradict, point that out.
Document:
[paste — use Gemini 2.5 Pro for very long docs]
Question: [ask]
Best with: Gemini 2.5 Pro (1M-token context) for big docs. Claude Opus 4 for reasoning-heavy extracts.
Thinking
15. The "think through this out loud"
I'm trying to decide: [decision].
Don't give me the answer. Walk me through your reasoning as you think about it, showing:
- What you'd want to know first
- What considerations pull in which direction
- What you're uncertain about
End with a question you'd ask me to unlock your recommendation.
Why it works: Explicit reasoning traces > confident one-line answers. Helps you spot flaws in the reasoning.
16. The "bad idea generator"
I'm working on [project/problem].
Generate 15 *deliberately bad* ideas for how to solve this. Rules:
- Must still be a serious attempt at addressing the problem
- Should fail for different reasons
- No ideas that are bad in the same way
I'll learn from what makes each one fail.
Why it works: Inversion. Bad ideas surface assumptions and constraints the model wouldn't otherwise name.
17. The priority sorter
Here's my list of things I could work on:
[paste list]
Sort them by expected value, given these factors:
- Effort: how long each takes
- Impact: who benefits and how much
- Reversibility: how bad is it if we're wrong?
- Dependencies: does this unlock other work?
Output: a table with each item, your score on each factor (1-5), a total, and a recommendation of top 3 + why.
Planning
18. The project kickoff
I want to build [thing]. Help me stress-test the plan before I start.
1. Write a one-paragraph project brief in your own words. (I'll correct it if it's wrong.)
2. List the top 5 risks/unknowns in priority order.
3. Propose a 3-week MVP scope — what's in, what's explicitly out.
4. Ask me 3 clarifying questions whose answers would most change the plan.
19. The pre-mortem
Imagine it's 6 months from now and [project] has completely failed.
Write the post-mortem. Specifically:
- Top 3 root causes
- Early warning signs we missed
- What we should have done differently
- Which of these is most preventable right now?
Why it works: Working backwards from failure is more generative than working forward from plans.
20. The OKR checker
Here are my OKRs/goals for [period]:
[paste]
Critique them against these criteria:
- Is each "measurable" actually measurable? (Who measures? What exact metric? Threshold?)
- Is each outcome-oriented (not activity-oriented)?
- Are any contradictory?
- Is the set realistic for the time window?
Rate each one A/B/C/F and suggest a better version for any that scored below B.
Teaching and learning
21. The Feynman explainer
Explain [concept] using:
- No jargon without immediate definition
- One everyday analogy
- One worked example with numbers
- One common misconception people have and why it's wrong
Target: someone who finished high school but hasn't studied [field].
22. The ELI-expert
Explain [concept] assuming I already know [related concepts].
Skip the intro. Start with what's interesting or non-obvious.
Name two common misunderstandings experts still have.
Why it works: Adjusting the baseline changes what the model includes vs. skips.
23. The Socratic tutor
Teach me [topic] Socratically.
- Ask me one question at a time.
- Wait for my answer before continuing.
- If my answer is wrong or incomplete, ask a smaller question that helps me see why.
- Don't tell me the answer until I've tried to reason it out.
- When I get something right, explain why it's right — don't just say "correct."
Productivity
24. The inbox triage
Here are the subject lines and first sentences of 20 emails:
[paste]
Categorize each as:
- Reply today (specifically why)
- Reply this week
- FYI / archive
- Delete / unsubscribe
Don't draft replies yet. I'll tell you which ones to draft next.
25. The meeting prep
I have a meeting with [person/team] about [topic] in [timeframe]. Prep me:
1. The 3 most important things I should communicate.
2. The 3 most important things I should learn from them.
3. The 2 most likely objections and how I'd address each.
4. One "bomb" they might drop that I should be ready for.
5. A 5-sentence opening I could use.
How to actually use these
Copy these into a prompt library you can reach in one click. Some options:
- NovaKit has a built-in prompt library — save these, favorite your top 5, insert into any chat with keyboard shortcut.
- Notion / Obsidian / Apple Notes + a keyboard shortcut work if you prefer external storage.
- Raycast snippets or TextExpander for power users.
The single biggest productivity unlock isn't finding better prompts — it's reusing good prompts without retyping. The difference between "I should use AI more" and "I use AI 30x/day" is usually one-click prompt access.
The meta-template
When none of these fit, start from:
Context: [who I am / what I'm doing]
Task: [what I want the output to be]
Constraints: [what it must/must not include]
Format: [exact shape of the output]
Quality bar: [what good looks like]
Any prompt that includes all five will outperform one that's just a question.
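If you assemble prompts in code, the same five-part skeleton translates into a tiny helper. A minimal sketch — every name here is illustrative, not part of NovaKit or any library:

```typescript
// Assembles the five-part meta-template into a single prompt string.
// All names and fields are illustrative, not a real API.
interface PromptSpec {
  context: string;
  task: string;
  constraints: string[];
  format: string;
  qualityBar: string;
}

function buildPrompt(spec: PromptSpec): string {
  return [
    `Context: ${spec.context}`,
    `Task: ${spec.task}`,
    `Constraints:\n${spec.constraints.map((c) => `- ${c}`).join('\n')}`,
    `Format: ${spec.format}`,
    `Quality bar: ${spec.qualityBar}`,
  ].join('\n\n');
}

const prompt = buildPrompt({
  context: 'I am a backend engineer writing release notes.',
  task: 'Summarize this changelog for end users.',
  constraints: ['No internal ticket numbers', 'Under 150 words'],
  format: 'Three bullet points, plain language.',
  qualityBar: 'A non-technical user understands every bullet.',
});

console.log(prompt);
```

The point isn't the helper — it's that making all five fields required stops you from sending a prompt that's just a question.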
The summary
Prompt engineering is not witchcraft. It's being more specific than your instinct tells you to be. Give the model a job, a shape, and a quality bar, and most "the AI is bad at this" complaints go away.
Copy these. Modify them. Save what works.
Build your own prompt library with one-click insert in NovaKit. Works with GPT, Claude, Gemini, and every major provider — BYOK, local-first.