20 ChatGPT “Spells” That Actually Work in 2026 — A Thinking-Model-Era Prompting Guide



The old tricks stopped working. Here’s what replaced them.

Most people still prompt ChatGPT the way they did in 2023. They type vague questions, get vague answers, and blame the model. But the model isn’t the problem. The prompt is.

I know this because I used to do the same thing. When I first published a list of “magic prompts” on my Japanese blog, “think step by step” was my number-one recommendation. It worked beautifully — back then. Today, with GPT-5.2 and Claude 4.5 shipping built-in Thinking modes that reason internally before responding, that same instruction can actually degrade output quality. The models have changed. Our prompting habits haven’t kept up.

Over the past year, I’ve tested hundreds of prompt variations across content creation, research, and business planning. Below are the 20 techniques that consistently produce better results in 2026 — organized from beginner-friendly basics to the new Thinking-model strategies that most people haven’t discovered yet.


Why Does Wording Matter So Much?

Coursera’s 2025 prompt engineering curriculum identifies four variables that predict output quality: context, role, constraints, and output format. When any of these is missing, the model fills in the blanks with the most generic assumption available. That’s why “write me a blog post about productivity” returns boilerplate, while a prompt specifying the audience, tone, structure, and examples returns something you’d actually publish.

The good news: you don’t need a prompt engineering certification. Each “spell” below is a single phrase you can drop into your existing workflow today.


The Fundamentals — Start Here

1. “You are a [specific expert] with [X years] of experience.”

Role assignment remains the single highest-impact technique in 2026. When you tell ChatGPT to act as a senior marketing strategist (rather than a generic assistant), the vocabulary shifts, the reasoning deepens, and the assumptions align with professional-grade expectations. OpenAI’s own prompt engineering guide still lists this as a core best practice.

Example: “You are a conversion rate optimization specialist with 10 years of experience in e-commerce. Analyze my landing page copy and suggest three specific changes to increase sign-ups.”
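If you use the API instead of the chat UI, role assignment maps directly onto the system message. Here is a minimal sketch; the `role_prompt` helper is mine, not part of any SDK, and the model name in the comment is a placeholder for whatever you have access to:

```python
def role_prompt(role: str, years: int, task: str) -> list[dict]:
    """Build a chat message list that assigns the model a specific expert role."""
    system = (
        f"You are a {role} with {years} years of experience. "
        "Answer with the vocabulary and assumptions of that profession."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = role_prompt(
    "conversion rate optimization specialist",
    10,
    "Analyze my landing page copy and suggest three specific changes to increase sign-ups.",
)
# Pass `messages` to any OpenAI-compatible client, e.g.:
# response = client.chat.completions.create(model="<your-model>", messages=messages)
```

The point is simply that the role lives in the system message, so it persists across every turn of the conversation instead of being repeated in each user prompt.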

2. “Before you start, ask me any questions you need so I can give you more context.”

This single line, highlighted in a recent Medium article on prompting best practices, eliminates the most common failure mode: the model guessing what you want instead of asking. For complex tasks — business plans, research summaries, content strategies — this consistently outperforms even well-structured prompts, because it surfaces assumptions you didn’t know you were making.

3. “Output this as [specific format].”

Tables, JSON, PREP structure, numbered lists with word counts — specifying the container eliminates variance. The model no longer has to guess whether you want a paragraph or a spreadsheet.
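When the format is machine-readable (JSON in particular), you can also validate that the model obeyed the constraint. A small sketch, assuming a hypothetical two-key schema of my own invention:

```python
import json

FORMAT_INSTRUCTION = (
    "Output this as JSON only, matching exactly this shape: "
    '{"title": str, "bullets": [str, ...]}. No prose outside the JSON.'
)

def with_format(prompt: str) -> str:
    """Append an explicit output-format constraint to any prompt."""
    return f"{prompt}\n\n{FORMAT_INSTRUCTION}"

def parse_or_fail(reply: str) -> dict:
    """Reject replies that ignored the format constraint."""
    data = json.loads(reply)  # raises ValueError on non-JSON replies
    assert {"title", "bullets"} <= data.keys(), "missing required keys"
    return data

# Canned reply standing in for a real model response:
reply = '{"title": "Deep Work", "bullets": ["Block mornings", "Batch email"]}'
print(parse_or_fail(reply)["title"])  # Deep Work
```

Failing loudly on malformed output is what makes format constraints useful in automated pipelines rather than just tidier-looking chats.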

4. “The target audience is [description].”

The same question answered for a software engineer versus a first-time founder will differ in jargon density, assumed knowledge, and level of detail. One sentence of audience context does more than three paragraphs of instructions.

5. “Include concrete examples.”

Simple, but often forgotten. This phrase forces the model past abstract generalities and into specific, actionable territory.


Unlocking Deeper Reasoning

6. “Think about this thoroughly. Consider multiple approaches, and if the first one doesn’t work, try another.”

This is the 2026 evolution of “think step by step.” Here’s why the old version lost its edge: today’s Thinking models (GPT-5.2’s reasoning mode, Claude’s extended thinking) already generate internal chain-of-thought before responding. Telling them how to think can actually interfere with their native reasoning process. Anthropic’s official documentation now recommends telling the model how deeply to think rather than prescribing the steps. The prompt above does exactly that.

7. “Verify your own answer before giving me the final response.”

A verification prompt leverages the Thinking model’s strongest capability: self-correction. Rather than asking the model to show its work (which it’s already doing internally), you ask it to audit its work. This catches arithmetic errors, logical gaps, and unsupported claims that would otherwise slip through.

8. “Use lateral thinking.”

When you need ideas that break out of predictable patterns, this phrase forces the model away from linear logic and into associative, cross-domain reasoning. Particularly effective for brainstorming sessions where conventional suggestions aren’t enough.

9. “Analyze this from both [Perspective A] and [Perspective B].”

Multi-perspective analysis produces richer outputs than single-angle responses. CEO vs. frontline employee. Short-term vs. long-term. Proponent vs. critic. The tension between opposing viewpoints surfaces nuances that a single-perspective prompt would miss entirely.

10. “Give me ideas from the perspective of multiple specialists.”

This technique, featured in the Japanese bestseller “All Techniques for Thinking with AI,” simulates a virtual expert panel. Ask for a marketer’s take, an engineer’s take, and a designer’s take simultaneously. You get three distinct strategic lenses in one response — a form of cognitive diversity that’s hard to replicate even in a real meeting.


Raising the Quality Bar

11. “This is 60% quality. Make it 100%.”

After generating a first draft, this prompt triggers a focused revision pass. The model will restructure arguments, add specificity, tighten language, and fill logical gaps — all without you having to diagnose each problem individually. For even better results, specify what feels off: “The opening is weak” or “The examples aren’t specific enough.”

12. “If you’re uncertain about any information, explicitly say so.”

Hallucination remains the Achilles’ heel of large language models. This grounding instruction gives the model permission to say “I don’t know” instead of fabricating a confident-sounding answer. Pair it with: “Base your response only on the attached document” for maximum reliability.

13. “Increase the abstraction level” / “Make this more concrete.”

A dial for controlling output granularity. Strategy discussions benefit from abstraction (“What’s the big picture?”). Execution plans need specificity (“What do I do tomorrow morning?”). Being explicit about which level you want prevents the model from defaulting to a middle-ground that serves neither purpose well.

14. “Summarize our conversation so far as a reusable prompt.”

For ongoing projects, this turns a multi-turn dialogue into a portable context block. Paste it into a new session and you’re back to full speed — no re-explaining needed. This is especially valuable now that conversation context doesn’t always persist across sessions.

15. “What’s the best way for me to prompt you to get the best possible result?”

The meta-prompt. Instead of optimizing your instructions yourself, you ask the model to design its own ideal input. This consistently surfaces dimensions you hadn’t considered — tone specifications, missing context, structural preferences. It’s the fastest path to prompt literacy for beginners, and even experienced users find blind spots this way.


The 2026 Playbook — Thinking Model Techniques

16. Grounding: “Respond based only on the attached document.”

As models get more capable, the temptation to let them freewheel increases — and so does hallucination risk. Grounding restricts the model’s reasoning to verified inputs only. A 2025 DataStudios analysis found that grounding transforms prompting from open-ended conversation into controlled retrieval, producing results that are defensible and auditable. Essential for any task where accuracy matters more than creativity.
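In practice, grounding is a templating discipline: put the source text between unambiguous delimiters and give the model an explicit escape hatch for missing information. A sketch (the template wording and helper name are my own, not from any cited source):

```python
GROUNDED_TEMPLATE = """Respond based only on the document between the <doc> tags.
If the document does not contain the answer, say "Not in the document."

<doc>
{document}
</doc>

Question: {question}"""

def grounded_prompt(document: str, question: str) -> str:
    """Restrict the model's answer to a supplied source text."""
    return GROUNDED_TEMPLATE.format(document=document.strip(), question=question)

prompt = grounded_prompt("Q3 revenue was $1.2M, up 8% YoY.", "What was Q3 revenue?")
```

The escape-hatch sentence matters as much as the delimiters: without it, a model that can’t find the answer in the document tends to reach for its training data instead.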

17. Few-Shot: “Follow the pattern of these examples.”

Showing 1–3 examples of your desired output remains one of the strongest techniques for consistency. The model pattern-matches on structure, tone, and information density simultaneously. Particularly useful for recurring tasks: email templates, product descriptions, social media captions, article headlines.

Example: “Generate 5 blog post titles following this pattern:
- How I Built a $2K/Month Side Hustle Using AI Tools (No Coding Required)
- 3 AI Writing Assistants I Tested for 30 Days (Here’s the Only One Worth Paying For)
Topic: Getting started with AI image generation”
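Over the API, few-shot examples are conventionally encoded as prior user/assistant turns rather than pasted into one long prompt, so the model imitates the turn structure itself. A minimal sketch (the `few_shot_messages` helper is mine, not a library function):

```python
def few_shot_messages(instruction: str, examples: list[tuple[str, str]],
                      new_input: str) -> list[dict]:
    """Build a chat message list where each (input, output) example pair
    appears as a prior user/assistant exchange for the model to imitate."""
    messages = [{"role": "system", "content": instruction}]
    for example_in, example_out in examples:
        messages.append({"role": "user", "content": example_in})
        messages.append({"role": "assistant", "content": example_out})
    messages.append({"role": "user", "content": new_input})
    return messages

messages = few_shot_messages(
    "Write blog post titles in the style of the examples.",
    [("Topic: AI side hustles",
      "How I Built a $2K/Month Side Hustle Using AI Tools (No Coding Required)")],
    "Topic: Getting started with AI image generation",
)
```

One to three example pairs is usually enough; more than that mostly burns context without improving consistency.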

18. Tree of Thoughts: “Generate multiple solution branches, then select the strongest one.”

Instead of rushing to a single answer, this technique asks the model to explore several reasoning paths in parallel, evaluate their merits, and converge on the best option. It’s the prompt-level equivalent of brainstorming before deciding. Effective for complex decisions where the first intuition isn’t necessarily the best one.
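A reusable way to phrase the branch-then-select request is a template with the branch count as a parameter. This is one possible wording, not a canonical Tree-of-Thoughts prompt:

```python
TOT_TEMPLATE = """{problem}

Generate {n} distinct solution branches. For each branch, state the approach,
its strongest argument, and its biggest risk. Then compare the branches and
select the strongest one, explaining why it beats the others."""

def tree_of_thoughts(problem: str, n: int = 3) -> str:
    """Wrap a problem statement in a branch-explore-then-select instruction."""
    return TOT_TEMPLATE.format(problem=problem, n=n)

prompt = tree_of_thoughts("Should we raise prices on our entry-level plan?", n=4)
```

Asking for the comparison and the selection in the same response is the key move: it forces the model to commit to an evaluation rather than just listing options.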

19. Multi-Step Prompting: “First do X, then do Y based on the results.”

Cramming everything into one prompt degrades quality on complex tasks. Breaking the work into sequential steps — where each step’s output feeds the next — maintains precision throughout. Think of it as project management for your prompts.

Example: “Step 1: Research the current market landscape for AI writing tools, including the top 5 players and their pricing.
Step 2: Based on Step 1, recommend the three most cost-effective options for a solo content creator publishing 4 articles per week.”
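If you script this, the chain is just a loop that feeds each answer into the next prompt. A sketch with the model call injected as a plain callable so the structure is visible without any API key (`run_chain` and the stub are my own names):

```python
from typing import Callable

def run_chain(steps: list[str], ask: Callable[[str], str]) -> str:
    """Run prompts sequentially, feeding each reply into the next prompt.
    `ask` is any callable mapping a prompt string to a reply string,
    e.g. a thin wrapper around your chat API."""
    context = ""
    for step in steps:
        prompt = f"{step}\n\nPrevious result:\n{context}" if context else step
        context = ask(prompt)
    return context

# Demo with a stub standing in for a real model call:
fake_model = lambda prompt: f"[answer to: {prompt.splitlines()[0]}]"
result = run_chain(
    ["Step 1: List the top 5 AI writing tools with pricing.",
     "Step 2: Based on the previous result, pick the three most cost-effective."],
    fake_model,
)
```

Keeping each step’s prompt small is the whole benefit: the model spends its attention on one sub-task at a time instead of juggling the entire brief.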

20. “Thank you. This was really helpful.”

Not a technique. A habit. Some research suggests positive feedback may influence subsequent model responses. But more importantly, expressing gratitude — even to an AI — reinforces a communication style that serves you well in every interaction, human or otherwise.


Disclosure: This article contains no affiliate links.

The One Thing to Do Today

You don’t need to memorize all 20 spells. Pick two: Role Assignment (#1) and the Question-First Prompt (#2). Add them to your very next ChatGPT session. That’s it.

The shift in 2026 isn’t about AI getting smarter — it’s about humans getting better at communicating with AI. A good prompt is really just a good question: clear, specific, and honest about what you need. Master that skill, and every model upgrade amplifies your capability instead of leaving you behind.

If this was useful, follow me for more practical AI strategies. I publish regularly on both Medium and note (in Japanese), documenting my real-time experiments with AI tools, content creation, and monetization.



I’m S — a content creator and AI practitioner based in rural Japan (Shimane Prefecture). I publish practical, honest takes on AI tools, content monetization, and what it actually looks like to build income with these tools from outside a major city.

[Follow →](https://medium.com/@johnpascualkumar077)


Tags: ChatGPT, Prompt Engineering, AI, Artificial Intelligence, Productivity, Writing, 2026

