Common Prompt Engineering Mistakes (and How to Fix Them)

The most frequent prompt engineering mistakes that lead to bad AI output — and the specific fixes that make your prompts dramatically more effective.

Bad prompts don't produce bad output because AI is limited. They produce bad output because they give the AI too little to work with, too much to juggle, or conflicting signals about what you want.

Here are the mistakes that cause the most frustration — and the specific fixes for each.

Mistake 1: Being Too Vague

This is the most common mistake by far. Vague prompts get vague output.

The mistake: "Help me with my marketing strategy."

Why it fails: The AI doesn't know your business, audience, budget, goals, timeline, or constraints. It responds with the most generic advice possible because that's the safest answer to an undefined question.

The fix: "I run a B2B SaaS tool for HR teams, $30K MRR, 150 customers. I need to grow to $60K MRR in 6 months. Current channels: content marketing (blog, 5K monthly organic visits) and LinkedIn (3K followers). Budget: $5K/month for paid. Team: me plus one part-time content writer. What's the highest-leverage marketing strategy given these constraints?"

The principle: Every piece of context you add eliminates thousands of possible (wrong) responses the AI might generate.

Mistake 2: Asking for Too Much at Once

Overloaded prompts produce shallow output across every dimension rather than deep output on anything.

The mistake: "Create a complete marketing plan with buyer personas, competitive analysis, content strategy, channel recommendations, budget allocation, KPIs, and a 12-month content calendar."

Why it fails: This prompt bundles 7 separate analytical tasks, and each one deserves focused attention. Cramming them into one prompt means each section gets 2-3 generic sentences instead of actionable depth.

The fix: Break it into a chain of focused prompts:

  1. "Create 3 buyer personas for [business]. Include: role, company size, pain points, information sources, buying triggers, and objections."
  2. "Based on these personas [paste], analyze our competitive landscape: [list competitors]. What positioning gaps exist?"
  3. "Given our personas and positioning [paste], recommend a content strategy. What topics, formats, and channels?"
  4. Continue building...

The principle: One prompt, one job. Chain prompts together so each output informs the next.
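The chain above can be sketched in code. This is a minimal illustration, not a prescribed implementation: `ask_llm` is a hypothetical stand-in for whatever model call you actually use (API client, CLI, chat window), stubbed here so the chain is runnable.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call.
    Here it just echoes a stub so the chain runs end to end."""
    return f"[model output for: {prompt[:40]}...]"

def run_chain(business: str, competitors: list[str]) -> dict:
    """Run each focused prompt in sequence, pasting every output
    into the next prompt as context (one prompt, one job)."""
    personas = ask_llm(
        f"Create 3 buyer personas for {business}. Include: role, "
        "company size, pain points, information sources, buying "
        "triggers, and objections."
    )
    positioning = ask_llm(
        f"Based on these personas:\n{personas}\n"
        f"Analyze our competitive landscape: {', '.join(competitors)}. "
        "What positioning gaps exist?"
    )
    strategy = ask_llm(
        f"Given our personas and positioning:\n{personas}\n{positioning}\n"
        "Recommend a content strategy. What topics, formats, and channels?"
    )
    return {"personas": personas, "positioning": positioning,
            "strategy": strategy}
```

The structure is the point: each step's output is literally pasted into the next prompt, so later steps inherit all the context the earlier ones produced.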

Mistake 3: Not Specifying the Output Format

Without format instructions, AI defaults to essay-style prose. That's rarely what you actually need.

The mistake: "What are the best marketing channels for a SaaS startup?"

You get: four paragraphs of general advice about various channels.

The fix: "What are the best marketing channels for a B2B SaaS startup at $20K MRR? Format as a ranked table with columns: Channel, Why It Works at This Stage, Monthly Budget Needed, Time to Results, and Key Metric to Track."

You get: a structured comparison you can actually use for planning.

Common formats to request:

  • Tables (for comparisons and data)
  • Numbered lists (for steps and priorities)
  • Bullet points (for key takeaways)
  • Headers with sections (for documents)
  • JSON or structured data (for technical use)
  • Checklists (for action items)

The principle: If you can visualize what the output should look like, describe that visualization in the prompt.
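When the consumer of the output is a script rather than a reader, the same idea applies with more force: describe the exact structure you want, then validate what comes back. A Python sketch, with illustrative (not prescribed) schema keys:

```python
import json

# Describe the exact shape you want, and forbid prose around it.
FORMAT_PROMPT = (
    "What are the best marketing channels for a B2B SaaS startup at "
    "$20K MRR? Respond with JSON only: a list of objects with keys "
    '"channel", "why_it_works", "monthly_budget_usd", '
    '"time_to_results", "key_metric". No prose outside the JSON.'
)

def parse_channels(raw: str) -> list[dict]:
    """Check that the model's reply matches the requested shape
    before any downstream code relies on it."""
    rows = json.loads(raw)
    required = {"channel", "why_it_works", "monthly_budget_usd",
                "time_to_results", "key_metric"}
    for row in rows:
        missing = required - row.keys()
        if missing:
            raise ValueError(f"missing keys: {missing}")
    return rows
```

Validating the response catches the common failure mode where the model wraps the JSON in commentary or drops a field.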

Mistake 4: Forgetting to Include Constraints

Unconstrained prompts produce unconstrained output — usually too long, too broad, and too generic to be useful.

The mistake: "Write me a blog post about prompt engineering."

Why it fails: Without constraints, the AI writes a 1,500-word generic overview that reads like every other article on the topic. No target audience, no angle, no differentiation.

The fix: "Write a blog post about prompt engineering for marketing professionals who use ChatGPT daily but have never studied prompt engineering formally. Angle: practical techniques they can use today, not theory. Tone: direct, no fluff, like a smart coworker explaining something over coffee. Length: 1,000 words. Don't use phrases like 'in today's AI landscape' or 'it's important to note.' Start with a specific example, not a definition."

Useful constraints:

  • Word count — for the total and per section
  • Audience — their knowledge level and what they care about
  • Tone — with specific examples or anti-patterns
  • Exclusions — what NOT to include
  • Angle — what makes this different from generic coverage

The principle: Constraints are creative fuel. They force the AI to make choices rather than playing it safe with generic output.

Mistake 5: Not Providing Context About Your Situation

AI can't read your mind. It doesn't know your industry, team size, budget, or what you've already tried.

The mistake: "How should I improve my website's SEO?"

The fix: "How should I improve my website's SEO? Context: I run an e-commerce store selling custom furniture. 200 product pages, 15 blog posts, 8K monthly organic visits. We rank on page 2 for our main keywords. Built on Shopify. No dedicated SEO person — I do it myself in about 5 hours per week. What are the 5 highest-impact SEO improvements I should make given these constraints?"

The principle: The quality of AI output is directly proportional to the quality of context you provide. Spending 60 seconds adding context saves 10 minutes of useless output.

Mistake 6: Accepting the First Output

Most people prompt once, get a mediocre result, and either accept it or give up on AI. The real value comes from iterating.

The mistake: Sending a prompt, reading the output, deciding "AI isn't that useful," and moving on.

The fix: Use the first output as a starting point and refine:

  • "This is good, but the recommendations are too generic. Make them specific to [my situation]."
  • "The structure is right but the tone is too formal. Rewrite in a more conversational voice."
  • "Section 3 is the strongest — expand it to twice the length. Cut section 5 entirely."
  • "Now challenge your own recommendations. What could go wrong with this approach?"

The principle: Treat AI output like a first draft from a smart intern — useful raw material that needs your judgment and direction to become truly good.

Mistake 7: Using AI for the Wrong Tasks

Not every task benefits from AI. Using it for the wrong things wastes time and produces poor results.

Tasks where AI excels:

  • Structuring information (turning messy notes into organized documents)
  • First drafts (content, emails, plans)
  • Analysis frameworks (competitive analysis, SWOT, decision matrices)
  • Brainstorming (generating ideas and options)
  • Summarization (compressing long content)
  • Format conversion (turning prose into tables, or lists into narratives)

Tasks where AI struggles:

  • Tasks requiring current, real-time information
  • Tasks requiring access to your specific data or systems
  • Highly subjective creative decisions (brand identity, design)
  • Tasks where accuracy is critical and errors are costly (legal, medical, financial advice)
  • Tasks requiring deep domain expertise in niche fields

The principle: Use AI for the tasks it's good at (speed, structure, breadth) and apply your human judgment for the tasks it's not (accuracy, nuance, domain expertise).

Mistake 8: Not Telling AI What to Avoid

AI has default behaviors that are often undesirable — generic openings, hedging language, unnecessary caveats, and filler content. If you don't explicitly tell it to stop, it won't.

Common defaults to override:

  • "Don't start with a generic introduction. Start with the most valuable piece of information."
  • "Don't qualify every statement with 'however' or 'it depends.' Be direct."
  • "Don't include a summary section at the end that repeats what you already said."
  • "Don't use corporate jargon. Use plain language."
  • "Don't pad the response to seem more comprehensive. Short and actionable beats long and thorough."

The principle: Anti-instructions are as important as instructions. Every default behavior you override gets you closer to output that sounds like you, not like AI.

Mistake 9: Ignoring the Role/Persona Technique

Without a role, AI responds as a generic assistant. With a role, it responds with specific expertise and perspective.

The mistake: "Review my pricing strategy."

The fix: "You're a SaaS pricing consultant who specializes in B2B products with $10K-$100K ACV. Review my pricing strategy: [describe]. What would you change and why?"

Why it matters: The role determines vocabulary, depth, recommendations, and perspective. A pricing consultant gives different (better) advice than a generic AI assistant.

Mistake 10: Not Learning from What Works

Most people treat every AI interaction as a fresh start. The best prompt engineers build a library of prompts that work for their specific needs.

The fix:

  • When a prompt produces great output, save it as a template
  • Note what context made the difference
  • Build prompt templates for your recurring tasks (weekly reports, content briefs, code reviews)
  • Share effective prompts with your team

The principle: Prompt engineering is a skill that compounds. Every prompt you save and refine makes you faster and more effective next time.
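A prompt library doesn't need tooling; a dictionary of templates with named slots is enough to start. A minimal Python sketch (the template names and slots below are made up for illustration):

```python
import string

# Templates keyed by recurring task, with named slots for the
# context that "made the difference" last time.
TEMPLATES = {
    "weekly_report": (
        "Summarize this week's progress for $audience. "
        "Format: 3 bullet points of wins, 2 risks, 1 ask. "
        "Tone: direct, no filler.\n\nNotes:\n$notes"
    ),
    "content_brief": (
        "Write a content brief for a post targeting $audience. "
        "Angle: $angle. Length: $length words. "
        "Exclude generic introductions."
    ),
}

def render(task: str, **context: str) -> str:
    """Fill a saved template with this run's context.
    substitute() raises KeyError if a slot is missing, so gaps
    in context surface before you send the prompt."""
    return string.Template(TEMPLATES[task]).substitute(**context)
```

Using strict `substitute` rather than `safe_substitute` is deliberate: a missing slot is exactly the "not enough context" mistake, caught before the prompt goes out.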

The Quick Fix Checklist

When AI gives you bad output, run through this checklist:

  1. Did I provide enough context about my situation?
  2. Did I specify the output format I need?
  3. Am I asking for too many things at once?
  4. Did I set constraints (length, tone, audience)?
  5. Did I include examples of what good looks like?
  6. Did I tell it what NOT to do?
  7. Did I iterate on the first output instead of giving up?

Most bad outputs can be fixed by addressing just 2-3 of these questions.
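If you run this checklist often, a crude linter can automate the first pass. The sketch below is a rough heuristic: the signal words are guesses to adapt to your own prompts, not a rigorous test.

```python
# Signal words suggesting a checklist item is present in a prompt.
# Illustrative only -- tune these for your domain.
CHECKS = {
    "context":     ["context:", "i run", "mrr", "budget"],
    "format":      ["format", "table", "list", "json", "bullet"],
    "constraints": ["words", "tone", "audience", "length"],
    "exclusions":  ["don't", "do not", "avoid", "exclude"],
}

def lint_prompt(prompt: str) -> list[str]:
    """Return the checklist items a prompt appears to be missing."""
    text = prompt.lower()
    return [item for item, signals in CHECKS.items()
            if not any(s in text for s in signals)]
```

A vague prompt like "Help me with my marketing strategy." fails all four checks; adding context, a format, constraints, and exclusions clears the list.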
