The Art of Prompt Writing: How I Improved Product MVP With Better Prompts
When I first started leading a product initiative to build an AI-powered marketing campaign creation tool, I knew we’d be working with large language models (LLMs). What I didn’t realize was how much of the output quality would come down to one thing: the prompt.
We experimented with a bunch of models — ChatGPT, Claude, Grok, DeepSeek. And while each model had its own quirks, one thing was consistent: garbage in, garbage out. The better we got at prompting, the better the model performed.
That’s when I decided to take IBM’s free certification course on Generative AI: Prompt Engineering.
This blog breaks down what I learned, how I’m applying it in real-world product work, and a few prompt hacks you can start using today — whether you’re building with LLMs or just want better answers from ChatGPT.
Why Prompt Engineering Matters
Prompts aren’t just input — they’re interface. Every time you type something into an LLM, you’re crafting the instruction the model will use to generate results. That’s a lot of weight for a few sentences to carry.
When we started testing user flows in our AI campaign builder, we noticed:
Prompts with no context returned bland or off-topic content
Prompts with too much fluff confused the model
A little structure went a long way in improving quality
If you're building anything that relies on generative AI — marketing tools, summarizers, internal copilots, customer support agents — learning to craft solid prompts is like learning to write clean code.
Key Lessons From the IBM Course (But Told Like a Human)
1. Good Prompts Have Structure
At a high level, every effective prompt has these parts:
Instruction – What do you want the model to do?
Context – Why is this task important? Who is the user?
Input Data – What’s the raw material?
Output Indicator – What should the answer look like?
Example:
Naive prompt:
"Write a product description."
Structured prompt:
"You’re a senior copywriter at a direct-to-consumer skincare brand. Write a product description for a new vitamin C serum aimed at Gen Z users. Keep it under 60 words. Make it witty and fresh."
You don’t need to overcomplicate things, but framing matters.
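The four parts above translate naturally into a small template helper. Here’s a minimal sketch — the function and field names are my own for illustration, not from the IBM course or our tool:

```python
def build_prompt(instruction, context="", input_data="", output_indicator=""):
    """Assemble a structured prompt from its four parts,
    skipping any that are left empty."""
    parts = [context, instruction, input_data, output_indicator]
    return "\n\n".join(p for p in parts if p)


prompt = build_prompt(
    context="You're a senior copywriter at a direct-to-consumer skincare brand.",
    instruction="Write a product description for a new vitamin C serum aimed at Gen Z users.",
    output_indicator="Keep it under 60 words. Make it witty and fresh.",
)
```

Keeping the parts separate like this made it easy for us to swap context or output format per use case without rewriting the whole prompt.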
2. Clarity Beats Cleverness
We often assume the model “knows what we mean.” It doesn’t. It only knows what we say.
Here are some best practices:
Be specific. Don't just say “make it better” — say how.
Avoid ambiguous words like “good” or “fun.”
Use role-play or personas to ground tone and context.
Example:
Bad prompt:
“Help me with a product idea.”
Better prompt:
“You are a product strategist at a healthtech startup. Suggest three product ideas using wearable sensor data to improve sleep quality. Mention user benefit and potential challenges.”
3. Use Prompt Patterns (That Work)
Chain-of-Thought Prompting
Ask the model to reason step by step before giving a final answer.
“Let’s think step-by-step about how to build a product that uses AI to optimize influencer campaigns.”
This tends to reduce hallucinations and adds logical depth to the answer.
Tree-of-Thought Prompting
Like Chain-of-Thought but more advanced. You ask the model to explore different branches of reasoning before deciding.
“List three different strategies for automating campaign briefs. Then evaluate the pros and cons of each before suggesting the best one.”
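Both patterns are really just wrappers around the underlying task. A minimal sketch (the helper names and phrasing are mine, not a standard API):

```python
def chain_of_thought(task):
    """Prepend a step-by-step reasoning cue to a task."""
    return f"Let's think step-by-step about the task below.\n\nTask: {task}"


def tree_of_thought(task, n_branches=3):
    """Ask the model to explore several strategies and weigh
    them before committing to one."""
    return (
        f"List {n_branches} different strategies for the task below. "
        "Evaluate the pros and cons of each, then recommend the best one.\n\n"
        f"Task: {task}"
    )
```

In practice we kept wrappers like these in a shared module so every feature applied the same reasoning pattern consistently.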
Roleplay/Persona Prompts
Assign the model a role to frame its response style.
“You are a sarcastic but helpful chef. Explain why someone should stop putting olive oil in their boiling pasta water.”
4. Prompt Hacks to Make Models More Creative, Precise, or Weird
Prompt engineering isn’t just about structure — it’s about control.
Modify the Tone:
“Write a product description in the tone of a Gen Z TikToker.”
Inject Examples:
“Here are three campaign briefs we’ve used in the past… Based on these, generate a brief for a new product launch targeting college students.”
Hack Explainability:
“Explain how our LLM chooses influencer tiers, using plain language for a non-technical marketing manager.”
Use Feedback Loops:
“Here’s a first draft output. Now rewrite it to make it more concise and remove passive voice.”
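The “inject examples” hack above is what the course calls few-shot prompting, and it’s simple to mechanize. A minimal sketch, assuming your past examples are plain strings (the helper name is mine):

```python
def few_shot_prompt(examples, request):
    """Prepend worked examples to a new request so the model
    can imitate their structure and tone (few-shot prompting)."""
    blocks = [f"Example {i}:\n{ex}" for i, ex in enumerate(examples, 1)]
    blocks.append(f"Based on these, {request}")
    return "\n\n".join(blocks)


prompt = few_shot_prompt(
    ["Campaign brief A ...", "Campaign brief B ...", "Campaign brief C ..."],
    "generate a brief for a new product launch targeting college students.",
)
```

The same shape works for tone modification and feedback loops: you feed the model its own prior output as an “example” and ask for a targeted revision.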
How This Changed My Product Work
Our AI campaign creation tool went from “nice demo” to “usable MVP” once we:
Structured prompts around user goals, not model capabilities
Created prompt libraries by use case (campaign copy, brief writing, tone variation)
Switched from “ask the model” to “talk to the model”
Now, our prompts aren’t just queries — they’re product features.
Takeaway: If You Use LLMs, Learn to Prompt
Whether you're building with AI or just exploring ChatGPT for fun, better prompting changes everything.
You don't need to memorize terms like "zero-shot" or "Tree-of-Thought" — you just need to:
Be clear
Provide context
Think like a user
Test, refine, repeat