Half the AI projects I see fail not because the model is bad, but because the prompt is bad. Someone writes "summarize this contract" and then wonders why the output is a meaningless wall of text. The difference between a good prompt and a bad one is understanding what you're actually trying to do.
Here's how to write prompts that work for business workflows.
The Difference Between Chat and Business Prompts
Chat prompts: "Hey, what's a good way to structure a presentation?" The response is conversational. It's fine if it's a little vague. You'll follow up with more questions.
Business prompts: "Extract the payment terms, effective date, and termination clause from this contract. Return as JSON." The response must be precise, structured, and machine-readable. You can't follow up with clarifying questions. You need it to work reliably on 1000 contracts.
Most bad prompts are written like chat prompts when they should be business prompts.
The Three-Part Formula
Part 1: Context and Role
Tell the AI what it is and what domain it's working in. "You are a legal analyst specializing in contract review" or "You are a data extractor focused on accuracy."
This matters more than you'd think. It focuses the model on the right domain and sets expectations for quality.
Part 2: Exact Instructions
Be specific about what you want. Not "summarize this contract." Instead: "Extract the following fields from this contract: (1) parties involved, (2) effective date, (3) payment terms, (4) termination clause. Format as JSON with keys: 'parties', 'effective_date', 'payment_terms', 'termination_clause'."
The more specific you are, the more consistent the output.
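To make "exact instructions" concrete, here is a minimal sketch of building that extraction prompt programmatically from a field spec. The helper name `build_extraction_prompt` and the `FIELDS` mapping are illustrative, not from any library:

```python
# Hypothetical helper: assemble a specific extraction prompt from a field spec.
FIELDS = {
    "parties": "parties involved",
    "effective_date": "effective date",
    "payment_terms": "payment terms",
    "termination_clause": "termination clause",
}

def build_extraction_prompt(document: str, fields: dict) -> str:
    numbered = "\n".join(
        f"({i}) {desc}" for i, desc in enumerate(fields.values(), start=1)
    )
    keys = ", ".join(f"'{k}'" for k in fields)
    return (
        "You are a data extractor focused on accuracy.\n"
        f"Extract the following fields from this contract:\n{numbered}\n"
        f"Format as JSON with keys: {keys}.\n\n"
        f"Contract:\n{document}"
    )

prompt = build_extraction_prompt("AGREEMENT between Acme Corp and ...", FIELDS)
print(prompt)
```

Generating the prompt from a spec like this keeps the field list, the numbering, and the JSON keys in sync, so you edit one place when requirements change.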
Part 3: Examples and Edge Cases
Show examples of what you want. "If the termination clause specifies 'either party may terminate with 30 days notice,' return that exactly. If there is no termination clause, return 'none specified.'"
Examples prevent hallucination. They reduce ambiguity. They make the prompt predictable.
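Edge-case rules like these can be appended to a base prompt mechanically. A small sketch, with assumed names (`EDGE_CASES`, `with_edge_cases`):

```python
# Illustrative: append explicit edge-case rules so the model knows exactly
# what to return in ambiguous situations.
EDGE_CASES = [
    ("the termination clause is present",
     "return the clause text exactly, e.g. "
     "'either party may terminate with 30 days notice'"),
    ("there is no termination clause",
     "return 'none specified'"),
]

def with_edge_cases(base_prompt: str, cases) -> str:
    rules = "\n".join(f"- If {cond}: {rule}." for cond, rule in cases)
    return f"{base_prompt}\n\nEdge cases:\n{rules}"

combined = with_edge_cases("Extract the termination clause. Return as JSON.", EDGE_CASES)
print(combined)
```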
A Real Example
Bad prompt: "Analyze this customer intake form and extract the key information."
Good prompt: "You are a client intake specialist. Extract the following information from the intake form below: (1) Client name (2) Industry (3) Primary contact (4) Specific problem statement (5) Budget range. Return as JSON. If a field is not provided in the form, use 'not provided' as the value. Do not infer or assume information that is not explicitly stated."
The good prompt is specific about what to extract, the format, how to handle missing data, and what assumptions not to make. The output will be consistent and reliable.
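Even a good prompt deserves a validation step on the way out. A sketch of one, assuming the intake prompt returns JSON with snake_case keys (the exact key names here are assumptions):

```python
import json

# Assumed JSON keys for the intake prompt above.
REQUIRED_KEYS = ["client_name", "industry", "primary_contact",
                 "problem_statement", "budget_range"]

def validate_intake(raw: str) -> dict:
    """Parse model output and normalise missing/empty fields to 'not provided'."""
    data = json.loads(raw)  # raises ValueError if the model returned non-JSON
    return {k: data.get(k) or "not provided" for k in REQUIRED_KEYS}

# A model response missing two fields still yields a complete record:
raw = '{"client_name": "Acme Corp", "industry": "logistics", "budget_range": ""}'
record = validate_intake(raw)
print(record)
```

This belt-and-braces step means downstream code always sees the same five keys, even when the model forgets an instruction.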
The Advanced Techniques
Chain-of-thought prompting: "Before answering, think through the logic step-by-step. First, identify the parties. Second, find the termination clause. Third, extract the specific terms. Then provide your answer." This reduces errors on complex reasoning tasks.
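The step list itself can be templated so every extraction task gets the same reasoning scaffold. A sketch with assumed names:

```python
# Illustrative: prepend an explicit step-by-step plan to a task prompt.
STEPS = [
    "identify the parties",
    "find the termination clause",
    "extract the specific terms",
]

def chain_of_thought(task: str, steps) -> str:
    plan = "\n".join(f"{i}. {s.capitalize()}." for i, s in enumerate(steps, 1))
    return (f"{task}\n\nBefore answering, think through the logic "
            f"step-by-step:\n{plan}\nThen provide your answer.")

out = chain_of_thought("Extract the termination terms from the contract below.", STEPS)
print(out)
```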
Temperature and constraints: For business workflows, use lower temperature (closer to 0). You want consistency, not creativity. You also want to constrain the output format (JSON, CSV, specific fields) to prevent variation.
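In practice these settings live in the API request, not the prompt text. A sketch of a request payload assuming an OpenAI-style chat completions API (parameter names and the model string vary by provider and are assumptions here):

```python
# Assumed OpenAI-style request shape; adapt names to your provider's SDK.
request = {
    "model": "gpt-4o",                           # placeholder model name
    "temperature": 0,                            # consistency over creativity
    "response_format": {"type": "json_object"},  # constrain output to JSON
    "messages": [
        {"role": "system",
         "content": "You are a data extractor focused on accuracy."},
        {"role": "user",
         "content": "Extract the payment terms from the contract. Return as JSON."},
    ],
}
print(request["temperature"], request["response_format"]["type"])
```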
Negative examples: "Do not infer dates if they are not explicitly stated. Do not summarize—extract only. Do not include commentary." Telling the model what not to do is often as important as telling it what to do.
Testing and Iteration
You don't get a good prompt in one go. You write it, test on 5-10 examples, identify failures, refine, test again. Most prompts take 2-3 iterations to get right.
Common failure modes:
- Hallucination (making up information): Add "do not infer" to the prompt
- Inconsistent output format: Add explicit format examples
- Missing information: Add "if not provided, state 'not provided'"
- Over-interpretation: Add "extract only, do not interpret"
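The test-and-refine loop can be a tiny regression harness. A sketch where `call_model` is a stub standing in for your real model call (swap in your provider's SDK):

```python
import json

def call_model(prompt: str, document: str) -> str:
    # Stub for illustration; a real implementation would call the model API.
    return '{"payment_terms": "net 30", "termination_clause": "none specified"}'

TEST_CASES = [
    # (document snippet, expected termination_clause)
    ("Payment due net 30. No termination clause.", "none specified"),
    ("Payment due net 30. Either party may terminate with 30 days notice.",
     "either party may terminate with 30 days notice"),
]

def run_suite(prompt: str):
    failures = []
    for doc, expected in TEST_CASES:
        try:
            out = json.loads(call_model(prompt, doc))
        except json.JSONDecodeError:
            failures.append((doc, "non-JSON output"))
            continue
        if out.get("termination_clause") != expected:
            failures.append((doc, out.get("termination_clause")))
    return failures

failures = run_suite("Extract payment_terms and termination_clause. Return JSON.")
print(failures)  # with this stub, the second case fails
```

Run the suite after every prompt edit; a refinement that fixes one case but breaks another shows up immediately.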
Versioning Prompts
Once you have a working prompt, version it like code. If you improve it, increment the version. If a new model comes out, keep the old prompt version because switching models might require tweaks.
Good naming: "contract-extraction-v2-claude35" instead of "contract extraction prompt final v2 (2)."
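You can even enforce the naming scheme with a check. A sketch assuming the convention above (task, `v` plus number, model tag, all hyphen-separated):

```python
import re

# Assumed scheme: lowercase-hyphenated task, "v" + version number, model tag.
NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*-v\d+-[a-z0-9]+$")

def is_valid_prompt_name(name: str) -> bool:
    return bool(NAME_PATTERN.match(name))

ok = is_valid_prompt_name("contract-extraction-v2-claude35")
bad = is_valid_prompt_name("contract extraction prompt final v2 (2)")
print(ok, bad)
```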
The Honest Take
Good prompts are the difference between AI that works in production and AI that's a curiosity. Invest time in prompt engineering. Test on real data. Document what works. Version your prompts. The firms that do this systematically get AI that actually works; the firms that don't get failures.
Want to discuss AI strategy for your firm?
Book a free 30-minute assessment — no pitch, just practical insights.
Book a Call