"How do I get better results from ChatGPT?"

This is the question I'm hearing from teams now. The answer is simple: better prompts. And that's a learnable skill.

What's a Prompt?

A prompt is the instruction you give to the AI. It ranges from simple ("summarize this") to complex ("You are a legal analyst specializing in contract review. Read the following employment agreement and flag any clauses that are unusual or concerning...").

The better your prompt, the better your result.

Why This Matters

Bad prompt: "Draft an email to a client."
Result: Generic, could have been written for any client, needs editing before it's usable.

Good prompt: "Draft a follow-up email to a client who has been with us for three years, thanking them for a referral, mentioning a specific project, and suggesting a time for a call next week."
Result: Specific, personalized, ready to send with minimal edits.

The difference is effort on the prompt, not effort in editing the result.

The Three Elements of Good Prompts

1. Context

"You are helping a law firm with contract review."

This tells the AI who it's helping and what domain it's working in. The AI adjusts its knowledge and tone accordingly.

2. Task

"Review the following contract and identify any clauses that are ambiguous or potentially unfavorable to the client."

This is the actual thing you want done. Be specific about what you want, not vague about what you need.

3. Constraints

"Format your response as bullet points. Explain why each item is concerning. Stay under 500 words. Don't make assumptions about the client's preferences."

Constraints help the AI understand what "done" looks like.
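For teams that call a model programmatically rather than typing into a chat window, the three elements translate directly into code. A minimal sketch in Python; the helper name and the way the pieces are joined are illustrative choices, not a required format:

```python
def build_prompt(context: str, task: str, constraints: str) -> str:
    """Assemble context, task, and constraints into one prompt.

    Mirrors the three elements above; separating them with blank
    lines is one reasonable convention, not a rule.
    """
    return "\n\n".join([context, task, constraints])

prompt = build_prompt(
    context="You are helping a law firm with contract review.",
    task=("Review the following contract and identify any clauses "
          "that are ambiguous or potentially unfavorable to the client."),
    constraints=("Format your response as bullet points. Explain why each "
                 "item is concerning. Stay under 500 words."),
)
```

Keeping the three elements as separate arguments makes it easy to swap one out (say, a different set of constraints) without rewriting the whole prompt.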

Prompt Engineering Basics

Rule 1: Be Specific

Bad: "Summarize this meeting."
Good: "Summarize the action items from this meeting. Include who's responsible, the deadline, and any dependencies."

Rule 2: Give Examples

Bad: "Write a professional email."
Good: "Write a professional email in a tone similar to this example: [paste example]. It should have three paragraphs and a clear call to action."

Rule 3: Clarify Format

Bad: "List the issues with this proposal."
Good: "List the issues with this proposal as a numbered list. For each issue, explain the problem and suggest a fix."

Rule 4: Give Context About Your Domain

Bad: "Is this ethical?"
Good: "From a legal ethics perspective (focusing on IOLTA rules), is this ethical?"

Rule 5: Iterate

Your first prompt might not be perfect. Use follow-up prompts: "Make that more concise." "Focus more on X." "Give me an alternative that's more aggressive."
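If your team uses a chat API instead of the web interface, iterating maps naturally onto the conversation history: each follow-up is appended as a new user message, so the model sees the full exchange. A sketch of that structure using the role/content message format common to chat APIs; the message contents are placeholders, and the actual API call is omitted:

```python
# Conversation history in the role/content format used by most chat APIs.
messages = [
    {"role": "user", "content": "Draft a follow-up email to a client..."},
]

# Suppose the model replied; record its answer, then send the follow-up.
messages.append({"role": "assistant", "content": "Dear client, ..."})
messages.append({"role": "user", "content": "Make that more concise."})

# The whole history goes back to the model on the next call, which is
# why a short follow-up like "make that more concise" works: "that"
# refers to the draft already in the conversation.
```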

How to Teach Your Team

Day 1: Show them a bad prompt and a good prompt. Explain the differences. Have them try both on ChatGPT themselves.

Days 2-3: Have them write prompts for real work they do. Share them. Critique together. Iterate.

Days 4-5: They practice on their own. Build a library of good prompts your firm uses repeatedly.

That's it. A week to get competent at something that matters.

Building Your Prompt Library

Create a shared doc (Google Docs, Notion, whatever) with prompts your team has tested and likes.

Your team can then pull from proven prompts instead of starting from scratch.
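A shared document works fine, but a prompt library also translates directly into code: tested templates with named blanks, filled in per task. A minimal sketch using Python's built-in str.format; the template names and contents are hypothetical examples drawn from the rules above:

```python
# Hypothetical firm prompt library: tested templates with named blanks.
PROMPT_LIBRARY = {
    "meeting_summary": (
        "Summarize the action items from this meeting. Include who's "
        "responsible, the deadline, and any dependencies.\n\n{transcript}"
    ),
    "proposal_review": (
        "List the issues with this proposal as a numbered list. For each "
        "issue, explain the problem and suggest a fix.\n\n{proposal}"
    ),
}

def fill(name: str, **fields: str) -> str:
    """Fetch a template by name and fill in its blanks."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = fill("meeting_summary", transcript="(paste transcript here)")
```

The point is the same as the shared doc: nobody re-invents a prompt that's already been tested and refined.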

What You Should Do

Pick one task your team does regularly. Spend 15 minutes writing a really good prompt for it.

Test it. Run it 3-5 times on real work. Refine.

Share it. Show your team the prompt. Explain why it works. Let them use it.

Document it. Save it somewhere accessible.

Repeat for the next task.

The Payoff

Good prompt engineering turns AI from a "neat tool I use occasionally" to "the standard way we do this work."

That's when you get real productivity gains.