Prompt Tips

Writing a great custom prompt can take a few tries, especially when you have a complex or nuanced task. Here are some quick tips for crafting and testing a good custom prompt in Relay.app.

Write like you're explaining a task to a new intern

How would you describe what you need from a new hire? You'd use natural language, be polite, and give examples of the kind of results you're looking for. Try the same mindset when writing a custom prompt!

Set the stage

Start your prompt by providing a little bit of context about what the model's role is and the kind of output you expect. This is especially helpful in content generation prompts when you're looking for a certain tone or perspective.

This might sound like "You're a friendly customer service agent…" or "You're a professional HR leader and your job is to…"
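If it helps to see the idea spelled out, here's a rough sketch in Python of a prompt that sets a role and describes the output you want (the scenario, wording, and email_text variable are purely illustrative):

```python
# Sketch only: a prompt that sets the model's role and the expected output.
# The store, wording, and example email are illustrative, not required by Relay.app.
email_text = "Hi, my order arrived damaged. What can you do?"

prompt = (
    "You're a friendly customer service agent for an online furniture store. "
    "Write a short, empathetic reply to the customer email below. "
    "Keep it under 100 words and end by offering a replacement or a refund.\n\n"
    f"Customer email:\n{email_text}"
)
print(prompt)
```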

Give concrete examples

Examples are incredibly helpful. In your prompt, include concrete examples of the output you'd expect in different scenarios. Think of a few cases that might arise, like the common case and a couple of edge cases, and describe what the model should do in each.

Let's say you're extracting costs from an invoice: you could give examples of the output you expect when the cost is shown in dollars versus euros, or when it does and doesn't include tax.
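For instance, here's a rough sketch of what that invoice prompt could look like (the JSON shape and the sample invoices are just illustrative):

```python
# Sketch only: a cost-extraction prompt that spells out a few concrete cases,
# including a dollar/euro example and a with/without-tax example.
prompt = """Extract the total cost from the invoice text and reply with JSON:
{"amount": <number>, "currency": "USD" or "EUR", "includes_tax": true or false}

Examples:
Invoice: "Total due: $1,200.00 (tax included)"
Output: {"amount": 1200.00, "currency": "USD", "includes_tax": true}

Invoice: "Subtotal 950 EUR, VAT not included"
Output: {"amount": 950, "currency": "EUR", "includes_tax": false}

Now extract the cost from this invoice:
"""
print(prompt)
```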

Test, test, test

Obviously, testing your prompt is important. But when the model doesn't produce what you expect, there are so many things that could have gone awry that it's often hard to figure out what to tinker with. Here are some strategies for making testing manageable.

Break it up

Does your prompt ask the model to do multiple tasks? Could the task be broken down into steps? Splitting a large prompt into a couple of smaller ones makes it easier to get each piece right.

This is also a good strategy if you're running into length limits with a single long prompt.
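As a rough sketch, one big "summarize this email and draft a reply" prompt could become two smaller steps like these (the wording and the example summary are illustrative):

```python
# Sketch only: two smaller prompts instead of one large one.
# The text in `summary` stands in for the first step's output.
step_1_prompt = "Summarize the customer email below in two sentences:\n\n{email}"  # {email} is your variable

summary = "The customer's app crashes on login and they'd like an ETA on a fix."  # example output of step 1
step_2_prompt = (
    "Using this summary, draft a short, apologetic reply that says engineering "
    f"is investigating:\n\n{summary}"
)
```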

Create intermediate variables

Validate that your prompt is making correct assumptions along the way by creating intermediate variables.

For example, if you're processing customer emails, you could add booleans like "Is bug report" and "Is feature request" to confirm that the model is correctly classifying an email.
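As a rough sketch of the idea (the field names and the sample reply are illustrative), you'd check those booleans before acting on anything else:

```python
# Sketch only: intermediate boolean variables you can inspect before acting.
# The field names and the example model reply are illustrative.
import json

model_reply = '{"is_bug_report": true, "is_feature_request": false, "summary": "App crashes on login"}'
fields = json.loads(model_reply)

if fields["is_bug_report"]:
    print("Send to the bug-triage path")        # in Relay.app, this could be its own path
elif fields["is_feature_request"]:
    print("Send to the product-feedback path")
```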

Throw in some hard-coded values

While you're building your prompt, try replacing variables (say, a meeting date or invoice value) with hard-coded values as placeholders.

This makes it easier to test specific scenarios—like setting the date to a weekday or weekend—and to isolate the parts of the prompt you want to work on.
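Here's a small sketch of the idea, using a date pinned to a weekend (the date and wording are illustrative):

```python
# Sketch only: pinning an input to a hard-coded value while you tune the prompt.
from datetime import date

# meeting_date = date.today()        # the real variable, once the prompt behaves
meeting_date = date(2024, 6, 15)     # hard-coded to a Saturday to test the weekend case

prompt = f"The meeting is on {meeting_date:%A, %B %d}. Suggest a good time for a follow-up call."
print(prompt)
```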

Try a variety of use cases

We recommend validating your prompt on a variety of inputs before setting your workflow live. For a complex prompt, don't be surprised if you need to test it 10 times on different cases before you're satisfied that it performs consistently well.

Think of the common case, and think of a few edge cases. Use step testing to test them all.

Model matters

Today's models vary greatly in terms of strengths, weaknesses, and cost. We've tried to be helpful by noting particular strengths of some models and indicating cost with dollar sign tags.

Start with a less expensive model. Then, if you're not getting good results and you've done a few prompt iterations, give another model a whirl.

"Show your work"

We love this tip: ask the model to show its reasoning. You'll get an explanation of how it arrived at its output, which makes it much easier to spot faulty logic.
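In practice this can be as simple as one extra instruction, sketched here (the scenario is illustrative):

```python
# Sketch only: asking the model to explain its reasoning before the final answer.
prompt = (
    "Decide whether this expense is reimbursable under our travel policy. "
    "First explain your reasoning in two or three sentences, then give your final "
    "answer on its own line as 'Answer: yes' or 'Answer: no'.\n\n"
    "Expense: Taxi from the airport to the hotel, $42"
)
```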

Ask for a confidence score

If you're using AI to make an important decision (like classifying an email from a client so you can reply with the right template), you may want to know how confident the model is in its decision.

Ask the model to rate its confidence in its decision on a scale of, say, 1 to 10. You can use this score to create a path and only run automations when the confidence score is high.
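Sketched out, that could look like this (the categories, JSON shape, and threshold are illustrative choices):

```python
# Sketch only: asking for a 1-10 confidence score you can branch on later.
import json

prompt = (
    "Classify this client email as 'invoice question', 'contract question', or 'other'. "
    'Reply with JSON: {"category": "...", "confidence": <1-10>}\n\n'
    "Email: Could you resend last month's invoice? The PDF was blank."
)

model_reply = '{"category": "invoice question", "confidence": 9}'  # example model output
result = json.loads(model_reply)
confident_enough = result["confidence"] >= 8   # pick a threshold that suits your workflow
```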

Bring a human into the loop

If you're asking AI to make a decision, but its confidence score is low, that's a great time to bring in a person to review or make the decision.

You can do this by splitting the workflow into two paths based on high and low confidence scores. On the low confidence path, add a "Get data input" step to ask a person for input or a decision based on the output from the AI step.
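As a rough sketch of that branching logic (notify_reviewer() is a hypothetical stand-in for whatever your review step does, and the threshold is up to you):

```python
# Sketch only: branch on the confidence score and fall back to a person.
# notify_reviewer() is a hypothetical stand-in for a human-review step.
def notify_reviewer(result: dict) -> None:
    print(f"Please review this classification: {result}")

result = {"category": "invoice question", "confidence": 4}  # example AI step output

if result["confidence"] >= 8:
    print(f"High confidence: auto-reply with the {result['category']} template")
else:
    notify_reviewer(result)   # low confidence: ask a person to decide
```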

Check out other prompt writing resources

We've learned these best practices from our own experience while writing prompts in Relay.app, but we're certainly not the first to write about good prompts!

There are many great resources out there. If you need more help, check out the prompt-writing guides published for the specific models you're using.
