Knowledge

One of the most powerful features of Relay.app is the ability to attach dynamic data to your AI prompts. Often, though, you'll want to give a model the same reference material every time — like transcripts of past support interactions, or a product catalog.

When using the "Prompt any model" AI step, you have an additional option to attach Knowledge to your prompt.

[Screenshot: AI step configuration showing the Knowledge button used to attach reference files to a prompt]

Your knowledge base can be thought of as a collection of files that lives outside of any one workflow, so you can easily attach those files to your AI steps wherever they're needed.

If you haven't uploaded any files to your knowledge base, clicking the Knowledge button as shown above will guide you through the process.

[Screenshot: Knowledge upload dialog showing options for Local files, Inline text, Google Drive files, Dropbox files, and Box files]
We support local files, inline text, and a few types of cloud-hosted files. If you need something that's not listed here, reach out to [email protected]!

Cloud files (like those from Drive, Dropbox, or Box) are automatically kept in sync. No need to reimport whenever you make a change!

Attaching knowledge

Once you have a few files set up, you'll be able to select exactly which files are relevant to your prompt.

[Screenshot: Knowledge file selection interface showing a list of available files with checkboxes to select which files to attach to the AI prompt]

Knowledge attachments show up on your AI step alongside any data from previous steps.

[Screenshot: AI step prompt field showing data references in blue and knowledge references in purple]
Data references are blue, and knowledge references are purple. You can see one of each in this screenshot (data on the left, knowledge on the right).

To remove a knowledge file from your prompt, click the 'x' icon — just as you would to remove a data reference.

Using knowledge

Unlike data references, the contents of knowledge files are not embedded directly into the prompt.

Instead, the model is told about all of the files it has access to. It can then choose to access any knowledge file in one of three ways:

  1. It can search a file for similarity to a query

  2. It can search a file for a literal match to a query

  3. It can read an entire file

We'll explain a bit more about how these modes are different, why a model might choose one mode over another, and how you can steer the model toward a particular mode through your prompt.
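To make the distinction concrete, you can picture the three modes as tools the model can call on any attached file. This is purely illustrative; the function names and signatures below are invented for this sketch, not Relay.app's actual interface:

```python
# Hypothetical sketch: the three knowledge-access modes as callable tools.
# Names and signatures are invented for illustration only.

def search_similar(file_id: str, query: str) -> list[str]:
    """Mode 1: return passages whose meaning is close to the query."""
    ...

def search_literal(file_id: str, query: str) -> list[str]:
    """Mode 2: return passages containing the query text verbatim."""
    ...

def read_file(file_id: str) -> str:
    """Mode 3: return the file's entire contents."""
    ...
```

The model picks a mode per file, based on what your prompt asks for.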

Similarity search

When searching the internet, it's rare that you're looking for an exact match to your query. You will almost always get (and want) search results that are similar in subject matter or meaning, even if the exact words used differ.

With similarity search, the model will issue a query, and get back the content from your knowledge files that is most likely to be related — even if there are no exact words in common between the two!

This is the search mode that the model will prefer by default, especially in the absence of phrases that would steer it toward a different search mode.

Use cases

  • Given an incoming customer support email [data] and a history of past support interactions [knowledge], check to see if we've answered similar questions, and summarize previous answers.

  • Given an incoming bug report [data] and an overview of our team members and their specialties [knowledge], return the name of the team member most familiar with the product area mentioned in the bug.

Literal search

Occasionally, you'll want to look up something very precise in a knowledge file. In those cases, a similarity search could introduce noise by surfacing results that are similar in name or meaning, but ultimately unrelated.

With literal search, the model will issue a query, and get back exact matches (with surrounding context) from your knowledge files.
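A minimal sketch of the idea (again, conceptual only, not the product's implementation): scan for the query verbatim and return each hit with some surrounding text. Note how the near-identical part numbers below would be easy for a similarity search to confuse, while a literal search returns only exact hits:

```python
def literal_search(query: str, text: str, context: int = 20) -> list[str]:
    """Return every exact occurrence of `query`, with nearby text for context."""
    matches, start = [], 0
    while (i := text.find(query, start)) != -1:
        lo, hi = max(0, i - context), i + len(query) + context
        matches.append(text[lo:hi])
        start = i + 1
    return matches

catalog = "Part A-1003: hex bolt, 10mm. Part A-1030: washer, 10mm. Part A-1003: restocked."
# Matches only the two exact "A-1003" entries, never the similar "A-1030".
print(literal_search("A-1003", catalog))
```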

To guide the model toward this search mode, try using trigger words like "exact match," "literal," or "precise." That will hint to the model that it should prefer this search mode over similarity search.

Use cases

  • Given an incoming question about a particular part number [data], and a parts catalog [knowledge], find the exact part number in the catalog, and use the returned information to answer the question.

Reading a file

It may also be necessary for the model to ingest an entire knowledge file, in scenarios where the full content is relevant.

To guide the model toward this interaction, emphasize in your prompt that the model should interact with the entire file in some way.

Use cases

  • Write a LinkedIn post, using examples of previous posts [knowledge] to match the general writing style.

  • Once a week, summarize the recent updates in our team standup doc [knowledge].

Tip: If multiple knowledge files are attached to a single step, the model can (and will!) interact with each file differently. It may read one file in full, and only perform a similarity or literal search on another.

If you find that the model isn't using one or more of your files in the way that you'd expect, you'll often be able to control its behavior by adjusting your prompt.
