Knowledge
One of the most powerful features of Relay.app is the ability to attach dynamic data to your AI prompts. However, you'll often want to give a model the same reference material every time — like transcripts of past support interactions, or a product catalog.
When using the Prompt any model AI step, you have an additional option to attach Knowledge to your prompt.
Your knowledge base can be thought of as a collection of files that live outside of any one workflow, so you can easily attach them to your AI steps wherever they may be needed.
If you haven't uploaded any files to your knowledge base, clicking the Knowledge button as shown above will guide you through the process.
Once you have a few files set up, you'll be able to select exactly which files are relevant to your prompt.
Knowledge attachments show up on your AI step alongside any data from previous steps.
To remove a knowledge file from your prompt, click the 'x' icon — just as you would to remove a data reference.
Unlike data references, the contents of knowledge files are not embedded directly into the prompt.
Instead, the model is told about all of the files it has access to. It can then choose to access any knowledge file in one of three ways:
It can search a file for similarity to a query
It can search a file for a literal match to a query
It can read an entire file
We'll explain a bit more about how these modes are different, why a model might choose one mode over another, and how you can steer the model toward a particular mode through your prompt.
When searching the internet, it's rare that you're looking for an exact match to your query. You will almost always get (and want) search results that are similar in subject matter or meaning, even if the exact words used differ.
With similarity search, the model will issue a query, and get back the content from your knowledge files that is most likely to be related — even if there are no exact words in common between the two!
This is the search mode that the model will prefer by default, especially in the absence of phrases that would steer it toward a different search mode.
Use cases
Given an incoming customer support email [data] and a history of past support interactions [knowledge], check to see if we've answered similar questions, and summarize previous answers.
Given an incoming bug report [data] and an overview of our team members and their specialties [knowledge], return the name of the team member most familiar with the product area mentioned in the bug.
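Relay.app handles similarity search behind the scenes, but a toy sketch can make the idea concrete. The snippet below scores knowledge chunks against a query and returns the closest match. The `embed` function here is a deliberately simplistic bag-of-words stand-in: a real similarity search uses learned embeddings that capture meaning, so matches need not share any exact words with the query.

```python
from collections import Counter
import math

def embed(text):
    # Toy stand-in for a real embedding model: word counts.
    # Real embeddings capture meaning, not just shared words.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def similarity_search(query, chunks, top_k=1):
    # Rank chunks by how close they are to the query, best first.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

chunks = [
    "A refund is processed within 5 business days.",
    "Our office is closed on public holidays.",
]
print(similarity_search("how long does a refund take", chunks))
```

The model issues a query like this on your behalf when your prompt asks it to find related material in a knowledge file.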
Occasionally, you'll want to look up something very precise in a knowledge file. In those cases, a similarity search could introduce noise by surfacing results that are similar in name or meaning but ultimately unrelated.
With literal search, the model will issue a query, and get back exact matches (with surrounding context) from your knowledge files.
To guide the model toward this search mode, try using trigger words like "exact match," "literal," or "precise." That will hint to the model that it should prefer this search mode over similarity search.
Use cases
Given an incoming question about a particular part number [data], and a parts catalog [knowledge], find the exact part number in the catalog, and use the returned information to answer the question.
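Conceptually, literal search is just exact string matching with some surrounding context returned alongside each hit. The sketch below is an illustration, not Relay.app's implementation; the catalog text and part numbers are made up for the example.

```python
def literal_search(query, text, context=40):
    """Return each exact occurrence of `query` with surrounding context."""
    hits, start = [], 0
    while (i := text.find(query, start)) != -1:
        hits.append(text[max(0, i - context): i + len(query) + context])
        start = i + len(query)
    return hits

catalog = "Part QX-1044: hinge bracket, steel, 12mm. Part QX-1045: hinge bracket, brass, 12mm."
print(literal_search("QX-1044", catalog))
```

Because the match is exact, a query for one part number never drags in its near-identical neighbors — exactly the noise a similarity search might introduce.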
Sometimes the model needs to ingest an entire knowledge file, such as when the full content is relevant to the task.
To guide the model toward this interaction, emphasize in your prompt that the model should interact with the entire file in some way.
Use cases
Write a LinkedIn post, using examples of previous posts [knowledge] to match the general writing style.
Once a week, summarize the recent updates in our team standup doc [knowledge].
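In this mode there's no query at all: the whole file goes to the model instead of search snippets. As a rough mental model (the file name and contents below are invented for the demo):

```python
import os
import tempfile

def read_entire_file(path):
    """Full-file mode: hand the model the whole document, not snippets."""
    with open(path, encoding="utf-8") as f:
        return f.read()

# A throwaway file standing in for a knowledge file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("Mon: shipped v2 search.\nTue: fixed login bug.\n")
    path = f.name

content = read_entire_file(path)
print(content)
os.unlink(path)
```

This is why prompts like the ones above work best when they signal that the whole document matters ("summarize the recent updates", "match the general writing style").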