Assistant Settings
Create your new AI Assistant and configure its settings.

Go to ‘Assistants’.
Choose ‘Add Assistant’.
For more information on the various settings, please see the section below.
Assistant Configuration

Prompt: Input your custom prompt instructions here. If you need inspiration, browse our prompt library to see if something there suits your needs. Please remember to include <CONTEXT> where required. ASAP will insert the results of the user's semantic search wherever the <CONTEXT> tag is placed. For more information, please see this excellent article: https://platform.openai.com/docs/guides/gpt-best-practices/six-strategies-for-getting-better-results
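Conceptually, the <CONTEXT> substitution described above can be pictured as a simple template replacement. This is only an illustrative sketch — the function and variable names are hypothetical, not ASAP's actual implementation, and ASAP performs this step internally:

```python
# Hypothetical sketch of how the <CONTEXT> tag is filled in.
# ASAP does this internally; all names here are illustrative.

def build_prompt(template: str, search_results: list[str]) -> str:
    """Replace the <CONTEXT> tag with the semantic-search results."""
    context = "\n\n".join(search_results)
    return template.replace("<CONTEXT>", context)

template = (
    "Answer the user's question using only the information below.\n"
    "<CONTEXT>\n"
    "If the answer is not in the context, say you don't know."
)
prompt = build_prompt(template, ["Doc chunk A...", "Doc chunk B..."])
```

The key point is simply that everything returned by the semantic search lands exactly where you placed the tag, so position the tag where the context reads naturally within your instructions.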
Type: ASAP currently only supports Questions & Answers.
Greeting Message: This setting only applies to the ASAP sandbox. Show a pre-defined greeting message when a user starts a new session or opens the chat. This will not get sent out to external channels.
Style: Pick the temperature of the response from the LLM. Precise (temp = 0.2) causes the LLM to respond in more consistent ways based on the provided context. Creative (temp = 0.8) causes the LLM to respond in more diverse and creative ways, while still referencing the provided context.
Model: Pick your desired LLM. At the moment, ASAP only supports GPT-3.5 and GPT-4.
GPT-3.5 is suitable for most tasks; however, it is weak when it comes to logic, pure creation, understanding of the sciences, and handling of complex enquiries.
GPT-4 is a better choice for long and complex questions, as well as sentiment analysis, working with numbers, and connecting the dots between discrete pieces of information.
We recommend trying GPT-3.5 first, as it responds faster than GPT-4. Consider switching to GPT-4 if 3.5 is unable to provide the kinds of answers you seek.
The token allocation per model is as follows:
GPT-3.5 4k: 3k tokens for the prompt & 1k tokens for the response
GPT-3.5 16k: 14k tokens for the prompt & 2k tokens for the response
GPT-4 8k: 7k tokens for the prompt & 1k tokens for the response
GPT-4 32k: 30k tokens for the prompt & 2k tokens for the response
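As a rough illustration, the allocations above can be expressed as a simple budget check. This is a hypothetical sketch only — the model keys are made-up labels, and real token counting requires the model's own tokenizer, which this sketch does not perform:

```python
# Token allocations per model as listed above: (prompt tokens, response tokens).
# The dictionary keys are illustrative labels, not official model identifiers.
TOKEN_BUDGETS = {
    "gpt-3.5-4k":  (3_000, 1_000),
    "gpt-3.5-16k": (14_000, 2_000),
    "gpt-4-8k":    (7_000, 1_000),
    "gpt-4-32k":   (30_000, 2_000),
}

def fits_budget(model: str, prompt_tokens: int) -> bool:
    """Return True if a prompt of the given size fits the model's allocation."""
    prompt_budget, _response_budget = TOKEN_BUDGETS[model]
    return prompt_tokens <= prompt_budget
```

In practice this means the prompt instructions, the inserted context, and any message history all share the prompt allocation, so larger-context models leave more room for retrieved documents.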

Translate: This function is deprecated and will be removed soon.
AI Goal Affirmation: A set of priority instructions that will be sent to the LLM. This is appended to the Prompt Payload towards the bottom, which tends to make the LLM give it a higher priority compared to the main Prompt Instructions. See this article for more information. This input box has a limit of 255 characters.
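The ordering effect described above can be sketched as follows. This is purely illustrative — the real Prompt Payload structure is internal to ASAP, and the function and field names here are hypothetical:

```python
def assemble_payload(prompt: str, context: str,
                     history: list[str], goal_affirmation: str) -> str:
    """Hypothetical assembly order. The goal affirmation goes at the bottom
    of the payload, a position LLMs tend to weight more heavily than
    instructions that appear earlier."""
    # The AI Goal Affirmation input is limited to 255 characters.
    assert len(goal_affirmation) <= 255
    parts = [prompt.replace("<CONTEXT>", context)]
    parts.extend(history)
    parts.append(goal_affirmation)  # appended last, towards the bottom
    return "\n\n".join(parts)
```

A short, firm affirmation (e.g. "Only answer using the provided context.") placed here is often more effective than repeating the same instruction inside the main Prompt.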
Fulfillments: Pick up to 3 fulfillments to attach to your assistant. ASAP will search the documents attached to these fulfillments to find relevant documents to send to the LLM as part of the Prompt Payload.
Response URL: If you are sending out to an external channel, please input the webhook URL here.
Session Reset Timer: When set to 0, this function is disabled. When a value is selected (1 - 12), the user's session will soft reset if no message is sent to ASAP during that time period. When a session is reset, the previous message history will not be added to the Prompt Payload.
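The reset behaviour above can be sketched as a simple expiry check. This is an illustrative sketch only — the function name is hypothetical, and treating the 1–12 values as hours is an assumption made here for the example:

```python
from datetime import datetime, timedelta

def session_expired(last_message_at: datetime, now: datetime,
                    reset_timer: int) -> bool:
    """Hypothetical check for the Session Reset Timer. A value of 0
    disables the timer; otherwise the session soft-resets once no message
    has arrived within the window (unit assumed to be hours here)."""
    if reset_timer == 0:
        return False
    return now - last_message_at >= timedelta(hours=reset_timer)
```

After a soft reset, the assistant still responds normally; the only difference is that earlier messages are no longer carried into the Prompt Payload.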
Event Webhook URL: Input your webhook URL for ASAP to send events to. ASAP currently sends only a session reset event.
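On the receiving end, your webhook endpoint would parse the event and act on it. The payload shape below ({"event": "session_reset"}) is entirely hypothetical — it is not ASAP's documented schema, just a placeholder to show the handling pattern:

```python
import json

def handle_event(raw_body: str) -> str:
    """Hypothetical webhook handler. The payload field names are
    illustrative placeholders, not ASAP's actual event schema."""
    event = json.loads(raw_body)
    if event.get("event") == "session_reset":
        # e.g. clear any state your system keeps for this conversation
        return "session_reset handled"
    return "ignored"
```

Whatever the real schema, the pattern is the same: parse the JSON body, branch on the event type, and ignore event types you don't recognise so future additions don't break your endpoint.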
Validation Methods

There are three options for Validation: Confidence Threshold, LLM Validator, and Both.
Confidence Threshold: Each chunk returned by the Semantic Search is assigned a sorting score that reflects its relevance. If this option is selected, all chunks with a score lower than the set threshold are filtered out. A value of 100% on the Confidence Threshold slider is equal to a sortingScore of 1.0, and this filtering covers all chunks returned from all fulfillments attached to this Assistant.
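The filtering rule can be sketched in a few lines. This is a conceptual illustration only — ASAP applies the threshold internally, and the function name and chunk structure here are assumptions (apart from the sortingScore field named above):

```python
def filter_chunks(chunks: list[dict], threshold_percent: float) -> list[dict]:
    """Illustrative sketch of Confidence Threshold filtering: keep only
    chunks whose sortingScore meets the cutoff. 100% on the slider
    corresponds to a sortingScore of 1.0."""
    cutoff = threshold_percent / 100.0
    return [c for c in chunks if c["sortingScore"] >= cutoff]
```

Note that a very high threshold can filter out every chunk, leaving the LLM with no context at all, so start moderate and tighten gradually.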
LLM Validator: If selected, this option sends the LLM completion, the user input, and the relevant chunks back to the LLM to confirm if the initial completion is valid or not. If the response is 'Fail', ASAP will send the user the Fallback Response instead.
Validator Model: Input a custom LLM model identifier here or use the pre-filled model. This field must not be left blank.
LLM Validator Prompt: The Validator Prompt allows you to fine-tune the parameters sent to the LLM. A default value is provided, and we encourage you to use it. This field must not be left blank.
N.B. The Validator relies on the response coming back to ASAP being in the prescribed JSON format. Please use the default JSON output format exactly.
Fallback Response Text: If ASAP receives a fail response from the LLM Validator, the system will return the Fallback Response Text instead of the LLM completion. You can customise the text here.
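Putting the validator and fallback together, the decision reduces to a simple branch. This sketch is illustrative only — the verdict JSON shape shown here ({"result": "Pass"} / {"result": "Fail"}) is a hypothetical stand-in, not ASAP's prescribed format, which is why the default Validator Prompt and its exact JSON output should be kept as-is:

```python
import json

def final_response(validator_json: str, completion: str,
                   fallback_text: str) -> str:
    """Sketch of the fallback logic. The verdict field name is a
    placeholder; in practice, ASAP parses the validator's default
    JSON output format."""
    verdict = json.loads(validator_json)
    if verdict.get("result") == "Fail":
        return fallback_text      # validator rejected the completion
    return completion             # validator accepted it
```

This also shows why the JSON format matters: if the validator's response can't be parsed into the expected structure, ASAP has no reliable verdict to branch on.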
Validator Sequence Diagram
