AIChatFlow enables users to control the behavior of the language model using conversation templates. This section presents a set of example templates that demonstrate the implementation of several techniques described in OpenAI’s “GPT Best Practices Guide”, aimed at achieving accurate responses from the language model.
Note that, by default, user-created templates should be located in the `$XDG_CONFIG_HOME/aichatflow/templates` directory.
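For example, you can create this directory as follows (assuming a typical setup; per the XDG specification, `$XDG_CONFIG_HOME` defaults to `~/.config` when unset):

$ mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/aichatflow/templates"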
Our first two examples illustrate how to use conversation templates to add structure to user prompts. In the first one, we instruct the LLM to provide a concise 50-word summary of a given document (just as described in the “Tactic: Specify the desired length of the output” section of OpenAI’s guide).
- role: system
  content: |-
    You will be provided with a document wrapped with XML tags. Your task
    is to summarize it in about 50 words.
- role: user
  content: |-
    <document>
    {{- (index .promptInput.Attachments 0).Content -}}
    </document>
Please note that this template is designed to be used with the `ai` command, as it refers to the command-line arguments provided during execution. It wraps the content of the first file provided with the `--attach` argument in an XML tag and submits it to the model along with the instructions given in the system message.
In general, you can provide any data structure to the template. To do so, you can either use the `--json-file` command line flag or use AIChatFlow as a library in your own program. For more information about the data available to templates, refer to the Conversation Templates Reference section.
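For illustration only, here is a hypothetical sketch of the `--json-file` approach. The field names, the template name, and the assumption that the JSON document is exposed at the template’s top level are all made up for this example; the actual data layout is described in the Conversation Templates Reference section.

data.json:

{"customer": {"name": "Ada", "plan": "pro"}}

Template fragment:

- role: user
  content: |-
    Write a renewal reminder for {{ .customer.name }},
    who is on the {{ .customer.plan }} plan.

Invocation:

$ ai --new --template renewal-reminder --json-file data.json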
If the above template resides under the name `short-summary.yaml` in the templates directory, it can be invoked as follows:
$ ai --new --template short-summary --attach README.md
AIChatFlow is a command line tool that enables seamless interactions with
Language Models. It simplifies scripting LLM interactions, manages
conversations with LLM models, assists in prompt creation and offers
customizable conversation templates. AIChatFlow is useful for the rapid
development of AI-heavy applications. It stores all interactions in an SQLite
database in your home directory for easy access and management.
The second example is quite similar and illustrates how to instruct the LLM to respond to a user query based solely on the information given in a provided document. This aligns closely with the tactic “Instruct the model to answer with citations from a reference text” outlined in OpenAI’s guide.
- role: system
  content: |-
    You will be provided with a document wrapped with XML tags and a
    question. Your task is to answer the question using only the provided
    document and to cite the passage(s) of the document used to answer the
    question. If the document does not contain the information needed to
    answer this question then simply write: "Insufficient information." If
    an answer to the question is provided, it must be annotated with a
    citation. Use the following format to cite relevant passages
    ({"citation": …}).
- role: user
  content: |-
    <document>
    {{- (index .promptInput.Attachments 0).Content -}}
    </document>
    Question: {{- .promptInput.Prefix -}}
You can copy and paste the above template into the `document-based-response-generator.yaml` file and then use it with the following command:
$ ai --new --template document-based-response-generator --attach README.md "Can I use AIChatFlow for prompt engineering?"
Yes, you can use AIChatFlow for prompt engineering. This feature is made
possible by the use of customizable "conversation templates". These
templates, combined with the data you provide via the command line, allow
you to structure the beginning of a conversation that the LLM model will
then complete. The templates not only allow for custom system messages but
also help structure prompts. This gives you more control over the AI
assistant's behavior so that you can receive more precise responses. You
have the option to use predefined templates or to supply your own.
({"citation": "Unique to AIChatFlow is the ability for prompt engineering
using sturdy conversation templates. These templates, when merged with data
provided by the user through the command line, establish the beginning of a
conversation, which the LLM model subsequently completes. Templates not only
support custom system messages but also induce prompt structuring, granting
you control over the assistant's behavior for more precise responses. A
selection of predefined templates is available, but you also have the option
to supply your own."})
Once again, we are following OpenAI’s guide in this example, which demonstrates the “Provide examples” tactic, i.e. supplying the model with example interactions:
- role: system
  content: Answer in a consistent style.
- role: user
  content: Teach me about patience.
- role: assistant
  content: |-
    The river that carves the deepest valley flows from a modest spring; the
    grandest symphony originates from a single note; the most intricate
    tapestry begins with a solitary thread.
- role: user
Please note that when creating a template, you can include system and user messages as well as simulated assistant responses. In the same manner, you can imitate function calls and their corresponding responses, as sketched below.
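For instance, a simulated function-call exchange might look like the following sketch. The message shape shown here is an assumption modeled on the OpenAI chat format, and `getWeather` is a hypothetical function; consult the Conversation Templates Reference for the actual schema:

- role: assistant
  # Hypothetical shape: simulates the model calling a function
  function_call:
    name: getWeather
    arguments: '{"city": "Warsaw"}'
- role: function
  # Hypothetical shape: simulates the response returned by getWeather
  name: getWeather
  content: '{"temperature_c": 21}'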
Please also note that the second user message in the template above lacks content. This is intentional, as empty user message content is substituted with a default template that operates effectively both with the `ai` command line program and in scenarios where AIChatFlow is utilized as a library. For details about the default content template, refer to the Conversation Templates Reference section.
The following example showcases the “Ask the model if it missed anything on previous passes” tactic from OpenAI’s guide. This tactic involves making multiple requests to the model in order to receive a more precise response. With AIChatFlow, this can be accomplished through the use of a single template:
- role: system
  content: |-
    You will be provided with a document wrapped with XML tags, and a question.
    Your task is to select excerpts which pertain to the question.
    Ensure that excerpts contain all relevant context needed to interpret
    them - in other words don't extract small snippets that are missing
    important context. Provide output in JSON format as follows:
    [{"excerpt": "..."},
    ...
    {"excerpt": "..."}]
- role: user
  content: |-
    <document>
    {{- (index .promptInput.Attachments 0).Content -}}
    </document>
    Question: {{ .promptInput.Prefix -}}
- role: request
- role: user
  content: |-
    Are there more relevant excerpts? Ensure that excerpts contain all relevant
    context needed to interpret them - in other words don't extract small
    snippets that are missing important context. Please output all the
    relevant excerpts in the JSON format specified earlier.
One noteworthy aspect of this example template is the inclusion of the special message of type `request`. This message is not sent to the AI model; instead, it instructs AIChatFlow to send all preceding messages, receive a response, and then proceed with the remaining parts of the template. By utilizing this kind of message, it is possible to create templates that are executed in multiple stages. Each stage ends with a request to the model that includes all prior messages from the conversation (as long as they fit within the model’s context window). This allows the model to improve its own answers through an “internal monologue”.
Here is an example of how to use this template, assuming it is placed in an `excerpts.yaml` template file:
$ ai --new --template excerpts --attach README.md "Can I use AIChatFlow for prompt engineering?"
[{"excerpt": "So, you've decided to try out AIChatFlow, and
you're impressed with how it improves your daily tasks using AI. But you
also believe that you can get even more value from language models by using
well-known prompt engineering techniques. Maybe you've even been tasked with
prototyping a chatbot for your company's customers. In any scenario,
AIChatFlow is the right tool for you."}, {"excerpt": "AIChatFlow provides
you with a powerful ChatBot that sits between you and your AI assistant and
moderates your conversations. It can be extensively configured with so-called
conversation templates. As you are likely aware, conversations with LLMs
involve not only user and assistant messages but also include system
messages (and potentially function calls). Think about ChatBot as the
\"system\" in this equation and consider conversation templates as
instructions on how to effectively moderate the conversation, ensuring it
unfolds in the desired direction."}, {"excerpt": "Conversation templates,
when supplemented with input data, establish a list of messages that are
appended to the conversation prior to being sent for model completion.
Templates can include any type of message that the model understands, and
can also specify parameters (such as temperature or frequency penalty) to be
sent with the completion request. Template may also be chained, either
manually before the conversation begins or dynamically by instructing the
model to select the next template based on user input. As for input data, it
can range from free-form text (as seen in traditional chat applications) to
structured information (eg. obtained through API calls)."}, {"excerpt": "It
is worth noting that AIChatFlow leaves the decision of how to utilize
ChatBot up to you. You have the option to use them either through our
command line utility or integrate them into your own program using the
Golang library. These two options are complementary. You may start
prototyping and experimenting with your chatbot using the provided program,
and once you are satisfied with the results, you can seamlessly embed it
into your own software."}, (...) ]
There is another crucial aspect to consider: when executing a multi-stage template, unless the `--stream-stages` option is given, the `ai` program streams only the final model response to the console. It is therefore essential to be patient, as it takes time to generate completions for the intermediate requests.
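For example, to also stream the intermediate completions for the excerpts template shown above, add the flag to the earlier command:

$ ai --new --template excerpts --attach README.md --stream-stages "Can I use AIChatFlow for prompt engineering?"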
Remember that you can always review the entire interaction using the `--show-all` command-line flag. When inspecting structured prompts, it is also useful to include the `--output-format raw` option to instruct the program to print markdown without any formatting, as shown in the following example:
$ ai --show-all --output-format raw
**System**: You will be provided with a document wrapped with XML tags, and a question.
Your task is to select excerpts which pertain to the question.
Ensure that excerpts contain all relevant context needed to interpret
them - in other words don't extract small snippets that are missing
important context. Provide output in JSON format as follows:
[{"excerpt": "..."},
...
{"excerpt": "..."}]
---
**User**: <document># AIChatFlow
Unleash the Power of LLMs from Your Terminal
Welcome to AIChatFlow, a Golang-powered library and command-line utility
designed to facilitate the construction of chatbots using large language
models (LLMs). Our definition of "chatbots" goes beyond typical conversational
applications to include more structured inter-program communication channels.
(...)
</document>
Question: Can I use AIChatFlow for prompt engineering?
---
**Request**:
{}
---
**Assistant**: [{"excerpt": "So, you've decided to try out AIChatFlow, and you're impressed with how it improves your daily tasks using AI. But you also believe that you can get even more value from language models by using well-known prompt engineering techniques. Maybe you've even been tasked with prototyping a chatbot for your company's customers. In any scenario, AIChatFlow is the right tool for you."},
{"excerpt": "AIChatFlow provides you with a powerful ChatBot that sits between you and your AI assistant and moderates your conversations. It can be extensively configured with so-called conversation templates. As you are likely aware, conversations with LLMs involve not only user and assistant messages but also include system messages (and potentially function calls). Think about ChatBot as the "system" in this equation and consider conversation templates as instructions on how to effectively moderate the conversation, ensuring it unfolds in the desired direction."},
{"excerpt": "Conversation templates, when supplemented with input data, establish a list of messages that are appended to the conversation prior to being sent for model completion. Templates can include any type of message that the model understands, and can also specify parameters (such as temperature or frequency penalty) to be sent with the completion request. Template may also be chained, either manually before the conversation begins or dynamically by instructing the model to select the next template based on user input. As for input data, it can range from free-form text (as seen in traditional chat applications) to structured information (eg. obtained through API calls)."},
(...)
]
---
**User**: Are there more relevant excerpts? Ensure that excerpts contain all relevant
context needed to interpret them - in other words don't extract small
snippets that are missing important context. Please output all the
relevant excerpts in the JSON format specified earlier.
---
**Request**:
{}
---
**Assistant**: [{"excerpt": "So, you've decided to try out AIChatFlow, and you're impressed with how it improves your daily tasks using AI. But you also believe that you can get even more value from language models by using well-known prompt engineering techniques. Maybe you've even been tasked with prototyping a chatbot for your company's customers. In any scenario, AIChatFlow is the right tool for you."},
{"excerpt": "AIChatFlow provides you with a powerful ChatBot that sits between you and your AI assistant and moderates your conversations. It can be extensively configured with so-called conversation templates. As you are likely aware, conversations with LLMs involve not only user and assistant messages but also include system messages (and potentially function calls). Think about ChatBot as the \"system\" in this equation and consider conversation templates as instructions on how to effectively moderate the conversation, ensuring it unfolds in the desired direction."},
{"excerpt": "Conversation templates, when supplemented with input data, establish a list of messages that are appended to the conversation prior to being sent for model completion. Templates can include any type of message that the model understands, and can also specify parameters (such as temperature or frequency penalty) to be sent with the completion request. Template may also be chained, either manually before the conversation begins or dynamically by instructing the model to select the next template based on user input. As for input data, it can range from free-form text (as seen in traditional chat applications) to structured information (eg. obtained through API calls)."},
{"excerpt": "It is worth noting that AIChatFlow leaves the decision of how to utilize ChatBot up to you. You have the option to use them either through our command line utility or integrate them into your own program using the Golang library. These two options are complementary. You may start prototyping and experimenting with your chatbot using the provided program, and once you are satisfied with the results, you can seamlessly embed it into your own software."},
{"excerpt": "As [previously mentioned](#chatbot-prototyping-with-prompt-engineering-techniques), AIChatFlow enables users to control the behavior of the language model using conversation templates. This section presents a set of example templates that demonstrate the implementation of several techniques described in OpenAI's \"GPT Best Practices Guide\", aimed at achieving accurate responses from the language model."},
(...)
]
The final example in our Quick Start guide demonstrates how to instruct a language model to automatically switch templates based on interactions with the user. This serves as an illustration of the “Use intent classification to identify the most appropriate instructions for a user query” tactic outlined in OpenAI’s guide.
Consider the `customer-service.yaml` template provided below. This template instructs the language model to classify customer service queries. Using this classification, the model can automatically switch templates by calling the `botSwitchTemplate` function. Once this function is invoked, AIChatFlow begins executing a new template, which may provide the model with more detailed instructions on how to handle the user’s request.
- role: default-request
  model: gpt-4
  functions:
    - botSwitchTemplate
- role: system
  content: |-
    You will receive customer service queries. Your task is to categorize each
    query and generate output by calling the botSwitchTemplate function,
    passing it the template name specified below in parentheses.
    Here is a list of categories grouped into a higher-level hierarchy:
    Billing categories:
    - Unsubscribe or upgrade (`billing-unsubscribe`)
    - Add a payment method (`billing-payment-methods`)
    - Explanation for charge (`billing-why-charge`)
    - Dispute a charge (`billing-dispute`)
    Technical Support categories:
    - Troubleshooting (`tech-troubleshooting`)
    - Device compatibility (`tech-compatibility`)
    - Software updates (`tech-updates`)
    Account Management categories:
    - Password reset (`acc-passwd-reset`)
    - Update personal information (`acc-pi-update`)
    - Close account (`acc-close`)
    - Account security (`acc-security`)
    General Inquiry categories:
    - Product information (`product-info`)
    - Pricing (`pricing`)
    - Feedback (`feedback`)
    - Speak to a human (`speak-to-human`)
    Provide your output by calling the botSwitchTemplate function, passing it
    a template name as specified in parentheses above.
- role: user
In contrast to the original example from the OpenAI guide, where the model outputs its classification as a JSON structure and developers must parse it and supply the next appropriate prompt themselves, AIChatFlow facilitates a seamless transition between conversation templates.
An important note about the above template: unlike the `request` message demonstrated in the previous example, the `default-request` message does not initiate a request. It simply specifies the parameters that will be used with any future requests in the conversation (including requests defined in other templates) until the next `default-request` message is encountered.
Below you can find a complementary template called `tech-troubleshooting.yaml`:
- role: truncate
- role: default-request
  functions:
    - botInitializeTemplateSwitch
    - botSwitchTemplate
  model: gpt-4
- role: system
  content: |-
    You will be provided with customer service inquiries that require
    troubleshooting in a technical support context. Help the user by:
    - Ask them to check that all cables to/from the router are connected. Note
      that it is common for cables to come loose over time.
    - If all cables are connected and the issue persists, ask them which router
      model they are using.
    - Now you will advise them how to restart their device:
    -- If the model number is MTD-327J, advise them to push the red button and
       hold it for 5 seconds, then wait 5 minutes before testing the connection.
    -- If the model number is MTD-327S, advise them to unplug and replug it,
       then wait 5 minutes before testing the connection.
    - If the customer's issue persists after restarting the device and waiting
      5 minutes, connect them to IT support by calling the botSwitchTemplate
      function with the `speak-to-human` template name.
    If, at any time, the user starts asking questions not related to this topic,
    follow the below procedure to change the conversation template:
    1. Call the botInitializeTemplateSwitch function.
    2. Ensure that the user is ready to conclude the troubleshooting conversation.
    3. Use the `botSwitchTemplate` function with 'customer-service' as the template name.
- role: user
A few notes about this template:
- The purpose of the `truncate` message is to instruct the ChatBot to discard previous messages in the conversation and not include them in the request. This only applies to user, assistant, function, and system messages. The default request parameters that were previously appended to the conversation will still be honored by the ChatBot.
- The `botInitializeTemplateSwitch` function is another function provided by the AIChatFlow library for the model to use. Like any other function, it has to be enabled by providing the `functions` request parameter. When called by the model, this function marks the user prompt that triggered the template switch. That prompt will be used as the user message after the template has actually been switched using the `botSwitchTemplate` function.
To finish the example and fully demonstrate the power of the designed ChatBot, let’s create another template (`acc-passwd-reset.yaml`, matching the template name listed in `customer-service.yaml`):
- role: truncate
- role: default-request
  model: gpt-4
  functions:
    - botInitializeTemplateSwitch
    - botSwitchTemplate
- role: system
  content: |-
    You will receive customer service inquiries regarding password resets.
    Instruct the user that the only available option to reset the router's
    password is to restore the router to its factory defaults. This can be
    accomplished by pressing and holding the small button near the power LED. To
    perform this action, the user may require a thin tool such as a needle or
    pin.
    Once the factory settings have been restored, the user can utilize the
    default password, which is indicated on the router's label. They will then
    need to configure the router according to their preferences.
    If, at any time, the user starts asking questions not related to this topic,
    follow the below procedure to change the conversation template:
    1. Call the botInitializeTemplateSwitch function.
    2. Ensure that the user is ready to conclude the current conversation.
    3. Use the `botSwitchTemplate` function with 'customer-service' as the template name.
- role: user
The above template doesn’t introduce anything that wasn’t discussed earlier, so let’s see how our new chatbot works:
$ ai --new --template customer-service "I need my internet working again"
I'm sorry to hear that you're having issues with your internet connection.
We can try troubleshooting this right now.
First of all, could you please confirm if all the cables connected to your
router are properly plugged in? This may seem basic but loose cables are a
common issue.
In case everything is connected and the issue persists, could you please
provide me with the model number of your router? It will help me guide you
through the next steps.
$ ai "Cables are ok. My router is MTD-327J"
Thank you for confirming. Since your router is MTD-327J, let's try doing a
reset. There's a red reset button on your router.
Please push that button and hold it for 5 seconds. After releasing the
button, wait for about 5 minutes to let the router reset itself and get back
online.
Once the 5 minutes have passed, please test your internet connection and let
me know if it's working again.
$ ai "How do I reset my router's password?"
Sure, I can help you with resetting your router password. But before we move
onto that, let's conclude with our current troubleshooting.
Did the reset work? Is your internet connection now working?
$ ai "Yes it does."
Fantastic! I'm glad to hear that your internet is working again.
Now, regarding the router password reset, unfortunately, the only way to
reset it is to restore the router to its factory settings. To do this, you
will need to press and hold the small button near the power LED using
something thin like a needle or pin.
After the factory reset, you can use the default password that's written on
the router's label and then, you can change it to something that you prefer.
Please note that after resetting, you would need to reconfigure the router
to your preference.
Hope this helps! If you have any other questions, feel free to ask.
Great! To view the entire conversation, including function calls, you can use the `ai --show-all` command. We intentionally omitted its output to give you an opportunity to practice.