Use Cases

This section describes use cases ideally suited for AIChatFlow. If you prefer a hands-on approach, feel free to skip ahead to the Installation and Configuration, Basic Usage Examples and Conversation Templates Reference chapters of this document.

Chat with LLMs from the Terminal

If you are a command-line warrior, or maybe just a regular software developer eager to leverage the power of AI in your daily workflow, then the AIChatFlow project is something you should definitely check out. One of its key components is a user-friendly command-line utility called ai, which enables you to seamlessly interact with LLMs and integrate them into your everyday tasks.

Managing Conversations

The ai program essentially enables you to:

  • Initiate new chats,
  • Continue ongoing conversations with fresh prompts,
  • Regenerate the model’s most recent response, especially after altering your last prompt,
  • Keep track of and discard past interactions, and
  • Print your conversations, beautifully formatted within your terminal.
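
To make the list above concrete, here is what a session might look like. The subcommands shown are purely illustrative guesses, not the actual ai syntax, which is documented in the CLI Reference chapter:

```
# Hypothetical session; the real subcommands and flags are
# described in the CLI Reference chapter.
ai new "Explain the difference between TCP and UDP"   # start a chat
ai continue "Now focus on latency"                    # follow-up prompt
ai regenerate                                         # redo the last response
ai list                                               # browse past conversations
ai show 42                                            # print one, formatted
```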

Additionally, the program maintains the model’s context window for you, ensuring it stays updated with the latest messages in the conversation. Essentially, ai acts as the command-line counterpart to the ChatGPT web app, operating locally in your terminal and storing conversations in a local SQLite database.

Easy Prompt Composition

The real strength of the ai program lies in its user-friendly methods for crafting prompts. Gone are the days of copy-pasting between your files or terminal and a web browser. With ai, composing a prompt can be as simple as typing command-line arguments. You can easily include the content of your files or pipe additional context to the program’s standard input. Furthermore, you can launch your favorite editor to manually adjust the prompt according to your needs.

Moreover, if the way the program composes prompts from the command line arguments doesn’t suit your needs, you can easily replace a prompt-composition template embedded in the program with your own. This gives you even more flexibility to tailor the prompts to your liking.

Integration with Other Tools

The ai program can output model responses or entire conversations in various formats, including JSON, YAML, and Markdown. This makes it highly scriptable, allowing for seamless integration with other tools in your workflow.

Additionally, you can provide the model with custom functions, which enable it to interact with external programs and systems. This is discussed in detail in the External Services Integration use case and External Services Integration reference.

To learn how to utilize the ai program for conversation management, please refer to the Installation and Configuration, Basic Usage Examples and CLI Reference chapters.

Prompt Engineering

If you’re anything like us, you’ve likely experimented with the ai program and been inspired by its potential for integrating AI into your daily workflows. Like us, you recognize that building a successful assistant necessitates thoughtful prompt engineering.

At its core, AIChatFlow serves as an intermediary between you and the language model, managing your conversations. You can guide how these interactions unfold, using powerful conversation templates.

As you are likely aware, interactions with large language models (LLMs) encompass not only user and assistant messages but also system prompts and possibly function invocations. In this framework, AIChatFlow can be thought of as the ‘system’, and the conversation templates as programs or scripts that guide both the system and the language model. Well-crafted templates ensure that your engagements with the assistant progress as planned.

In essence, conversation templates:

  1. Define a sequence of messages that are added to the conversation before it is submitted to the model for completion.
  2. Incorporate any kind of message that the model is capable of understanding and processing.
  3. Set parameters such as temperature or frequency penalty that are part of the completion request.
  4. Can be sequenced, either pre-configured before the conversation starts or selected dynamically by prompting the model to choose the next template in response to user input.
  5. Require input data, which can range from free-form text, as with traditional chat applications, to structured information, for example, from API calls.

Important note: The conversation templates and prompt-composition templates mentioned earlier serve distinct purposes. Conversation templates enable you to engineer prompts and guide your interactions with LLMs. These templates are part of the library implementation. In contrast, prompt-composition templates are utilized by the command line utility to convert command line arguments into text, which (after possibly being supplemented with structured data) is subsequently used as input for your conversation templates.

To learn more about conversation templates, including usage examples, please refer to the Conversation Templates Quick Start Guide and Conversation Templates Reference chapters.

External Services Integration

Building a useful assistant often requires interaction with external services such as databases, corporate knowledge repositories, ticketing systems, CRMs, and more. To facilitate the integration of your next AI chatbot with these systems, AIChatFlow enables you to easily define remote HTTP/REST endpoints. By providing a simple description of your external resources in the configuration file, you make your assistant ready to initiate communication with them.
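
As a rough illustration only: the exact configuration schema is not shown in this chapter, so every key below (functions, endpoint, parameters, and so on) is an assumption about what such an endpoint description might look like, not AIChatFlow’s actual format:

```yaml
# Hypothetical sketch of an endpoint description; the actual
# AIChatFlow configuration schema may differ.
functions:
  - name: lookup_ticket
    description: Fetch a ticket from the company ticketing system by ID.
    endpoint: https://tickets.internal.example.com/api/tickets/{id}
    method: GET
    parameters:
      id:
        type: string
        description: The ticket identifier, e.g. "T-1042".
```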

With this mechanism, you can enhance the model with custom functions written in almost any programming language, as long as they are reachable through the standard HTTP/REST protocol. If you prefer Go as your programming language, you can leverage the AIChatFlow library, which provides a straightforward API to register your functions, eliminating the need to encapsulate them in network protocols.

In the future, we plan to enhance AIChatFlow by enabling interaction with gRPC services and adding support for NATS.io message queues. Additionally, we intend to provide ready-to-use modules for integration with common external services. These modules will supply developers with pre-built connectors and APIs, reducing the necessity for manual integration work. Stay tuned for updates on these developments.

Chatbot Prototyping and Production Deployments

Chatbot Prototyping

Assume you have been tasked with developing a chatbot for your company’s sales team, marketing staff, or customer service. In such a scenario, you are likely seeking a tool that allows for rapid prototyping of chatbots and supports experimentation with large language models.

Look no further. Equipped with the ai program, which brings the power of LLMs to your terminal, and the sophisticated conversation templates enabling advanced prompt engineering techniques, you are immediately ready to start prototyping your next AI assistant.

What’s important is that you don’t have to start by building a UI for interactions with your chatbot or worry about how to call the model. Instead, you can focus on the most important part: crafting flawless interactions with the AI assistant.

Only after you are confident that LLMs can indeed benefit your use case should you begin considering how to make your chatbot production-ready. AIChatFlow can assist you with this as well.

Chatbot Deployment

After creating your templates and ensuring that you receive accurate responses from your assistant, the next step is to make it easily accessible to others, such as friends, family, coworkers, or customers.

Besides the ai command-line utility, which allows for rapid chatbot prototyping, AIChatFlow also provides a Golang library that enables interaction with your new AI assistant from your own programs. By utilizing this library, you can provide your own UI for your assistant, likely offering something more sophisticated and user-friendly to your end-users than our developer-oriented ai tool.

It doesn’t really matter whether you plan to build a standalone desktop application or a web application — AIChatFlow was built with both in mind. What is even more important is that the project was carefully designed to facilitate a fast transition from developer-friendly environments to production-ready, end-user-oriented deployments.

The library’s API facilitates effortless exposure of your chatbot as a microservice, giving you the freedom to choose a network protocol such as HTTP/REST, gRPC, or NATS.io. Moreover, it supports custom implementations for storing templates and conversations, enabling you to select the most appropriate storage option, like SQL or NoSQL databases.

Note: The rest of this section describes features that are planned for future implementation.

We plan for the library to include a ready-to-use implementation backed by NATS.io key-value storage, which will allow you to deploy your chatbot in a highly scalable and secure manner.

We also plan to provide components which will enable the exposure of your chatbot over HTTP/REST, gRPC, or NATS.io APIs. This will greatly simplify the integration of your chatbot with other application components. In addition, the command-line utility is intended to eventually generate a significant portion of the code required to deploy your chatbot as a microservice.

Lastly, we plan to offer simple, customizable, and user-friendly UI components. These components will enable the straightforward creation of web applications for end-users. Leveraging the power of Golang, we aim to compile these elements into a self-contained, statically-linked binary that will be simple to deploy.