CLI reference

This chapter covers all aspects of the ai command-line program. The first section describes the available command-line arguments, followed by a section that outlines how these arguments are converted into user prompts. The third section details how prompts are provided to conversation templates as their context.

Command Line Arguments

Note: this section is yet to be written. As a starting point, we share the output of the ai --help command:

Usage: ai [commands and flags] Prompt prefix

The ai program supports the following command flags (only
one can be used at a time):

--new       Starts a new conversation. If no template is
            given using the --template option (see below),
            the 'new-conversation' template is used.

-n          Equivalent to --new.

--cont      Continues an existing conversation. This is the
            default command and can be omitted. If no
            conversation is selected using the --convo flag,
            the most recent conversation is chosen.
            If no template is given using the --template
            option (see below), the 'default' template is
            used.

--regenerate

            Regenerates the LLM's most recent response in
            the selected conversation (see the --convo flag
            for instructions on how to select a
            conversation).

--list      Lists all your conversations.

-l          Equivalent to --list.


--show [N]  Prints the last N messages for the selected
            conversation. If N is omitted, only the last
            message is printed. If N is zero, all messages
            are printed. Note: this option prints only user
            and assistant messages (no function calls,
            system messages, or request parameters are
            printed). See --show-all below.

-s [N]      Equivalent to --show.

--show-all [N]

            Prints the last N messages from the selected
            conversation. If N is omitted, the whole
            conversation is printed. Unlike the --show
            command, this one prints all messages, including
            system messages, function calls, request
            parameters, etc.

--trim      Removes the last user message and all subsequent
            messages from the most recent conversation or
            from a conversation selected with the --convo
            option.

--delete [ID]

            Deletes the given conversation.

--title [ID]

            Sets the title of a given conversation.

--help      Prints this message.

-h          Equivalent to --help.


Configuration options (all options, except the first one,
can be specified in the configuration file using their long
form):

--config file

            Read the configuration from the specified file.
            By default, the config.yaml file is read from
            the $XDG_CONFIG_HOME/aichatflow/ directory.
            Values read from the configuration file override
            the defaults specified in this documentation.
            Values passed using the flags below take
            precedence over the configuration read from the
            file.

--recursive-config

            If the flag is provided, either through the
            command line or in the config file, and the
            working directory is located under $HOME, then
            directories are walked upwards, from the current
            directory to $HOME, in search of the
            .aichatflow.yaml file. The first file found is
            read in addition to the main config file (see
            the --config option above).

--format-output format

            Use the given format when printing messages.
            Valid formats are: markdown, raw, yaml, json.
            The default format is markdown if the standard
            output is connected to a terminal, and raw
            otherwise. This can be used with the --cont,
            --new, --regenerate, or --show commands.

            Please be aware that when using the raw or
            markdown formats with the --cont, --new, or
            --regenerate commands, the last assistant
            message alone will be streamed, and no other
            messages will be printed. However, when the yaml
            or json formats are employed, all new messages
            will be displayed.

--fmt format

            Equivalent to --format-output.

--template-dir directory

            Read conversation templates from the provided
            directory. The default is
            $XDG_CONFIG_HOME/aichatflow/templates.

--prompt-template file

            Use the provided file as a template to compose
            the prompt. If the flag is missing,
            'prompt-template.txt' from the
            $XDG_CONFIG_HOME/aichatflow/templates directory
            will be utilized, if available. Otherwise, the
            built-in template will be used. Note that the
            --template-dir flag does not change this behavior.

--db-file file

            Use specified file as a database to store
            conversations. The default is
            $HOME/.conversations.db

--key-file file

            Read the OpenAI API key from the provided file.
            If the file happens to be an executable, run it
            and read the key from its standard output. The
            default file is $XDG_CONFIG_HOME/aichatflow/key.

--stream-stages

            For multi-stage templates, stream every model
            response. By default, only the last response is
            streamed.

-ss         Equivalent to --stream-stages.
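To make the --recursive-config lookup concrete, the following sketch computes the list of candidate paths that would be probed, from the working directory up to $HOME. It is an illustration of the documented behavior (the function name configCandidates is ours), not the program's actual code:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// configCandidates lists the paths probed for .aichatflow.yaml, from cwd
// up to (and including) home. It returns nil when cwd is not under home,
// mirroring the documented "working directory under $HOME" condition.
func configCandidates(cwd, home string) []string {
	cwd, home = filepath.Clean(cwd), filepath.Clean(home)
	if cwd != home && !strings.HasPrefix(cwd, home+string(filepath.Separator)) {
		return nil
	}
	var paths []string
	for dir := cwd; ; dir = filepath.Dir(dir) {
		paths = append(paths, filepath.Join(dir, ".aichatflow.yaml"))
		if dir == home {
			break
		}
	}
	return paths
}

func main() {
	for _, p := range configCandidates("/home/alice/src/app", "/home/alice") {
		fmt.Println(p)
	}
}
```

Only the first existing file from this list would be read, in addition to the main configuration file.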

Prompt composing options:

--edit      Opens a pre-composed user message in your
            favorite editor ($EDITOR or vim), allowing
            further editing. Can be used with the --cont,
            --new, or --regenerate commands. This flag
            cannot be given in the configuration file.

-e          Equivalent to --edit.

--inline path

            Read the provided file and append it to the
            prompt prefix. If '-' is used as the path,
            the standard input is read. This can be used
            with either the --new or --cont command.

            Note that if the standard input is not a
            terminal, it will be read regardless of whether
            it was requested with the --inline option.

-i path     Equivalent to --inline.

--attach path

            Attach the provided file so that it can be
            sent to the model as a separate content part.
            Images are supported.

-a path     Equivalent to --attach.

--img-url URL

            Attach the provided URL so that it can be
            sent to the model as a separate content part.

--img-detail detail

            Use the provided detail level ("auto", "high",
            or "low") for all subsequent --img-url and
            --attach options.

--json-file path

            Read the given file, parse it as a JSON object,
            and make it available to the conversation
            template. If '-' is used as the path, standard
            input is read. Can be used with --new or --cont
            command.

--template template-name

            Use the provided conversation template. This
            flag can only be used with the --new or --cont
            commands. Refer to the command documentation
            above to learn what templates are used in the
            absence of this flag.

-t template-name

            Equivalent to --template.

Other options:

--convo ID-prefix

            Select the conversation for the --cont,
            --regenerate, --show, --show-all, or --trim
            commands.

-c ID-prefix

            Equivalent to --convo.

--model name

            Use the specified LLM model to generate
            completions. This overrides the model specified
            in the template or the default model.

--dont-save

            Do not save (create or update) the conversation
            in the database.

--record filename

            Write prompts, requests, responses, and the
            conversation to the given file in JSON format.
            This feature is useful for testing and debugging
            purposes.

Prompt Templates

Prompt Templates vs. Conversation Templates

The prompt templates detailed in this section and the conversation templates, covered extensively in other parts of the AIChatFlow project documentation, serve different purposes.

The conversation templates play a key role in creating well-performing AI-based assistants. They allow a number of techniques, collectively known as prompt engineering, to be used to steer interactions between the user and the assistant and to ensure the desired results. The engine that executes these templates is implemented as a component of the AIChatFlow library, which allows them to be used across multiple applications.

Prompt templates, in contrast, are specific to the ai program. They transform command-line arguments into text that is then used as context for the execution of conversation templates. While switching from the default prompt template, which is embedded in the program binary, to a custom-crafted one is uncommon, the option is nevertheless available.

The rest of this section discusses the details related to the prompt templates and their execution. The subsequent section provides information on how the ai program invokes conversation templates.

Prompt Template Execution

When the ai program is executed, it aggregates the information specified via command-line arguments into the following data structure, which is then used as context for the execution of a prompt template.

type promptInput struct {
	Prefix      string         `json:"Prefix"` // Prefix for the prompt input
	Attachments []attachment   `json:"InlineFiles"`
	JSON        map[string]any `json:"-"`
}

type attachment struct {
	Type        string `json:"Type"`
	Url         string `json:"URL"`
	Path        string `json:"Path"`
	Name        string `json:"Name"`
	Ext         string `json:"Ext"`
	Content     string `json:"Content"`
	ContentType string `json:"ContentType"`
	ImageDetail string `json:"ImageDetail"`
}

The prompt template is executed using the template engine provided by the text/template package from Go's standard library. If the user does not specify a custom template with the --prompt-template command-line flag, the program defaults to using the following template:

{{- if . }}
    {{- $wrap := or (paragraph .Prefix) .Attachments (gt (len .Attachments) 1) -}}
    {{- with (paragraph .Prefix) -}}
        {{- paragraph . -}}
    {{- end -}}
    {{- range .Attachments -}}
        {{- if eq .Type "inline" -}}
        {{- if and $wrap (not (eq .Ext ".md"))  -}}
        {{- if .Name -}}{{- "File: `" }}{{ .Name }}{{"`\n"}}{{end}}
        {{- codeBlock .Content }}
        {{- else -}}{{ .Content }}{{- end -}}
        {{- end -}}
    {{- end -}}
{{- end -}}

When executed, the template generates output that combines the prompt prefix specified via command-line arguments with the content of the files indicated by the --inline flag. The content of each file is wrapped in Markdown code-block syntax, except when the file's extension indicates Markdown formatting or when the file is the only input (meaning no other files are provided and no prefix was specified).

The output from the template is then fed into the conversation template engine, as described in the next section.

Conversation Templates Execution

The ai program selects the conversation template to execute based on the --template command-line argument, falls back to the configuration file settings if the argument is omitted, and ultimately uses the built-in templates if neither is specified.

When executing conversation templates, the ai program provides a map containing the following properties to serve as the template context.

  • commandLine: a promptInput structure defined in the previous section
  • composedPrompt: a message composed using the prompt template as described in the above section
  • contentParts: an array of message parts as defined in OpenAI’s API reference, detailed in the Conversation Templates Reference chapter. The first part is identical to the message held in the composedPrompt property. Subsequent parts include all attachments from the promptInput structure that were not inlined in the composedPrompt message.
  • Additionally, the map includes fields from the JSON structure that was read from the file specified using the --json-file command-line flag.
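The assembly of this context can be pictured with the following sketch. The types are simplified and buildContext is a hypothetical helper that mirrors the list above; it is not taken from ai's source:

```go
package main

import "fmt"

// Simplified stand-ins for the real types.
type promptInput struct {
	Prefix string
	JSON   map[string]any
}

type contentPart struct {
	Type string
	Text string
}

// buildContext assembles the map handed to a conversation template:
// the three fixed keys, plus the top-level fields of the --json-file
// document merged in alongside them.
func buildContext(in promptInput, composed string, parts []contentPart) map[string]any {
	ctx := map[string]any{
		"commandLine":    in,
		"composedPrompt": composed,
		"contentParts":   parts,
	}
	for k, v := range in.JSON {
		ctx[k] = v
	}
	return ctx
}

func main() {
	in := promptInput{
		Prefix: "Summarize:",
		JSON:   map[string]any{"audience": "developers"},
	}
	ctx := buildContext(in, "Summarize:", []contentPart{{Type: "text", Text: "Summarize:"}})
	fmt.Println(ctx["audience"], len(ctx)) // prints "developers 4"
}
```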

This design integrates seamlessly with the conversation template engine, offering great flexibility for crafting conversation templates intended specifically for use with the ai program. It also ensures a smooth transition from prototyping the AI assistant using the ai utility to deploying it in production using the AIChatFlow library.

For details on how conversation templates are built and executed, as well as the role of the template context, please refer to the Conversation Templates Reference chapter.