A Python library for text-based AI interactions
`multiai` is a Python library and command-line tool designed to interact with text-based generative AI models from OpenAI, Anthropic, Google, Perplexity, and Mistral. This manual will guide you through the installation, configuration, and usage of `multiai`.
`multiai` allows you to interact with AI models from the following providers:
| AI Provider | Web Service | Models Available |
|---|---|---|
| OpenAI | ChatGPT | GPT Models |
| Anthropic | Claude | Claude Models |
| Google | Gemini | Gemini Models |
| Perplexity | Perplexity | Perplexity Models |
| Mistral | Mistral | Mistral Models |
To install `multiai`, use the following command:

```
pip install multiai
```
Before using `multiai`, configure your API key(s) for your chosen AI provider(s). Set your API key as an environment variable or in a user-setting file:

```
export OPENAI_API_KEY=your_openai_api_key_here
```
Once your API key is set up, you can start interacting with the AI:
To send a simple query:
```
ai hi
```
You should see a response like:
```
gpt-4o-mini> Hello! How can I assist you today?
```
For an interactive session, enter interactive mode:
```
ai
```
In this mode, you can continue the conversation:
```
user> hi
gpt-4o-mini> Hello! How can I assist you today?
user> how are you
gpt-4o-mini> I'm just a program, so I don't have feelings, but I'm here and ready to help you! How about you? How are you doing?
user>
```
In interactive mode, you can input multi-line text and control when the input is finished using the `blank_lines` parameter in the `[command]` section of the settings file. With `blank_lines = 1`, input finishes only after a blank line (i.e., pressing Enter twice). This is particularly useful when you want to copy and paste text that spans multiple lines. If your input itself includes blank lines, increase the `blank_lines` parameter accordingly.

Interactive mode can be exited by:
- typing `q`, `x`, `quit`, or `exit`
- pressing `Ctrl-D`
- pressing `Ctrl-C`

`multiai` reads its settings from configuration files, which are read in the following order of precedence:
1. `~/.multiai`
2. `./.multiai`

Settings from the latter file overwrite those from the former.
Here’s an example of a configuration file:
```
[system]

[model]
ai_provider = openai
openai = gpt-4o-mini
anthropic = claude-3-haiku-20240307
google = gemini-1.5-flash
perplexity = llama-3.1-sonar-small-128k-chat
mistral = mistral-large-latest

[default]
temperature = 0.7
max_requests = 5

[command]
blank_lines = 0
always_copy = no
always_log = no
log_file = chat-ai-DATE.md

[prompt]
color = blue
english = If the following sentence is English, revise the text to improve its readability and clarity in English. If not, translate into English. No need to explain. Just output the result English text.
factual = Do not hallucinate. Do not make up factual information.
url = Summarize following text very briefly.

[api_key]
openai = (Your OpenAI API key)
anthropic = (Your Claude API key)
google = (Your Gemini API key)
perplexity = (Your Perplexity API key)
mistral = (Your Mistral API key)
```
The default AI provider is specified in the `[model]` section of the settings file. However, you can override this via command-line options:

- `-o` for OpenAI
- `-a` for Anthropic
- `-g` for Google
- `-p` for Perplexity
- `-i` for Mistral

You can also specify the model using the `-m` option. For example, to use the `gpt-4o` model from OpenAI:

```
ai -om gpt-4o
```

When multiple AI provider options are given, for example:

```
ai -oa
```

you can communicate with multiple models simultaneously. The default model for each provider is used.
API keys can be stored as environment variables:

- `OPENAI_API_KEY` for OpenAI
- `ANTHROPIC_API_KEY` for Anthropic
- `GOOGLE_API_KEY` for Google
- `PERPLEXITY_API_KEY` for Perplexity
- `MISTRAL_API_KEY` for Mistral

If environment variables are not set, `multiai` will look for keys in the `[api_key]` section of your settings file.
Parameters such as `temperature` and `max_tokens` can be configured in the settings file or via command-line options:

- Use the `-t` option to set the `temperature`.
- The `max_tokens` parameter can be omitted. If the response is incomplete, `multiai` will request a continuation until the specified number of requests, `max_requests`, is reached.
`multiai` provides several command-line options to simplify specific types of prompts:

`-e` option: Adds a pre-prompt to correct or translate English text. This pre-prompt is defined in the `english` parameter in the `[prompt]` section of the settings file.

Example usage:

```
ai -e This are a test
```
`-f` option: Adds a pre-prompt to prevent hallucination or fabricated information. This is defined in the `factual` parameter in the settings file.

Example usage:

```
ai -f Explain quantum mechanics
```
`-u URL` option: Automatically retrieves the content of the given URL and converts it to text. If the URL ends in `.pdf`, the content of the PDF file is likewise converted to text. The program summarizes the text based on a pre-prompt and then allows further interactive queries about the content. If you want the summary in your native language, rewrite the pre-prompt defined in the `url` parameter of the settings file in that language.

Example usage:

```
ai -u https://en.wikipedia.org/wiki/Artificial_intelligence
```
Paging Long Responses: If a response exceeds one page in your terminal, `multiai` uses `pypager` to display it.
Copy to Clipboard: Use the `-c` option to copy the last response to the clipboard. If `always_copy = yes` is set in the `[command]` section of the settings file, this option is always enabled.

Example usage:

```
ai -c "What is the capital of France?"
```
Logging Chats: Use the `-l` option to log the chat to a file named `chat-ai-DATE.md` in the current directory, where `DATE` is replaced by today's date. The file name can be changed with the `log_file` key in the `[command]` section. If `always_log = yes` is set in the `[command]` section, this option is always enabled.

Example usage:

```
ai -l Tell me a joke
```
To see a list of all command-line options, use:
```
ai -h
```
For more detailed documentation, you can open this manual in a web browser with:
```
ai -d
```
## multiai as a Python Library

`multiai` can also be used as a Python library. Here's a simple example:
```python
import multiai

# Initialize the client
client = multiai.Prompt()

# Set the model and temperature.
# If these are not set, the defaults from the settings file are used.
client.set_model('openai', 'gpt-4o')
client.temperature = 0.5

# Send a prompt and get a response
answer = client.ask('hi')
print(answer)

# Continue the conversation with context
answer = client.ask('how are you')
print(answer)

# Clear the conversation context
client.clear()
```
If an error occurs during `client.ask`, the error message will be returned, and `client.error` will be set to `True`.
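Based on this behavior, here is a minimal sketch of error handling; it assumes `client.error` is `False` after a successful call:

```python
import multiai

client = multiai.Prompt()
answer = client.ask('hi')

if client.error:
    # On failure, ask() returns the error message as the answer string
    print('Request failed:', answer)
else:
    print(answer)
```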
Here is an example of a Python script that uses the `multiai` library to translate a text file. Save the following code as `english.py`:
```python
import multiai
import sys

pre_prompt = "Translate the following text into English. Just answer the translated text and nothing else."

# Read the file given as the first command-line argument
file = sys.argv[1]
with open(file) as f:
    prompt = f.read()

client = multiai.Prompt()
client.set_model('openai', 'gpt-4o')
answer = client.ask(pre_prompt + '\n\n' + prompt)
print(answer)
```
If you have a Markdown file `text.md` written in Japanese, for example, run:

```
python english.py text.md
```

The translated English text is printed. If you want to save it to `output.md`, redirect the output:

```
python english.py text.md > output.md
```

By changing the `pre_prompt` parameter, you can build various kinds of scripts, as in the sketch below.
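For instance, here is a minimal sketch of a batch summarizer built on the same pattern. The pre-prompt wording and the script name `summarize.py` are only illustrations; the code uses just the `multiai` calls shown above (`Prompt`, `set_model`, `ask`, `clear`):

```python
import multiai
import sys

# Illustrative pre-prompt; adjust the wording to your needs.
pre_prompt = "Summarize the following text in three sentences."

client = multiai.Prompt()
client.set_model('openai', 'gpt-4o')

# Summarize every file given on the command line.
for file in sys.argv[1:]:
    with open(file) as f:
        text = f.read()
    answer = client.ask(pre_prompt + '\n\n' + text)
    print(f'## {file}\n\n{answer}\n')
    client.clear()  # start each file with a fresh conversation context
```

Saved as `summarize.py`, running `python summarize.py a.md b.md` would print a short summary of each file.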
You can run a local chat app using `streamlit`. Install `streamlit` by running the following command:

```
pip install streamlit
```

Download app.py and run your local server with the following command:

```
streamlit run app.py
```
Once the server is running, your default web browser will open and display the chat application, Chotto GPT. This app allows you to easily select from a variety of AI models from different providers and engage in conversations with them. You can customize the list of available models and the log file location by directly editing the source code.
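If you prefer to write your own app instead of downloading `app.py`, the following is a minimal sketch of a Streamlit chat interface built on the documented `multiai` API. The file name, title, and layout are illustrative assumptions, not the actual Chotto GPT source:

```python
# my_chat.py - illustrative sketch, not the actual app.py
import multiai
import streamlit as st

st.title('multiai chat')

# Keep one client per browser session so conversation context survives reruns
if 'client' not in st.session_state:
    st.session_state.client = multiai.Prompt()
    st.session_state.history = []

# Replay the conversation so far
for role, text in st.session_state.history:
    with st.chat_message(role):
        st.write(text)

# Read user input, ask the model, and record both turns
if prompt := st.chat_input('Type a message'):
    with st.chat_message('user'):
        st.write(prompt)
    answer = st.session_state.client.ask(prompt)
    with st.chat_message('assistant'):
        st.write(answer)
    st.session_state.history += [('user', prompt), ('assistant', answer)]
```

Run it with `streamlit run my_chat.py`.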
To run on Google Colab, use this notebook. You will need to set API keys in your Colab Secrets.