multiai

A Python library for text-based AI interactions

multiai is a Python library and command-line tool designed to interact with text-based generative AI models from OpenAI, Anthropic, Google, Perplexity, and Mistral. This manual will guide you through the installation, configuration, and usage of multiai.

Supported AI Providers and Models

multiai allows you to interact with AI models from the following providers:

AI Provider    Web Service    Models Available
OpenAI         ChatGPT        GPT Models
Anthropic      Claude         Claude Models
Google         Gemini         Gemini Models
Perplexity     Perplexity     Perplexity Models
Mistral        Mistral        Mistral Models

Key Features

multiai provides a single interface to text-generation models from OpenAI, Anthropic, Google, Perplexity, and Mistral; an interactive command-line client that accepts multi-line input; the ability to query several providers at once; layered configuration through settings files and environment variables; and a Python API for using the same functionality in your own scripts.

Getting Started

Installation

To install multiai, use the following command:

pip install multiai

Setting Up Your Environment

Before using multiai, configure the API key(s) for your chosen AI provider(s). Set each key as an environment variable or in a settings file:

export OPENAI_API_KEY=your_openai_api_key_here

Basic Usage

Once your API key is set up, you can start interacting with the AI:
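
For example, running the ai command with no arguments starts the interactive mode described below; passing the prompt as a command-line argument is shown here as an assumption, so check ai -h for the exact usage:

ai            # start an interactive chat session
ai hello      # one-shot prompt (assumed form; verify with ai -h)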

Interactive Mode Details

In interactive mode, you can input multi-line text; the blank_lines parameter in the [command] section of the settings file sets how many consecutive blank lines you must enter before the input is treated as finished and sent to the AI.
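
For example, to keep typing across single blank lines and send the prompt only after two consecutive blank lines (an assumed interpretation of the counter), you could set:

[command]
blank_lines = 2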

Interactive mode can be exited with the usual interrupt or end-of-input keys (Ctrl-C or Ctrl-D).

Configuration

Settings File

multiai reads its settings from a configuration file, which can be located in the following order of precedence:

  1. System Default: the built-in default settings
  2. User-Level: ~/.multiai
  3. Project-Level: ./.multiai

Settings in later files override those in earlier ones.
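
For example, a project-level ./.multiai containing only

[model]
ai_provider = anthropic

switches the default provider for that project while inheriting every other setting from the user-level file and the system defaults.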

Here’s an example of a configuration file:

[model]
ai_provider = openai
openai = gpt-4o-mini
anthropic = claude-3-haiku-20240307
google = gemini-1.5-flash
perplexity = llama-3.1-sonar-small-128k-chat
mistral = mistral-large-latest

[default]
temperature = 0.7
max_requests = 5

[command]
blank_lines = 0
always_copy = no
always_log = no
log_file = chat-ai-DATE.md

[prompt]
color = blue
english = If the following sentence is English, revise the text to improve its readability and clarity. If not, translate it into English. No need to explain; just output the resulting English text.
factual = Do not hallucinate. Do not make up factual information.
url = Summarize the following text very briefly.

[api_key]
openai = (Your OpenAI API key)
anthropic = (Your Claude API key)
google = (Your Gemini API key)
perplexity = (Your Perplexity API key)
mistral = (Your Mistral API key)

Selecting Models and Providers

The default AI provider is specified in the [model] section of the settings file. However, you can override it with provider options on the command line; for example, -o selects OpenAI and -a selects Anthropic (run ai -h for the full list of provider options).

You can also specify the model using the -m option. For example, to use the gpt-4o model from OpenAI:

ai -om gpt-4o

When multiple AI provider options are given, for example:

ai -oa

you can communicate with several models simultaneously. The default model configured for each provider is used.

API Key Management

API keys can be stored as environment variables:
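
The variable names below follow the PROVIDER_API_KEY pattern shown above for OpenAI; treat the non-OpenAI names as an assumption and check the documentation of your multiai version:

export OPENAI_API_KEY=your_openai_api_key_here
export ANTHROPIC_API_KEY=your_anthropic_api_key_here
export GOOGLE_API_KEY=your_google_api_key_here
export PERPLEXITY_API_KEY=your_perplexity_api_key_here
export MISTRAL_API_KEY=your_mistral_api_key_here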

If environment variables are not set, multiai will look for keys in the [api_key] section of your settings file.


Advanced Usage

Model Parameters

Parameters such as temperature and max_tokens can be configured in the settings file or overridden with command-line options (run ai -h for the option names).

If the response is cut off before it is complete, multiai automatically sends follow-up requests until the answer is complete or the number of requests reaches max_requests.
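
For example, to get more deterministic answers and allow fewer follow-up requests, you could adjust the corresponding keys in the [default] section of the settings file:

[default]
temperature = 0.2
max_requests = 3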

Input Options

multiai provides several command-line options that simplify specific types of prompts; these correspond to the pre-prompts defined in the [prompt] section of the settings file (english, factual, and url in the example above). Run ai -h to see the available options.

Output Options

Output options control what happens to each answer, mirroring the [command] settings shown above: always_copy and always_log (with log_file) appear to control copying each answer (for example, to the clipboard) and logging the conversation. Run ai -h for the exact options.

Command-Line Options

To see a list of all command-line options, use:

ai -h

For more detailed documentation, you can open this manual in a web browser with:

ai -d

Using multiai as a Python Library

multiai can also be used as a Python library. Here’s a simple example:

import multiai

# Initialize the client
client = multiai.Prompt()
# Set the model and temperature.
# If these are not set, the defaults from the settings file are used.
client.set_model('openai', 'gpt-4o')
client.temperature = 0.5

# Send a prompt and get a response
answer = client.ask('hi')
print(answer)

# Continue the conversation with context
answer = client.ask('how are you')
print(answer)

# Clear the conversation context
client.clear()

If an error occurs during client.ask, the error message is returned as the answer and client.error is set to True.
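
A minimal sketch of how you might handle this in a script (only client.ask and client.error are taken from the example above; the rest is ordinary Python):

import multiai
import sys

client = multiai.Prompt()
answer = client.ask('hi')
if client.error:
    # On failure, the returned string is the error message itself
    sys.exit(f"AI request failed: {answer}")
print(answer)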

Sample script to translate a text file

Here is an example of a Python script using the multiai library to translate a text file. Save the following code as english.py.

import multiai
import sys

# Instruction prepended to the file contents
pre_prompt = "Translate the following text into English. Just answer the translated text and nothing else."

# Read the file given as the first command-line argument
file = sys.argv[1]
with open(file) as f:
    prompt = f.read()

client = multiai.Prompt()
client.set_model('openai', 'gpt-4o')
answer = client.ask(pre_prompt + '\n\n' + prompt)
print(answer)

For example, if you have a Markdown file text.md written in Japanese, run

python english.py text.md

The translated English text is printed. To save it to output.md, redirect the output:

python english.py text.md > output.md

By changing the pre_prompt string, you can turn this script into many other kinds of tools.
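
For instance, reusing the summarization instruction from the [prompt] section above turns the same script into a summarizer:

pre_prompt = "Summarize the following text very briefly."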

Running your local chat app

You can run a local chat app using Streamlit. Install Streamlit with the following command:

pip install streamlit

Download app.py and run your local server with the following command:

streamlit run app.py

Once the server is running, your default web browser will open and display the chat application, Chotto GPT. This app allows you to easily select from a variety of AI models from different providers and engage in conversations with them. You can customize the list of available models and the log file location by directly editing the source code.

Running on Google Colab

To run multiai on Google Colab, use this notebook. You will need to set your API keys in your Colab Secrets.