AI4Free is a Python library that provides convenient access to a variety of large language models (LLMs) from different providers, most without requiring API keys or fees. This allows developers and researchers to experiment with various LLMs and explore their capabilities without the barrier of cost.

Disclaimer: This project is not officially associated with any of the API providers it accesses. It is an independent reverse-engineering effort to explore publicly available APIs.
- Multiple LLM Providers: AI4Free supports a diverse range of LLM providers, including:
  - Open-source LLMs: KoboldAI, LEO (Brave AI)
  - Free-tier access LLMs: YouChat, OpenGPT, Yep
  - Research/demo access LLMs: Phind, Blackbox
- Conversation Management: The library helps maintain conversation history with the LLMs, enabling more natural and context-aware interactions.
- Prompt Optimization: AI4Free includes built-in prompt optimization techniques to enhance the quality and relevance of generated responses.
- Streaming Support: Responses can be streamed in real-time, allowing for immediate feedback and dynamic interactions.
- Asynchronous Capabilities: Async versions of several providers are available for efficient handling of multiple requests and improved performance.
```shell
pip install -U ai4free
```
The basic usage pattern involves creating an instance of the desired LLM provider and then calling its `chat()` method to interact with the model.
Here's how to use each of the available providers in AI4Free synchronously:
```python
from ai4free import LEO

leo = LEO()

while True:
    prompt = input("You: ")
    response = leo.chat(prompt)
    print(f"LEO: {response}")
```
```python
from ai4free import KOBOLDAI

koboldai = KOBOLDAI()

while True:
    prompt = input("You: ")
    response = koboldai.chat(prompt)
    print(f"KoboldAI: {response}")
```
```python
from ai4free import BLACKBOXAI

ai = BLACKBOXAI(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model=None,  # You can specify a model if needed
)

# Start an infinite loop for continuous interaction
while True:
    # Define a prompt to send to the AI
    prompt = input("Enter your prompt: ")

    # Check if the user wants to exit the loop
    if prompt.lower() == "exit":
        break

    # Use the 'chat' method to send the prompt and receive a response
    r = ai.chat(prompt)
    print(r)
```
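Since every provider above exposes the same `chat()` method, the interactive loops can be factored into a single reusable helper. The sketch below uses only plain Python; `EchoProvider` is a stand-in for illustration, and you would pass a real AI4Free provider instance such as `LEO()` or `BLACKBOXAI()` instead.

```python
def chat_loop(provider, name="AI"):
    """Run a simple REPL against any object exposing a chat() method.

    Type "exit" to leave the loop.
    """
    while True:
        prompt = input("You: ")
        if prompt.lower() == "exit":
            break
        print(f"{name}: {provider.chat(prompt)}")


class EchoProvider:
    """Stand-in provider used for illustration; replace with a real one."""

    def chat(self, prompt):
        return f"you said: {prompt}"
```

Usage: `chat_loop(EchoProvider(), name="Echo")`, or equally `chat_loop(LEO(), name="LEO")`.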
```python
from ai4free import ThinkAnyAI

thinkany = ThinkAnyAI()

while True:
    prompt = input("Enter your prompt: ")
    response = thinkany.chat(prompt)
    print(response)
```
```python
from ai4free import PhindSearch

# Create an instance of the PhindSearch class
ph = PhindSearch()

# Define a prompt to send to the AI
prompt = "write an essay on Phind"

response = ph.chat(prompt)
print(response)
```
```python
from ai4free import YEPCHAT

# Instantiate the YEPCHAT class with default parameters
yep = YEPCHAT()

# Define a prompt to send to the AI
prompt = "What is the capital of France?"

# Use the 'chat' method to get a response from the AI
r = yep.chat(prompt)
print(r)
```
```python
from ai4free import YouChat

ai = YouChat(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "what is the meaning of life"
response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```
```python
from ai4free import Cohere

# Replace 'YOUR_API_KEY' with your Cohere API key
cohere = Cohere(api_key='YOUR_API_KEY')

while True:
    prompt = input("You: ")
    response = cohere.chat(prompt)
    print(f"Cohere: {response}")
```
```python
from ai4free import REKA

# Replace 'YOUR_API_KEY' with your REKA API key
reka = REKA(api_key='YOUR_API_KEY')

while True:
    prompt = input("You: ")
    response = reka.chat(prompt)
    print(f"REKA: {response}")
```
```python
from ai4free import GROQ

# Replace 'YOUR_API_KEY' with your GROQ API key
groq = GROQ(api_key='YOUR_API_KEY')

while True:
    prompt = input("You: ")
    response = groq.chat(prompt)
    print(f"GROQ: {response}")
```
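For the providers that do require a key (Cohere, REKA, GROQ), avoid hard-coding it in source files. A common pattern is to read the key from an environment variable; the variable names below (e.g. `GROQ_API_KEY`) are conventions assumed for this sketch, not names mandated by AI4Free.

```python
import os


def get_api_key(env_var):
    """Fetch an API key from the environment, failing loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable first.")
    return key


# e.g. groq = GROQ(api_key=get_api_key("GROQ_API_KEY"))
```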
```python
from ai4free import VLM

# Initialize the VLM class
vlm = VLM(
    model="llava-hf/llava-1.5-7b-hf",
    system_prompt="You are a helpful and informative AI assistant.",
)

# Path to the image and the user message
image_path = r"C:\Users\hp\Desktop\ai4free\WhatsApp Image 2024-05-19 at 19.01.01_47251a0f.jpg"
user_message = "What is shown in this image?"

# Encode the image to base64
image_base64 = vlm.encode_image_to_base64(image_path)

# Define the prompt with both image and text
prompt = {
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}},
        {"type": "text", "text": user_message},
    ],
}

# Get the response
response = vlm.ask(prompt)

# Extract and print the message from the response
message = vlm.get_message(response)
print(message)
```
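The data URL built in the prompt above is just the image's bytes base64-encoded with a MIME prefix. If you want to construct it yourself, or understand what a helper like `encode_image_to_base64` presumably does, a standard-library-only sketch looks like this:

```python
import base64


def image_to_data_url(path, mime="image/jpeg"):
    """Read an image file and return a base64 data URL for image_url prompts."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

The returned string can be dropped directly into the `"url"` field of the `image_url` prompt entry shown above.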
```python
from ai4free import DeepInfra

ai = DeepInfra(
    model="meta-llama/Meta-Llama-3-70B-Instruct",  # DeepInfra model name
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "what is the meaning of life"
response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```
- Cohere: Provides access to various text generation models including "command-r-plus" with capabilities like summarization, copywriting, and dialogue.
- REKA: Offers several LLM models like "reka-core", "reka-flash", and "reka-edge" for tasks such as question answering, text generation, and summarization.
- GROQ: Grants access to models like "mixtral-8x7b-32768" with capabilities for text generation, translation, and question answering.
- LEO: Provides access to "llama-2-13b-chat" with abilities for dialogue, text generation, and question answering.
- KoboldAI: Offers various open-source LLM models for text generation and creative writing.
- OpenAI: Enables interaction with OpenAI models like "gpt-3.5-turbo" for diverse tasks like text generation, translation, and code generation. Requires an API key.
- OpenGPT: Provides access to various LLM models for text generation and creative writing.
- Blackbox: Grants access to powerful LLMs for various tasks like text generation, translation, and question answering.
- Phind: Offers access to advanced LLMs with research and demo capabilities for tasks like text generation, code generation, and question answering.
- Yep: Provides access to models like "Mixtral-8x7B-Instruct-v0.1" with capabilities for text generation, translation, and question answering.
- YouChat: Offers free-tier access to a powerful LLM with abilities for dialogue, text generation, and question answering.
- ThinkAnyAI: Offers access to various LLM models like "claude-3-haiku", "llama-3-8b-instruct", "mistral-7b-instruct", "rwkv-v6", "gemini-pro", and "gpt-3.5-turbo" for tasks like text generation, question answering, and creative writing.
AI4Free opens up exciting possibilities for exploring and utilizing the power of large language models at little or no cost. With its easy-to-use interface and support for diverse LLM providers, the library is a valuable tool for developers, researchers, and anyone interested in exploring the cutting edge of AI language technology.