This guide outlines how to interact with the Open WebUI API endpoints for smooth integration and automation with your models. Note that this setup is experimental and may change in future releases.
Authentication
To ensure secure access, API requests require authentication 🛡️. Authorize using the Bearer Token method: retrieve your API key from Settings > Account in Open WebUI, or use a JWT (JSON Web Token).
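As a minimal sketch, every request simply needs an Authorization header carrying your key (the helper name below is illustrative, not part of the API):

```python
def auth_headers(api_key: str) -> dict:
    """Build the Authorization header expected by Open WebUI endpoints."""
    return {"Authorization": f"Bearer {api_key}"}

# Pass the resulting dict as the headers of any HTTP request.
headers = auth_headers("YOUR_API_KEY")
```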
Notable API Endpoints
📜 Retrieve All Models
- Endpoint:
GET /api/models
- Description: Fetches all models created or added through Open WebUI.
- Example:
curl -H "Authorization: Bearer YOUR_API_KEY" http://localhost:3000/api/models
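The same call can be made from Python; the `models_request` helper and the default base URL below are illustrative assumptions, not part of the API:

```python
import requests

def models_request(token, base_url="http://localhost:3000"):
    """Build the URL and headers for the model-listing call."""
    return f"{base_url}/api/models", {"Authorization": f"Bearer {token}"}

def get_models(token, base_url="http://localhost:3000"):
    """Fetch all models available through Open WebUI."""
    url, headers = models_request(token, base_url)
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return response.json()
```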
💬 Chat Completions
- Endpoint:
POST /api/chat/completions
- Description: OpenAI API-compatible chat completion endpoint for models in Open WebUI, including Ollama, OpenAI, and Function models.
- Example:
curl -X POST http://localhost:3000/api/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama3.1",
"messages": [{"role": "user", "content": "Why is the sky blue?"}]
}'
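The curl call above can be sketched in Python as follows; the helper names and default base URL are assumptions for illustration:

```python
import requests

def chat_payload(model, prompt):
    """Build an OpenAI-style chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat_completion(token, model, prompt, base_url="http://localhost:3000"):
    """Send a single-turn chat completion request to Open WebUI."""
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    response = requests.post(f"{base_url}/api/chat/completions",
                             headers=headers,
                             json=chat_payload(model, prompt))
    response.raise_for_status()
    return response.json()
```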
🧩 Retrieval Augmented Generation (RAG)
RAG enhances responses by integrating data from external sources. Below are methods for managing files and knowledge collections via the API, allowing for more contextually enriched conversations.
Uploading Files
Upload files to be used in RAG responses; the content is extracted and stored in a vector database.
- Endpoint:
POST /api/v1/files/
- Curl Example:
curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Accept: application/json" \
-F "file=@/path/to/your/file" http://localhost:3000/api/v1/files/
- Python Example:
import requests

def upload_file(token, file_path):
    """Upload a file for RAG; its content is extracted into the vector database."""
    url = 'http://localhost:3000/api/v1/files/'
    headers = {'Authorization': f'Bearer {token}', 'Accept': 'application/json'}
    with open(file_path, 'rb') as f:
        response = requests.post(url, headers=headers, files={'file': f})
    return response.json()
Adding Files to Knowledge Collections
Group uploaded files into a knowledge collection for reference in chats.
- Endpoint:
POST /api/v1/knowledge/{id}/file/add
- Curl Example:
curl -X POST http://localhost:3000/api/v1/knowledge/{knowledge_id}/file/add \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"file_id": "your-file-id-here"}'
- Python Example:
import requests

def add_file_to_knowledge(token, knowledge_id, file_id):
    """Attach a previously uploaded file to a knowledge collection."""
    url = f'http://localhost:3000/api/v1/knowledge/{knowledge_id}/file/add'
    headers = {'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}
    data = {'file_id': file_id}
    response = requests.post(url, headers=headers, json=data)
    return response.json()
Using Files and Collections in Chat Completions
You can include individual files or entire collections in your RAG queries for detailed responses.
- Using an Individual File: Useful when focusing the model’s response on the content of a specific file.
- Using a Knowledge Collection: Leverage a broader context by referencing entire collections in your queries.
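As a sketch of how such references could be attached, the helper below adds a `files` entry to the chat completion payload; the exact field names here are an assumption based on the current API and may change:

```python
def rag_chat_payload(model, prompt, file_id=None, collection_id=None):
    """Build a chat payload referencing a single file or a knowledge collection."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    refs = []
    if file_id:
        refs.append({"type": "file", "id": file_id})
    if collection_id:
        refs.append({"type": "collection", "id": collection_id})
    if refs:
        payload["files"] = refs
    return payload
```

Post the resulting payload to /api/chat/completions exactly as in the chat completion example above.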
Advantages of Using Open WebUI as a Unified LLM Provider
Open WebUI provides several key benefits, making it an invaluable tool for developers and businesses:
- Unified Interface: Manage interactions with multiple LLMs through a single platform.
- Ease of Implementation: Quick integration with comprehensive documentation and active community support.
Swagger Documentation Links
Access API documentation for Open WebUI services:
| Application | Documentation Path |
|---|---|
| Main | /docs |
| WebUI | /api/v1/docs |
| Ollama | /ollama/docs |
| OpenAI | /openai/docs |
| Images | /images/api/v1/docs |
| Audio | /audio/api/v1/docs |
| RAG | /retrieval/api/v1/docs |
Each documentation portal provides interactive examples, schema details, and testing tools for a better understanding of API capabilities.
By following this guide, you can quickly integrate and start using the Open WebUI API. For further assistance, join our Discord Community or check out the FAQs. Happy coding! 🌟