A simple chat interface for interacting with local AI models through Ollama.
This project provides a lightweight web interface for chatting with local AI models using Ollama. It features a clean, responsive design and supports multiple popular language models.
## Features

- Real-time chat interface with typing indicators
- Support for multiple models (llama3.2, llama2, mistral)
- FastAPI backend with error handling
- Clean, responsive UI with vanilla JavaScript
- CORS enabled for development
## Prerequisites

- Python 3.8 or higher
- Ollama installed and running locally (see the example below)
- Task (optional, for the Taskfile commands used in the installation steps)
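Ollama's CLI can download and serve models locally. For example, to fetch one of the models listed under Features before starting the app (pull whichever model you plan to use):

```bash
# Start the Ollama server if it isn't already running
ollama serve &

# Download a model referenced in the Features list
ollama pull llama3.2

# Verify the model is available
ollama list
```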
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/local-chat-ai.git
  cd local-chat-ai
  ```

- Install dependencies:

  ```bash
  # Install Python dependencies
  task install
  # or
  pip install -r requirements.txt
  ```

- Start the FastAPI server:

  ```bash
  task run
  # or
  uvicorn api:app --reload
  # or
  python api.py
  ```

- Open your browser and navigate to `http://localhost:8002/api/chat`.
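The exact request schema depends on `api.py`, which isn't shown here; assuming the endpoint accepts a JSON body with a model name and a message, a request might look like this (the field names are illustrative):

```bash
# Hypothetical request shape -- adjust the field names to match api.py
curl -X POST http://localhost:8002/api/chat \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "message": "Hello, who are you?"}'
```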
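The `task install` and `task run` commands above assume a Taskfile at the repository root. A minimal sketch of what it might contain, assuming the tasks simply wrap the pip and uvicorn commands shown above (the `--port 8002` flag is a guess chosen to match the URL above):

```yaml
# Hypothetical Taskfile.yml -- the real one may differ
version: '3'

tasks:
  install:
    cmds:
      - pip install -r requirements.txt
  run:
    cmds:
      # Port 8002 assumed from the chat URL above
      - uvicorn api:app --reload --port 8002
```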
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.