feat: add Together AI integration and provider implementation guide #385
Related issue: #307
Question:
I wrote short documentation on the AI provider architecture and the implementation steps for adding a new provider in the future. Where should it be stored? Is there a knowledge base, wiki, or similar storage somewhere?
AI Provider Implementation Guide
Architecture Overview
The system follows a modular architecture for managing LLM providers and their models. Here's how the components interact:
*(architecture diagram)*
Implementation Steps
1. Environment Variables Setup
First, add the necessary environment variables in `.env.example`. Example from the Together AI implementation:
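A sketch of the entries (variable names follow the convention used by the other providers in `.env.example`; the base URL override is optional):

```bash
# Get your Together AI API key here: https://api.together.xyz/settings/api-keys
TOGETHER_API_KEY=

# Optional: override the default Together AI endpoint
TOGETHER_API_BASE_URL=
```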
2. Docker Configuration
Update the Docker-related files to include the new provider's environment variables:
`Dockerfile`:
`docker-compose.yaml`:
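The additions could look like this (a sketch; variable names assumed to match the `.env.example` entries, and the service name in the compose file is illustrative):

```dockerfile
# Dockerfile: accept the values at build time and expose them as env vars
ARG TOGETHER_API_KEY
ARG TOGETHER_API_BASE_URL
ENV TOGETHER_API_KEY=${TOGETHER_API_KEY} \
    TOGETHER_API_BASE_URL=${TOGETHER_API_BASE_URL}
```

```yaml
# docker-compose.yaml: pass the variables through to the container
services:
  app:
    environment:
      - TOGETHER_API_KEY=${TOGETHER_API_KEY}
      - TOGETHER_API_BASE_URL=${TOGETHER_API_BASE_URL}
```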
3. Type Definitions
Add the environment variable types to `worker-configuration.d.ts`:
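A minimal sketch, assuming the file declares an `Env` interface for Worker bindings as is typical for Cloudflare Workers projects:

```typescript
// worker-configuration.d.ts (sketch): declare the new bindings so the
// rest of the codebase gets type-checked access to them
interface Env {
  TOGETHER_API_KEY: string;
  TOGETHER_API_BASE_URL: string;
}
```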
4. Provider List Configuration
Add the provider information in `app/utils/constants.ts`. This is the entry point for both static and dynamic model configurations:
5. API Key and Base URL Handling
Modify `app/lib/.server/llm/api-key.ts` to include the new provider:
6. Model Implementation
Update `app/lib/.server/llm/model.ts` to handle the new provider:
7. Dynamic Model Loading (Optional)
If your provider supports dynamic model discovery, implement the `getDynamicModels` function:
Implementation Patterns
1. OpenAI-Compatible APIs
For providers offering OpenAI-compatible APIs (like Together AI):
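Only the base URL, API key, and model name differ between such providers; the request shape is the same. As a sketch of the pattern (the function and type names here are illustrative, not the project's actual API):

```typescript
// Minimal sketch of the OpenAI-compatible pattern: every provider in
// this category accepts the same /chat/completions request shape.
interface OpenAICompatibleConfig {
  baseURL: string; // e.g. "https://api.together.xyz/v1"
  apiKey: string;
  model: string;
}

// Build the request that would be POSTed to <baseURL>/chat/completions.
function buildChatRequest(cfg: OpenAICompatibleConfig, prompt: string) {
  return {
    url: `${cfg.baseURL}/chat/completions`,
    headers: {
      Authorization: `Bearer ${cfg.apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: cfg.model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}
```

Because only the config differs, adding another OpenAI-compatible provider is mostly a matter of wiring up its base URL and key.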
2. Custom APIs
For providers with unique APIs:
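Such providers need their own adapter that maps the shared message format onto the provider's request shape and maps the response back. A sketch, with an entirely hypothetical provider API for illustration:

```typescript
// Shared message shape used across the app (illustrative).
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Hypothetical provider that expects a single flattened prompt string
// instead of an OpenAI-style messages array.
function toCustomProviderRequest(messages: ChatMessage[]) {
  return {
    prompt: messages.map((m) => `${m.role}: ${m.content}`).join("\n"),
    max_new_tokens: 1024,
  };
}

// Map the provider's response back into the shared message shape.
function fromCustomProviderResponse(resp: { generated_text: string }): ChatMessage {
  return { role: "assistant", content: resp.generated_text };
}
```

Keeping the translation confined to one adapter module means the rest of the app never sees provider-specific shapes.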
Example: Together AI Implementation
The Together AI implementation demonstrates using the OpenAI-compatible API pattern:
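A consolidated sketch of the wiring from steps 4-7 above. The endpoint paths target Together's public API; the surrounding function names, the response shape of `/models`, and the example model id are assumptions to verify against Together's current docs and the project's actual interfaces:

```typescript
const TOGETHER_BASE_URL = "https://api.together.xyz/v1";

interface ModelInfo {
  name: string;   // provider's model id
  label: string;  // human-readable label
  provider: string;
}

// Step 4 (sketch): a static entry of the kind that would go in
// app/utils/constants.ts. Model ids change; check Together's catalog.
const togetherProvider = {
  name: "Together",
  staticModels: [
    {
      name: "meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo",
      label: "Llama 3.2 90B Vision",
      provider: "Together",
    },
  ] as ModelInfo[],
  getApiKeyLink: "https://api.together.xyz/settings/api-keys",
};

// Step 5 (sketch): resolve key and base URL from env, falling back to
// Together's default endpoint when no override is set.
function getTogetherConfig(env: Record<string, string | undefined>) {
  return {
    apiKey: env.TOGETHER_API_KEY ?? "",
    baseURL: env.TOGETHER_API_BASE_URL || TOGETHER_BASE_URL,
  };
}

// Step 7 (sketch): dynamic model discovery via the /models endpoint.
async function getTogetherModels(apiKey: string, baseURL: string): Promise<ModelInfo[]> {
  const res = await fetch(`${baseURL}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const data = (await res.json()) as Array<{ id: string; display_name?: string }>;
  return data.map((m) => ({
    name: m.id,
    label: m.display_name ?? m.id,
    provider: "Together",
  }));
}
```

Since Together exposes an OpenAI-compatible API, the model instantiation in step 6 can reuse the existing OpenAI-like code path with this base URL and key rather than introducing a new client.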
Testing Checklist
- [ ] Environment Setup
- [ ] Static Configuration
- [ ] Dynamic Loading
- [ ] API Integration
- [ ] Error Handling
- [ ] Docker Integration
Best Practices
Notes