- Project Overview
- System Architecture
- Setup Guide
- Core Features
- Technology Stack
- Integration Details
- Workflows
- Data Flow and Storage
- Deployment Strategy
Devr.AI is an advanced AI-powered Developer Relations (DevRel) assistant designed to revolutionize open-source community management. By integrating with platforms like Discord, Slack, GitHub, and Discourse, Devr.AI functions as a virtual DevRel advocate that helps maintainers engage with contributors, streamline onboarding processes, and deliver real-time project updates.
The system leverages Large Language Models (LLMs), knowledge retrieval mechanisms, and workflow automation to enhance community engagement, simplify contributor onboarding, and ensure that open-source projects remain active and well-supported.
- Reduce maintainer workload by automating routine interactions and queries
- Improve contributor experience through personalized onboarding and support
- Enhance project visibility via consistent engagement and community nurturing
- Generate actionable insights from community interactions and contribution patterns
- Ensure knowledge preservation by capturing and organizing project information
```mermaid
flowchart TB
    %% External Platforms
    subgraph "External Platforms"
        GH["GitHub"]
        DS["Discord"]
        SL["Slack"]
    end
    %% React Frontend
    subgraph "React Frontend"
        FRONT["React + TS + TailwindCSS"]
        DASH["Dashboard"]
    end
    %% FastAPI Backend
    subgraph "FastAPI Backend"
        API["FastAPI Gateway"]
    end
    %% Authentication
    subgraph "Authentication"
        GitAuth["GitHub Authentication"]
        SupaAuth["Supabase Authentication"]
    end
    %% Core Processing Engine
    subgraph "Core Processing Engine"
        WF["Workflow Orchestrator"]
        Q["Task Queue"]
    end
    %% AI Service Layer (Groq APIs)
    subgraph "AI Service Layer (Groq APIs)"
        LLM["LLM Service"]
        KR["Knowledge Retrieval"]
        CODE["Code Understanding"]
    end
    %% Integration Services
    subgraph "Integration Services"
        GHS["GitHub Service"]
        DSS["Discord Service"]
        SLS["Slack Service"]
    end
    %% Data Storage Layer
    subgraph "Data Storage Layer"
        Supa[("Supabase")]
        VDB["Vector DB and Relational DB"]
    end
    %% Analytics Engine
    subgraph "Analytics Engine"
        METRICS["Metrics Calculator"]
        REPORT["Report Generator"]
        TREND["Trend Analyzer"]
    end
    %% Connections
    FRONT --> DASH
    DASH <--> API
    GH <--> GHS
    DS <--> DSS
    SL <--> SLS
    GHS <--> API
    DSS <--> API
    SLS <--> API
    API <--> GitAuth
    API <--> SupaAuth
    API <--> WF
    WF <--> Q
    WF <--> LLM
    WF <--> KR
    WF <--> CODE
    LLM <--> Supa
    KR <--> Supa
    Supa --> VDB
    WF --> METRICS
    METRICS --> REPORT
    METRICS --> TREND
    REPORT --> Supa
    TREND --> Supa
    %% Styling for Light Colors with Black Text
    classDef external fill:#e0f7fa,stroke:#00796b,stroke-width:1px,color:#000;
    classDef backend fill:#e8f5e9,stroke:#388e3c,stroke-width:1px,color:#000;
    classDef auth fill:#f3e5f5,stroke:#8e24aa,stroke-width:1px,color:#000;
    classDef core fill:#fff3e0,stroke:#f57c00,stroke-width:1px,color:#000;
    classDef ai fill:#e1f5fe,stroke:#0288d1,stroke-width:1px,color:#000;
    classDef integration fill:#fce4ec,stroke:#d81b60,stroke-width:1px,color:#000;
    classDef storage fill:#ede7f6,stroke:#5e35b1,stroke-width:1px,color:#000;
    classDef analytics fill:#e8eaf6,stroke:#3949ab,stroke-width:1px,color:#000;
    %% Apply classes to nodes
    class GH,DS,SL external;
    class API backend;
    class GitAuth,SupaAuth,FRONT auth;
    class WF,Q core;
    class LLM,KR,CODE,DASH ai;
    class GHS,DSS,SLS integration;
    class VDB storage;
    class METRICS,REPORT,TREND analytics;
```
Devr.AI follows a microservices architecture with the following key components:
- API Gateway Layer
  - Handles all incoming requests from integrated platforms
  - Manages authentication and request routing
  - Implements rate limiting and request validation
- Core Processing Engine
  - Orchestrates workflows between different services
  - Manages the processing queue for asynchronous tasks
  - Handles context management for ongoing conversations
- AI Service Layer
  - LLM integration for natural language understanding and generation
  - Knowledge retrieval system for accessing project-specific information
  - Specialized models for code understanding and issue triage
- Integration Services
  - Platform-specific adapters for Discord, Slack, GitHub, and Discourse
  - Webhook handlers and event processors
  - Authentication managers for each platform
- Data Storage Layer
  - Vector database for semantic search functionality
  - Relational database for structured data and relationships
  - Document store for conversation history and analytics
- Analytics Engine
  - Real-time metrics calculation
  - Report generation
  - Anomaly detection and trend analysis
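As one concrete illustration of the API Gateway Layer's rate limiting, here is a minimal token-bucket sketch; the class name and the limits chosen are illustrative, not part of the actual codebase:

```python
import time

class TokenBucket:
    """Per-client token bucket: allow roughly `rate` requests per second,
    with short bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. one bucket per platform integration or per user id
bucket = TokenBucket(rate=5, capacity=10)
```

In a real gateway the bucket state would live in the shared cache layer rather than in process memory, so all gateway replicas enforce the same limit.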
- New Contributor Welcome & Onboarding
  - Automatic detection of first-time contributors
  - Personalized welcome messages with project-specific onboarding instructions
  - Interactive guidance through first contribution steps
- Community Interaction
  - Natural language conversations across all integrated platforms
  - Contextual responses based on user history and project knowledge
  - Multi-turn dialogue management with memory of previous interactions
- Activity Promotion
  - Automated suggestions for good first issues to new contributors
  - Regular updates about project milestones and achievements
  - Recognition of contributor achievements and milestones
- Issue Classification
  - Automatic categorization of new issues by type, component, and priority
  - Identification of duplicate issues and linking them together
  - Suggested assignment based on contributor expertise and availability
- PR Review Support
  - Automated initial code review comments for common issues
  - Documentation verification and suggestions
  - Test coverage analysis and feedback
- Contributor Guidance
  - Step-by-step assistance for setting up development environments
  - Code style and convention explanations
  - Troubleshooting help for common development issues
- Dynamic Documentation
  - Automatic extraction of FAQs from community conversations
  - Creation and maintenance of project wikis and guides
  - Code documentation generation and enhancement
- Contextual Help
  - Instant answers to common technical questions
  - Project-specific knowledge retrieval
  - Code snippet explanations and examples
- Knowledge Preservation
  - Capturing of tribal knowledge from experienced contributors
  - Archiving of important decisions and their context
  - Historical project evolution tracking
- Engagement Metrics
  - Contributor activity tracking across platforms
  - Response time and resolution rate monitoring
  - Community growth and retention analytics
- Contribution Analysis
  - Identification of valuable contributors and their patterns
  - Code quality and impact measurements
  - Diversity and inclusion metrics
- Health Monitoring
  - Early warning system for declining project activity
  - Burnout risk detection for maintainers
  - Community sentiment analysis
To install the project locally, refer to the Installation Guide.
- Core Framework: FastAPI
- Containerization: Docker & Kubernetes
- Messaging Queue: RabbitMQ
- Task Scheduling: Celery
- LLM Integration: Groq APIs serving an LLM with strong reasoning capability
- Embeddings: Embedding Model
- Vector Database: Supabase
- Relational Database: Supabase (PostgreSQL)
- Document Storage: Supabase
- Dashboard: React.js + Tailwind CSS
- Analytics UI: React.js + Shadcn
- CI/CD: GitHub Actions
- Monitoring: Prometheus
- Logging: ELK Stack
- Cloud Provider: AWS / GCP
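In production, asynchronous work is handed off to Celery workers over RabbitMQ, as listed above. The underlying producer/worker pattern can be sketched with the standard library alone (a toy illustration of the pattern, not the production setup):

```python
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    # Pull (function, args) pairs off the queue until a None sentinel arrives.
    while True:
        item = tasks.get()
        if item is None:
            break
        fn, args = item
        results.append(fn(*args))
        tasks.task_done()

def enqueue(fn, *args):
    tasks.put((fn, args))

t = threading.Thread(target=worker, daemon=True)
t.start()
enqueue(lambda name: f"welcomed {name}", "new-contributor")
tasks.put(None)  # sentinel: shut the worker down
t.join()
```

Celery adds what this sketch lacks: durable broker-backed queues, retries, scheduling, and horizontal scaling of workers.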
```mermaid
sequenceDiagram
    participant User as Discord User
    participant Bot as Discord Bot
    participant API as API Gateway
    participant EP as Event Processor
    participant AI as AI Service
    participant KB as Knowledge Base
    participant DB as Database
    User->>Bot: Sends message or command
    Bot->>API: Forwards event via webhook
    API->>EP: Routes to Event Processor
    EP->>DB: Check user context
    DB->>EP: Return context
    EP->>KB: Retrieve relevant knowledge
    KB->>EP: Return knowledge
    EP->>AI: Generate response with context
    AI->>EP: Return formatted response
    EP->>DB: Update conversation history
    EP->>Bot: Send response to Discord
    Bot->>User: Display response message
    Note over EP,AI: For complex queries, additional<br/>processing steps may occur
```
- OAuth2 flow for bot installation
- Server-specific configuration and permission setup
- Role-based access control configuration
- Message creation and update events
- Channel join/leave events
- Reaction events for issue tracking
- Thread creation for complex discussions
- Slash commands for direct interaction with Devr.AI
- Automated welcome messages in designated channels
- Role assignment based on GitHub contribution history
- Discord webhook sends event to API Gateway
- Event processor extracts relevant information
- AI Service generates appropriate response
- Integration service formats and sends response back to Discord
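The extraction step above (Discord payload → internal event) can be sketched as a pure normalization function. The input fields (`author.id`, `channel_id`, `content`) follow Discord's message payload; the internal schema on the output side is illustrative:

```python
def normalize_discord_event(payload: dict) -> dict:
    """Map a Discord message payload onto the internal event schema
    consumed by the Event Processor (output schema is illustrative)."""
    author = payload.get("author", {})
    content = payload.get("content", "")
    return {
        "platform": "discord",
        "user_id": str(author.get("id", "")),
        "channel_id": str(payload.get("channel_id", "")),
        "text": content,
        # Slash commands get routed differently from free-form messages.
        "is_command": content.startswith("/"),
    }
```

Keeping this step pure (no I/O) makes it trivial to unit-test each platform adapter against recorded payloads.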
```mermaid
sequenceDiagram
    participant User as Slack User
    participant Slack as Slack Platform
    participant API as API Gateway
    participant EP as Event Processor
    participant AI as AI Service
    participant KB as Knowledge Base
    User->>Slack: Sends message/command
    Slack->>API: Forwards via Events API
    API->>EP: Process Slack event
    EP->>KB: Query relevant information
    KB->>EP: Return knowledge snippets
    EP->>AI: Generate response
    AI->>EP: Return formatted response
    EP->>Slack: Send Block Kit message
    Slack->>User: Display interactive response
    alt User Interaction
        User->>Slack: Clicks interactive element
        Slack->>API: Action payload
        API->>EP: Process interaction
        EP->>Slack: Update message
        Slack->>User: Show updated content
    end
```
- Slack App Directory installation flow
- Workspace-specific settings configuration
- Channel mapping to project components
- Message events in channels and direct messages
- App mention events
- Interactive component events (buttons, dropdowns)
- Slash commands for project information
- Interactive message components for issue triage
- Automatic daily/weekly project updates
- Direct message onboarding for new contributors
- Slack Events API sends event to API Gateway
- Event processor validates and processes the event
- Workflow engine determines appropriate action
- Response is formatted according to Slack Block Kit
- Message is sent back to appropriate Slack channel
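Formatting a reply as a Slack Block Kit message (step 4 above) amounts to building a JSON payload. A minimal sketch using Slack's real `section` and `actions` block types — the `action_id` and the helper's name are illustrative:

```python
def build_block_kit_reply(answer: str, doc_url: str) -> dict:
    """Wrap an AI-generated answer in Block Kit blocks, with a
    link button pointing at the relevant documentation."""
    return {
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": answer},
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "View docs"},
                        "url": doc_url,
                        "action_id": "view_docs",  # illustrative id
                    }
                ],
            },
        ]
    }
```

The resulting dict would be posted via Slack's `chat.postMessage` Web API method; interactive elements like the button come back to the gateway as action payloads, closing the loop shown in the diagram.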
```mermaid
sequenceDiagram
    participant GH as GitHub
    participant API as API Gateway
    participant EP as Event Processor
    participant AT as Automated Triage
    participant AI as AI Service
    participant DB as Database
    GH->>API: Webhook (Issue/PR/Comment)
    API->>EP: Process GitHub event
    alt New Issue
        EP->>AT: Triage new issue
        AT->>AI: Analyze issue content
        AI->>AT: Return classification
        AT->>GH: Apply labels & suggestions
        AT->>DB: Log issue metadata
    else New PR
        EP->>AT: Review PR
        AT->>AI: Analyze code changes
        AI->>AT: Return review comments
        AT->>GH: Post initial review
        AT->>DB: Track PR statistics
    else Comment
        EP->>AI: Process comment context
        AI->>EP: Generate appropriate response
        EP->>GH: Post response comment
        EP->>DB: Update conversation tracking
    end
```
- GitHub App installation process
- Repository-specific configuration
- Permission scopes management
- Issue creation, update, and comment events
- Pull request lifecycle events
- Repository star and fork events
- Release publication events
- Automated issue labeling and assignment
- PR review comments and suggestions
- Release notes generation
- Contributor statistics and recognition
- GitHub webhook sends event to API Gateway
- Event processor categorizes and enriches event data
- Task is assigned to appropriate service based on event type
- Response actions are taken via GitHub API
- Event and action are logged for analytics
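Before step 2 runs, the gateway should authenticate each delivery using the `X-Hub-Signature-256` header, following GitHub's documented webhook signing scheme (`sha256=` + HMAC-SHA256 of the raw request body):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Validate the X-Hub-Signature-256 header GitHub sends with each
    webhook delivery."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking timing information.
    return hmac.compare_digest(expected, signature_header)
```

Deliveries that fail this check are dropped before any event processing or AI calls happen.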
- API key authentication
- Category and tag mapping
- User role configuration
- New topic creation events
- Post creation and update events
- User registration events
- Automatic responses to common questions
- Cross-linking between forum topics and GitHub issues
- Knowledge base article suggestions
- Community showcase of project achievements
- Discourse webhook or API polling detects new content
- Content is processed and classified
- Knowledge retrieval finds relevant information
- Response is generated and posted to appropriate thread
- New knowledge is extracted and stored for future use
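The cross-linking feature above needs to spot GitHub issue references inside forum posts. A minimal sketch that handles only two patterns, `#123` shorthand and full issue URLs (a real implementation would also resolve which repository a bare `#123` refers to):

```python
import re

ISSUE_REF = re.compile(
    r"(?:https?://github\.com/[\w.-]+/[\w.-]+/issues/(\d+))"  # full URL
    r"|(?:(?<![\w/])#(\d+))"                                  # bare #123
)

def find_issue_refs(post_text: str) -> list[int]:
    """Return the GitHub issue numbers referenced in a Discourse post."""
    return [int(url_num or short_num) for url_num, short_num in ISSUE_REF.findall(post_text)]
```

The extracted numbers can then drive the back-link: a comment on the GitHub issue pointing at the forum thread, and vice versa.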
```mermaid
stateDiagram-v2
    [*] --> DetectNewContributor
    DetectNewContributor --> GenerateWelcome
    GenerateWelcome --> DetermineIntention
    DetermineIntention --> IssueGuidance: Issue Creation
    DetermineIntention --> PRGuidance: PR Submission
    DetermineIntention --> GeneralGuidance: Platform Join
    IssueGuidance --> ProvideResources
    PRGuidance --> ProvideResources
    GeneralGuidance --> ProvideResources
    ProvideResources --> MonitorEngagement
    MonitorEngagement --> FollowUp: No Activity
    MonitorEngagement --> AnswerQuestions: User Response
    MonitorEngagement --> CompleteOnboarding: Task Completed
    FollowUp --> MonitorEngagement
    AnswerQuestions --> MonitorEngagement
    CompleteOnboarding --> RecordStats
    RecordStats --> [*]
    AnswerQuestions --> EscalateToMaintainer: Complex Question
    EscalateToMaintainer --> [*]
```
- Trigger: First-time contributor opens an issue or PR, or joins community platform
- Detection: System identifies user as new contributor based on platform history
- Personalization: AI generates personalized welcome message based on:
  - Contribution type (issue, PR, question)
  - Project area of interest
  - Technical background (if available)
- Guidance: Provides specific next steps based on contribution intent:
  - Development environment setup instructions
  - Coding standards and guidelines
  - Testing requirements
  - Documentation expectations
- Follow-up: Monitors engagement and provides additional assistance:
  - Answers to follow-up questions
  - Escalation to human maintainers when necessary
  - Check-ins on progress after predefined intervals
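The detection step can lean on GitHub's `author_association` field, which issue and PR webhook payloads carry; `FIRST_TIMER` and `FIRST_TIME_CONTRIBUTOR` are its documented "no prior merged contribution" values. A minimal sketch:

```python
# author_association values that indicate a first-time contributor,
# per GitHub's webhook payload documentation.
FIRST_TIME = {"FIRST_TIMER", "FIRST_TIME_CONTRIBUTOR"}

def is_new_contributor(issue_or_pr: dict) -> bool:
    """True when the issue/PR author has no prior merged contribution
    to this repository."""
    return issue_or_pr.get("author_association", "NONE") in FIRST_TIME
```

Detection on Discord or Slack has no equivalent field, so there it falls back on platform history (join events, prior message count), as described above.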
```mermaid
stateDiagram-v2
    [*] --> NewIssueDetected
    NewIssueDetected --> AnalyzeContent
    AnalyzeContent --> CheckDuplicates
    CheckDuplicates --> IdentifyDuplicate: Match Found
    CheckDuplicates --> ClassifyIssue: No Duplicate
    IdentifyDuplicate --> LinkIssues
    LinkIssues --> NotifyUser
    NotifyUser --> CloseAsDuplicate
    CloseAsDuplicate --> [*]
    ClassifyIssue --> AssignLabels
    AssignLabels --> DetermineComplexity
    DetermineComplexity --> SuggestAssignees
    SuggestAssignees --> CheckCompleteness
    CheckCompleteness --> RequestInfo: Incomplete
    CheckCompleteness --> UpdateProject: Complete
    RequestInfo --> AwaitResponse
    AwaitResponse --> AnalyzeContent: Info Provided
    AwaitResponse --> CloseStale: No Response
    UpdateProject --> NotifyTeam
    NotifyTeam --> ScheduleFollowUp
    ScheduleFollowUp --> [*]
    CloseStale --> [*]
```
- Trigger: New issue created on GitHub
- Analysis:
  - AI extracts key information from issue description
  - Compares with existing issues for duplicates
  - Identifies affected components and potential severity
- Classification:
  - Applies appropriate labels (bug, feature, documentation, etc.)
  - Assigns priority level
  - Suggests potential assignees based on expertise
- Enhancement:
  - Requests additional information if description is incomplete
  - Suggests reproducible test cases if applicable
  - Provides links to relevant documentation
- Notification:
  - Alerts appropriate team members in Slack/Discord
  - Updates project boards
  - Schedules follow-up if issue remains unaddressed
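The duplicate check can be approximated with a title-similarity heuristic. A sketch using `difflib` — a real implementation would compare embeddings of the full issue text instead, and the 0.8 threshold is an arbitrary placeholder:

```python
from difflib import SequenceMatcher

def find_duplicate(new_title: str, existing: dict, threshold: float = 0.8):
    """Return the number of the most similar existing issue, or None.

    `existing` maps issue number -> issue title."""
    best_number, best_score = None, threshold
    for number, title in existing.items():
        score = SequenceMatcher(None, new_title.lower(), title.lower()).ratio()
        if score >= best_score:
            best_number, best_score = number, score
    return best_number
```

When a match is found, the triage flow links the two issues, notifies the reporter, and closes the newer one as a duplicate, as the diagram shows.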
```mermaid
stateDiagram-v2
    [*] --> QuestionDetected
    QuestionDetected --> ClassifyIntent
    ClassifyIntent --> ExtractEntities
    ExtractEntities --> SearchKnowledgeBase
    SearchKnowledgeBase --> SearchCodebase
    SearchCodebase --> SearchPriorConversations
    SearchPriorConversations --> GenerateResponse: Information Found
    SearchPriorConversations --> FallbackResponse: No Information
    GenerateResponse --> FormatWithExamples
    FormatWithExamples --> AddReferences
    AddReferences --> DeliverResponse
    FallbackResponse --> GenerateGenericGuidance
    GenerateGenericGuidance --> SuggestAlternatives
    SuggestAlternatives --> DeliverResponse
    DeliverResponse --> RecordInteraction
    RecordInteraction --> UpdateFAQ: Common Question
    RecordInteraction --> [*]: Unique Question
    UpdateFAQ --> [*]
```
- Trigger: Question asked in any integrated platform
- Intent Recognition:
  - Identifies question type and topic
  - Extracts key entities and concepts
- Knowledge Retrieval:
  - Searches vector database for semantically similar content
  - Retrieves relevant documentation and past answers
  - Examines code repository for relevant examples
- Response Generation:
  - Creates comprehensive yet concise answer
  - Includes code examples if appropriate
  - Adds links to official documentation
- Knowledge Capture:
  - Records question and answer in knowledge base
  - Updates FAQ if question is common
  - Identifies documentation gaps for future improvement
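The semantic search in the retrieval step reduces to ranking stored embeddings by cosine similarity to the query embedding. A dependency-free sketch with toy 3-dimensional vectors — real embeddings have hundreds of dimensions and come from the embedding model, with the search delegated to the vector database:

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, docs: dict, k: int = 2) -> list:
    """Return the k document names whose embeddings are closest to `query`.

    `docs` maps document name -> embedding vector."""
    return sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)[:k]
```

The top-ranked snippets are what gets handed to the LLM as context for answer generation.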
- Data Collection:
  - Continuous monitoring of activity across all platforms
  - Tracking of individual contributor actions
  - Recording of response times and resolution rates
- Processing:
  - Aggregation of metrics by timeframe and category
  - Calculation of derived metrics (e.g., contributor retention)
  - Trend analysis and anomaly detection
- Insight Generation:
  - Identification of active vs. declining areas
  - Recognition of valuable contributors
  - Detection of potential community issues
- Reporting:
  - Automated weekly summaries to maintainers
  - Interactive dashboard updates
  - Quarterly comprehensive project health reports
- Action Recommendation:
  - Suggestions for community engagement improvements
  - Identification of contributors for recognition
  - Alerts for areas needing maintainer attention
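A derived metric like contributor retention can be computed directly from the raw activity events; a sketch, assuming events are reduced to simple (contributor, period) records before aggregation:

```python
def retention(events: list, period_a: str, period_b: str) -> float:
    """Fraction of contributors active in period_a who were also
    active in period_b. `events` holds (contributor, period) pairs."""
    active_a = {user for user, period in events if period == period_a}
    active_b = {user for user, period in events if period == period_b}
    if not active_a:
        return 0.0
    return len(active_a & active_b) / len(active_a)
```

A sustained drop in month-over-month retention is exactly the kind of signal the health-monitoring feature would surface to maintainers.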
```mermaid
flowchart TB
    subgraph "External Data Sources"
        GH["GitHub API"]
        DS["Discord API"]
        SL["Slack API"]
    end
    subgraph "Data Collection Layer"
        WH["Webhooks"]
        API["API Clients"]
        UI["User Interactions"]
    end
    subgraph "Data Processing"
        NORM["Data Normalizer"]
        EXTR["Entity Extractor"]
        EMB["Embedding Generator"]
    end
    subgraph "Storage Layer"
        PIN["Supabase<br>(Vector DB)"]
        SUP["Supabase<br>(PostgreSQL)"]
        MDB["Supabase<br>(Document Store)"]
    end
    GH --> WH
    DS --> WH
    SL --> API
    WH --> NORM
    API --> NORM
    UI --> NORM
    NORM --> EXTR
    EXTR --> EMB
    EMB --> PIN
    EXTR --> SUP
    NORM --> MDB
```
- External Data Sources
  - Platform APIs (GitHub, Discord, Slack)
  - Webhook events
  - Direct user interactions
- Data Transformation
  - Normalization of platform-specific data formats
  - Entity extraction and relationship mapping
  - Embedding generation for textual content
- Storage Destinations
  - Vector embeddings → Supabase (Vector DB)
  - Structured relationships → Supabase (PostgreSQL)
  - Historical conversations → Supabase (Document Store)
  - Temporary state → Redis
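The Data Normalizer above maps each platform's payload onto one shared schema before entity extraction and embedding. A sketch with illustrative field names — the real payload shapes differ per platform and event type:

```python
def normalize(platform: str, payload: dict) -> dict:
    """Map a platform-specific event payload onto the shared schema
    consumed by the Entity Extractor and Embedding Generator."""
    if platform == "github":
        return {
            "platform": platform,
            "author": payload.get("sender", {}).get("login", ""),
            "text": payload.get("issue", {}).get("body", ""),
        }
    if platform in ("discord", "slack"):
        return {
            "platform": platform,
            "author": str(payload.get("user", "")),
            "text": payload.get("text", ""),
        }
    raise ValueError(f"unsupported platform: {platform}")
```

Normalizing early means everything downstream (extraction, embedding, storage) is written once against a single schema rather than once per platform.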
```mermaid
flowchart TB
    subgraph "Development Environment"
        DEV_K8S["Kubernetes Cluster"]
        DEV_DB["Database Services"]
        DEV_CACHE["Cache Layer"]
    end
    subgraph "Staging Environment"
        STAGE_K8S["Kubernetes Cluster"]
        STAGE_DB["Database Services"]
        STAGE_CACHE["Cache Layer"]
    end
    subgraph "Production Environment"
        subgraph "Region A"
            PROD_K8S_A["Kubernetes Cluster"]
            PROD_DB_A["Database Primary"]
            PROD_CACHE_A["Cache Primary"]
        end
        subgraph "Region B"
            PROD_K8S_B["Kubernetes Cluster"]
            PROD_DB_B["Database Replica"]
            PROD_CACHE_B["Cache Replica"]
        end
        LB["Load Balancer"]
        CDN["Content Delivery Network"]
    end
    subgraph "CI/CD Pipeline"
        GIT["Git Repository"]
        CI["Continuous Integration"]
        REG["Container Registry"]
        CD["Continuous Deployment"]
    end
    GIT --> CI
    CI --> REG
    REG --> CD
    CD --> DEV_K8S
    CD --> STAGE_K8S
    CD --> PROD_K8S_A
    CD --> PROD_K8S_B
    LB --> PROD_K8S_A
    LB --> PROD_K8S_B
    PROD_DB_A <--> PROD_DB_B
    PROD_CACHE_A <--> PROD_CACHE_B
    CDN --> LB
```
- Multi-environment Setup:
  - Development environment for active feature development
  - Staging environment for integration testing
  - Production environment for live deployment
- Containerized Deployment:
  - Microservices packaged as Docker containers
  - Kubernetes for orchestration and scaling
  - Helm charts for deployment configuration
- High Availability Design:
  - Multiple replicas of critical services
  - Cross-zone deployment on cloud provider
  - Automatic failover mechanisms
- Code Integration:
  - Pull request validation
  - Automated code quality checks
  - Unit test execution
- Build Process:
  - Docker image building
  - Image vulnerability scanning
  - Artifact versioning
- Deployment Stages:
  - Automated deployment to development
  - Manual approval for staging promotion
  - Canary deployment to production
  - Progressive rollout strategy
- Monitoring and Rollback:
  - Health check validation post-deployment
  - Automatic rollback on critical metrics deviation
  - Deployment audit logging
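The code-integration and build stages above map naturally onto a GitHub Actions workflow. A minimal sketch — job names, Python version, and the image tag scheme are placeholders, not the project's actual configuration:

```yaml
name: ci
on:
  pull_request:        # pull request validation
  push:
    branches: [main]   # triggers the build stage

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest                                       # unit test execution

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t devrai:${{ github.sha }} .   # artifact versioning by commit SHA
```

The later stages (vulnerability scanning, registry push, environment promotion) would hang additional jobs off `build` with the same `needs:` chaining.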