AI Platform Architecture
This page details the architecture of our AI platform, focusing on how we leverage Supabase Edge Functions for AI integrations and asynchronous processing.
Overview
Our AI platform is built on Supabase Edge Functions, providing scalable, serverless infrastructure for AI-powered features including:
- AI chat assistance for co-parenting guidance
- Content moderation for messages
- Intelligent document processing
- Async task processing for resource-intensive operations
Architecture Diagram
```
┌─────────────────────────────────────────────────────────────┐
│                      Mobile App (Expo)                      │
│  ┌───────────────────────────────────────────────────────┐  │
│  │               User Interface Components               │  │
│  │   • AI Chat Screen       • Content Creation           │  │
│  │   • Message Composer     • Document Upload            │  │
│  └───────────────────────────┬───────────────────────────┘  │
└──────────────────────────────┼──────────────────────────────┘
                               │ HTTPS/WebSocket
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                   Supabase Edge Functions                   │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                AI Processing Functions                │  │
│  │                                                       │  │
│  │   • /ai-chat           - AI conversation handling     │  │
│  │   • /moderate-content  - Message moderation           │  │
│  │   • /process-document  - Async document processing    │  │
│  │   • /analyze-expense   - AI expense categorization    │  │
│  └───────────────────────────┬───────────────────────────┘  │
└──────────────────────────────┼──────────────────────────────┘
                               │ API Calls
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                    External AI Services                     │
│  ┌──────────────────┐        ┌───────────────────┐          │
│  │   OpenAI API     │        │   Other AI APIs   │          │
│  │                  │        │                   │          │
│  │  • GPT-4         │        │  • Moderation     │          │
│  │  • Embeddings    │        │  • Vision         │          │
│  │  • Moderation    │        │  • Speech-to-Text │          │
│  └──────────────────┘        └───────────────────┘          │
└─────────────────────────────────────────────────────────────┘
```
Edge Functions Implementation
AI Chat Function
Handles real-time AI conversations with context-aware responses for co-parenting guidance.
```typescript
// supabase/functions/ai-chat/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";
import { Configuration, OpenAIApi } from "https://esm.sh/openai@3.1.0";

serve(async (req) => {
  const { message, lodgeId, userId } = await req.json();

  // Initialize clients
  const supabase = createClient(/* ... */);
  const openai = new OpenAIApi(/* ... */);

  // Fetch conversation context
  const context = await fetchConversationContext(supabase, lodgeId);

  // Generate AI response
  const completion = await openai.createChatCompletion({
    model: "gpt-4",
    messages: [
      { role: "system", content: CO_PARENTING_PROMPT },
      ...context,
      { role: "user", content: message }
    ],
    temperature: 0.7,
    max_tokens: 500
  });

  // Store in database
  await storeAiResponse(supabase, lodgeId, userId, completion);

  return new Response(JSON.stringify({
    response: completion.data.choices[0].message.content
  }));
});
```
Content Moderation Function
Automatically moderates messages to maintain constructive communication between co-parents.
```typescript
// supabase/functions/moderate-content/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";
import { Configuration, OpenAIApi } from "https://esm.sh/openai@3.1.0";

serve(async (req) => {
  const { messageId, content } = await req.json();

  // Initialize clients
  const supabase = createClient(/* ... */);
  const openai = new OpenAIApi(/* ... */);

  // Check content with the OpenAI Moderation API
  const moderation = await openai.createModeration({
    input: content
  });

  if (moderation.data.results[0].flagged) {
    // Flag the message so it is not delivered
    await flagMessage(supabase, messageId, moderation.data.results[0]);
    return new Response(JSON.stringify({
      allowed: false,
      reason: "Content violates community guidelines"
    }));
  }

  return new Response(JSON.stringify({ allowed: true }));
});
```
Async Document Processing
Handles resource-intensive document processing tasks asynchronously.
```typescript
// supabase/functions/process-document/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

serve(async (req) => {
  const { documentId, operation } = await req.json();

  const supabase = createClient(/* ... */);

  // Queue the async task and read back its generated ID,
  // so the client can poll for the same row the worker will process
  const { data: task, error } = await supabase
    .from('async_tasks')
    .insert({
      type: 'document_processing',
      status: 'pending',
      payload: { documentId, operation }
    })
    .select('id')
    .single();

  if (error) {
    return new Response(JSON.stringify({ error: error.message }), { status: 500 });
  }

  // Return immediately with the task ID
  return new Response(JSON.stringify({
    taskId: task.id,
    status: 'pending'
  }));
});
```
Async Task Processing
Task Queue Pattern
We use a database-backed queue for managing async operations:
```sql
CREATE TABLE async_tasks (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  type TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  payload JSONB,
  result JSONB,
  error TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  started_at TIMESTAMP WITH TIME ZONE,
  completed_at TIMESTAMP WITH TIME ZONE
);
```
Background Worker Function
Processes queued tasks using scheduled Edge Functions:
```typescript
// supabase/functions/task-processor/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

serve(async (req) => {
  const supabase = createClient(/* ... */);

  // Fetch the oldest pending tasks
  const { data: tasks } = await supabase
    .from('async_tasks')
    .select('*')
    .eq('status', 'pending')
    .order('created_at', { ascending: true })
    .limit(10);

  // Mark each task as running, then process it
  for (const task of tasks ?? []) {
    await supabase
      .from('async_tasks')
      .update({ status: 'running', started_at: new Date().toISOString() })
      .eq('id', task.id);
    await processTask(task);
  }

  return new Response(JSON.stringify({ processed: tasks?.length ?? 0 }));
});
```
AI Feature Implementation
Co-Parenting AI Assistant
- Purpose: Provides helpful, neutral guidance for co-parenting situations
- Implementation: Edge Function with OpenAI GPT-4
- Context Management: Maintains conversation history per lodge
- Safety: Built-in content filtering and moderation
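The context-management step can be sketched as a pure function. This is an illustrative assumption, not the production implementation: `buildChatContext` and the `ChatRow` shape are hypothetical names for converting stored lodge messages into an OpenAI-style message array while keeping the prompt within a rough size budget.

```typescript
// Hypothetical sketch: turn stored chat rows into an OpenAI-style
// message array, keeping only the most recent turns within a rough
// character budget so the prompt stays under the model's token limit.
interface ChatRow {
  sender: string; // "user" or "assistant"
  content: string;
}

function buildChatContext(rows: ChatRow[], maxChars = 4000) {
  const messages: { role: string; content: string }[] = [];
  let used = 0;
  // Walk from newest to oldest, then restore chronological order
  for (let i = rows.length - 1; i >= 0; i--) {
    const row = rows[i];
    if (used + row.content.length > maxChars) break;
    used += row.content.length;
    messages.unshift({ role: row.sender, content: row.content });
  }
  return messages;
}
```

The result can be spread directly into the `messages` array shown in the `ai-chat` function above, between the system prompt and the new user message.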
Smart Document Analysis
- Purpose: Extract key information from uploaded documents
- Implementation: Async Edge Function with document parsing
- Supported Types: PDFs, images, text documents
- Output: Structured data stored in PostgreSQL
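Before structured output is stored, the model's response should be validated, since LLMs occasionally return malformed JSON. A minimal sketch, assuming a hypothetical extraction shape (the field names `title`, `documentDate`, and `amounts` are illustrative, not the real schema):

```typescript
// Hypothetical shape for a structured extraction result, plus a guard
// that rejects malformed AI output before it is written to PostgreSQL.
interface DocumentExtraction {
  title: string;
  documentDate: string | null;
  amounts: number[];
}

function parseExtraction(raw: string): DocumentExtraction | null {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj.title !== "string") return null;
    if (obj.documentDate !== null && typeof obj.documentDate !== "string") return null;
    if (!Array.isArray(obj.amounts) ||
        !obj.amounts.every((n: unknown) => typeof n === "number")) return null;
    return obj as DocumentExtraction;
  } catch {
    return null; // model returned non-JSON text
  }
}
```

A `null` result would leave the task in an error state rather than storing garbage.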
Expense Categorization
- Purpose: Automatically categorize and split child-related expenses
- Implementation: Edge Function with custom ML model
- Categories: Medical, Education, Activities, etc.
- Fairness: Suggests equitable expense splits
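As a fallback when the model is unavailable, categorization can degrade to keyword matching. The sketch below is illustrative only: the category names mirror the list above, but the keyword sets are assumptions, not the production rules.

```typescript
// Illustrative fallback: keyword-based expense categorization used when
// the ML model cannot be reached. Keywords are example assumptions.
const CATEGORY_KEYWORDS: Record<string, string[]> = {
  Medical: ["doctor", "pediatrician", "copay", "pharmacy", "dentist"],
  Education: ["school", "tuition", "textbook", "tutor"],
  Activities: ["soccer", "camp", "piano", "swim"]
};

function categorizeExpense(description: string): string {
  const text = description.toLowerCase();
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) return category;
  }
  return "Uncategorized";
}
```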
Security and Privacy
Data Protection
- Encryption: All AI requests use TLS encryption
- Data Isolation: Strict lodge-based data separation
- PII Handling: Automatic PII detection and masking
- Audit Logging: Complete audit trail for AI interactions
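The PII masking step can be sketched with regex substitution. This is a simplified assumption of how the masking works, real PII detection covers far more categories; the two patterns here (emails and US-style phone numbers) are illustrative.

```typescript
// Hedged sketch of the masking step: replace emails and US-style phone
// numbers with placeholders before text leaves for an external API.
function maskPii(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]")
    .replace(/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[PHONE]");
}
```

Masking is applied on the Edge Function side, so raw identifiers never reach the AI provider.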
API Key Management
- Secure Storage: API keys stored in Supabase Vault
- Environment Variables: Keys never exposed in code
- Rotation Policy: Regular key rotation schedule
- Access Control: Function-level access restrictions
Performance Optimization
Caching Strategy
- Response Caching: Cache common AI responses
- Embedding Cache: Store frequently used embeddings
- Context Cache: Optimize context retrieval queries
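The response cache can be sketched as a small TTL map. In production this would more likely live in Redis or a database table; the get/set-with-expiry API below is an assumption for illustration.

```typescript
// Minimal in-memory TTL cache sketch for AI response caching.
// The injectable clock exists only to make the behavior testable.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key); // evict stale entries lazily
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

Keying on a hash of the prompt plus model parameters lets repeated questions skip the OpenAI call entirely.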
Rate Limiting
- User Limits: Per-user rate limits for AI features
- Cost Control: Token usage monitoring and limits
- Graceful Degradation: Fallback responses when limits reached
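The per-user limit can be sketched as a fixed-window counter checked inside the Edge Function before any OpenAI call. Window size and quota below are illustrative assumptions, not production values.

```typescript
// Sketch of a per-user fixed-window rate limiter. The injectable clock
// exists only to make the behavior testable.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(
    private maxPerWindow: number,
    private windowMs: number,
    private now: () => number = Date.now
  ) {}

  allow(userId: string): boolean {
    const t = this.now();
    const entry = this.counts.get(userId);
    if (!entry || t - entry.windowStart >= this.windowMs) {
      // Start a fresh window for this user
      this.counts.set(userId, { windowStart: t, count: 1 });
      return true;
    }
    if (entry.count >= this.maxPerWindow) return false;
    entry.count++;
    return true;
  }
}
```

When `allow` returns false, the function returns the fallback response instead of calling the model, which is the graceful-degradation path described above.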
Async Processing Benefits
- Non-blocking: UI remains responsive during AI processing
- Scalability: Handle multiple requests concurrently
- Reliability: Retry failed tasks automatically
- Cost Efficiency: Optimize resource usage
Monitoring and Analytics
Performance Metrics
- Response Time: Track AI response latency
- Success Rate: Monitor completion rates
- Token Usage: Track OpenAI API consumption
- Queue Health: Monitor async task processing
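For the latency metric, percentiles are more informative than averages. A small helper sketch using the nearest-rank method (metric collection and storage are out of scope here):

```typescript
// Compute a latency percentile from recorded response times using the
// nearest-rank method: rank = ceil(p/100 * n), clamped to valid indices.
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.min(sorted.length, Math.max(1, Math.ceil((p / 100) * sorted.length)));
  return sorted[rank - 1];
}
```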
Error Handling
- Graceful Failures: User-friendly error messages
- Retry Logic: Automatic retry for transient failures
- Fallback Responses: Default responses when AI unavailable
- Error Reporting: Comprehensive error logging
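The retry-plus-fallback pattern above can be sketched as a generic wrapper. Attempt count and delays are illustrative; in production they would be tuned per feature.

```typescript
// Retry a transient failure with exponential backoff, then fall back to
// a default response instead of surfacing an error to the user.
async function withRetry<T>(
  fn: () => Promise<T>,
  fallback: T,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch {
      if (i < attempts - 1) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  return fallback; // AI unavailable: serve the default response
}
```

Wrapping the OpenAI call in `withRetry` gives users a friendly default answer rather than an error page when the provider is down.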
Future Enhancements
Planned Features
- Voice Integration: Speech-to-text for messages
- Multi-language Support: AI responses in multiple languages
- Custom Models: Fine-tuned models for co-parenting scenarios
- Predictive Analytics: Anticipate scheduling conflicts
Scalability Roadmap
- Edge Caching: Deploy functions to multiple regions
- Model Optimization: Use smaller, faster models where appropriate
- Batch Processing: Group similar requests for efficiency
- WebSocket Support: Real-time AI interactions