Architecture Overview
The agent uses a multi-step tool pipeline:
- classify_intent → Determines if this is a new question or a follow-up
- text_to_sql → Generates SQL from natural language + schema
- run_sql → Executes SQL against your database
- analyse_data → Analyzes the data slice and extracts insights
- create_chart → Generates chart configurations
- summarizer → Creates final user-facing response
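The pipeline above can be sketched end-to-end in Python. Every function signature and payload shape below is an illustrative assumption, not the actual tool interface:

```python
# Minimal sketch of the tool pipeline; signatures and payloads are assumptions.

def classify_intent(message: str, history: list[str]) -> str:
    # New question vs. follow-up; a real agent would use an LLM call here.
    return "follow_up" if history else "new_question"

def text_to_sql(message: str, schema: dict, intent: str) -> str:
    # Works from schema and context only -- it never sees data rows.
    # A follow-up could reuse prior query context; ignored in this sketch.
    table = next(iter(schema))
    return f"SELECT * FROM {table} LIMIT 100"

def run_sql(sql: str, db: dict) -> list[dict]:
    # Stand-in for executing the generated SQL against the database.
    return db.get("rows", [])

def analyse_data(rows: list[dict]) -> dict:
    # The only step that receives actual data rows.
    return {"row_count": len(rows)}

def create_chart(schema: dict, insights: dict) -> dict:
    # Chart config from schema + insights, again without raw rows.
    return {"type": "bar", "rows": insights["row_count"]}

def summarizer(insights: dict, chart: dict) -> str:
    return f"Analyzed {insights['row_count']} rows and rendered a {chart['type']} chart."

def answer(message: str, history: list[str], schema: dict, db: dict) -> str:
    intent = classify_intent(message, history)
    sql = text_to_sql(message, schema, intent)
    rows = run_sql(sql, db)
    insights = analyse_data(rows)
    chart = create_chart(schema, insights)
    return summarizer(insights, chart)
```

Note how only `analyse_data` touches rows, matching the data-handling rule described below.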
Key Technical Details
- Data handling: Only analyse_data receives actual data rows; text_to_sql and create_chart work with schema and context only.
- Golden Assets: Retrieved via RAG at runtime to provide few-shot examples for each tool
- Session memory: Maintains conversation context using thread IDs
- Streaming: All operations stream events in real-time
Integration Options
Option 1: Full Upsolve Application
Complete UI for data catalog, chat, Golden Assets, and evaluations.
Option 2: Embedded React Widget
Drop-in components for your application:
- tenantJWT: Authentication token
- agentId: Which agent configuration to use
- organizationId: Your organization identifier
- connection: Database connection details

Option 3: Headless API Integration
Server-Sent Events (SSE) stream for real-time tool execution.
Data Catalog Configuration
Table Selection
Define which tables the agent can access.
Selectable Columns
Mark columns for precise filtering; the system pre-scans these for distinct values. Avoid marking:
- High-cardinality columns (IDs, timestamps)
- Numeric columns with many unique values
- Free-text fields
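A sketch of how the pre-scan might work, assuming an in-memory row sample and an arbitrary cardinality cutoff (both are assumptions, not the actual implementation):

```python
# Illustrative pre-scan of selectable columns for distinct values.
# The cutoff (50) and data shapes are assumptions.

MAX_DISTINCT = 50  # beyond this, a column is too high-cardinality to pre-scan

def prescan_selectable(rows: list[dict], selectable: list[str]) -> dict[str, list]:
    catalog = {}
    for col in selectable:
        values = sorted({row[col] for row in rows if row.get(col) is not None})
        if len(values) <= MAX_DISTINCT:
            catalog[col] = values  # low-cardinality: usable for precise filtering
    return catalog
```

Columns that exceed the cutoff are simply omitted, which is why high-cardinality columns make poor selectable columns.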
Golden Assets Management
Golden Assets provide few-shot examples that improve consistency. They map questions to expected outputs.
Creating Golden Assets
Save directly from chat interactions or use the API.
How Golden Assets Work
At runtime, the agent:
- Retrieves similar Golden Assets using vector search
- Includes them as few-shot examples in tool prompts
- Generates more consistent outputs for similar questions
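The retrieval step can be illustrated with a small cosine-similarity search; a production system would use a vector database rather than this in-memory scan:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_golden_assets(query_vec, assets, k=3):
    # assets: list of (embedding, asset) pairs; the embedding model and
    # storage layer are abstracted away in this sketch.
    ranked = sorted(assets, key=lambda pair: cosine(query_vec, pair[0]), reverse=True)
    return [asset for _, asset in ranked[:k]]
```

The top-k assets are then spliced into the tool prompts as few-shot examples.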
Evaluations
Test the agent against ground truth before deployment.
Creating Test Sets
Running Evaluations
The system automatically:
- Runs each test case through the full agent pipeline
- Compares outputs against expected results
- Generates similarity scores and detailed reports
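The evaluation flow above can be sketched with an assumed test-set format and a simple token-overlap scorer standing in for the real similarity metric:

```python
def similarity(expected: str, actual: str) -> float:
    # Token-overlap (Jaccard) similarity as a stand-in for the real scorer.
    e, a = set(expected.lower().split()), set(actual.lower().split())
    return len(e & a) / len(e | a) if e | a else 1.0

def run_evaluation(test_set, agent, threshold=0.8):
    # agent: callable mapping a question to the pipeline's final output.
    report = []
    for case in test_set:
        actual = agent(case["question"])
        score = similarity(case["expected"], actual)
        report.append({"question": case["question"],
                       "score": score,
                       "passed": score >= threshold})
    return report

# Illustrative test set; field names are assumptions about the expected format.
test_set = [
    {"question": "What was total revenue?",
     "expected": "SELECT SUM(revenue) FROM sales"},
]
```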

Security & Privacy
Data Access Controls
- Table-level: Restrict which tables the agent can query
- Column-level: Exclude sensitive columns from schema and analysis
- Row-level: Apply WHERE clause filters to limit data access
Data Handling
- At rest: Golden Assets stored with logical tenant separation
- In transit: All API calls require authenticated JWT tokens
- In processing: Only analyse_data tool receives actual data rows
Performance Optimization
Query Limits
All generated SQL automatically includes LIMIT clauses to prevent large data transfers.
Caching Strategy
- Query results: Cached per session to avoid re-running identical SQL
- Golden Assets: Retrieved and cached for the conversation duration
- Schema metadata: Cached and refreshed periodically
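The per-session query-result cache can be sketched as a thin wrapper around query execution; TTLs and eviction are omitted for brevity:

```python
class SessionQueryCache:
    # Per-session cache keyed by SQL text, so identical queries within a
    # conversation are not re-executed. A sketch, not the actual cache layer.
    def __init__(self, execute):
        self._execute = execute  # callable: sql -> rows
        self._store: dict[str, list] = {}

    def run(self, sql: str) -> list:
        if sql not in self._store:
            self._store[sql] = self._execute(sql)
        return self._store[sql]
```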
Monitoring
Track key metrics:
- SQL execution time
- Token usage per conversation
- Golden Asset retrieval latency
- User satisfaction scores
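A minimal in-process recorder for these metrics, as a sketch; a real deployment would export samples to a monitoring backend:

```python
class Metrics:
    # Records named samples (e.g. SQL execution time in ms) and reports averages.
    def __init__(self):
        self.values: dict[str, list[float]] = {}

    def record(self, name: str, value: float) -> None:
        self.values.setdefault(name, []).append(value)

    def average(self, name: str) -> float:
        samples = self.values.get(name, [])
        return sum(samples) / len(samples) if samples else 0.0
```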
Troubleshooting
Common Issues
SQL Generation Fails
- Check table/column names in schema
- Verify Golden Assets have correct SQL examples
- Review any custom prompts for the text_to_sql tool
Chart Generation Fails
- Ensure data has appropriate columns for visualization
- Check chart configuration format
- Verify previous chart context if this is a follow-up
Inconsistent or Low-Quality Answers
- Add more Golden Assets for common question patterns
- Run evaluations to identify specific failure modes
- Refine selectable columns to include relevant filter options
API Reference Summary
Core Endpoints:
- POST /api/chat: Start conversational session (SSE)
- GET /api/agents: List available agents
- POST /api/golden-assets: Create/manage Golden Assets
- POST /api/evaluations/run: Execute evaluation suite
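Since POST /api/chat streams Server-Sent Events, a headless client needs an event-stream parser. A sketch of one, with the event names ("tool_start", "done") assumed for illustration; consult the actual stream for real event types:

```python
import json

def parse_sse(stream: str) -> list[dict]:
    # Parses a raw SSE body into a list of {"event": ..., "data": ...} dicts.
    # Multi-line data fields are not handled in this sketch.
    events, current = [], {}
    for line in stream.splitlines():
        if line.startswith("event:"):
            current["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            current["data"] = json.loads(line[len("data:"):].strip())
        elif line == "" and current:
            events.append(current)  # blank line terminates an event
            current = {}
    if current:
        events.append(current)
    return events
```

In practice you would feed this incrementally from the chunked HTTP response body rather than buffering the whole stream.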