# EIAS Architecture
This document provides a comprehensive overview of the EIAS (Expert Interview Agent System) architecture, explaining how the system components interact to deliver AI-powered expert interviews.
## System Overview
EIAS is an asynchronous expert elicitation platform that deploys AI agents to conduct structured, adaptive interviews with domain experts. The system uses Value of Information (VOI) calculations to optimally select questions and extract high-quality insights.
```
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ EIAS System Architecture │
├─────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌─────────────────────────────────────────────────────┐ │
│ │ Researchers │ │ Next.js Application │ │
│ │ (Auth'd via │────▶│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │
│ │ NextAuth.js) │ │ │ Dashboard │ │ Project │ │ Insights │ │ │
│ └──────────────────┘ │ │ Pages │ │ Config │ │ Review │ │ │
│ │ └────────────┘ └────────────┘ └────────────┘ │ │
│ ┌──────────────────┐ │ │ │
│ │ Domain Experts │ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ (Token-based │────▶│ │ Interview Interface │ │ │
│ │ access) │ │ │ Real-time chat with AI interview agent │ │ │
│ └──────────────────┘ │ └─────────────────────────────────────────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ API Routes │ │ │
│ │ │ /api/projects /api/agents /api/interview │ │ │
│ │ │ /api/invitations /api/synthesis /api/voi │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ Service Layer │ │ │
│ │ │ ┌─────────────┐ ┌────────────┐ ┌──────────┐ │ │ │
│ │ │ │ Claude │ │ VOI │ │ Synthesis│ │ │ │
│ │ │ │ Service │ │ Service │ │ Service │ │ │ │
│ │ │ └─────────────┘ └────────────┘ └──────────┘ │ │ │
│ │ │ ┌─────────────┐ ┌────────────┐ ┌──────────┐ │ │ │
│ │ │ │ Calibration │ │ Interview │ │ Session │ │ │ │
│ │ │ │ Service │ │ Agent │ │ Service │ │ │ │
│ │ │ └─────────────┘ └────────────┘ └──────────┘ │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ LangGraph State Machine │ │ │
│ │ │ ┌─────────┐ ┌───────────┐ ┌──────────────┐ │ │ │
│ │ │ │Greeting │─▶│ Interview │─▶│ Wrap-up │ │ │ │
│ │ │ │ Node │ │ Node │ │ Node │ │ │ │
│ │ │ └─────────┘ └───────────┘ └──────────────┘ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ └────────────┼──────────────┘ │ │ │
│ │ │ ▼ │ │ │
│ │ │ ┌───────────────┐ │ │ │
│ │ │ │ Synthesize │ │ │ │
│ │ │ │ Node │ │ │ │
│ │ │ └───────────────┘ │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────────┘ │
│ │ │
│ ┌───────────────────────────────────────────────┼───────────────────────────────┐ │
│ │ Data Layer │ │ │
│ │ ┌─────────────┐ ┌────────────────────┐ ┌───────────────┐ │ │
│ │ │ Prisma ORM │ │ LangGraph │ │ Claude API │ │ │
│ │ │ │ │ Checkpointer │ │ (Anthropic) │ │ │
│ │ └──────┬──────┘ └─────────┬──────────┘ └───────────────┘ │ │
│ │ │ │ │ │
│ │ └───────────┬───────┘ │ │
│ │ ▼ │ │
│ │ ┌─────────────────┐ │ │
│ │ │ PostgreSQL │ │ │
│ │ │ Database │ │ │
│ │ └─────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────────────┘
```
## Core Components

### 1. Next.js Application Layer
The application uses Next.js 16 with the App Router for both frontend rendering and API routes.
Key directories:
- `src/app/(dashboard)/` - Authenticated researcher pages
- `src/app/interview/[token]/` - Public expert interview interface
- `src/app/api/` - REST API endpoints
### 2. Service Layer (`src/lib/services/`)
Services encapsulate business logic and database operations:
| Service | Purpose |
|---|---|
| `claude.service.ts` | Claude API interactions for conversations |
| `voi.service.ts` | Value of Information calculations |
| `calibration.service.ts` | Expert calibration with golden questions |
| `synthesis.service.ts` | Insight extraction from interviews |
| `interview-agent.service.ts` | Interview agent configuration |
| `session.service.ts` | Interview session management |
| `invitation.service.ts` | Secure invitation link management |
### 3. LangGraph State Machine (`src/lib/langgraph/`)
The interview orchestration uses LangGraph for state management with PostgreSQL checkpointing.
Graph Structure:
```
START
  │
  ▼
[Entry Router] ─┬─▶ GREETING ──▶ END (wait for input)
                │
                ├─▶ SYNTHESIZE ──▶ INTERVIEW ──▶ END (wait for input)
                │                      │
                │                      └──▶ WRAP_UP ──▶ END
                │
                └─▶ FINAL ──▶ END (complete)
```
Key files:
- `graph.ts` - StateGraph definition and compilation
- `state.ts` - Interview state schema and annotations
- `checkpointer.ts` - PostgreSQL persistence configuration
- `nodes/` - Individual graph nodes (greeting, interview, synthesize, wrap-up)
- `edges/routing.ts` - Conditional edge logic
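The conditional edge logic in `edges/routing.ts` can be sketched as a pure function over the interview state. This is a simplified illustration, not the actual implementation: the field names (`messageCount`, `expertEnded`, `voiRemaining`) and the threshold value are assumptions.

```typescript
// Simplified sketch of the entry-router decision; field names and the
// threshold are illustrative assumptions, not the actual state schema.
type InterviewState = {
  messageCount: number;  // messages exchanged so far
  expertEnded: boolean;  // expert explicitly ended the session
  voiRemaining: number;  // expected value of the next best question
};

const VOI_THRESHOLD = 0.05; // assumed stopping threshold

// Returns the name of the next node to execute.
function routeEntry(state: InterviewState): "greeting" | "synthesize" | "final" {
  if (state.messageCount === 0) return "greeting"; // first contact
  if (state.expertEnded || state.voiRemaining < VOI_THRESHOLD) {
    return "final"; // diminishing returns or explicit end: complete
  }
  return "synthesize"; // extract findings, then continue interviewing
}
```

Keeping the router a pure function of state is what makes the pause/resume model work: any resumed session can be re-routed from its checkpointed state alone.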
### 4. VOI System (`src/lib/voi/`)
The Value of Information system optimizes question selection:
```
VOI(q, e | K, R) = E[U(K' | R)] - U(K | R)
```

Where:

- q = candidate question
- e = expert profile
- K = current knowledge state
- K' = knowledge state after incorporating the expert's answer to q
- R = research context
- U = utility function
Utility Components:

- Coverage (35%): Percentage of sub-questions addressed
- Confidence (30%): Weighted average confidence in answers
- Coherence (20%): 1 - contradiction rate
- Actionability (15%): Ability to derive conclusions
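The weighted utility above translates directly into code. A minimal sketch, assuming each component is already normalized to [0, 1] (the type and function names here are illustrative, not the actual VOI module API):

```typescript
// Sketch of the utility function U(K | R) as the weighted sum described
// above. Each component is assumed to be normalized to [0, 1].
type KnowledgeState = {
  coverage: number;      // fraction of sub-questions addressed
  confidence: number;    // weighted average answer confidence
  coherence: number;     // 1 - contradiction rate
  actionability: number; // ability to derive conclusions
};

function utility(k: KnowledgeState): number {
  return (
    0.35 * k.coverage +
    0.30 * k.confidence +
    0.20 * k.coherence +
    0.15 * k.actionability
  );
}

// VOI of a question: expected utility after the answer minus current utility.
function voi(current: KnowledgeState, expectedPosterior: KnowledgeState): number {
  return utility(expectedPosterior) - utility(current);
}
```

Because the weights sum to 1, utility stays in [0, 1] and VOI values are directly comparable across candidate questions.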
See VOI System Documentation for details.
## Data Flow

### Interview Session Flow
```
1. Researcher creates Project
   └─▶ Defines uncertainty areas and goals
2. Researcher creates Interview Agent
   └─▶ System generates optimized system prompt
3. Researcher creates Invitation
   └─▶ Generates secure token link
4. Expert accesses interview via token
   └─▶ LangGraph initializes session state
5. Interview loop:
   a. LangGraph generates question (VOI-optimized)
   b. Expert responds
   c. Synthesize node extracts findings
   d. Knowledge state updates
   e. VOI recalculates for next question
   f. Repeat until VOI threshold or expert ends
6. Interview completes
   └─▶ Final synthesis extracts insights
7. Researcher reviews insights
   └─▶ Approve/defer/discard for integration
```
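Steps 5a–5f amount to a greedy VOI loop. The sketch below shows the stopping behavior with static, precomputed VOI scores; in the real loop every remaining candidate's VOI is recalculated after each expert answer (step 5e). The `Candidate` shape and threshold are illustrative.

```typescript
// Greedy sketch of step 5: take the highest-VOI question until the best
// remaining candidate falls below the stopping threshold. VOI scores are
// static here for illustration; the real system recalculates them after
// every answer.
type Candidate = { text: string; voi: number };

function selectQuestions(candidates: Candidate[], threshold: number): string[] {
  const asked: string[] = [];
  // Sort a copy by descending VOI so we always take the best next question.
  const pool = [...candidates].sort((a, b) => b.voi - a.voi);
  for (const q of pool) {
    if (q.voi < threshold) break; // diminishing returns: stop interviewing
    asked.push(q.text);           // in the real loop, the expert answers here
  }
  return asked;
}
```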
### State Persistence
Interview state persists across sessions via LangGraph's PostgreSQL checkpointer:
```typescript
// State is keyed by thread_id (session ID)
const config = {
  configurable: {
    thread_id: sessionId,
  },
};

// State automatically saves after each node execution
const result = await graph.invoke(update, config);
```
This enables:

- Experts can pause and resume at any time
- Full conversation history preservation
- State recovery after crashes
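Conceptually, a checkpointer is a store of per-thread state snapshots: save after every node, load the latest on resume. A toy in-memory sketch of that contract (the real system uses LangGraph's PostgreSQL checkpointer, not a `Map`):

```typescript
// Toy illustration of the checkpointer contract: state keyed by thread_id,
// appended after each node, loadable on resume. The real implementation is
// LangGraph's PostgreSQL checkpointer.
class InMemoryCheckpointer<S> {
  private store = new Map<string, S[]>();

  // Append a checkpoint after a node finishes executing.
  save(threadId: string, state: S): void {
    const history = this.store.get(threadId) ?? [];
    history.push(structuredClone(state)); // snapshot, not a live reference
    this.store.set(threadId, history);
  }

  // Resume: return the most recent checkpoint, or undefined for new sessions.
  latest(threadId: string): S | undefined {
    return this.store.get(threadId)?.at(-1);
  }
}
```

Keeping the full history (rather than only the latest snapshot) is what enables crash recovery and auditing of how the interview state evolved.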
## Authentication Model
EIAS uses a dual authentication approach:
- Researchers: NextAuth.js with Prisma adapter
    - Email/password or OAuth providers
    - Session-based authentication
    - Full dashboard access
- Experts: Token-based access
    - Secure random tokens in invitation URLs
    - No account required
    - Scoped to a single interview session
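Token-based expert access can be sketched with Node's `crypto` module. The token length, TTL handling, and field names below are assumptions for illustration, not the actual invitation schema:

```typescript
import { randomBytes } from "node:crypto";

// Sketch of invitation-token handling; field names are illustrative.
type Invitation = { token: string; expiresAt: Date };

// Generate a URL-safe, unguessable token (32 random bytes).
function createInvitation(ttlMs: number, now: Date = new Date()): Invitation {
  return {
    token: randomBytes(32).toString("base64url"),
    expiresAt: new Date(now.getTime() + ttlMs),
  };
}

// An invitation is valid only before its expiry.
function isValid(inv: Invitation, now: Date = new Date()): boolean {
  return now < inv.expiresAt;
}
```

Because the token is the only credential, it must be generated with a cryptographic RNG and checked against expiry on every request.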
## Key Design Decisions

### Why LangGraph?
LangGraph provides:

- State persistence: Built-in checkpointing to PostgreSQL
- Directed graph execution: Clear control flow for interview phases
- Conditional routing: Dynamic path selection based on state
- Pause/resume: Natural support for async expert interactions
### Why VOI-Based Question Selection?
Traditional interviews follow fixed scripts. VOI optimization:

- Adapts to expert responses in real time
- Maximizes information gain per question
- Knows when to stop (diminishing returns)
- Balances exploration vs. exploitation
### Why Separate Findings from Insights?
- Findings: Raw extracted information with source attribution
- Insights: Reviewed and approved knowledge
This separation enables:

- Traceability to source conversations
- Quality control before integration
- Contradiction detection across experts
## Performance Considerations

### VOI Calculation Speed
VOI uses a hybrid approach:

- Fast path: Closed-form calculation (<100ms)
- LLM path: Question generation when needed (~2s)
### Database Indexing
Critical indexes for performance:
- `Interview.token` - Token lookup
- `Message.sessionId` + `timestamp` - Conversation loading
- `SubQuestion.projectId` + `status` - VOI queries
### Checkpointing Strategy
LangGraph checkpoints after each node execution, which can be write-intensive. Considerations:

- Use connection pooling for PostgreSQL
- Consider Redis for high-throughput scenarios
- Prune old checkpoints periodically
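Pruning can be as simple as keeping the newest N checkpoints per thread. A sketch over an in-memory list (the real rows live in LangGraph's checkpoint tables, and the shape below is illustrative):

```typescript
// Sketch of checkpoint pruning: keep only the newest N checkpoints per
// thread. The Checkpoint shape is illustrative; in practice this would be
// a query against LangGraph's checkpoint tables.
type Checkpoint = { threadId: string; createdAt: number };

function prune(checkpoints: Checkpoint[], keepPerThread: number): Checkpoint[] {
  // Group checkpoints by thread.
  const byThread = new Map<string, Checkpoint[]>();
  for (const cp of checkpoints) {
    const list = byThread.get(cp.threadId) ?? [];
    list.push(cp);
    byThread.set(cp.threadId, list);
  }
  // Within each thread, keep only the most recent entries.
  const kept: Checkpoint[] = [];
  for (const list of byThread.values()) {
    list.sort((a, b) => b.createdAt - a.createdAt); // newest first
    kept.push(...list.slice(0, keepPerThread));
  }
  return kept;
}
```

Note the trade-off: pruning reduces storage and write amplification, but discards the older history that full crash-recovery and auditing rely on.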
## Security Model
- API Routes: Protected via NextAuth.js session validation
- Interview Access: Token validation with expiry checking
- Data Isolation: Project-level scoping for all queries
- Input Sanitization: Prisma parameterized queries prevent SQL injection
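The first three checks reduce to a small authorization guard. A framework-free sketch (the session and project shapes are illustrative; real routes validate NextAuth.js sessions):

```typescript
// Framework-free sketch of the access checks described above: session
// validation plus project-level data isolation. Shapes are illustrative.
type Session = { userId: string } | null;
type Project = { id: string; ownerId: string };

// Returns an HTTP status: 401 when unauthenticated, 403 when the project
// belongs to another researcher, 200 when the caller owns it.
function authorize(session: Session, project: Project): number {
  if (session === null) return 401;
  if (project.ownerId !== session.userId) return 403;
  return 200;
}
```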
## Extending the System

### Adding a New Graph Node
1. Create the node function in `src/lib/langgraph/nodes/`
2. Add the node to the graph in `graph.ts`
3. Update routing edges if needed
4. Add the node to the entry router conditions
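A node is a function from the current state to a partial state update. A minimal sketch of the shape a new node takes (the state fields and node purpose here are hypothetical examples):

```typescript
// Minimal sketch of a LangGraph-style node: it receives the current state
// and returns only the fields it changes. The state fields and the
// "clarification" node itself are hypothetical examples.
type InterviewState = { messages: string[]; phase: string };

function clarificationNode(state: InterviewState): Partial<InterviewState> {
  return {
    messages: [...state.messages, "Could you clarify your last point?"],
    phase: "clarify",
  };
}
```

Returning a partial update (rather than mutating state) is what lets the checkpointer persist a clean snapshot after each node.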
### Adding a New Service
1. Create the service file in `src/lib/services/`
2. Define TypeScript interfaces for inputs/outputs
3. Use Prisma for database operations
4. Export a service object with methods
### Adding API Endpoints
1. Create a route file at `src/app/api/[resource]/route.ts`
2. Implement HTTP method handlers (GET, POST, etc.)
3. Add authentication checks
4. Use services for business logic
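The steps above can be sketched with only the standard `Request`/`Response` types. The header-based auth check is a stand-in for real NextAuth.js session validation, and the empty result stands in for a service call:

```typescript
// Sketch of an App Router handler as in src/app/api/[resource]/route.ts.
// The authorization-header check is an illustrative stand-in for real
// NextAuth.js session validation; the empty list stands in for a service call.
export function GET(req: Request): Response {
  if (!req.headers.get("authorization")) {
    return new Response(JSON.stringify({ error: "Unauthorized" }), {
      status: 401,
      headers: { "content-type": "application/json" },
    });
  }
  // Business logic would be delegated to a service here.
  return new Response(JSON.stringify({ items: [] }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```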
## Related Documentation
- Getting Started - Setup and configuration
- VOI System - Value of Information details
- Database Schema - Data model documentation
- API Reference - Full API documentation
- Testing Guide - Testing strategy and practices