API Service: Standard Operating Procedures
This document provides detailed, step-by-step checklists for routine operational and maintenance tasks related to the API service. These procedures focus on the three core processing patterns: ingestion, search, and chat completion.
1. Request Flow Monitoring & Debugging
Understanding the request flow is essential for effective troubleshooting. The API service implements three distinct processing patterns.
1.1. Ingestion Request Flow Debugging
Pattern: Synchronous validation → Asynchronous processing
```mermaid
flowchart LR
    A[POST /v1/ingest] --> B[Middleware]
    B --> C[Validation]
    C --> D[Idempotency Check]
    D --> E[Queue Job]
    E --> F[202 Response]
    E --> G[Background Processing]
```
Debug Steps:

1. Check request validation:
2. Verify the idempotency cache:
3. Monitor queue jobs:
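The commands for these three checks did not survive in this section; the following is a sketch that reuses the container layout and endpoints shown elsewhere in this document. The `/v1/ingest/articles` path is taken from the pipeline test in section 2.2, and the Redis key pattern is an assumption about this app's cache naming:

```shell
# 1. Check request validation: an empty payload should be rejected with a 4xx
curl -i -X POST http://localhost/v1/ingest/articles \
  -H "Content-Type: application/json" \
  -d '{}'

# 2. Verify the idempotency cache: scan Redis for cached keys
#    (the "idempotency" pattern is an assumption; adjust to this app's prefix)
docker compose exec redis redis-cli --scan --pattern "*idempotency*"

# 3. Monitor queue jobs via Horizon
docker compose exec api php artisan horizon:status
```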
1.2. Search Request Flow Debugging
Pattern: Request → Multi-service coordination → Response merge
```mermaid
flowchart LR
    A[GET /v1/search] --> B[Mode Detection]
    B --> C[BM25 Query]
    B --> D[Vector Query]
    C --> E[Result Fusion]
    D --> E
    E --> F[Response]
```
Debug Steps:

1. Test each search mode individually:

    ```bash
    curl "http://localhost/v1/search?query=test&mode=bm25"
    curl "http://localhost/v1/search?query=test&mode=vector"
    ```

2. Check AI-Box connectivity:
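The connectivity check can reuse the AI-Box health endpoint from the dependency health matrix in section 2.1. Note that the `ai-box` hostname resolves inside the compose network, so running it from the `api` container (which is assumed to have `curl` installed) is the safest form:

```shell
# Check AI-Box connectivity from inside the compose network
docker compose exec api curl -s http://ai-box:8000/health
```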
1.3. Chat Stream Flow Debugging
Pattern: Request → Stream initiation → Server-sent events
```mermaid
flowchart LR
    A[POST /chat/completions] --> B[Request Validation]
    B --> C[AI Service Call]
    C --> D[Stream Response]
    D --> E[SSE Format]
```
Debug Steps:

1. Test the streaming endpoint:

    ```bash
    curl -N -H "Content-Type: application/json" \
      -d '{"messages":[{"role":"user","content":"test"}]}' \
      http://localhost/chat/completions
    ```

2. Monitor stream events: look for `data: {"event":"delta"}` patterns in the response.
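To isolate the SSE event lines while monitoring, the same streaming request can be piped through `grep`. This is a convenience sketch, not part of the original procedure:

```shell
# Stream the completion and show only SSE data lines as they arrive
curl -sN -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"test"}]}' \
  http://localhost/chat/completions | grep --line-buffered '^data:'
```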
2. Service Integration Health Checks

Objective: Monitor the health of service integrations and data flow patterns.

2.1. Dependency Health Matrix

| Service | Health Check | Expected Response | Debug Command |
|---|---|---|---|
| PostgreSQL | Connection test | Database connected | `docker compose exec api php artisan tinker` |
| Redis | Cache operation | Key set/get works | `docker compose exec redis redis-cli ping` |
| OpenSearch | Search query | Results returned | `curl http://localhost:9200/_cluster/health` |
| AI-Box | API call | Service responds | `curl http://ai-box:8000/health` |
2.2. Request Pipeline Validation

Ingestion Pipeline Test:

```bash
# Test complete ingestion flow
curl -X POST http://localhost/v1/ingest/articles \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: test-$(date +%s)" \
  -d '{"articles":[{"source_id":"test","external_id":"test-1","url":"http://test.com","title":"Test Article","content":"Test content","lang":"en","published_at":"2025-01-15T10:30:00Z"}]}'
```
Search Pipeline Test:
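The search test was not included above; a minimal version follows, using the endpoint and modes shown in section 1.2. That a request without `mode` falls back to a combined default is an assumption:

```shell
# Test search flow across modes
curl "http://localhost/v1/search?query=test&mode=bm25"
curl "http://localhost/v1/search?query=test&mode=vector"
curl "http://localhost/v1/search?query=test"   # default mode (assumed hybrid)
```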
Chat Pipeline Test:
```bash
# Test streaming chat flow
curl -N -X POST http://localhost/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'
```
3. Queue & Background Processing Management

These procedures manage the Laravel Horizon queue system that handles asynchronous processing.

3.1. Queue Architecture Overview

```mermaid
flowchart TD
    A[API Request] --> B[Immediate Response]
    A --> C[Queue Job]
    C --> D[Horizon Worker]
    D --> E[AI-Box Processing]
    D --> F[Search Indexing]
    D --> G[Data Enrichment]
```
3.2. Checking Queue Status

Objective: To get a real-time snapshot of the queue workers, recent jobs, and any failed jobs.

1. Execute the `horizon:status` command:
2. Analyze the output:
    - Healthy: `Horizon is running.` and the `processes` count is greater than zero.
    - Unhealthy: `Horizon is inactive.`
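Step 1, using the same container invocation as elsewhere in this document:

```shell
# Snapshot of Horizon workers and supervisors
docker compose exec api php artisan horizon:status
```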
3.3. Viewing Failed Jobs

Objective: To list all jobs that have failed and are currently stored in the `failed_jobs` table.

1. Execute the `horizon:failed` command:
2. Get specific job details: to see the full exception and stack trace for a single failed job, use its ID from the list.
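A sketch of both steps. `horizon:failed` is the command named above; if it is not registered in this codebase, stock Laravel provides `queue:failed` for the same listing. The Tinker query against `failed_jobs` is an assumption about where the exception text is stored:

```shell
# 1. List failed jobs
docker compose exec api php artisan horizon:failed

# 2. Inspect one job's exception and stack trace
#    (replace <uuid> with an ID from the listing)
docker compose exec api php artisan tinker --execute \
  "dump(DB::table('failed_jobs')->where('uuid','<uuid>')->value('exception'));"
```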
3.4. Clearing Failed Jobs
Objective: To remove failed jobs from the queue after the root cause has been resolved.
Do Not Clear Jobs Blindly
Only perform this action after the underlying issue causing the job to fail has been fixed. Otherwise, the jobs will simply fail again upon retry.
- To forget a single failed job:
- To forget ALL failed jobs:
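Stock Laravel ships commands for both actions; whether this codebase wraps them differently is not shown above, so treat these as a sketch:

```shell
# Forget a single failed job by its ID/UUID from the failed-jobs listing
docker compose exec api php artisan queue:forget <uuid>

# Forget ALL failed jobs
docker compose exec api php artisan queue:flush
```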
4. Cache Management & Performance

Objective: Manage caching layers for optimal performance across the three core patterns.

4.1. Cache Layer Architecture

```mermaid
flowchart TD
    A[Request] --> B[Application Cache]
    A --> C[Redis Cache]
    A --> D[OpenSearch Cache]
    B --> E[Config/Route Cache]
    C --> F[Idempotency Cache]
    C --> G[Session Cache]
    D --> H[Query Results]
```
4.2. Application Cache Operations

Info

The standard deployment process in the main API Playbook includes these steps, but they can be run manually for debugging.

- Clear Application Cache:
- Clear and Cache Configuration: this removes the old config cache and creates a new one. Required after any `.env` change.
- Clear and Cache Routes: this removes the old route cache and creates a new one. Required after any change to `routes/*.php` files.
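The corresponding artisan commands (standard Laravel; the container invocation matches the rest of this document):

```shell
# Clear application cache
docker compose exec api php artisan cache:clear

# Clear and rebuild the config cache (required after any .env change)
docker compose exec api php artisan config:clear
docker compose exec api php artisan config:cache

# Clear and rebuild the route cache (required after routes/*.php changes)
docker compose exec api php artisan route:clear
docker compose exec api php artisan route:cache
```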
4.3. Idempotency Cache Management

Check idempotency keys:

Clear specific idempotency key:
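Neither command survives above; a sketch against Redis follows. The `idempotency` key pattern, and any Laravel cache prefix in front of it, are assumptions about this app's naming:

```shell
# List idempotency keys
docker compose exec redis redis-cli --scan --pattern "*idempotency*"

# Clear a specific idempotency key (use a full key name from the scan above)
docker compose exec redis redis-cli del "<full-key-name>"
```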
5. Data Flow Debugging

Objective: Debug data flow through the service architecture layers.

5.1. Request-Response Pattern Analysis

Ingestion Data Flow:

```bash
# Check article processing status
docker compose exec api php artisan tinker

# Inside the Tinker shell, monitor recent articles:
App\Models\Article::orderBy('created_at', 'desc')->limit(5)->get();
```
Search Data Flow:

```bash
# Test OpenSearch connectivity
curl http://localhost:9200/_cat/indices

# Check indexed documents
curl "http://localhost:9200/articles/_search?size=1"
```
Chat Data Flow:

```bash
# Test AI-Box connectivity
curl http://ai-box:8000/health

# Check streaming response format
curl -N http://localhost/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"test"}]}'
```
5.2. Using the Tinker Shell for Data Analysis

Tinker provides an interactive REPL that bootstraps the entire Laravel application, allowing you to use your Eloquent models directly.

1. Open the Tinker shell:
2. Execute Eloquent queries: you can now run any PHP code or Eloquent query.
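Step 1's command, plus a non-interactive variant (`--execute` is standard Tinker; the `Article` model is the one used in section 5.1):

```shell
# Open an interactive Tinker shell
docker compose exec api php artisan tinker

# Or run a one-off Eloquent query without entering the shell
docker compose exec api php artisan tinker --execute \
  "dump(App\Models\Article::count());"
```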
5.3. Direct Database Analysis

For raw SQL queries, you can connect directly to the database using the `psql` client.

1. Open a shell in the API container:
2. Connect to the database: the password will be requested interactively; it is stored in the `.env` file.
3. Execute SQL queries:
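A sketch of all three steps. The host, user, database, and table names are assumptions; take the real connection values from the `DB_*` entries in `.env`:

```shell
# 1. Open a shell in the API container
docker compose exec api bash

# 2. Connect to the database (placeholder values; see DB_* in .env)
psql -h "$DB_HOST" -U "$DB_USERNAME" -d "$DB_DATABASE"

# 3. Example query: recent articles (run inside psql, or one-shot with -c)
psql -h "$DB_HOST" -U "$DB_USERNAME" -d "$DB_DATABASE" \
  -c "SELECT id, title, created_at FROM articles ORDER BY created_at DESC LIMIT 5;"
```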