## Core Components

### Engine

The Engine is the central component that:
- Manages task execution
- Coordinates resources
- Handles caching
- Controls parallelism
```python
from blastai import Engine

engine = await Engine.create(
    settings=settings,
    constraints=constraints
)
```
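For context, here is a minimal end-to-end usage sketch. The `run()` and `stop()` method names are assumptions based on typical engine APIs rather than confirmed signatures; check `engine.py` for the actual entry points.

```python
import asyncio

from blastai import Engine

async def main():
    # Assumption: create() works with default settings and constraints.
    engine = await Engine.create()
    try:
        # Hypothetical run() coroutine that executes a task description.
        result = await engine.run("Search Python docs for asyncio.gather")
        print(result)
    finally:
        # Hypothetical stop() hook that releases browsers and other resources.
        await engine.stop()

asyncio.run(main())
```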
### Scheduler

The Scheduler manages task execution:
- Tracks task states
- Handles task dependencies
- Manages execution order
- Coordinates parallel tasks
Tasks are prioritized in the following order (highest priority first):
1. Tasks with cached results
2. Tasks with cached plans
3. Subtasks of running tasks
4. Tasks with paused executors
5. Remaining tasks (FIFO)
Tasks move through the following states (see the sketch below):
- Scheduled: Task is queued for execution
- Running: Task is currently executing
- Completed: Task has finished execution
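The states and priority buckets above can be summarized in a small sketch. The enum values, the `task` attributes, and the `priority_bucket` helper are illustrative names only, not BLAST's internals (those live in `scheduler.py`).

```python
from enum import Enum, auto

class TaskState(Enum):
    SCHEDULED = auto()   # queued, waiting to run
    RUNNING = auto()     # currently executing
    COMPLETED = auto()   # finished, result available

def priority_bucket(task) -> int:
    """Lower bucket = scheduled sooner; ties run FIFO. Illustrative only."""
    if task.has_cached_result:
        return 0
    if task.has_cached_plan:
        return 1
    if task.is_subtask_of_running_task:
        return 2
    if task.has_paused_executor:
        return 3
    return 4  # remaining tasks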
### Resource Manager

Handles system resources (see the sketch below):
- Browser instances
- Memory usage
- Cost tracking
- Resource cleanup
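As a rough illustration of the kind of bookkeeping this involves, the sketch below caps concurrent browser instances with a semaphore. It is not BLAST's actual implementation; the class name and stubbed browser handle are invented for the example.

```python
import asyncio

class BrowserPool:
    """Illustrative pool that enforces a max_concurrent_browsers limit."""

    def __init__(self, max_concurrent_browsers: int = 4):
        self._slots = asyncio.Semaphore(max_concurrent_browsers)

    async def acquire(self):
        # Wait for a free slot, then launch a browser (stubbed out here).
        await self._slots.acquire()
        return object()  # stand-in for a real browser handle

    def release(self, browser) -> None:
        # Close the browser (stubbed) and free the slot for waiting tasks.
        self._slots.release()
```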
### Cache Manager

Manages two caches, each kept both in memory and on disk (see the sketch below):
- Results cache (task outputs)
- Plans cache (execution plans generated by LLM)
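The memory-plus-disk layout can be pictured with the sketch below. This is not the actual `cache.py` implementation; the class and file-naming scheme are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path

class TwoTierCache:
    """Illustrative cache that checks memory first, then falls back to disk."""

    def __init__(self, cache_dir: str):
        self._memory: dict[str, object] = {}
        self._dir = Path(cache_dir)
        self._dir.mkdir(parents=True, exist_ok=True)

    def _path(self, key: str) -> Path:
        return self._dir / (hashlib.sha256(key.encode()).hexdigest() + ".json")

    def get(self, key: str):
        if key in self._memory:              # fast path: in-memory hit
            return self._memory[key]
        path = self._path(key)
        if path.exists():                    # slow path: persisted entry
            value = json.loads(path.read_text())
            self._memory[key] = value        # promote back into memory
            return value
        return None

    def put(self, key: str, value) -> None:
        self._memory[key] = value
        self._path(key).write_text(json.dumps(value))
```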
### Planner

Generates a natural-language execution plan from the user-provided task description.
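Conceptually, a planner of this kind boils down to a prompt plus an LLM call. The sketch below uses a placeholder `llm` callable and an invented prompt rather than BLAST's configured model client; the real logic lives in `planner.py`.

```python
PLAN_PROMPT = (
    "Break the following browser task into a short, numbered list of steps. "
    "Mark any steps that can run in parallel.\n\nTask: {task}"
)

async def make_plan(llm, task: str) -> str:
    """Return a natural-language plan for `task` using the given LLM callable."""
    return await llm(PLAN_PROMPT.format(task=task))
```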
## Data Flow

1. **Task Creation**

   ```python
   task_id = scheduler.schedule_task(
       description="Search Python docs",
       cache_control=""
   )
   ```

2. **Cache Check**
   - Check results cache
   - Check plans cache
   - Return cached result if available

3. **Resource Allocation**
   - Wait for prerequisites
   - Allocate browser if needed
   - Assign executor

4. **Execution**
   - Run task via executor
   - Stream progress updates
   - Cache results

5. **Cleanup**
   - Release resources
   - Update cache
   - Handle errors
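Put together, the flow has roughly the shape below. Apart from `scheduler.schedule_task` (shown in step 1), every name here is a hypothetical stand-in for the real objects wired up inside `engine.py` and `scheduler.py`.

```python
async def handle_task(scheduler, results_cache, browser_pool, executors,
                      description: str) -> str:
    # 1. Task creation
    task_id = scheduler.schedule_task(description=description, cache_control="")

    # 2. Cache check
    cached = results_cache.get(description)
    if cached is not None:
        return cached

    # 3. Resource allocation
    browser = await browser_pool.acquire()
    executor = executors.assign(task_id, browser)

    try:
        # 4. Execution (progress streaming omitted), then cache the result
        result = await executor.run(task_id)
        results_cache.put(description, result)
        return result
    finally:
        # 5. Cleanup: always give the browser slot back
        browser_pool.release(browser)
```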
## Code Structure

```text
blastai/
├── __init__.py   # Package initialization
├── engine.py     # Main engine implementation
├── scheduler.py  # Task scheduling
├── cache.py      # Caching system
├── config.py     # Configuration
├── planner.py    # Task planning
├── executor.py   # Task execution
├── tools.py      # Tools for parallelism
└── utils.py      # Utilities
```
## Configuration

Settings and constraints control behavior:

```yaml
settings:
  persist_cache: true
  logs_dir: "blast-logs/"          # Log to files (null for terminal-only)
  blastai_log_level: "info"        # BLAST engine log level
  browser_use_log_level: "info"    # Browser operations log level

constraints:
  # Resource limits
  max_memory: "4GB"
  max_concurrent_browsers: 4

  # Model configuration
  llm_model: "openai:gpt-4.1"            # Main model for complex tasks
  llm_model_mini: "openai:gpt-4.1-mini"  # Model for simpler tasks

  # Parallelism settings
  allow_parallelism:
    task: true
    data: true
```
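If you keep this configuration in a file (assumed here to be `config.yaml`), one way to feed it to the engine is to load it yourself and pass the two sections to `Engine.create`. Whether `create` accepts plain dicts or the typed objects defined in `config.py` is not covered here, so treat this as a sketch.

```python
import asyncio

import yaml  # pip install pyyaml

from blastai import Engine

async def main():
    with open("config.yaml") as f:          # hypothetical file name
        config = yaml.safe_load(f)

    engine = await Engine.create(
        settings=config["settings"],        # assumed to accept dict-like config
        constraints=config["constraints"],
    )

asyncio.run(main())
```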
Environment variables for API keys:

```bash
# OpenAI configuration
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://your-endpoint.com  # Optional

# Google Gemini configuration
GOOGLE_API_KEY=AIza...  # From aistudio.google.com
```
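During development it is convenient to keep these in a `.env` file. Whether BLAST reads `.env` automatically is not covered here, so this snippet loads it explicitly before the engine starts.

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Load variables from a local .env file (if present) before creating the engine.
load_dotenv()

if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY before starting the engine.")
```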
## Error Handling

BLAST handles various error types:
- Browser errors
- Resource limits
- Task failures
- Cache issues
Error recovery follows these steps (see the sketch below):
- Log error details
- Clean up resources
- Retry if possible
- Report to user
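The recovery loop can be pictured as follows. The `executor.run` and `executor.cleanup` calls are hypothetical hooks used for illustration; the real handling is spread across `engine.py` and `executor.py`.

```python
import logging

logger = logging.getLogger("blast-example")

async def run_with_recovery(executor, task_id: str, retries: int = 2):
    """Illustrative recovery loop: log, clean up, retry, then report."""
    for attempt in range(retries + 1):
        try:
            return await executor.run(task_id)
        except Exception as exc:  # browser errors, resource limits, task failures
            logger.error("Task %s failed (attempt %d): %s", task_id, attempt + 1, exc)
            await executor.cleanup()  # hypothetical cleanup hook
            if attempt == retries:
                raise  # report the failure to the user
```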
## Extending BLAST

You can extend BLAST by:
- Adding tools for further optimization
- Creating custom executors (a hypothetical skeleton is sketched below)
- Tuning the scheduling policy
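No stable extension API is documented here, so the skeleton below is only a hypothetical shape for a custom executor; match it against the real interface in `executor.py` before relying on it.

```python
class MyExecutor:
    """Hypothetical custom executor; mirror the interface in executor.py."""

    async def run(self, task) -> str:
        # Execute the task your own way (e.g. call an internal API instead
        # of driving a browser) and return the result text.
        ...

    async def cleanup(self) -> None:
        # Release anything the executor holds (connections, processes, ...).
        ...
```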
## Next Steps