Create an Engine

from blastai import Engine, Settings, Constraints

# Create with default settings
engine = await Engine.create()

# Create with custom settings
settings = Settings(
    persist_cache=True,  # Persist cache between sessions
    blastai_log_level="INFO"  # Set logging level
)

constraints = Constraints(
    max_memory=8,  # Max memory in GB
    max_concurrent_browsers=4,  # Max concurrent browser instances
    allow_parallelism=True  # Enable parallel task execution
)

engine = await Engine.create(
    settings=settings,
    constraints=constraints
)

Run Tasks

Single Task

result = await engine.run(
    "Search for Python documentation",
    stream=False  # Get final result only
)
print(result.final_result())

Streaming Task

# AgentReasoning and AgentHistoryListResponse are blastai response types;
# import them alongside Engine (the exact module path may differ by version)
async for update in engine.run(
    "Search for Python documentation",
    stream=True  # Get real-time updates
):
    if isinstance(update, AgentReasoning):
        print(update.content)  # Print thoughts/actions
    elif isinstance(update, AgentHistoryListResponse):
        print("Final result:", update.final_result())

Sequence of Tasks

results = await engine.run([
    "Go to python.org",
    "Click on Documentation",
    "Search for 'asyncio'"
])

Caching

By default, BLAST caches results and LLM-generated steps to improve performance. You can control caching behavior using the cache_control option:

# Skip cache lookup and storage
result = await engine.run(
    "Search for Python documentation",
    cache_control="no-cache,no-store"
)

# Skip plan cache but store result
result = await engine.run(
    "Search for Python documentation",
    cache_control="no-cache-plan"
)
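The cache_control value is a comma-separated string of directives, echoing the HTTP Cache-Control convention. As an illustration only (this helper is hypothetical and not part of blastai), such a string decomposes into individual directives like so:

```python
# Hypothetical helper showing how a comma-separated cache_control string
# such as "no-cache,no-store" splits into individual directives.
def parse_cache_control(value: str) -> set[str]:
    return {d.strip() for d in value.split(",") if d.strip()}

print(sorted(parse_cache_control("no-cache,no-store")))  # ['no-cache', 'no-store']
```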

Resource Management

The engine automatically manages system resources:

  • Monitors memory usage
  • Limits concurrent browser instances
  • Tracks LLM costs
  • Handles browser reuse and cleanup

Get current resource metrics:

metrics = await engine.get_metrics()
print(f"Active browsers: {metrics['concurrent_browsers']}")
print(f"Memory usage: {metrics['memory_usage_gb']} GB")
print(f"Total cost: ${metrics['total_cost']}")
print("Tasks:", metrics['tasks'])

Monitor task states:

metrics = await engine.get_metrics()
task_states = metrics['tasks']
print(f"Scheduled: {task_states['scheduled']}")
print(f"Running: {task_states['running']}")
print(f"Completed: {task_states['completed']}")

The Engine can be used as an async context manager for automatic cleanup:

async with Engine() as engine:
    result = await engine.run("Search for Python documentation")
    print(result.final_result())
# Engine automatically stops and cleans up

Or stop the engine when done to clean up resources:

await engine.stop()  # Clean up resources

When to Use the Engine API

The Engine API is useful when you don't want to run a separate server process. For most use cases, prefer blastai serve.
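For reference, the server route is started with a single command (shown here without flags):

```shell
blastai serve
```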

Next Steps