Quickstart Guide

CLI Usage

Start an interactive session:

# Quick chat with GPT-4
uvx llmling-agent quickstart openai:gpt-4o-mini

# Enable streaming mode
uvx llmling-agent quickstart openai:gpt-4o-mini --stream

Initialize and manage configurations:

# Create starter configuration
llmling-agent init agents.yml

# Add it to your configurations
llmling-agent add agents.yml

# Start chatting
llmling-agent chat assistant

Launch the web interface:

# Install UI dependencies
pip install "llmling-agent[ui]"

# Launch the web interface
llmling-agent launch

Tip

Set OPENAI_API_KEY in your environment before running:

export OPENAI_API_KEY='your-api-key-here'
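
If you prefer to set the key from Python (for example in a notebook), here is a minimal sketch using only the standard library; the variable name is the same one the export above uses:

import os

# Make the key visible to any client created later in this process.
# Prefer a proper secrets mechanism over hard-coding keys in source files.
os.environ.setdefault("OPENAI_API_KEY", "your-api-key-here")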

Configured Agents

Create an agent configuration:

# agents.yml
agents:
  assistant:
    name: "Technical Assistant"
    model: openai:gpt-4o-mini
    system_prompts:
      - You are a helpful technical assistant.
    environment:
      type: inline
      tools:
        read_file:
          import_path: llmling_agent_tools.file.read_source_file
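
Tools are referenced by a dotted import path. Assuming they resolve to plain Python callables (as the read_source_file entry above suggests), a custom tool is just a function; the module and function below are hypothetical, and you would point import_path at them, e.g. my_tools.count_lines:

# my_tools.py - hypothetical module, not shipped with llmling-agent
def count_lines(path: str) -> int:
    """Count the lines in a text file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)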

Use it in code:

from llmling_agent import Agent

async def main():
    async with Agent.open_agent("agents.yml", "assistant") as agent:
        response = await agent.run("What is Python?")
        print(response.data)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
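
Because the assistant's environment exposes read_source_file, you can also ask it about files on disk. Here is a sketch reusing only the calls shown above (the prompt text is just an example):

async def ask_about_code():
    async with Agent.open_agent("agents.yml", "assistant") as agent:
        # The model may call its configured read_file tool to answer this.
        response = await agent.run("Read agents.yml and summarize what it configures.")
        print(response.data)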

Web Interface Features

The web interface provides:

- Interactive chat with your agents
- Tool management
- Command execution
- Message history
- Cost tracking

Visit http://localhost:7860 after launching.

Functional Interface

Quick model interactions without configuration:

from llmling_agent import run_with_model, run_with_model_sync

# Async usage
async def main():
    # Simple completion
    result = await run_with_model(
        "Analyze this text",
        model="openai:gpt-4o-mini"
    )
    print(result)

    # With structured output
    from pydantic import BaseModel

    class Analysis(BaseModel):
        summary: str
        key_points: list[str]

    result = await run_with_model(
        "Analyze the sentiment",
        model="openai:gpt-4o-mini",
        result_type=Analysis
    )
    print(f"Summary: {result.summary}")
    print(f"Key points: {result.key_points}")

# Sync usage (convenience wrapper)
result = run_with_model_sync(
    "Quick question",
    model="openai:gpt-4o-mini"
)
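
The async example above defines main() but never runs it; as in the configured-agent example, drive it with asyncio.run (standard library, not an llmling-agent API):

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())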

Next Steps

Note

For details about environment configuration (tools, resources, etc.), see the LLMling documentation.