
agent

Classes

- AGUIAgent (llmling_agent.agent.agui_agent): MessageNode that wraps a remote AG-UI protocol server.
- Agent (llmling_agent.agent.agent): The main agent class.
- AgentContext (llmling_agent.agent.context): Runtime context for agent execution.
- Interactions (llmling_agent.agent.interactions): Manages agent communication patterns.
- MessageHistory (llmling_agent.agent.conversation): Manages conversation state and system prompts.
- SlashedAgent (llmling_agent.agent.slashed_agent): Wrapper around Agent that handles slash commands in streams.
- SystemPrompts (llmling_agent.agent.sys_prompts): Manages system prompts for an agent.


                AGUIAgent

                Bases: MessageNode[TDeps, str]

                MessageNode that wraps a remote AG-UI protocol server.

                Connects to AG-UI compatible endpoints via HTTP/SSE and provides the same interface as native agents, enabling composition with other nodes via connections, teams, etc.

The agent manages:

- HTTP client lifecycle (created on enter, closed on exit)
- AG-UI protocol communication via SSE streams
- Event conversion to native llmling-agent events
- Message accumulation and final response generation

                Supports both blocking run() and streaming run_stream() execution modes.

Example

```python
# Connect to an existing server
async with AGUIAgent(
    endpoint="http://localhost:8000/agent/run",
    name="remote-agent",
) as agent:
    result = await agent.run("Hello, world!")
    async for event in agent.run_stream("Tell me a story"):
        print(event)

# Start the server automatically (useful for testing)
async with AGUIAgent(
    endpoint="http://localhost:8000/agent/run",
    name="test-agent",
    startup_command="ag ui agent config.yml",
    startup_delay=2.0,
) as agent:
    result = await agent.run("Test prompt")
```
                Source code in src/llmling_agent/agent/agui_agent.py
                class AGUIAgent[TDeps = None](MessageNode[TDeps, str]):
                    """MessageNode that wraps a remote AG-UI protocol server.
                
                    Connects to AG-UI compatible endpoints via HTTP/SSE and provides the same
                    interface as native agents, enabling composition with other nodes via
                    connections, teams, etc.
                
                    The agent manages:
                    - HTTP client lifecycle (create on enter, close on exit)
                    - AG-UI protocol communication via SSE streams
                    - Event conversion to native llmling-agent events
                    - Message accumulation and final response generation
                
                    Supports both blocking `run()` and streaming `run_stream()` execution modes.
                
                    Example:
                        ```python
                        # Connect to existing server
                        async with AGUIAgent(
                            endpoint="http://localhost:8000/agent/run",
                            name="remote-agent"
                        ) as agent:
                            result = await agent.run("Hello, world!")
                            async for event in agent.run_stream("Tell me a story"):
                                print(event)
                
                        # Start server automatically (useful for testing)
                        async with AGUIAgent(
                            endpoint="http://localhost:8000/agent/run",
                            name="test-agent",
                            startup_command="ag ui agent config.yml",
                            startup_delay=2.0,
                        ) as agent:
                            result = await agent.run("Test prompt")
                        ```
                    """
                
                    def __init__(
                        self,
                        endpoint: str,
                        *,
                        name: str = "agui-agent",
                        description: str | None = None,
                        display_name: str | None = None,
                        timeout: float = 60.0,
                        headers: dict[str, str] | None = None,
                        startup_command: str | None = None,
                        startup_delay: float = 2.0,
                        mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                        agent_pool: AgentPool[Any] | None = None,
                        enable_logging: bool = True,
                        event_configs: Sequence[EventConfig] | None = None,
                        event_handlers: Sequence[IndividualEventHandler] | None = None,
                    ) -> None:
                        """Initialize AG-UI agent client.
                
                        Args:
                            endpoint: HTTP endpoint for the AG-UI agent
                            name: Agent name for identification
                            description: Agent description
                            display_name: Human-readable display name
                            timeout: Request timeout in seconds
                            headers: Additional HTTP headers
                            startup_command: Optional shell command to start server automatically.
                                           Useful for testing - server lifecycle is managed by the agent.
                                           Example: "ag ui agent config.yml"
                            startup_delay: Seconds to wait after starting server before connecting (default: 2.0)
                            mcp_servers: MCP servers to connect
                            agent_pool: Agent pool for multi-agent coordination
                            enable_logging: Whether to enable database logging
                            event_configs: Event trigger configurations
                            event_handlers: Sequence of event handlers to register
                        """
                        super().__init__(
                            name=name,
                            description=description,
                            display_name=display_name,
                            mcp_servers=mcp_servers,
                            agent_pool=agent_pool,
                            enable_logging=enable_logging,
                            event_configs=event_configs,
                        )
                        self.endpoint = endpoint
                        self.timeout = timeout
                        self.headers = headers or {}
                
                        # Startup command configuration
                        self._startup_command = startup_command
                        self._startup_delay = startup_delay
                        self._startup_process: Process | None = None
                
                        self._client: httpx.AsyncClient | None = None
                        self._state: AGUISessionState | None = None
                        self._message_count = 0
                        self.conversation = MessageHistory()
                        self._total_tokens = 0
                        self._event_queue: asyncio.Queue[RichAgentStreamEvent[Any]] = asyncio.Queue()
                        self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                
                    @property
                    def context(self) -> NodeContext:
                        """Get node context."""
                        from llmling_agent.messaging.context import NodeContext
                        from llmling_agent.models.manifest import AgentsManifest
                        from llmling_agent_config.nodes import NodeConfig
                
                        return NodeContext(
                            node_name=self.name,
                            pool=self.agent_pool,
                            config=NodeConfig(name=self.name, description=self.description),
                            definition=self.agent_pool.manifest if self.agent_pool else AgentsManifest(),
                        )
                
                    async def __aenter__(self) -> Self:
                        """Enter async context - initialize client and base resources."""
                        await super().__aenter__()
                
                        self._client = httpx.AsyncClient(
                            timeout=httpx.Timeout(self.timeout),
                            headers={
                                **self.headers,
                                "Accept": "text/event-stream",
                                "Content-Type": "application/json",
                            },
                        )
                        self._state = AGUISessionState(thread_id=self.conversation_id)
                
                        # Start server if startup command is provided
                        if self._startup_command:
                            await self._start_server()
                
                        self.log.debug("AG-UI client initialized", endpoint=self.endpoint)
                        return self
                
                    async def __aexit__(
                        self,
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    ) -> None:
                        """Exit async context - cleanup client and base resources."""
                        if self._client:
                            await self._client.aclose()
                            self._client = None
                        self._state = None
                
                        # Stop server if we started it
                        if self._startup_process:
                            await self._stop_server()
                
                        self.log.debug("AG-UI client closed")
                        await super().__aexit__(exc_type, exc_val, exc_tb)
                
                    async def _start_server(self) -> None:
                        """Start the AG-UI server subprocess."""
                        if not self._startup_command:
                            return
                
                        self.log.info("Starting AG-UI server", command=self._startup_command)
                
                        self._startup_process = await asyncio.create_subprocess_shell(
                            self._startup_command,
                            stdout=asyncio.subprocess.PIPE,
                            stderr=asyncio.subprocess.PIPE,
                            start_new_session=True,  # Create new process group
                        )
                
                        self.log.debug("Waiting for server startup", delay=self._startup_delay)
                        await asyncio.sleep(self._startup_delay)
                
                        # Check if process is still running
                        if self._startup_process.returncode is not None:
                            stderr = ""
                            if self._startup_process.stderr:
                                stderr = (await self._startup_process.stderr.read()).decode()
                            msg = f"Startup process exited with code {self._startup_process.returncode}: {stderr}"
                            raise RuntimeError(msg)
                
                        self.log.info("AG-UI server started")
                
                    async def _stop_server(self) -> None:
                        """Stop the AG-UI server subprocess."""
                        if not self._startup_process:
                            return
                
                        self.log.info("Stopping AG-UI server")
                
                        try:
                            # Platform-specific process termination
                            if sys.platform == "win32":
                                # On Windows, terminate the process directly
                                self._startup_process.terminate()
                                await self._startup_process.wait()
                            else:
                                # On Unix-like systems, kill entire process group
                                import os
                                import signal
                
                                os.killpg(os.getpgid(self._startup_process.pid), signal.SIGKILL)
                                await self._startup_process.wait()
                        except (ProcessLookupError, OSError):
                            # Process already dead
                            pass
                        finally:
                            self._startup_process = None
                            self.log.info("AG-UI server stopped")
                
                    async def run(
                        self,
                        *prompts: PromptCompatible,
                        message_id: str | None = None,
                        message_history: MessageHistory | None = None,
                        **kwargs: Any,
                    ) -> ChatMessage[str]:
                        """Execute prompt against AG-UI agent.
                
                        Sends the prompt to the AG-UI server and waits for completion.
                        Events are collected and the final text content is returned as a ChatMessage.
                
                        Args:
                            prompts: Prompts to send (will be joined with spaces)
                            message_id: Optional message id for the returned message
                            message_history: Optional MessageHistory to use instead of agent's own
                            **kwargs: Additional arguments (ignored for compatibility)
                
                        Returns:
                            ChatMessage containing the agent's aggregated text response
                        """
                        from llmling_agent.messaging.processing import prepare_prompts
                
                        if not self._client or not self._state:
                            msg = "Agent not initialized - use async context manager"
                            raise RuntimeError(msg)
                
                        # Determine which conversation to use
                        conversation = message_history if message_history is not None else self.conversation
                
                        # Prepare user message for history
                        user_msg, _processed_prompts, _original_message = await prepare_prompts(*prompts)
                
                        # Reset state for new run
                        self._state.text_chunks.clear()
                        self._state.thought_chunks.clear()
                        self._state.tool_calls.clear()
                        self._state.is_complete = False
                        self._state.error = None
                        self._state.run_id = str(uuid4())
                
                        # Collect all events
                        async for _ in self.run_stream(*prompts, message_id=message_id):
                            pass
                
                        # Create final message
                        self._message_count += 1
                        message = ChatMessage[str](
                            content="".join(self._state.text_chunks),
                            role="assistant",
                            name=self.name,
                            message_id=message_id or str(uuid4()),
                            conversation_id=self.conversation_id,
                            model_name=None,
                            cost_info=None,
                        )
                        self.message_sent.emit(message)
                
                        # Record to conversation history
                        conversation.add_chat_messages([user_msg, message])
                
                        return message
                
                    async def run_stream(
                        self,
                        *prompts: PromptCompatible,
                        message_id: str | None = None,
                        message_history: MessageHistory | None = None,
                        **kwargs: Any,
                    ) -> AsyncIterator[RichAgentStreamEvent[str]]:
                        """Execute prompt with streaming events.
                
                        Args:
                            prompts: Prompts to send
                            message_id: Optional message ID
                            message_history: Optional MessageHistory to use instead of agent's own
                            **kwargs: Additional arguments (ignored for compatibility)
                
                        Yields:
                            Native streaming events converted from AG-UI protocol
                        """
                        from ag_ui.core import RunAgentInput, UserMessage
                
                        from llmling_agent.agent.agui_converters import (
                            agui_to_native_event,
                            extract_text_from_event,
                        )
                        from llmling_agent.agent.events import RunStartedEvent
                        from llmling_agent.messaging.processing import prepare_prompts
                
                        if not self._client or not self._state:
                            msg = "Agent not initialized - use async context manager"
                            raise RuntimeError(msg)
                
                        # Determine which conversation to use
                        conversation = message_history if message_history is not None else self.conversation
                
                        # Prepare user message for history
                        user_msg, _processed_prompts, _original_message = await prepare_prompts(*prompts)
                
                        # Reset state
                        self._state.text_chunks.clear()
                        self._state.thought_chunks.clear()
                        self._state.tool_calls.clear()
                        self._state.is_complete = False
                        self._state.error = None
                        self._state.run_id = str(uuid4())
                
                        # Emit run started event
                        run_started = RunStartedEvent(
                            thread_id=self._state.thread_id,
                            run_id=self._state.run_id,
                            agent_name=self.name,
                        )
                        for handler in self.event_handler._wrapped_handlers:
                            await handler(None, run_started)
                        yield run_started
                
                        # Build request with proper content conversion
                        content = await _convert_to_agui_content(prompts)
                        user_message = UserMessage(id=str(uuid4()), content=content)
                
                        request_data = RunAgentInput(
                            thread_id=self._state.thread_id,
                            run_id=self._state.run_id,
                            state={},
                            messages=[user_message],
                            tools=[],
                            context=[],
                            forwarded_props={},
                        )
                
                        self.log.debug("Sending prompt to AG-UI agent")
                
                        # Send request and stream events
                        try:
                            async with self._client.stream(
                                "POST",
                                self.endpoint,
                                json=request_data.model_dump(by_alias=True),
                            ) as response:
                                response.raise_for_status()
                
                                # Parse SSE stream
                                async for event in self._parse_sse_stream(response):
                                    # Track text chunks
                                    if text := extract_text_from_event(event):
                                        self._state.text_chunks.append(text)
                
                                    # Convert to native event and distribute to handlers
                                    if native_event := agui_to_native_event(event):
                                        # Check for queued custom events first
                                        while not self._event_queue.empty():
                                            try:
                                                custom_event = self._event_queue.get_nowait()
                                                for handler in self.event_handler._wrapped_handlers:
                                                    await handler(None, custom_event)
                                                yield custom_event
                                            except asyncio.QueueEmpty:
                                                break
                                        # Distribute to handlers
                                        for handler in self.event_handler._wrapped_handlers:
                                            await handler(None, native_event)
                                        yield native_event
                
                        except httpx.HTTPError as e:
                            self._state.error = str(e)
                            self.log.exception("HTTP error during AG-UI run")
                            raise
                        finally:
                            self._state.is_complete = True
                
                            # Emit final message
                            final_message = ChatMessage[str](
                                content="".join(self._state.text_chunks),
                                role="assistant",
                                name=self.name,
                                message_id=message_id or str(uuid4()),
                                conversation_id=self.conversation_id,
                                model_name=None,
                                cost_info=None,
                            )
                            complete_event = StreamCompleteEvent(message=final_message)
                            for handler in self.event_handler._wrapped_handlers:
                                await handler(None, complete_event)
                            yield complete_event
                
                            # Record to conversation history
                            conversation.add_chat_messages([user_msg, final_message])
                
                    async def _parse_sse_stream(self, response: httpx.Response) -> AsyncIterator[Event]:
                        """Parse Server-Sent Events stream.
                
                        Args:
                            response: HTTP response with SSE stream
                
                        Yields:
                            Parsed AG-UI events
                        """
                        from ag_ui.core import Event
                
                        event_adapter: TypeAdapter[Event] = TypeAdapter(Event)
                        buffer = ""
                
                        async for chunk in response.aiter_text():
                            buffer += chunk
                
                            # Process complete SSE events
                            while "\n\n" in buffer:
                                event_text, buffer = buffer.split("\n\n", 1)
                
                                # Parse SSE format: "data: {json}\n"
                                for line in event_text.split("\n"):
                                    if line.startswith("data: "):
                                        json_str = line[6:]  # Remove "data: " prefix
                                        try:
                                            event = event_adapter.validate_json(json_str)
                                            yield event
                                        except (ValueError, TypeError) as e:
                                            self.log.warning(
                                                "Failed to parse AG-UI event",
                                                json=json_str[:100],
                                                error=str(e),
                                            )
                
                    async def run_iter(
                        self,
                        *prompt_groups: Sequence[PromptCompatible],
                        message_id: str | None = None,
                        **kwargs: Any,
                    ) -> AsyncIterator[ChatMessage[str]]:
                        """Execute multiple prompt groups sequentially.
                
                        Args:
                            prompt_groups: Groups of prompts to execute
                            message_id: Optional message ID base
                            **kwargs: Additional arguments (ignored for compatibility)
                
                        Yields:
                            ChatMessage for each completed prompt group
                        """
                        for i, prompts in enumerate(prompt_groups):
                            mid = f"{message_id or 'msg'}_{i}" if message_id else None
                            yield await self.run(*prompts, message_id=mid)
                
                    def to_tool(self, description: str | None = None) -> Callable[[str], Any]:
                        """Convert agent to a callable tool.
                
                        Args:
                            description: Tool description
                
                        Returns:
                            Async function that can be used as a tool
                        """
                
                        async def wrapped(prompt: str) -> str:
                            """Execute AG-UI agent with given prompt."""
                            result = await self.run(prompt)
                            return result.content
                
                        wrapped.__name__ = self.name
                        wrapped.__doc__ = description or f"Call {self.name} AG-UI agent"
                        return wrapped
                
                    @property
                    def model_name(self) -> str | None:
                        """Get model name (AG-UI doesn't expose this)."""
                        return None
                
                    async def get_stats(self) -> MessageStats:
                        """Get message statistics for this node."""
                        return MessageStats()
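
For reference, `_parse_sse_stream` above expects standard SSE framing: events separated by blank lines, each carrying a `data: {json}` payload that is validated against the AG-UI `Event` union. A sketch of how such frames might look on the wire; the field names follow AG-UI's camelCase aliases and all values are placeholders:

```python
# Illustrative SSE frames (placeholder values, not captured output).
raw = (
    'data: {"type": "TEXT_MESSAGE_CONTENT", "messageId": "msg_1", "delta": "Hello"}\n\n'
    'data: {"type": "RUN_FINISHED", "threadId": "t_1", "runId": "r_1"}\n\n'
)
# _parse_sse_stream splits the buffer on the blank line ("\n\n"), strips the
# "data: " prefix, and validates each JSON payload into an AG-UI Event.
```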
                

                context property

                context: NodeContext
                

                Get node context.

                model_name property

                model_name: str | None
                

                Get model name (AG-UI doesn't expose this).

                __aenter__ async

                __aenter__() -> Self
                

                Enter async context - initialize client and base resources.

                Source code in src/llmling_agent/agent/agui_agent.py
                async def __aenter__(self) -> Self:
                    """Enter async context - initialize client and base resources."""
                    await super().__aenter__()
                
                    self._client = httpx.AsyncClient(
                        timeout=httpx.Timeout(self.timeout),
                        headers={
                            **self.headers,
                            "Accept": "text/event-stream",
                            "Content-Type": "application/json",
                        },
                    )
                    self._state = AGUISessionState(thread_id=self.conversation_id)
                
                    # Start server if startup command is provided
                    if self._startup_command:
                        await self._start_server()
                
                    self.log.debug("AG-UI client initialized", endpoint=self.endpoint)
                    return self
                

                __aexit__ async

                __aexit__(
                    exc_type: type[BaseException] | None,
                    exc_val: BaseException | None,
                    exc_tb: TracebackType | None,
                ) -> None
                

                Exit async context - cleanup client and base resources.

                Source code in src/llmling_agent/agent/agui_agent.py
                async def __aexit__(
                    self,
                    exc_type: type[BaseException] | None,
                    exc_val: BaseException | None,
                    exc_tb: TracebackType | None,
                ) -> None:
                    """Exit async context - cleanup client and base resources."""
                    if self._client:
                        await self._client.aclose()
                        self._client = None
                    self._state = None
                
                    # Stop server if we started it
                    if self._startup_process:
                        await self._stop_server()
                
                    self.log.debug("AG-UI client closed")
                    await super().__aexit__(exc_type, exc_val, exc_tb)
                

                __init__

                __init__(
                    endpoint: str,
                    *,
                    name: str = "agui-agent",
                    description: str | None = None,
                    display_name: str | None = None,
                    timeout: float = 60.0,
                    headers: dict[str, str] | None = None,
                    startup_command: str | None = None,
                    startup_delay: float = 2.0,
                    mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                    agent_pool: AgentPool[Any] | None = None,
                    enable_logging: bool = True,
                    event_configs: Sequence[EventConfig] | None = None,
                    event_handlers: Sequence[IndividualEventHandler] | None = None
                ) -> None
                

                Initialize AG-UI agent client.

Parameters:

- endpoint (str, required): HTTP endpoint for the AG-UI agent
- name (str, default 'agui-agent'): Agent name for identification
- description (str | None, default None): Agent description
- display_name (str | None, default None): Human-readable display name
- timeout (float, default 60.0): Request timeout in seconds
- headers (dict[str, str] | None, default None): Additional HTTP headers
- startup_command (str | None, default None): Optional shell command to start the server automatically. Useful for testing; the server lifecycle is managed by the agent. Example: "ag ui agent config.yml"
- startup_delay (float, default 2.0): Seconds to wait after starting the server before connecting
- mcp_servers (Sequence[str | MCPServerConfig] | None, default None): MCP servers to connect
- agent_pool (AgentPool[Any] | None, default None): Agent pool for multi-agent coordination
- enable_logging (bool, default True): Whether to enable database logging
- event_configs (Sequence[EventConfig] | None, default None): Event trigger configurations
- event_handlers (Sequence[IndividualEventHandler] | None, default None): Event handlers to register
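
A minimal construction sketch based on the parameters above; the endpoint URL, header value, and agent name are placeholders, not part of the documented API:

```python
from llmling_agent.agent.agui_agent import AGUIAgent

agent = AGUIAgent(
    "http://localhost:8000/agent/run",            # endpoint (required)
    name="billing-agent",
    timeout=120.0,                                # allow slow remote runs
    headers={"Authorization": "Bearer <token>"},  # merged into every request
)
# The agent must still be entered via "async with" before run()/run_stream().
```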
                Source code in src/llmling_agent/agent/agui_agent.py
                def __init__(
                    self,
                    endpoint: str,
                    *,
                    name: str = "agui-agent",
                    description: str | None = None,
                    display_name: str | None = None,
                    timeout: float = 60.0,
                    headers: dict[str, str] | None = None,
                    startup_command: str | None = None,
                    startup_delay: float = 2.0,
                    mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                    agent_pool: AgentPool[Any] | None = None,
                    enable_logging: bool = True,
                    event_configs: Sequence[EventConfig] | None = None,
                    event_handlers: Sequence[IndividualEventHandler] | None = None,
                ) -> None:
                    """Initialize AG-UI agent client.
                
                    Args:
                        endpoint: HTTP endpoint for the AG-UI agent
                        name: Agent name for identification
                        description: Agent description
                        display_name: Human-readable display name
                        timeout: Request timeout in seconds
                        headers: Additional HTTP headers
                        startup_command: Optional shell command to start server automatically.
                                       Useful for testing - server lifecycle is managed by the agent.
                                       Example: "ag ui agent config.yml"
                        startup_delay: Seconds to wait after starting server before connecting (default: 2.0)
                        mcp_servers: MCP servers to connect
                        agent_pool: Agent pool for multi-agent coordination
                        enable_logging: Whether to enable database logging
                        event_configs: Event trigger configurations
                        event_handlers: Sequence of event handlers to register
                    """
                    super().__init__(
                        name=name,
                        description=description,
                        display_name=display_name,
                        mcp_servers=mcp_servers,
                        agent_pool=agent_pool,
                        enable_logging=enable_logging,
                        event_configs=event_configs,
                    )
                    self.endpoint = endpoint
                    self.timeout = timeout
                    self.headers = headers or {}
                
                    # Startup command configuration
                    self._startup_command = startup_command
                    self._startup_delay = startup_delay
                    self._startup_process: Process | None = None
                
                    self._client: httpx.AsyncClient | None = None
                    self._state: AGUISessionState | None = None
                    self._message_count = 0
                    self.conversation = MessageHistory()
                    self._total_tokens = 0
                    self._event_queue: asyncio.Queue[RichAgentStreamEvent[Any]] = asyncio.Queue()
                    self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                

                get_stats async

                get_stats() -> MessageStats
                

                Get message statistics for this node.

                Source code in src/llmling_agent/agent/agui_agent.py
                async def get_stats(self) -> MessageStats:
                    """Get message statistics for this node."""
                    return MessageStats()
                

                run async

                run(
                    *prompts: PromptCompatible,
                    message_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    **kwargs: Any
                ) -> ChatMessage[str]
                

                Execute prompt against AG-UI agent.

                Sends the prompt to the AG-UI server and waits for completion. Events are collected and the final text content is returned as a ChatMessage.

Parameters:

- *prompts (PromptCompatible): Prompts to send (joined with spaces)
- message_id (str | None, default None): Optional message id for the returned message
- message_history (MessageHistory | None, default None): Optional MessageHistory to use instead of the agent's own
- **kwargs (Any): Additional arguments (ignored for compatibility)

Returns:

- ChatMessage[str]: ChatMessage containing the agent's aggregated text response
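
A usage sketch passing an external MessageHistory so the exchange is recorded there rather than in the agent's own conversation, assuming `agent` was already entered via `async with`; `shared` and the prompt are illustrative:

```python
from llmling_agent.agent.conversation import MessageHistory

shared = MessageHistory()
result = await agent.run("Summarize the report", message_history=shared)
print(result.content)  # aggregated text response from the remote agent
# Both the user prompt and the reply were appended to "shared".
```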

                Source code in src/llmling_agent/agent/agui_agent.py
                async def run(
                    self,
                    *prompts: PromptCompatible,
                    message_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    **kwargs: Any,
                ) -> ChatMessage[str]:
                    """Execute prompt against AG-UI agent.
                
                    Sends the prompt to the AG-UI server and waits for completion.
                    Events are collected and the final text content is returned as a ChatMessage.
                
                    Args:
                        prompts: Prompts to send (will be joined with spaces)
                        message_id: Optional message id for the returned message
                        message_history: Optional MessageHistory to use instead of agent's own
                        **kwargs: Additional arguments (ignored for compatibility)
                
                    Returns:
                        ChatMessage containing the agent's aggregated text response
                    """
                    from llmling_agent.messaging.processing import prepare_prompts
                
                    if not self._client or not self._state:
                        msg = "Agent not initialized - use async context manager"
                        raise RuntimeError(msg)
                
                    # Determine which conversation to use
                    conversation = message_history if message_history is not None else self.conversation
                
                    # Prepare user message for history
                    user_msg, _processed_prompts, _original_message = await prepare_prompts(*prompts)
                
                    # Reset state for new run
                    self._state.text_chunks.clear()
                    self._state.thought_chunks.clear()
                    self._state.tool_calls.clear()
                    self._state.is_complete = False
                    self._state.error = None
                    self._state.run_id = str(uuid4())
                
                    # Collect all events
                    async for _ in self.run_stream(*prompts, message_id=message_id):
                        pass
                
                    # Create final message
                    self._message_count += 1
                    message = ChatMessage[str](
                        content="".join(self._state.text_chunks),
                        role="assistant",
                        name=self.name,
                        message_id=message_id or str(uuid4()),
                        conversation_id=self.conversation_id,
                        model_name=None,
                        cost_info=None,
                    )
                    self.message_sent.emit(message)
                
                    # Record to conversation history
                    conversation.add_chat_messages([user_msg, message])
                
                    return message
                

                run_iter async

                run_iter(
                    *prompt_groups: Sequence[PromptCompatible], message_id: str | None = None, **kwargs: Any
                ) -> AsyncIterator[ChatMessage[str]]
                

                Execute multiple prompt groups sequentially.

Parameters:

- *prompt_groups (Sequence[PromptCompatible]): Groups of prompts to execute
- message_id (str | None, default None): Optional message ID base
- **kwargs (Any): Additional arguments (ignored for compatibility)

Yields:

- ChatMessage[str]: A ChatMessage for each completed prompt group
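
A sketch of batching three prompt groups (placeholder prompts), assuming `agent` is already inside its `async with` block; with a base ID, the implementation derives per-message IDs like "batch_0", "batch_1", and so on:

```python
async for msg in agent.run_iter(
    ["Draft an outline"],
    ["Expand section 1"],
    ["Expand section 2"],
    message_id="batch",
):
    print(msg.message_id, msg.content[:60])  # e.g. "batch_0 ..."
```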

                Source code in src/llmling_agent/agent/agui_agent.py
                async def run_iter(
                    self,
                    *prompt_groups: Sequence[PromptCompatible],
                    message_id: str | None = None,
                    **kwargs: Any,
                ) -> AsyncIterator[ChatMessage[str]]:
                    """Execute multiple prompt groups sequentially.
                
                    Args:
                        prompt_groups: Groups of prompts to execute
                        message_id: Optional message ID base
                        **kwargs: Additional arguments (ignored for compatibility)
                
                    Yields:
                        ChatMessage for each completed prompt group
                    """
                    for i, prompts in enumerate(prompt_groups):
                        mid = f"{message_id or 'msg'}_{i}" if message_id else None
                        yield await self.run(*prompts, message_id=mid)
                

                run_stream async

                run_stream(
                    *prompts: PromptCompatible,
                    message_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    **kwargs: Any
                ) -> AsyncIterator[RichAgentStreamEvent[str]]
                

                Execute prompt with streaming events.

                Parameters:

                Name              Type                     Description                                             Default
                prompts           PromptCompatible         Prompts to send                                         ()
                message_id        str | None               Optional message ID                                     None
                message_history   MessageHistory | None    Optional MessageHistory to use instead of agent's own   None
                **kwargs          Any                      Additional arguments (ignored for compatibility)        {}

                Yields:

                Type                                        Description
                AsyncIterator[RichAgentStreamEvent[str]]    Native streaming events converted from AG-UI protocol

                Source code in src/llmling_agent/agent/agui_agent.py
                async def run_stream(
                    self,
                    *prompts: PromptCompatible,
                    message_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    **kwargs: Any,
                ) -> AsyncIterator[RichAgentStreamEvent[str]]:
                    """Execute prompt with streaming events.
                
                    Args:
                        prompts: Prompts to send
                        message_id: Optional message ID
                        message_history: Optional MessageHistory to use instead of agent's own
                        **kwargs: Additional arguments (ignored for compatibility)
                
                    Yields:
                        Native streaming events converted from AG-UI protocol
                    """
                    from ag_ui.core import RunAgentInput, UserMessage
                
                    from llmling_agent.agent.agui_converters import (
                        agui_to_native_event,
                        extract_text_from_event,
                    )
                    from llmling_agent.agent.events import RunStartedEvent
                    from llmling_agent.messaging.processing import prepare_prompts
                
                    if not self._client or not self._state:
                        msg = "Agent not initialized - use async context manager"
                        raise RuntimeError(msg)
                
                    # Determine which conversation to use
                    conversation = message_history if message_history is not None else self.conversation
                
                    # Prepare user message for history
                    user_msg, _processed_prompts, _original_message = await prepare_prompts(*prompts)
                
                    # Reset state
                    self._state.text_chunks.clear()
                    self._state.thought_chunks.clear()
                    self._state.tool_calls.clear()
                    self._state.is_complete = False
                    self._state.error = None
                    self._state.run_id = str(uuid4())
                
                    # Emit run started event
                    run_started = RunStartedEvent(
                        thread_id=self._state.thread_id,
                        run_id=self._state.run_id,
                        agent_name=self.name,
                    )
                    for handler in self.event_handler._wrapped_handlers:
                        await handler(None, run_started)
                    yield run_started
                
                    # Build request with proper content conversion
                    content = await _convert_to_agui_content(prompts)
                    user_message = UserMessage(id=str(uuid4()), content=content)
                
                    request_data = RunAgentInput(
                        thread_id=self._state.thread_id,
                        run_id=self._state.run_id,
                        state={},
                        messages=[user_message],
                        tools=[],
                        context=[],
                        forwarded_props={},
                    )
                
                    self.log.debug("Sending prompt to AG-UI agent")
                
                    # Send request and stream events
                    try:
                        async with self._client.stream(
                            "POST",
                            self.endpoint,
                            json=request_data.model_dump(by_alias=True),
                        ) as response:
                            response.raise_for_status()
                
                            # Parse SSE stream
                            async for event in self._parse_sse_stream(response):
                                # Track text chunks
                                if text := extract_text_from_event(event):
                                    self._state.text_chunks.append(text)
                
                                # Convert to native event and distribute to handlers
                                if native_event := agui_to_native_event(event):
                                    # Check for queued custom events first
                                    while not self._event_queue.empty():
                                        try:
                                            custom_event = self._event_queue.get_nowait()
                                            for handler in self.event_handler._wrapped_handlers:
                                                await handler(None, custom_event)
                                            yield custom_event
                                        except asyncio.QueueEmpty:
                                            break
                                    # Distribute to handlers
                                    for handler in self.event_handler._wrapped_handlers:
                                        await handler(None, native_event)
                                    yield native_event
                
                    except httpx.HTTPError as e:
                        self._state.error = str(e)
                        self.log.exception("HTTP error during AG-UI run")
                        raise
                    finally:
                        self._state.is_complete = True
                
                        # Emit final message
                        final_message = ChatMessage[str](
                            content="".join(self._state.text_chunks),
                            role="assistant",
                            name=self.name,
                            message_id=message_id or str(uuid4()),
                            conversation_id=self.conversation_id,
                            model_name=None,
                            cost_info=None,
                        )
                        complete_event = StreamCompleteEvent(message=final_message)
                        for handler in self.event_handler._wrapped_handlers:
                            await handler(None, complete_event)
                        yield complete_event
                
                        # Record to conversation history
                        conversation.add_chat_messages([user_msg, final_message])
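
                A consumption sketch, assuming agent is an entered AGUIAgent and that
                StreamCompleteEvent is importable the same way this module uses it (an
                assumption about the public import path):

                    async for event in agent.run_stream("Explain the AG-UI handshake"):
                        if isinstance(event, StreamCompleteEvent):
                            # the final accumulated assistant message
                            print(event.message.content)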
                

                to_tool

                to_tool(description: str | None = None) -> Callable[[str], Any]
                

                Convert agent to a callable tool.

                Parameters:

                Name          Type         Description        Default
                description   str | None   Tool description   None

                Returns:

                Type                    Description
                Callable[[str], Any]    Async function that can be used as a tool

                Source code in src/llmling_agent/agent/agui_agent.py
                def to_tool(self, description: str | None = None) -> Callable[[str], Any]:
                    """Convert agent to a callable tool.
                
                    Args:
                        description: Tool description
                
                    Returns:
                        Async function that can be used as a tool
                    """
                
                    async def wrapped(prompt: str) -> str:
                        """Execute AG-UI agent with given prompt."""
                        result = await self.run(prompt)
                        return result.content
                
                    wrapped.__name__ = self.name
                    wrapped.__doc__ = description or f"Call {self.name} AG-UI agent"
                    return wrapped
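
                A usage sketch (remote_agent is a placeholder for an entered AGUIAgent): the
                returned coroutine function maps a prompt string to the remote agent's text
                answer, so it can be registered anywhere plain async callables are accepted
                as tools.

                    tool_fn = remote_agent.to_tool(description="Ask the remote AG-UI agent")
                    answer = await tool_fn("What changed in the last release?")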
                

                Agent

                Bases: MessageNode[TDeps, OutputDataT]

                The main agent class.

                Generically typed with: Agent[Type of Dependencies, Type of Result]

                Source code in src/llmling_agent/agent/agent.py
                class Agent[TDeps = None, OutputDataT = str](MessageNode[TDeps, OutputDataT]):
                    """The main agent class.
                
                    Generically typed with: Agent[Type of Dependencies, Type of Result]
                    """
                
                    @dataclass(frozen=True)
                    class AgentReset:
                        """Emitted when agent is reset."""
                
                        agent_name: AgentName
                        previous_tools: dict[str, bool]
                        new_tools: dict[str, bool]
                        timestamp: datetime = field(default_factory=get_now)
                
                    run_failed = Signal(str, Exception)
                    agent_reset = Signal(AgentReset)
                
                    def __init__(  # noqa: PLR0915
                        # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                        self,
                        name: str = "llmling-agent",
                        *,
                        deps_type: type[TDeps] | None = None,
                        model: ModelType = None,
                        output_type: OutputSpec[OutputDataT] = str,  # type: ignore[assignment]
                        # context: AgentContext[TDeps] | None = None,
                        session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                        system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                        description: str | None = None,
                        display_name: str | None = None,
                        tools: Sequence[ToolType | Tool] | None = None,
                        toolsets: Sequence[ResourceProvider] | None = None,
                        mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                        resources: Sequence[PromptType | str] = (),
                        skills_paths: Sequence[JoinablePathLike] | None = None,
                        retries: int = 1,
                        output_retries: int | None = None,
                        end_strategy: EndStrategy = "early",
                        input_provider: InputProvider | None = None,
                        parallel_init: bool = True,
                        debug: bool = False,
                        event_handlers: Sequence[IndividualEventHandler] | None = None,
                        agent_pool: AgentPool[Any] | None = None,
                        tool_mode: ToolMode | None = None,
                        knowledge: Knowledge | None = None,
                        agent_config: AgentConfig | None = None,
                        env: ExecutionEnvironment | None = None,
                    ) -> None:
                        """Initialize agent.
                
                        Args:
                            name: Identifier for the agent (used for logging and lookups)
                            deps_type: Type of dependencies to use
                            model: The default model to use (defaults to GPT-5)
                            output_type: The default output type to use (defaults to str)
                            context: Agent context with configuration
                            session: Memory configuration.
                                - None: Default memory config
                                - False: Disable message history (max_messages=0)
                                - int: Max tokens for memory
                                - str/UUID: Session identifier
                                - MemoryConfig: Full memory configuration
                                - MemoryProvider: Custom memory provider
                                - SessionQuery: Session query
                
                            system_prompt: System prompts for the agent
                            description: Description of the Agent ("what it can do")
                            display_name: Human-readable display name (falls back to name)
                            tools: List of tools to register with the agent
                            toolsets: List of toolset resource providers for the agent
                            mcp_servers: MCP servers to connect to
                            resources: Additional resources to load
                            skills_paths: Local directories to search for agent-specific skills
                            retries: Default number of retries for failed operations
                            output_retries: Max retries for result validation (defaults to retries)
                            end_strategy: Strategy for handling tool calls that are requested alongside
                                          a final result
                            input_provider: Provider for human input (tool confirmation / HumanProviders)
                            parallel_init: Whether to initialize resources in parallel
                            debug: Whether to enable debug mode
                            event_handlers: Sequence of event handlers to register with the agent
                            agent_pool: AgentPool instance for managing agent resources
                            tool_mode: Tool execution mode (None or "codemode")
                            knowledge: Knowledge sources for this agent
                            agent_config: Agent configuration
                            env: Execution environment for code/command execution and filesystem access
                        """
                        from llmling_agent.agent import AgentContext
                        from llmling_agent.agent.conversation import MessageHistory
                        from llmling_agent.agent.interactions import Interactions
                        from llmling_agent.agent.sys_prompts import SystemPrompts
                        from llmling_agent.models.agents import AgentConfig
                        from llmling_agent.models.manifest import AgentsManifest
                        from llmling_agent.prompts.conversion_manager import ConversionManager
                        from llmling_agent_config.session import MemoryConfig
                
                        self.task_manager = TaskManager()
                        self._infinite = False
                        self.deps_type = deps_type
                        self._manifest = agent_pool.manifest if agent_pool else AgentsManifest()
                        ctx = AgentContext[TDeps](
                            node_name=name,
                            definition=self._manifest,
                            config=agent_config or AgentConfig(name=name),
                            input_provider=input_provider,
                            pool=agent_pool,
                        )
                        self._context = ctx
                        # TODO: use to_structured with tool_name / description?
                        self._output_type = to_type(output_type)
                        memory_cfg = (
                            session if isinstance(session, MemoryConfig) else MemoryConfig.from_value(session)
                        )
                        # Initialize progress queue before super().__init__()
                        self._event_queue = asyncio.Queue[RichAgentStreamEvent[Any]]()
                        mcp_servers = list(mcp_servers) if mcp_servers else []
                        if ctx and (cfg := ctx.config) and cfg.mcp_servers:
                            mcp_servers.extend(cfg.get_mcp_servers())
                        super().__init__(
                            name=name,
                            description=description,
                            display_name=display_name,
                            enable_logging=memory_cfg.enable,
                            mcp_servers=mcp_servers,
                            agent_pool=agent_pool,
                            event_configs=agent_config.triggers if agent_config else [],
                        )
                
                        # Initialize tool manager
                        self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                        all_tools = list(tools or [])
                        effective_tool_mode = tool_mode
                        self.tools = ToolManager(all_tools, tool_mode=effective_tool_mode)
                
                        # MCP manager will be initialized in __aenter__ and providers added there
                        for toolset_provider in toolsets or []:
                            self.tools.add_provider(toolset_provider)
                        aggregating_provider = self.mcp.get_aggregating_provider()
                        self.tools.add_provider(aggregating_provider)
                
                        # # Add local skills provider if directories specified
                        # if skills_paths:
                        #     from llmling_agent.resource_providers.skills import SkillsResourceProvider
                
                        #     skills_paths = [
                        #         Path(d) if isinstance(d, str) else d for d in skills_paths
                        #     ]
                        #     local_skills_provider = SkillsResourceProvider(
                        #         skills_dirs=skills_paths,
                        #         name=f"{name}_local_skills",
                        #         owner=name,
                        #     )
                        #     self.tools.add_provider(local_skills_provider)
                
                        # Initialize conversation manager
                        resources = list(resources)
                        effective_knowledge = knowledge or (ctx.config.knowledge if ctx else None)
                        if effective_knowledge:
                            resources.extend(effective_knowledge.get_resources())
                        storage = agent_pool.storage if agent_pool else StorageManager(self._manifest.storage)
                        self.conversation = MessageHistory(
                            storage=storage,
                            converter=ConversionManager(config=self._manifest.conversion),
                            session_config=memory_cfg,
                            resources=resources,
                        )
                        self._model = model
                        self._retries = retries
                        self._end_strategy: EndStrategy = end_strategy
                        self._output_retries = output_retries
                        # init variables
                        self._debug = debug
                        self.parallel_init = parallel_init
                        self._name = name
                        self._background_task: asyncio.Task[ChatMessage[Any]] | None = None
                        self.talk = Interactions(self)
                        from anyenv.code_execution import LocalExecutionEnvironment
                
                        self.env = env or LocalExecutionEnvironment()
                
                        # Set up system prompts
                        all_prompts: list[AnyPromptType] = []
                        if isinstance(system_prompt, (list, tuple)):
                            all_prompts.extend(system_prompt)
                        elif system_prompt:
                            all_prompts.append(system_prompt)
                        self.sys_prompts = SystemPrompts(all_prompts, prompt_manager=ctx.prompt_manager)
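
                    # Construction sketch (illustrative; the model string is an assumption,
                    # not a documented default):
                    #     agent = Agent(name="helper", model="openai:gpt-4o",
                    #                   system_prompt="Answer briefly.")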
                
                    def __repr__(self) -> str:
                        desc = f", {self.description!r}" if self.description else ""
                        return f"Agent({self.name!r}, model={self._model!r}{desc})"
                
                    async def __prompt__(self) -> str:
                        typ = self.__class__.__name__
                        model = self.model_name or "default"
                        parts = [f"Agent: {self.name}", f"Type: {typ}", f"Model: {model}"]
                        if self.description:
                            parts.append(f"Description: {self.description}")
                        parts.extend([await self.tools.__prompt__(), self.conversation.__prompt__()])
                        return "\n".join(parts)
                
                    async def __aenter__(self) -> Self:
                        """Enter async context and set up MCP servers."""
                        try:
                            # Collect all coroutines that need to be run
                            coros: list[Coroutine[Any, Any, Any]] = []
                            coros.append(super().__aenter__())
                            coros.extend(self.conversation.get_initialization_tasks())
                            if self.parallel_init and coros:
                                await asyncio.gather(*coros)
                            else:
                                for coro in coros:
                                    await coro
                        except Exception as e:
                            msg = "Failed to initialize agent"
                            raise RuntimeError(msg) from e
                        else:
                            return self
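
                    # Lifecycle sketch: MCP servers and conversation resources are set up in
                    # __aenter__ (in parallel when parallel_init is True), so typical usage is:
                    #     async with Agent(name="helper") as agent:
                    #         ...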
                
                    async def __aexit__(
                        self,
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    ) -> None:
                        """Exit async context."""
                        await super().__aexit__(exc_type, exc_val, exc_tb)
                
                    @overload
                    def __and__(  # if other doesn't define deps, we take the agent's one
                        self, other: ProcessorCallback[Any] | Team[TDeps] | Agent[TDeps, Any]
                    ) -> Team[TDeps]: ...
                
                    @overload
                    def __and__(  # otherwise, we don't know and deps is Any
                        self, other: ProcessorCallback[Any] | Team[Any] | Agent[Any, Any]
                    ) -> Team[Any]: ...
                
                    def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                        """Create sequential team using & operator.
                
                        Example:
                            group = analyzer & planner & executor  # Create group of 3
                            group = analyzer & existing_group  # Add to existing group
                        """
                        from llmling_agent.delegation.team import Team
                
                        match other:
                            case Team():
                                return Team([self, *other.nodes])
                            case Callable():
                                agent_2 = Agent.from_callback(other)
                                agent_2.context.pool = self.context.pool
                                return Team([self, agent_2])
                            case MessageNode():
                                return Team([self, other])
                            case _:
                                msg = f"Invalid agent type: {type(other)}"
                                raise ValueError(msg)
                
                    @overload
                    def __or__(self, other: MessageNode[TDeps, Any]) -> TeamRun[TDeps, Any]: ...
                
                    @overload
                    def __or__[TOtherDeps](self, other: MessageNode[TOtherDeps, Any]) -> TeamRun[Any, Any]: ...
                
                    @overload
                    def __or__(self, other: ProcessorCallback[Any]) -> TeamRun[Any, Any]: ...
                
                    def __or__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> TeamRun[Any, Any]:
                        # Create new execution with sequential mode (for piping)
                        from llmling_agent import TeamRun
                
                        if callable(other):
                            other = Agent.from_callback(other)
                            other.context.pool = self.context.pool
                
                        return TeamRun([self, other])
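
                    # Piping sketch (agent names are illustrative; assumes TeamRun exposes
                    # the same run() API as other message nodes):
                    #     pipeline = analyzer | summarizer
                    #     result = await pipeline.run("Review this design doc")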
                
                    @overload
                    @classmethod
                    def from_callback(
                        cls,
                        callback: Callable[..., Awaitable[TResult]],
                        *,
                        name: str | None = None,
                        **kwargs: Any,
                    ) -> Agent[None, TResult]: ...
                
                    @overload
                    @classmethod
                    def from_callback(
                        cls,
                        callback: Callable[..., TResult],
                        *,
                        name: str | None = None,
                        **kwargs: Any,
                    ) -> Agent[None, TResult]: ...
                
                    @classmethod
                    def from_callback(
                        cls,
                        callback: ProcessorCallback[Any],
                        *,
                        name: str | None = None,
                        **kwargs: Any,
                    ) -> Agent[None, Any]:
                        """Create an agent from a processing callback.
                
                        Args:
                            callback: Function to process messages. Can be:
                                - sync or async
                                - with or without context
                                - must return str for pipeline compatibility
                            name: Optional name for the agent
                            kwargs: Additional arguments for agent
                        """
                        name = name or callback.__name__ or "processor"
                        model = function_to_model(callback)
                        return_type = _typing_extra.get_function_type_hints(callback).get("return")
                        if (  # If async, unwrap from Awaitable
                            return_type
                            and hasattr(return_type, "__origin__")
                            and return_type.__origin__ is Awaitable
                        ):
                            return_type = return_type.__args__[0]
                        return Agent(model=model, name=name, output_type=return_type or str, **kwargs)  # pyright: ignore[reportReturnType]
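
                    # Usage sketch: a plain function becomes an agent whose output type is
                    # inferred from the callback's return annotation:
                    #     def shout(text: str) -> str:
                    #         return text.upper()
                    #     shouter = Agent.from_callback(shout, name="shouter")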
                
                    @property
                    def name(self) -> str:
                        """Get agent name."""
                        return self._name or "llmling-agent"
                
                    @name.setter
                    def name(self, value: str) -> None:
                        """Set agent name."""
                        self._name = value
                
                    @property  # type: ignore[override]
                    def context(self) -> AgentContext[TDeps]:
                        """Get agent context."""
                        return self._context
                
                    @context.setter
                    def context(self, value: AgentContext[TDeps]) -> None:
                        """Set agent context and propagate to provider."""
                        self._context = value
                
                    def to_structured[NewOutputDataT](
                        self,
                        output_type: type[NewOutputDataT],
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ) -> Agent[TDeps, NewOutputDataT]:
                        """Convert this agent to a structured agent.
                
                        Args:
                            output_type: Type for structured responses. Can be:
                                - A Python type (Pydantic model)
                            tool_name: Optional override for result tool name
                            tool_description: Optional override for result tool description
                
                        Returns:
                            Typed Agent
                        """
                        self.log.debug("Setting result type", output_type=output_type)
                        self._output_type = to_type(output_type)
                        return self  # type: ignore
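
                    # Usage sketch (Summary is a hypothetical Pydantic model): the call
                    # retypes and returns the same agent instance rather than copying it:
                    #     structured = agent.to_structured(Summary)
                    #     report = await structured.run("Summarize the quarterly report")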
                
                    def is_busy(self) -> bool:
                        """Check if agent is currently processing tasks."""
                        return bool(self.task_manager._pending_tasks or self._background_task)
                
                    @property
                    def model_name(self) -> str | None:
                        """Get the model name in a consistent format."""
                        return self._model.model_name if isinstance(self._model, Model) else self._model
                
                    def to_tool(
                        self,
                        *,
                        name: str | None = None,
                        reset_history_on_run: bool = True,
                        pass_message_history: bool = False,
                        parent: Agent[Any, Any] | None = None,
                    ) -> Tool[OutputDataT]:
                        """Create a tool from this agent.
                
                        Args:
                            name: Optional tool name override
                            reset_history_on_run: Clear agent's history before each run
                            pass_message_history: Pass parent's message history to agent
                            parent: Optional parent agent for history/context sharing
                        """
                
                        async def wrapped_tool(prompt: str) -> Any:
                            if pass_message_history and not parent:
                                msg = "Parent agent required for message history sharing"
                                raise ToolError(msg)
                
                            if reset_history_on_run:
                                self.conversation.clear()
                
                            history = None
                            if pass_message_history and parent:
                                history = parent.conversation.get_history()
                                old = self.conversation.get_history()
                                self.conversation.set_history(history)
                            result = await self.run(prompt)
                            if history:
                                self.conversation.set_history(old)
                            return result.data
                
                        # Set the correct return annotation dynamically
                        wrapped_tool.__annotations__ = {"prompt": str, "return": self._output_type or Any}
                
                        normalized_name = self.name.replace("_", " ").title()
                        docstring = f"Get expert answer from specialized agent: {normalized_name}"
                        if self.description:
                            docstring = f"{docstring}\n\n{self.description}"
                        tool_name = name or f"ask_{self.name}"
                        wrapped_tool.__doc__ = docstring
                        wrapped_tool.__name__ = tool_name
                
                        return Tool.from_callable(
                            wrapped_tool,
                            name_override=tool_name,
                            description_override=docstring,
                            source="agent",
                        )
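
                    # Illustrative usage sketch (editor-added comment): expose a specialist
                    # agent as a tool on another agent; `researcher` and `writer` are
                    # hypothetical agents.
                    #
                    #     tool = researcher.to_tool(reset_history_on_run=True)
                    #     writer.tools.register_tool(tool.callable)
                    #     # the writer's model can now call the "ask_researcher" tool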
                
                    async def get_agentlet[AgentOutputType](
                        self,
                        tool_choice: str | list[str] | None,
                        model: ModelType,
                        output_type: type[AgentOutputType] | None,
                        input_provider: InputProvider | None = None,
                    ) -> PydanticAgent[TDeps, AgentOutputType]:
                        """Create pydantic-ai agent from current state."""
                        # Monkey patch pydantic-ai to recognize AgentContext
                
                        from llmling_agent.agent.tool_wrapping import wrap_tool
                
                        tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                        final_type = to_type(output_type) if output_type not in [None, str] else self._output_type
                        actual_model = model or self._model
                        model_ = infer_model(actual_model) if isinstance(actual_model, str) else actual_model
                        agent = PydanticAgent(
                            name=self.name,
                            model=model_,
                            instructions=await self.sys_prompts.format_system_prompt(self),
                            retries=self._retries,
                            end_strategy=self._end_strategy,
                            output_retries=self._output_retries,
                            deps_type=self.deps_type or NoneType,
                            output_type=final_type,
                        )
                
                        context_for_tools = (
                            self.context
                            if input_provider is None
                            else replace(self.context, input_provider=input_provider)
                        )
                
                        for tool in tools:
                            wrapped = wrap_tool(tool, context_for_tools)
                            if get_argument_key(wrapped, RunContext):
                                logger.info("Registering tool with context", tool_name=tool.name)
                                agent.tool(wrapped)
                            else:
                                logger.info("Registering tool without context", tool_name=tool.name)
                                agent.tool_plain(wrapped)
                
                        return agent  # type: ignore[return-value]
                
                    @overload
                    async def run(
                        self,
                        *prompts: PromptCompatible | ChatMessage[Any],
                        output_type: None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                        tool_choice: str | list[str] | None = None,
                        usage_limits: UsageLimits | None = None,
                        message_id: str | None = None,
                        conversation_id: str | None = None,
                        message_history: MessageHistory | None = None,
                        deps: TDeps | None = None,
                        input_provider: InputProvider | None = None,
                        wait_for_connections: bool | None = None,
                        instructions: str | None = None,
                    ) -> ChatMessage[OutputDataT]: ...
                
                    @overload
                    async def run[OutputTypeT](
                        self,
                        *prompts: PromptCompatible | ChatMessage[Any],
                        output_type: type[OutputTypeT],
                        model: ModelType = None,
                        store_history: bool = True,
                        tool_choice: str | list[str] | None = None,
                        usage_limits: UsageLimits | None = None,
                        message_id: str | None = None,
                        conversation_id: str | None = None,
                        message_history: MessageHistory | None = None,
                        deps: TDeps | None = None,
                        input_provider: InputProvider | None = None,
                        wait_for_connections: bool | None = None,
                        instructions: str | None = None,
                    ) -> ChatMessage[OutputTypeT]: ...
                
                    @method_spawner  # type: ignore[misc]
                    async def run(
                        self,
                        *prompts: PromptCompatible | ChatMessage[Any],
                        output_type: type[Any] | None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                        tool_choice: str | list[str] | None = None,
                        usage_limits: UsageLimits | None = None,
                        message_id: str | None = None,
                        conversation_id: str | None = None,
                        message_history: MessageHistory | None = None,
                        deps: TDeps | None = None,
                        input_provider: InputProvider | None = None,
                        wait_for_connections: bool | None = None,
                        instructions: str | None = None,
                    ) -> ChatMessage[Any]:
                        """Run agent with prompt and get response.
                
                        Args:
                            prompts: User query or instruction
                            output_type: Optional type for structured responses
                            model: Optional model override
                            store_history: Whether the message exchange should be added to the
                                            context window
                            tool_choice: Filter tool choice by name
                            usage_limits: Optional usage limits for the model
                            message_id: Optional message id for the returned message.
                                        Automatically generated if not provided.
                            conversation_id: Optional conversation id for the returned message.
                            message_history: Optional MessageHistory object to
                                             use instead of agent's own conversation
                            deps: Optional dependencies for the agent
                            input_provider: Optional input provider for the agent
                            wait_for_connections: Whether to wait for connected agents to complete
                            instructions: Optional instructions to override the agent's system prompt
                
                        Returns:
                            Result containing response and run information
                
                        Raises:
                            UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                        """
                        conversation = message_history if message_history is not None else self.conversation
                        # Prepare prompts and create user message
                        user_msg, processed_prompts, original_message = await prepare_prompts(*prompts)
                        self.message_received.emit(user_msg)
                        message_id = message_id or str(uuid4())
                        start_time = time.perf_counter()
                        message_history_list = list(conversation.chat_messages)
                        try:
                            agentlet = await self.get_agentlet(tool_choice, model, output_type, input_provider)
                            converted_prompts = await convert_prompts(processed_prompts)
                
                            async def event_distributor(  # Merge internal and external event handlers
                                ctx: RunContext[Any], events: AsyncIterable[AgentStreamEvent]
                            ) -> None:
                                async for event in events:
                                    # Check for queued custom events and distribute them first
                                    while not self._event_queue.empty():
                                        try:
                                            custom_event = self._event_queue.get_nowait()
                                            for handler in self.event_handler._wrapped_handlers:
                                                await handler(ctx, custom_event)
                                        except asyncio.QueueEmpty:
                                            break
                
                                    # Then distribute the original event
                                    for handler in self.event_handler._wrapped_handlers:
                                        await handler(ctx, event)
                
                            result = await agentlet.run(
                                [i if isinstance(i, str) else i.to_pydantic_ai() for i in converted_prompts],
                                deps=deps,  # type: ignore[arg-type]
                                message_history=[m for run in message_history_list for m in run.to_pydantic_ai()],
                                usage_limits=usage_limits,
                                event_stream_handler=event_distributor,
                                instructions=instructions,
                            )
                
                            response_time = time.perf_counter() - start_time
                            message = await ChatMessage.from_run_result(
                                result,
                                agent_name=self.name,
                                message_id=message_id,
                                conversation_id=user_msg.conversation_id,
                                response_time=response_time,
                            )
                
                        except Exception as e:
                            self.log.exception("Agent run failed")
                            self.run_failed.emit("Agent run failed", e)
                            raise
                
                        # Apply forwarding logic before storing history
                        if original_message:
                            message = message.forwarded(original_message)
                
                        # Handle history storage - add to the conversation we're using
                        if store_history:
                            conversation.add_chat_messages([user_msg, message])
                
                        # Finalize and route message (forwarding already applied)
                        return await finalize_message(
                            message,
                            user_msg,
                            self,
                            self.connections,
                            None,  # forwarding already applied above
                            wait_for_connections,
                        )
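
                    # Illustrative usage sketch (editor-added comment): a blocking run with
                    # per-call overrides. `Answer` and the model string are hypothetical.
                    #
                    #     msg = await agent.run(
                    #         "What is the capital of France?",
                    #         output_type=Answer,
                    #         model="openai:gpt-4o-mini",
                    #         store_history=False,
                    #     )
                    #     print(msg.content)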
                
                    @method_spawner
                    async def run_stream(
                        self,
                        *prompt: PromptCompatible,
                        output_type: type[OutputDataT] | None = None,
                        model: ModelType = None,
                        tool_choice: str | list[str] | None = None,
                        store_history: bool = True,
                        usage_limits: UsageLimits | None = None,
                        message_id: str | None = None,
                        conversation_id: str | None = None,
                        messages: list[ChatMessage[Any]] | None = None,
                        input_provider: InputProvider | None = None,
                        wait_for_connections: bool | None = None,
                        deps: TDeps | None = None,
                        instructions: str | None = None,
                    ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
                        """Run agent with prompt and get a streaming response.
                
                        Args:
                            prompt: User query or instruction
                            output_type: Optional type for structured responses
                            model: Optional model override
                            tool_choice: Filter tool choice by name
                            store_history: Whether the message exchange should be added to the
                                           context window
                            usage_limits: Optional usage limits for the model
                            message_id: Optional message id for the returned message.
                                        Automatically generated if not provided.
                            conversation_id: Optional conversation id for the returned message.
                            messages: Optional list of messages to replace the conversation history
                            input_provider: Optional input provider for the agent
                            wait_for_connections: Whether to wait for connected agents to complete
                            deps: Optional dependencies for the agent
                            instructions: Optional instructions to override the agent's system prompt

                        Returns:
                            An async iterator yielding streaming events with final message embedded.
                
                        Raises:
                            UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                        """
                        message_id = message_id or str(uuid4())
                        run_id = str(uuid4())
                        user_msg, prompts, _original_message = await prepare_prompts(*prompt)
                        self.message_received.emit(user_msg)
                        start_time = time.perf_counter()
                        message_history = messages if messages is not None else self.conversation.get_history()
                        yield RunStartedEvent(thread_id=self.conversation_id, run_id=run_id, agent_name=self.name)
                        try:
                            agentlet = await self.get_agentlet(tool_choice, model, output_type, input_provider)
                            content = await convert_prompts(prompts)
                            response_msg: ChatMessage[Any] | None = None
                            # Create tool dict for signal emission
                            converted = [i if isinstance(i, str) else i.to_pydantic_ai() for i in content]
                            stream_events = agentlet.run_stream_events(
                                converted,
                                deps=deps,  # type: ignore[arg-type]
                                message_history=[m for run in message_history for m in run.to_pydantic_ai()],
                                usage_limits=usage_limits,
                                instructions=instructions,
                            )
                
                            # Stream events through merge_queue for progress events
                            async with merge_queue_into_iterator(stream_events, self._event_queue) as events:
                                # Track tool call starts to combine with results later
                                pending_tcs: dict[str, BaseToolCallPart] = {}
                                async for event in events:  # Call event handlers for all events
                                    for handler in self.event_handler._wrapped_handlers:
                                        await handler(None, event)
                
                                    yield event  # type: ignore[misc]
                                    match event:
                                        case (
                                            PartStartEvent(part=BaseToolCallPart() as tool_part)
                                            | FunctionToolCallEvent(part=tool_part)
                                        ):
                                            # Store tool call start info for later combination with result
                                            pending_tcs[tool_part.tool_call_id] = tool_part
                                        case FunctionToolResultEvent(tool_call_id=call_id) as result_event:
                                            # Check if we have a pending tool call to combine with
                                            if call_info := pending_tcs.pop(call_id, None):
                                                # Create and yield combined event
                                                combined_event = ToolCallCompleteEvent(
                                                    tool_name=call_info.tool_name,
                                                    tool_call_id=call_id,
                                                    tool_input=call_info.args_as_dict(),
                                                    tool_result=result_event.result.content
                                                    if isinstance(result_event.result, ToolReturnPart)
                                                    else result_event.result,
                                                    agent_name=self.name,
                                                    message_id=message_id,
                                                )
                                                yield combined_event
                                        case AgentRunResultEvent():
                                            # Capture final result data and build the final response message
                                            response_time = time.perf_counter() - start_time
                                            response_msg = await ChatMessage.from_run_result(
                                                event.result,
                                                agent_name=self.name,
                                                message_id=message_id,
                                                conversation_id=conversation_id,
                                                response_time=response_time,
                                            )
                
                            # Send additional enriched completion event
                            assert response_msg
                            yield StreamCompleteEvent(message=response_msg)  # pyright: ignore[reportReturnType]
                            self.message_sent.emit(response_msg)
                            await self.log_message(response_msg)
                            if store_history:
                                self.conversation.add_chat_messages([user_msg, response_msg])
                            await self.connections.route_message(response_msg, wait=wait_for_connections)
                
                        except Exception as e:
                            self.log.exception("Agent stream failed")
                            self.run_failed.emit("Agent stream failed", e)
                            raise
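
                    # Illustrative usage sketch (editor-added comment): consume the stream and
                    # react to the combined tool event and the enriched completion event.
                    #
                    #     async for event in agent.run_stream("Tell me a story"):
                    #         match event:
                    #             case ToolCallCompleteEvent():
                    #                 print("tool finished:", event.tool_name)
                    #             case StreamCompleteEvent():
                    #                 print("final:", event.message.content)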
                
                    async def run_iter(
                        self,
                        *prompt_groups: Sequence[PromptCompatible],
                        output_type: type[OutputDataT] | None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                        wait_for_connections: bool | None = None,
                    ) -> AsyncIterator[ChatMessage[OutputDataT]]:
                        """Run agent sequentially on multiple prompt groups.
                
                        Args:
                            prompt_groups: Groups of prompts to process sequentially
                            output_type: Optional type for structured responses
                            model: Optional model override
                            store_history: Whether to store in conversation history
                            wait_for_connections: Whether to wait for connected agents
                
                        Yields:
                            Response messages in sequence
                
                        Example:
                            questions = [
                                ["What is your name?"],
                                ["How old are you?", image1],
                                ["Describe this image", image2],
                            ]
                            async for response in agent.run_iter(*questions):
                                print(response.content)
                        """
                        for prompts in prompt_groups:
                            response = await self.run(
                                *prompts,
                                output_type=output_type,
                                model=model,
                                store_history=store_history,
                                wait_for_connections=wait_for_connections,
                            )
                            yield response  # pyright: ignore
                
                    @method_spawner
                    async def run_job(
                        self,
                        job: Job[TDeps, str | None],
                        *,
                        store_history: bool = True,
                        include_agent_tools: bool = True,
                    ) -> ChatMessage[OutputDataT]:
                        """Execute a pre-defined task.
                
                        Args:
                            job: Job configuration to execute
                            store_history: Whether the message exchange should be added to the
                                           context window
                            include_agent_tools: Whether to include agent tools

                        Returns:
                            Job execution result
                
                        Raises:
                            JobError: If task execution fails
                            ValueError: If task configuration is invalid
                        """
                        from llmling_agent.tasks import JobError
                
                        if job.required_dependency is not None:  # noqa: SIM102
                            if not isinstance(self.context.data, job.required_dependency):
                                msg = (
                                    f"Agent dependencies ({type(self.context.data)}) "
                                    f"don't match job requirement ({job.required_dependency})"
                                )
                                raise JobError(msg)
                
                        # Load task knowledge
                        if job.knowledge:
                            # Add knowledge sources to context
                            for source in list(job.knowledge.paths):
                                await self.conversation.load_context_source(source)
                            for prompt in job.knowledge.prompts:
                                await self.conversation.load_context_source(prompt)
                        try:
                            # Register task tools temporarily
                            tools = job.get_tools()
                            async with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                # Execute job with job-specific tools
                                return await self.run(await job.get_prompt(), store_history=store_history)
                
                        except Exception as e:
                            self.log.exception("Task execution failed", error=str(e))
                            msg = f"Task execution failed: {e}"
                            raise JobError(msg) from e
                
                    async def run_in_background(
                        self,
                        *prompt: PromptCompatible,
                        max_count: int | None = None,
                        interval: float = 1.0,
                        **kwargs: Any,
                    ) -> asyncio.Task[ChatMessage[OutputDataT] | None]:
                        """Run agent continuously in background with prompt or dynamic prompt function.
                
                        Args:
                            prompt: Static prompt or function that generates prompts
                            max_count: Maximum number of runs (None = infinite)
                            interval: Seconds between runs
                            **kwargs: Arguments passed to run()
                        """
                        self._infinite = max_count is None
                
                        async def _continuous() -> ChatMessage[Any]:
                            count = 0
                            self.log.debug("Starting continuous run", max_count=max_count, interval=interval)
                            latest = None
                            while max_count is None or count < max_count:
                                try:
                                    current_prompts = [
                                        call_with_context(p, self.context, **kwargs) if callable(p) else p
                                        for p in prompt
                                    ]
                                    self.log.debug("Generated prompt", iteration=count)
                                    latest = await self.run(*current_prompts, **kwargs)
                                    self.log.debug("Run continuous result", iteration=count)
                
                                    count += 1
                                    await asyncio.sleep(interval)
                                except asyncio.CancelledError:
                                    self.log.debug("Continuous run cancelled")
                                    break
                                except Exception:
                                    count += 1
                                    self.log.exception("Background run failed")
                                    await asyncio.sleep(interval)
                            self.log.debug("Continuous run completed", iterations=count)
                            return latest  # type: ignore[return-value]
                
                        await self.stop()  # Cancel any existing background task
                        task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                        self.log.debug("Started background task", task_name=task.get_name())
                        self._background_task = task
                        return task
                
                    async def stop(self) -> None:
                        """Stop continuous execution if running."""
                        if self._background_task and not self._background_task.done():
                            self._background_task.cancel()
                            await self._background_task
                            self._background_task = None
                
                    async def wait(self) -> ChatMessage[OutputDataT]:
                        """Wait for background execution to complete."""
                        if not self._background_task:
                            msg = "No background task running"
                            raise RuntimeError(msg)
                        if self._infinite:
                            msg = "Cannot wait on infinite execution"
                            raise RuntimeError(msg)
                        try:
                            return await self._background_task
                        finally:
                            self._background_task = None
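
                    # Illustrative usage sketch (editor-added comment): three timed runs in
                    # the background, then wait for the last result (finite runs only;
                    # wait() raises RuntimeError when max_count is None).
                    #
                    #     task = await agent.run_in_background(
                    #         "Check the queue", max_count=3, interval=60.0
                    #     )
                    #     final = await agent.wait()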
                
                    async def share(
                        self,
                        target: Agent[TDeps, Any],
                        *,
                        tools: list[str] | None = None,
                        history: bool | int | None = None,  # bool or number of messages
                        token_limit: int | None = None,
                    ) -> None:
                        """Share capabilities and knowledge with another agent.
                
                        Args:
                            target: Agent to share with
                            tools: List of tool names to share
                            history: Share conversation history:
                                    - True: Share full history
                                    - int: Number of most recent messages to share
                                    - None: Don't share history
                            token_limit: Optional max tokens for history
                
                        Raises:
                            ValueError: If requested items don't exist
                            RuntimeError: If runtime not available for resources
                        """
                        # Share tools if requested
                        for name in tools or []:
                            tool = await self.tools.get_tool(name)
                            meta = {"shared_from": self.name}
                            target.tools.register_tool(tool.callable, metadata=meta)
                
                        # Share history if requested
                        if history:
                            history_text = await self.conversation.format_history(
                                max_tokens=token_limit,
                                num_messages=history if isinstance(history, int) else None,
                            )
                            target.conversation.add_context_message(
                                history_text, source=self.name, metadata={"type": "shared_history"}
                            )
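
                    # Illustrative usage sketch (editor-added comment): hand picked tools and
                    # the five most recent messages to another agent; names are hypothetical.
                    #
                    #     await analyst.share(
                    #         reviewer,
                    #         tools=["search_docs"],
                    #         history=5,
                    #         token_limit=2000,
                    #     )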
                
                    def register_worker(
                        self,
                        worker: MessageNode[Any, Any],
                        *,
                        name: str | None = None,
                        reset_history_on_run: bool = True,
                        pass_message_history: bool = False,
                    ) -> Tool:
                        """Register another agent as a worker tool."""
                        return self.tools.register_worker(
                            worker,
                            name=name,
                            reset_history_on_run=reset_history_on_run,
                            pass_message_history=pass_message_history,
                            parent=self if pass_message_history else None,
                        )
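
                    # Illustrative usage sketch (editor-added comment): let a coordinator
                    # call another agent as a tool, sharing its own message history.
                    #
                    #     tool = coordinator.register_worker(summarizer, pass_message_history=True)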
                
                    def set_model(self, model: ModelType) -> None:
                        """Set the model for this agent.
                
                        Args:
                            model: New model to use (name or instance)
                        """
                        self._model = model
                
                    async def reset(self) -> None:
                        """Reset agent state (conversation history and tool states)."""
                        old_tools = await self.tools.list_tools()
                        self.conversation.clear()
                        await self.tools.reset_states()
                        new_tools = await self.tools.list_tools()
                
                        event = self.AgentReset(
                            agent_name=self.name,
                            previous_tools=old_tools,
                            new_tools=new_tools,
                        )
                        self.agent_reset.emit(event)
                
                    async def get_stats(self) -> MessageStats:
                        """Get message statistics (async version)."""
                        messages = await self.get_message_history()
                        return MessageStats(messages=messages)
                
                    @asynccontextmanager
                    async def temporary_state[T](
                        self,
                        *,
                        system_prompts: list[AnyPromptType] | None = None,
                        output_type: type[T] | None = None,
                        replace_prompts: bool = False,
                        tools: list[ToolType] | None = None,
                        replace_tools: bool = False,
                        history: list[AnyPromptType] | SessionQuery | None = None,
                        replace_history: bool = False,
                        pause_routing: bool = False,
                        model: ModelType | None = None,
                    ) -> AsyncIterator[Self | Agent[T]]:
                        """Temporarily modify agent state.
                
                        Args:
                            system_prompts: Temporary system prompts to use
                            output_type: Temporary output type to use
                            replace_prompts: Whether to replace existing prompts
                            tools: Temporary tools to make available
                            replace_tools: Whether to replace existing tools
                            history: Conversation history (prompts or query)
                            replace_history: Whether to replace existing history
                            pause_routing: Whether to pause message routing
                            model: Temporary model override
                        """
                        old_model = self._model
                        if output_type:
                            old_type = self._output_type
                            self.to_structured(output_type)
                        async with AsyncExitStack() as stack:
                            if system_prompts is not None:  # System prompts
                                await stack.enter_async_context(
                                    self.sys_prompts.temporary_prompt(system_prompts, exclusive=replace_prompts)
                                )
                
                            if tools is not None:  # Tools
                                await stack.enter_async_context(
                                    self.tools.temporary_tools(tools, exclusive=replace_tools)
                                )
                
                            if history is not None:  # History
                                await stack.enter_async_context(
                                    self.conversation.temporary_state(history, replace_history=replace_history)
                                )
                
                            if pause_routing:  # Routing
                                await stack.enter_async_context(self.connections.paused_routing())
                
                            if model is not None:  # Model
                                self._model = model
                
                            try:
                                yield self
                            finally:  # Restore model
                                if model is not None and old_model:
                                    self._model = old_model
                                if output_type:
                                    self.to_structured(old_type)
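
                    # Illustrative usage sketch (editor-added comment): swap in a model and an
                    # extra system prompt for one exchange; state is restored on exit. The
                    # model string is hypothetical.
                    #
                    #     async with agent.temporary_state(
                    #         system_prompts=["Answer in one sentence."],
                    #         model="openai:gpt-4o-mini",
                    #     ) as tmp:
                    #         msg = await tmp.run("Explain HTTP caching")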
                
                    async def validate_against(
                        self,
                        prompt: str,
                        criteria: type[OutputDataT],
                        **kwargs: Any,
                    ) -> bool:
                        """Check if agent's response satisfies stricter criteria."""
                        result = await self.run(prompt, **kwargs)
                        try:
                            criteria.model_validate(result.content.model_dump())  # type: ignore
                        except ValidationError:
                            return False
                        else:
                            return True
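
                    # Illustrative usage sketch (editor-added comment): check whether a run's
                    # structured output also passes a stricter schema. Assumes the agent
                    # already returns a Pydantic model; `StrictAnswer` is hypothetical.
                    #
                    #     ok = await agent.validate_against("Summarize X", StrictAnswer)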
                

                context property writable

                context: AgentContext[TDeps]
                

                Get agent context.

                model_name property

                model_name: str | None
                

                Get the model name in a consistent format.

                name property writable

                name: str
                

                Get agent name.

                AgentReset dataclass

                Emitted when agent is reset.

                Source code in src/llmling_agent/agent/agent.py
                @dataclass(frozen=True)
                class AgentReset:
                    """Emitted when agent is reset."""
                
                    agent_name: AgentName
                    previous_tools: dict[str, bool]
                    new_tools: dict[str, bool]
                    timestamp: datetime = field(default_factory=get_now)
                

                __aenter__ async

                __aenter__() -> Self
                

                Enter async context and set up MCP servers.

                Source code in src/llmling_agent/agent/agent.py
                async def __aenter__(self) -> Self:
                    """Enter async context and set up MCP servers."""
                    try:
                        # Collect all coroutines that need to be run
                        coros: list[Coroutine[Any, Any, Any]] = []
                        coros.append(super().__aenter__())
                        coros.extend(self.conversation.get_initialization_tasks())
                        if self.parallel_init and coros:
                            await asyncio.gather(*coros)
                        else:
                            for coro in coros:
                                await coro
                    except Exception as e:
                        msg = "Failed to initialize agent"
                        raise RuntimeError(msg) from e
                    else:
                        return self
                

                __aexit__ async

                __aexit__(
                    exc_type: type[BaseException] | None,
                    exc_val: BaseException | None,
                    exc_tb: TracebackType | None,
                ) -> None
                

                Exit async context.

                Source code in src/llmling_agent/agent/agent.py
                async def __aexit__(
                    self,
                    exc_type: type[BaseException] | None,
                    exc_val: BaseException | None,
                    exc_tb: TracebackType | None,
                ) -> None:
                    """Exit async context."""
                    await super().__aexit__(exc_type, exc_val, exc_tb)
                

                __and__

                __and__(other: ProcessorCallback[Any] | Team[TDeps] | Agent[TDeps, Any]) -> Team[TDeps]
                
                __and__(other: ProcessorCallback[Any] | Team[Any] | Agent[Any, Any]) -> Team[Any]
                
                __and__(other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                

                Create sequential team using & operator.

                Example

                group = analyzer & planner & executor  # Create group of 3
                group = analyzer & existing_group  # Add to existing group

                Source code in src/llmling_agent/agent/agent.py
                def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                    """Create sequential team using & operator.
                
                    Example:
                        group = analyzer & planner & executor  # Create group of 3
                        group = analyzer & existing_group  # Add to existing group
                    """
                    from llmling_agent.delegation.team import Team
                
                    match other:
                        case Team():
                            return Team([self, *other.nodes])
                        case Callable():
                            agent_2 = Agent.from_callback(other)
                            agent_2.context.pool = self.context.pool
                            return Team([self, agent_2])
                        case MessageNode():
                            return Team([self, other])
                        case _:
                            msg = f"Invalid agent type: {type(other)}"
                            raise ValueError(msg)
                

                __init__

                __init__(
                    name: str = "llmling-agent",
                    *,
                    deps_type: type[TDeps] | None = None,
                    model: ModelType = None,
                    output_type: OutputSpec[OutputDataT] = str,
                    session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                    system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                    description: str | None = None,
                    display_name: str | None = None,
                    tools: Sequence[ToolType | Tool] | None = None,
                    toolsets: Sequence[ResourceProvider] | None = None,
                    mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                    resources: Sequence[PromptType | str] = (),
                    skills_paths: Sequence[JoinablePathLike] | None = None,
                    retries: int = 1,
                    output_retries: int | None = None,
                    end_strategy: EndStrategy = "early",
                    input_provider: InputProvider | None = None,
                    parallel_init: bool = True,
                    debug: bool = False,
                    event_handlers: Sequence[IndividualEventHandler] | None = None,
                    agent_pool: AgentPool[Any] | None = None,
                    tool_mode: ToolMode | None = None,
                    knowledge: Knowledge | None = None,
                    agent_config: AgentConfig | None = None,
                    env: ExecutionEnvironment | None = None
                ) -> None
                

                Initialize agent.

                Parameters:

                Name Type Description Default
                name str

                Identifier for the agent (used for logging and lookups)

                'llmling-agent'
                deps_type type[TDeps] | None

                Type of dependencies to use

                None
                model ModelType

                The default model to use (defaults to GPT-5)

                None
                output_type OutputSpec[OutputDataT]

                The default output type to use (defaults to str)

                str
                session SessionIdType | SessionQuery | MemoryConfig | bool | int

                Memory configuration.
                - None: Default memory config
                - False: Disable message history (max_messages=0)
                - int: Max tokens for memory
                - str/UUID: Session identifier
                - MemoryConfig: Full memory configuration
                - MemoryProvider: Custom memory provider
                - SessionQuery: Session query

                None
                system_prompt AnyPromptType | Sequence[AnyPromptType]

                System prompts for the agent

                ()
                description str | None

                Description of the Agent ("what it can do")

                None
                display_name str | None

                Human-readable display name (falls back to name)

                None
                tools Sequence[ToolType | Tool] | None

                List of tools to register with the agent

                None
                toolsets Sequence[ResourceProvider] | None

                List of toolset resource providers for the agent

                None
                mcp_servers Sequence[str | MCPServerConfig] | None

                MCP servers to connect to

                None
                resources Sequence[PromptType | str]

                Additional resources to load

                ()
                skills_paths Sequence[JoinablePathLike] | None

                Local directories to search for agent-specific skills

                None
                retries int

                Default number of retries for failed operations

                1
                output_retries int | None

                Max retries for result validation (defaults to retries)

                None
                end_strategy EndStrategy

                Strategy for handling tool calls that are requested alongside a final result

                'early'
                input_provider InputProvider | None

                Provider for human input (tool confirmation / HumanProviders)

                None
                parallel_init bool

                Whether to initialize resources in parallel

                True
                debug bool

                Whether to enable debug mode

                False
                event_handlers Sequence[IndividualEventHandler] | None

                Sequence of event handlers to register with the agent

                None
                agent_pool AgentPool[Any] | None

                AgentPool instance for managing agent resources

                None
                tool_mode ToolMode | None

                Tool execution mode (None or "codemode")

                None
                knowledge Knowledge | None

                Knowledge sources for this agent

                None
                agent_config AgentConfig | None

                Agent configuration

                None
                env ExecutionEnvironment | None

                Execution environment for code/command execution and filesystem access

                None
                Source code in src/llmling_agent/agent/agent.py
                def __init__(  # noqa: PLR0915
                    # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                    self,
                    name: str = "llmling-agent",
                    *,
                    deps_type: type[TDeps] | None = None,
                    model: ModelType = None,
                    output_type: OutputSpec[OutputDataT] = str,  # type: ignore[assignment]
                    # context: AgentContext[TDeps] | None = None,
                    session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                    system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                    description: str | None = None,
                    display_name: str | None = None,
                    tools: Sequence[ToolType | Tool] | None = None,
                    toolsets: Sequence[ResourceProvider] | None = None,
                    mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                    resources: Sequence[PromptType | str] = (),
                    skills_paths: Sequence[JoinablePathLike] | None = None,
                    retries: int = 1,
                    output_retries: int | None = None,
                    end_strategy: EndStrategy = "early",
                    input_provider: InputProvider | None = None,
                    parallel_init: bool = True,
                    debug: bool = False,
                    event_handlers: Sequence[IndividualEventHandler] | None = None,
                    agent_pool: AgentPool[Any] | None = None,
                    tool_mode: ToolMode | None = None,
                    knowledge: Knowledge | None = None,
                    agent_config: AgentConfig | None = None,
                    env: ExecutionEnvironment | None = None,
                ) -> None:
                    """Initialize agent.
                
                    Args:
                        name: Identifier for the agent (used for logging and lookups)
                        deps_type: Type of dependencies to use
                        model: The default model to use (defaults to GPT-5)
                        output_type: The default output type to use (defaults to str)
                        session: Memory configuration.
                            - None: Default memory config
                            - False: Disable message history (max_messages=0)
                            - int: Max tokens for memory
                            - str/UUID: Session identifier
                            - MemoryConfig: Full memory configuration
                            - MemoryProvider: Custom memory provider
                            - SessionQuery: Session query
                
                        system_prompt: System prompts for the agent
                        description: Description of the Agent ("what it can do")
                        display_name: Human-readable display name (falls back to name)
                        tools: List of tools to register with the agent
                        toolsets: List of toolset resource providers for the agent
                        mcp_servers: MCP servers to connect to
                        resources: Additional resources to load
                        skills_paths: Local directories to search for agent-specific skills
                        retries: Default number of retries for failed operations
                        output_retries: Max retries for result validation (defaults to retries)
                        end_strategy: Strategy for handling tool calls that are requested alongside
                                      a final result
                        input_provider: Provider for human input (tool confirmation / HumanProviders)
                        parallel_init: Whether to initialize resources in parallel
                        debug: Whether to enable debug mode
                        event_handlers: Sequence of event handlers to register with the agent
                        agent_pool: AgentPool instance for managing agent resources
                        tool_mode: Tool execution mode (None or "codemode")
                        knowledge: Knowledge sources for this agent
                        agent_config: Agent configuration
                        env: Execution environment for code/command execution and filesystem access
                    """
                    from llmling_agent.agent import AgentContext
                    from llmling_agent.agent.conversation import MessageHistory
                    from llmling_agent.agent.interactions import Interactions
                    from llmling_agent.agent.sys_prompts import SystemPrompts
                    from llmling_agent.models.agents import AgentConfig
                    from llmling_agent.models.manifest import AgentsManifest
                    from llmling_agent.prompts.conversion_manager import ConversionManager
                    from llmling_agent_config.session import MemoryConfig
                
                    self.task_manager = TaskManager()
                    self._infinite = False
                    self.deps_type = deps_type
                    self._manifest = agent_pool.manifest if agent_pool else AgentsManifest()
                    ctx = AgentContext[TDeps](
                        node_name=name,
                        definition=self._manifest,
                        config=agent_config or AgentConfig(name=name),
                        input_provider=input_provider,
                        pool=agent_pool,
                    )
                    self._context = ctx
                    # TODO: use to_structured with tool_name / description?
                    self._output_type = to_type(output_type)
                    memory_cfg = (
                        session if isinstance(session, MemoryConfig) else MemoryConfig.from_value(session)
                    )
                    # Initialize progress queue before super().__init__()
                    self._event_queue = asyncio.Queue[RichAgentStreamEvent[Any]]()
                    mcp_servers = list(mcp_servers) if mcp_servers else []
                    if ctx and (cfg := ctx.config) and cfg.mcp_servers:
                        mcp_servers.extend(cfg.get_mcp_servers())
                    super().__init__(
                        name=name,
                        description=description,
                        display_name=display_name,
                        enable_logging=memory_cfg.enable,
                        mcp_servers=mcp_servers,
                        agent_pool=agent_pool,
                        event_configs=agent_config.triggers if agent_config else [],
                    )
                
                    # Initialize tool manager
                    self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                    all_tools = list(tools or [])
                    effective_tool_mode = tool_mode
                    self.tools = ToolManager(all_tools, tool_mode=effective_tool_mode)
                
                    # MCP manager will be initialized in __aenter__ and providers added there
                    for toolset_provider in toolsets or []:
                        self.tools.add_provider(toolset_provider)
                    aggregating_provider = self.mcp.get_aggregating_provider()
                    self.tools.add_provider(aggregating_provider)
                
                    # # Add local skills provider if directories specified
                    # if skills_paths:
                    #     from llmling_agent.resource_providers.skills import SkillsResourceProvider
                
                    #     skills_paths = [
                    #         Path(d) if isinstance(d, str) else d for d in skills_paths
                    #     ]
                    #     local_skills_provider = SkillsResourceProvider(
                    #         skills_dirs=skills_paths,
                    #         name=f"{name}_local_skills",
                    #         owner=name,
                    #     )
                    #     self.tools.add_provider(local_skills_provider)
                
                    # Initialize conversation manager
                    resources = list(resources)
                    effective_knowledge = knowledge or (ctx.config.knowledge if ctx else None)
                    if effective_knowledge:
                        resources.extend(effective_knowledge.get_resources())
                    storage = agent_pool.storage if agent_pool else StorageManager(self._manifest.storage)
                    self.conversation = MessageHistory(
                        storage=storage,
                        converter=ConversionManager(config=self._manifest.conversion),
                        session_config=memory_cfg,
                        resources=resources,
                    )
                    self._model = model
                    self._retries = retries
                    self._end_strategy: EndStrategy = end_strategy
                    self._output_retries = output_retries
                    # init variables
                    self._debug = debug
                    self.parallel_init = parallel_init
                    self._name = name
                    self._background_task: asyncio.Task[ChatMessage[Any]] | None = None
                    self.talk = Interactions(self)
                    from anyenv.code_execution import LocalExecutionEnvironment
                
                    self.env = env or LocalExecutionEnvironment()
                
                    # Set up system prompts
                    all_prompts: list[AnyPromptType] = []
                    if isinstance(system_prompt, (list, tuple)):
                        all_prompts.extend(system_prompt)
                    elif system_prompt:
                        all_prompts.append(system_prompt)
                    self.sys_prompts = SystemPrompts(all_prompts, prompt_manager=ctx.prompt_manager)
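
                A minimal construction sketch; the model string and the tool function are
                illustrative placeholders, not taken from the source above:

                    async def get_weather(city: str) -> str:
                        """Return a canned weather report (placeholder tool)."""
                        return f"Sunny in {city}"

                    agent = Agent(
                        name="assistant",
                        model="openai:gpt-4o",  # assumed model identifier
                        system_prompt="You are a helpful assistant.",
                        tools=[get_weather],
                        session=False,  # disable message history
                    )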
                

                from_callback classmethod

                from_callback(
                    callback: Callable[..., Awaitable[TResult]], *, name: str | None = None, **kwargs: Any
                ) -> Agent[None, TResult]
                
                from_callback(
                    callback: Callable[..., TResult], *, name: str | None = None, **kwargs: Any
                ) -> Agent[None, TResult]
                
                from_callback(
                    callback: ProcessorCallback[Any], *, name: str | None = None, **kwargs: Any
                ) -> Agent[None, Any]
                

                Create an agent from a processing callback.

                Parameters:

                    callback (ProcessorCallback[Any], required):
                        Function to process messages. Can be:
                        - sync or async
                        - with or without context
                        - must return str for pipeline compatibility
                    name (str | None, default None):
                        Optional name for the agent
                    kwargs (Any, default {}):
                        Additional arguments for agent
                Source code in src/llmling_agent/agent/agent.py
                @classmethod
                def from_callback(
                    cls,
                    callback: ProcessorCallback[Any],
                    *,
                    name: str | None = None,
                    **kwargs: Any,
                ) -> Agent[None, Any]:
                    """Create an agent from a processing callback.
                
                    Args:
                        callback: Function to process messages. Can be:
                            - sync or async
                            - with or without context
                            - must return str for pipeline compatibility
                        name: Optional name for the agent
                        kwargs: Additional arguments for agent
                    """
                    name = name or callback.__name__ or "processor"
                    model = function_to_model(callback)
                    return_type = _typing_extra.get_function_type_hints(callback).get("return")
                    if (  # If async, unwrap from Awaitable
                        return_type
                        and hasattr(return_type, "__origin__")
                        and return_type.__origin__ is Awaitable
                    ):
                        return_type = return_type.__args__[0]
                    return Agent(model=model, name=name, output_type=return_type or str, **kwargs)  # pyright: ignore[reportReturnType]
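
                For illustration, a sketch of wrapping a plain callable (the shout function
                is hypothetical):

                    def shout(text: str) -> str:
                        """Uppercase the incoming message."""
                        return text.upper()

                    shouter = Agent.from_callback(shout, name="shouter")
                    # await shouter.run("hello") should yield a ChatMessage containing "HELLO"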
                

                get_agentlet async

                get_agentlet(
                    tool_choice: str | list[str] | None,
                    model: ModelType,
                    output_type: type[AgentOutputType] | None,
                    input_provider: InputProvider | None = None,
                ) -> Agent[TDeps, AgentOutputType]
                

                Create pydantic-ai agent from current state.

                Source code in src/llmling_agent/agent/agent.py
                async def get_agentlet[AgentOutputType](
                    self,
                    tool_choice: str | list[str] | None,
                    model: ModelType,
                    output_type: type[AgentOutputType] | None,
                    input_provider: InputProvider | None = None,
                ) -> PydanticAgent[TDeps, AgentOutputType]:
                    """Create pydantic-ai agent from current state."""
                    # Monkey patch pydantic-ai to recognize AgentContext
                
                    from llmling_agent.agent.tool_wrapping import wrap_tool
                
                    tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                    final_type = to_type(output_type) if output_type not in [None, str] else self._output_type
                    actual_model = model or self._model
                    model_ = infer_model(actual_model) if isinstance(actual_model, str) else actual_model
                    agent = PydanticAgent(
                        name=self.name,
                        model=model_,
                        instructions=await self.sys_prompts.format_system_prompt(self),
                        retries=self._retries,
                        end_strategy=self._end_strategy,
                        output_retries=self._output_retries,
                        deps_type=self.deps_type or NoneType,
                        output_type=final_type,
                    )
                
                    context_for_tools = (
                        self.context
                        if input_provider is None
                        else replace(self.context, input_provider=input_provider)
                    )
                
                    for tool in tools:
                        wrapped = wrap_tool(tool, context_for_tools)
                        if get_argument_key(wrapped, RunContext):
                            logger.info("Registering tool: with context", tool_name=tool.name)
                            agent.tool(wrapped)
                        else:
                            logger.info("Registering tool: no context", tool_name=tool.name)
                            agent.tool_plain(wrapped)
                
                    return agent  # type: ignore[return-value]
                

                get_stats async

                get_stats() -> MessageStats
                

                Get message statistics (async version).

                Source code in src/llmling_agent/agent/agent.py
                async def get_stats(self) -> MessageStats:
                    """Get message statistics (async version)."""
                    messages = await self.get_message_history()
                    return MessageStats(messages=messages)
                

                is_busy

                is_busy() -> bool
                

                Check if agent is currently processing tasks.

                Source code in src/llmling_agent/agent/agent.py
                def is_busy(self) -> bool:
                    """Check if agent is currently processing tasks."""
                    return bool(self.task_manager._pending_tasks or self._background_task)
                

                register_worker

                register_worker(
                    worker: MessageNode[Any, Any],
                    *,
                    name: str | None = None,
                    reset_history_on_run: bool = True,
                    pass_message_history: bool = False
                ) -> Tool
                

                Register another agent as a worker tool.

                Source code in src/llmling_agent/agent/agent.py
                def register_worker(
                    self,
                    worker: MessageNode[Any, Any],
                    *,
                    name: str | None = None,
                    reset_history_on_run: bool = True,
                    pass_message_history: bool = False,
                ) -> Tool:
                    """Register another agent as a worker tool."""
                    return self.tools.register_worker(
                        worker,
                        name=name,
                        reset_history_on_run=reset_history_on_run,
                        pass_message_history=pass_message_history,
                        parent=self if pass_message_history else None,
                    )
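
                A usage sketch, assuming researcher and writer are existing Agent instances:

                    research_tool = writer.register_worker(
                        researcher,
                        name="research",            # tool name exposed to the model
                        reset_history_on_run=True,  # start each delegation fresh
                    )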
                

                reset async

                reset() -> None
                

                Reset agent state (conversation history and tool states).

                Source code in src/llmling_agent/agent/agent.py
                async def reset(self) -> None:
                    """Reset agent state (conversation history and tool states)."""
                    old_tools = await self.tools.list_tools()
                    self.conversation.clear()
                    await self.tools.reset_states()
                    new_tools = await self.tools.list_tools()
                
                    event = self.AgentReset(
                        agent_name=self.name,
                        previous_tools=old_tools,
                        new_tools=new_tools,
                    )
                    self.agent_reset.emit(event)
                

                run async

                run(
                    *prompts: PromptCompatible | ChatMessage[Any],
                    output_type: None = None,
                    model: ModelType = None,
                    store_history: bool = True,
                    tool_choice: str | list[str] | None = None,
                    usage_limits: UsageLimits | None = None,
                    message_id: str | None = None,
                    conversation_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    deps: TDeps | None = None,
                    input_provider: InputProvider | None = None,
                    wait_for_connections: bool | None = None,
                    instructions: str | None = None
                ) -> ChatMessage[OutputDataT]
                
                run(
                    *prompts: PromptCompatible | ChatMessage[Any],
                    output_type: type[OutputTypeT],
                    model: ModelType = None,
                    store_history: bool = True,
                    tool_choice: str | list[str] | None = None,
                    usage_limits: UsageLimits | None = None,
                    message_id: str | None = None,
                    conversation_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    deps: TDeps | None = None,
                    input_provider: InputProvider | None = None,
                    wait_for_connections: bool | None = None,
                    instructions: str | None = None
                ) -> ChatMessage[OutputTypeT]
                
                run(
                    *prompts: PromptCompatible | ChatMessage[Any],
                    output_type: type[Any] | None = None,
                    model: ModelType = None,
                    store_history: bool = True,
                    tool_choice: str | list[str] | None = None,
                    usage_limits: UsageLimits | None = None,
                    message_id: str | None = None,
                    conversation_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    deps: TDeps | None = None,
                    input_provider: InputProvider | None = None,
                    wait_for_connections: bool | None = None,
                    instructions: str | None = None
                ) -> ChatMessage[Any]
                

                Run agent with prompt and get response.

                Parameters:

                    prompts (PromptCompatible | ChatMessage[Any], default ()):
                        User query or instruction
                    output_type (type[Any] | None, default None):
                        Optional type for structured responses
                    model (ModelType, default None):
                        Optional model override
                    store_history (bool, default True):
                        Whether the message exchange should be added to the context window
                    tool_choice (str | list[str] | None, default None):
                        Filter tool choice by name
                    usage_limits (UsageLimits | None, default None):
                        Optional usage limits for the model
                    message_id (str | None, default None):
                        Optional message id for the returned message.
                        Automatically generated if not provided.
                    conversation_id (str | None, default None):
                        Optional conversation id for the returned message.
                    message_history (MessageHistory | None, default None):
                        Optional MessageHistory object to use instead of the agent's own conversation
                    deps (TDeps | None, default None):
                        Optional dependencies for the agent
                    input_provider (InputProvider | None, default None):
                        Optional input provider for the agent
                    wait_for_connections (bool | None, default None):
                        Whether to wait for connected agents to complete
                    instructions (str | None, default None):
                        Optional instructions to override the agent's system prompt

                Returns:

                    ChatMessage[Any]: Result containing response and run information

                Raises:

                    UnexpectedModelBehavior: If the model fails or behaves unexpectedly

                Source code in src/llmling_agent/agent/agent.py
                @method_spawner  # type: ignore[misc]
                async def run(
                    self,
                    *prompts: PromptCompatible | ChatMessage[Any],
                    output_type: type[Any] | None = None,
                    model: ModelType = None,
                    store_history: bool = True,
                    tool_choice: str | list[str] | None = None,
                    usage_limits: UsageLimits | None = None,
                    message_id: str | None = None,
                    conversation_id: str | None = None,
                    message_history: MessageHistory | None = None,
                    deps: TDeps | None = None,
                    input_provider: InputProvider | None = None,
                    wait_for_connections: bool | None = None,
                    instructions: str | None = None,
                ) -> ChatMessage[Any]:
                    """Run agent with prompt and get response.
                
                    Args:
                        prompts: User query or instruction
                        output_type: Optional type for structured responses
                        model: Optional model override
                        store_history: Whether the message exchange should be added to the
                                        context window
                        tool_choice: Filter tool choice by name
                        usage_limits: Optional usage limits for the model
                        message_id: Optional message id for the returned message.
                                    Automatically generated if not provided.
                        conversation_id: Optional conversation id for the returned message.
                    message_history: Optional MessageHistory object to
                                     use instead of the agent's own conversation
                        deps: Optional dependencies for the agent
                        input_provider: Optional input provider for the agent
                        wait_for_connections: Whether to wait for connected agents to complete
                        instructions: Optional instructions to override the agent's system prompt
                
                    Returns:
                        Result containing response and run information
                
                    Raises:
                        UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                    """
                    conversation = message_history if message_history is not None else self.conversation
                    # Prepare prompts and create user message
                    user_msg, processed_prompts, original_message = await prepare_prompts(*prompts)
                    self.message_received.emit(user_msg)
                    message_id = message_id or str(uuid4())
                    start_time = time.perf_counter()
                    message_history_list = list(conversation.chat_messages)
                    try:
                        agentlet = await self.get_agentlet(tool_choice, model, output_type, input_provider)
                        converted_prompts = await convert_prompts(processed_prompts)
                
                        async def event_distributor(  # Merge internal and external event handlers
                            ctx: RunContext[Any], events: AsyncIterable[AgentStreamEvent]
                        ) -> None:
                            async for event in events:
                                # Check for queued custom events and distribute them first
                                while not self._event_queue.empty():
                                    try:
                                        custom_event = self._event_queue.get_nowait()
                                        for handler in self.event_handler._wrapped_handlers:
                                            await handler(ctx, custom_event)
                                    except asyncio.QueueEmpty:
                                        break
                
                                # Then distribute the original event
                                for handler in self.event_handler._wrapped_handlers:
                                    await handler(ctx, event)
                
                        result = await agentlet.run(
                            [i if isinstance(i, str) else i.to_pydantic_ai() for i in converted_prompts],
                            deps=deps,  # type: ignore[arg-type]
                            message_history=[m for run in message_history_list for m in run.to_pydantic_ai()],
                            usage_limits=usage_limits,
                            event_stream_handler=event_distributor,
                            instructions=instructions,
                        )
                
                        response_time = time.perf_counter() - start_time
                        message = await ChatMessage.from_run_result(
                            result,
                            agent_name=self.name,
                            message_id=message_id,
                            conversation_id=user_msg.conversation_id,
                            response_time=response_time,
                        )
                
                    except Exception as e:
                        self.log.exception("Agent run failed")
                        self.run_failed.emit("Agent run failed", e)
                        raise
                
                    # Apply forwarding logic before storing history
                    if original_message:
                        message = message.forwarded(original_message)
                
                    # Handle history storage - add to the conversation we're using
                    if store_history:
                        conversation.add_chat_messages([user_msg, message])
                
                    # Finalize and route message (forwarding already applied)
                    return await finalize_message(
                        message,
                        user_msg,
                        self,
                        self.connections,
                        None,  # forwarding already applied above
                        wait_for_connections,
                    )
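
                A sketch of a structured run; the Answer model is illustrative:

                    from pydantic import BaseModel

                    class Answer(BaseModel):
                        text: str
                        confidence: float

                    msg = await agent.run(
                        "Is the sky blue?",
                        output_type=Answer,    # structured response
                        store_history=False,   # keep the context window untouched
                    )
                    print(msg.content.confidence)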
                

                run_in_background async

                run_in_background(
                    *prompt: PromptCompatible, max_count: int | None = None, interval: float = 1.0, **kwargs: Any
                ) -> Task[ChatMessage[OutputDataT] | None]
                

                Run agent continuously in background with prompt or dynamic prompt function.

                Parameters:

                    prompt (PromptCompatible, default ()):
                        Static prompt or function that generates prompts
                    max_count (int | None, default None):
                        Maximum number of runs (None = infinite)
                    interval (float, default 1.0):
                        Seconds between runs
                    **kwargs (Any, default {}):
                        Arguments passed to run()
                Source code in src/llmling_agent/agent/agent.py
                async def run_in_background(
                    self,
                    *prompt: PromptCompatible,
                    max_count: int | None = None,
                    interval: float = 1.0,
                    **kwargs: Any,
                ) -> asyncio.Task[ChatMessage[OutputDataT] | None]:
                    """Run agent continuously in background with prompt or dynamic prompt function.
                
                    Args:
                        prompt: Static prompt or function that generates prompts
                        max_count: Maximum number of runs (None = infinite)
                        interval: Seconds between runs
                        **kwargs: Arguments passed to run()
                    """
                    self._infinite = max_count is None
                
                    async def _continuous() -> ChatMessage[Any]:
                        count = 0
                        self.log.debug("Starting continuous run", max_count=max_count, interval=interval)
                        latest = None
                        while max_count is None or count < max_count:
                            try:
                                current_prompts = [
                                    call_with_context(p, self.context, **kwargs) if callable(p) else p
                                    for p in prompt
                                ]
                                self.log.debug("Generated prompt", iteration=count)
                                latest = await self.run(current_prompts, **kwargs)
                                self.log.debug("Run continuous result", iteration=count)
                
                                count += 1
                                await asyncio.sleep(interval)
                            except asyncio.CancelledError:
                                self.log.debug("Continuous run cancelled")
                                break
                            except Exception:
                                count += 1
                                self.log.exception("Background run failed")
                                await asyncio.sleep(interval)
                        self.log.debug("Continuous run completed", iterations=count)
                        return latest  # type: ignore[return-value]
                
                    await self.stop()  # Cancel any existing background task
                    task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                    self.log.debug("Started background task", task_name=task.get_name())
                    self._background_task = task
                    return task
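
                A sketch of a bounded background loop; callables passed as prompts are
                invoked with the agent context (see call_with_context above):

                    task = await agent.run_in_background(
                        "Summarize the current state",
                        max_count=3,     # stop after three iterations
                        interval=30.0,   # seconds between runs
                    )
                    latest = await task  # ChatMessage from the last run, or None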
                

                run_iter async

                run_iter(
                    *prompt_groups: Sequence[PromptCompatible],
                    output_type: type[OutputDataT] | None = None,
                    model: ModelType = None,
                    store_history: bool = True,
                    wait_for_connections: bool | None = None
                ) -> AsyncIterator[ChatMessage[OutputDataT]]
                

                Run agent sequentially on multiple prompt groups.

                Parameters:

                    prompt_groups (Sequence[PromptCompatible], default ()):
                        Groups of prompts to process sequentially
                    output_type (type[OutputDataT] | None, default None):
                        Optional type for structured responses
                    model (ModelType, default None):
                        Optional model override
                    store_history (bool, default True):
                        Whether to store in conversation history
                    wait_for_connections (bool | None, default None):
                        Whether to wait for connected agents

                Yields:

                    AsyncIterator[ChatMessage[OutputDataT]]: Response messages in sequence

                Example

                    questions = [
                        ["What is your name?"],
                        ["How old are you?", image1],
                        ["Describe this image", image2],
                    ]
                    async for response in agent.run_iter(*questions):
                        print(response.content)

                Source code in src/llmling_agent/agent/agent.py
                async def run_iter(
                    self,
                    *prompt_groups: Sequence[PromptCompatible],
                    output_type: type[OutputDataT] | None = None,
                    model: ModelType = None,
                    store_history: bool = True,
                    wait_for_connections: bool | None = None,
                ) -> AsyncIterator[ChatMessage[OutputDataT]]:
                    """Run agent sequentially on multiple prompt groups.
                
                    Args:
                        prompt_groups: Groups of prompts to process sequentially
                        output_type: Optional type for structured responses
                        model: Optional model override
                        store_history: Whether to store in conversation history
                        wait_for_connections: Whether to wait for connected agents
                
                    Yields:
                        Response messages in sequence
                
                    Example:
                        questions = [
                            ["What is your name?"],
                            ["How old are you?", image1],
                            ["Describe this image", image2],
                        ]
                        async for response in agent.run_iter(*questions):
                            print(response.content)
                    """
                    for prompts in prompt_groups:
                        response = await self.run(
                            *prompts,
                            output_type=output_type,
                            model=model,
                            store_history=store_history,
                            wait_for_connections=wait_for_connections,
                        )
                        yield response  # pyright: ignore
                

                run_job async

                run_job(
                    job: Job[TDeps, str | None], *, store_history: bool = True, include_agent_tools: bool = True
                ) -> ChatMessage[OutputDataT]
                

                Execute a pre-defined task.

                Parameters:

                    job (Job[TDeps, str | None], required):
                        Job configuration to execute
                    store_history (bool, default True):
                        Whether the message exchange should be added to the context window
                    include_agent_tools (bool, default True):
                        Whether to include agent tools

                Returns:

                    Job execution result

                Raises:

                    JobError: If task execution fails
                    ValueError: If task configuration is invalid

                Source code in src/llmling_agent/agent/agent.py
                @method_spawner
                async def run_job(
                    self,
                    job: Job[TDeps, str | None],
                    *,
                    store_history: bool = True,
                    include_agent_tools: bool = True,
                ) -> ChatMessage[OutputDataT]:
                    """Execute a pre-defined task.
                
                    Args:
                        job: Job configuration to execute
                        store_history: Whether the message exchange should be added to the
                                       context window
                    include_agent_tools: Whether to include agent tools

                Returns:
                        Job execution result
                
                    Raises:
                        JobError: If task execution fails
                        ValueError: If task configuration is invalid
                    """
                    from llmling_agent.tasks import JobError
                
                    if job.required_dependency is not None:  # noqa: SIM102
                        if not isinstance(self.context.data, job.required_dependency):
                            msg = (
                                f"Agent dependencies ({type(self.context.data)}) "
                                f"don't match job requirement ({job.required_dependency})"
                            )
                            raise JobError(msg)
                
                    # Load task knowledge
                    if job.knowledge:
                        # Add knowledge sources to context
                        for source in list(job.knowledge.paths):
                            await self.conversation.load_context_source(source)
                        for prompt in job.knowledge.prompts:
                            await self.conversation.load_context_source(prompt)
                    try:
                        # Register task tools temporarily
                        tools = job.get_tools()
                        async with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                            # Execute job with job-specific tools
                            return await self.run(await job.get_prompt(), store_history=store_history)
                
                    except Exception as e:
                        self.log.exception("Task execution failed", error=str(e))
                        msg = f"Task execution failed: {e}"
                        raise JobError(msg) from e
                

                run_stream async

                run_stream(
                    *prompt: PromptCompatible,
                    output_type: type[OutputDataT] | None = None,
                    model: ModelType = None,
                    tool_choice: str | list[str] | None = None,
                    store_history: bool = True,
                    usage_limits: UsageLimits | None = None,
                    message_id: str | None = None,
                    conversation_id: str | None = None,
                    messages: list[ChatMessage[Any]] | None = None,
                    input_provider: InputProvider | None = None,
                    wait_for_connections: bool | None = None,
                    deps: TDeps | None = None,
                    instructions: str | None = None
                ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]
                

                Run agent with prompt and get a streaming response.

                Parameters:

                    prompt (PromptCompatible, default ()):
                        User query or instruction
                    output_type (type[OutputDataT] | None, default None):
                        Optional type for structured responses
                    model (ModelType, default None):
                        Optional model override
                    tool_choice (str | list[str] | None, default None):
                        Filter tool choice by name
                    store_history (bool, default True):
                        Whether the message exchange should be added to the context window
                    usage_limits (UsageLimits | None, default None):
                        Optional usage limits for the model
                    message_id (str | None, default None):
                        Optional message id for the returned message.
                        Automatically generated if not provided.
                    conversation_id (str | None, default None):
                        Optional conversation id for the returned message.
                    messages (list[ChatMessage[Any]] | None, default None):
                        Optional list of messages to replace the conversation history
                    input_provider (InputProvider | None, default None):
                        Optional input provider for the agent
                    wait_for_connections (bool | None, default None):
                        Whether to wait for connected agents to complete
                    deps (TDeps | None, default None):
                        Optional dependencies for the agent
                    instructions (str | None, default None):
                        Optional instructions to override the agent's system prompt

                Returns:

                    An async iterator yielding streaming events with the final message embedded.

                Raises:

                    UnexpectedModelBehavior: If the model fails or behaves unexpectedly

                Source code in src/llmling_agent/agent/agent.py
                @method_spawner
                async def run_stream(
                    self,
                    *prompt: PromptCompatible,
                    output_type: type[OutputDataT] | None = None,
                    model: ModelType = None,
                    tool_choice: str | list[str] | None = None,
                    store_history: bool = True,
                    usage_limits: UsageLimits | None = None,
                    message_id: str | None = None,
                    conversation_id: str | None = None,
                    messages: list[ChatMessage[Any]] | None = None,
                    input_provider: InputProvider | None = None,
                    wait_for_connections: bool | None = None,
                    deps: TDeps | None = None,
                    instructions: str | None = None,
                ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
                    """Run agent with prompt and get a streaming response.
                
                    Args:
                        prompt: User query or instruction
                        output_type: Optional type for structured responses
                        model: Optional model override
                        tool_choice: Filter tool choice by name
                        store_history: Whether the message exchange should be added to the
                                       context window
                        usage_limits: Optional usage limits for the model
                        message_id: Optional message id for the returned message.
                                    Automatically generated if not provided.
                        conversation_id: Optional conversation id for the returned message.
                        messages: Optional list of messages to replace the conversation history
                        input_provider: Optional input provider for the agent
                        wait_for_connections: Whether to wait for connected agents to complete
                        deps: Optional dependencies for the agent
                    instructions: Optional instructions to override the agent's system prompt

                Returns:
                        An async iterator yielding streaming events with final message embedded.
                
                    Raises:
                        UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                    """
                    message_id = message_id or str(uuid4())
                    run_id = str(uuid4())
                    user_msg, prompts, _original_message = await prepare_prompts(*prompt)
                    self.message_received.emit(user_msg)
                    start_time = time.perf_counter()
                    message_history = messages if messages is not None else self.conversation.get_history()
                    yield RunStartedEvent(thread_id=self.conversation_id, run_id=run_id, agent_name=self.name)
                    try:
                        agentlet = await self.get_agentlet(tool_choice, model, output_type, input_provider)
                        content = await convert_prompts(prompts)
                        response_msg: ChatMessage[Any] | None = None
                        # Create tool dict for signal emission
                        converted = [i if isinstance(i, str) else i.to_pydantic_ai() for i in content]
                        stream_events = agentlet.run_stream_events(
                            converted,
                            deps=deps,  # type: ignore[arg-type]
                            message_history=[m for run in message_history for m in run.to_pydantic_ai()],
                            usage_limits=usage_limits,
                            instructions=instructions,
                        )
                
                        # Stream events through merge_queue for progress events
                        async with merge_queue_into_iterator(stream_events, self._event_queue) as events:
                            # Track tool call starts to combine with results later
                            pending_tcs: dict[str, BaseToolCallPart] = {}
                            async for event in events:  # Call event handlers for all events
                                for handler in self.event_handler._wrapped_handlers:
                                    await handler(None, event)
                
                                yield event  # type: ignore[misc]
                                match event:
                                    case (
                                        PartStartEvent(part=BaseToolCallPart() as tool_part)
                                        | FunctionToolCallEvent(part=tool_part)
                                    ):
                                        # Store tool call start info for later combination with result
                                        pending_tcs[tool_part.tool_call_id] = tool_part
                                    case FunctionToolResultEvent(tool_call_id=call_id) as result_event:
                                        # Check if we have a pending tool call to combine with
                                        if call_info := pending_tcs.pop(call_id, None):
                                            # Create and yield combined event
                                            combined_event = ToolCallCompleteEvent(
                                                tool_name=call_info.tool_name,
                                                tool_call_id=call_id,
                                                tool_input=call_info.args_as_dict(),
                                                tool_result=result_event.result.content
                                                if isinstance(result_event.result, ToolReturnPart)
                                                else result_event.result,
                                                agent_name=self.name,
                                                message_id=message_id,
                                            )
                                            yield combined_event
                                    case AgentRunResultEvent():
                                        # Capture final result data, Build final response message
                                        response_time = time.perf_counter() - start_time
                                        response_msg = await ChatMessage.from_run_result(
                                            event.result,
                                            agent_name=self.name,
                                            message_id=message_id,
                                            conversation_id=conversation_id,
                                            response_time=response_time,
                                        )
                
                        # Send additional enriched completion event
                        assert response_msg
                        yield StreamCompleteEvent(message=response_msg)  # pyright: ignore[reportReturnType]
                        self.message_sent.emit(response_msg)
                        await self.log_message(response_msg)
                        if store_history:
                            self.conversation.add_chat_messages([user_msg, response_msg])
                        await self.connections.route_message(response_msg, wait=wait_for_connections)
                
                    except Exception as e:
                        self.log.exception("Agent stream failed")
                        self.run_failed.emit("Agent stream failed", e)
                        raise
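
                A consumption sketch, matching on event types emitted by the stream above:

                    async for event in agent.run_stream("Tell me a story"):
                        match event:
                            case ToolCallCompleteEvent(tool_name=name, tool_result=result):
                                print(f"{name} -> {result}")
                            case StreamCompleteEvent(message=msg):
                                print("final:", msg.content)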
                

                set_model

                set_model(model: ModelType) -> None
                

                Set the model for this agent.

                Parameters:

model (ModelType): New model to use (name or instance). Required.
Source code in src/llmling_agent/agent/agent.py, lines 1074-1081
                def set_model(self, model: ModelType) -> None:
                    """Set the model for this agent.
                
                    Args:
                        model: New model to use (name or instance)
                
                    """
                    self._model = model
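
A one-line usage sketch (the model name is illustrative):

    agent.set_model("openai:gpt-4o-mini")  # accepts a name or a model instance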
                

                share async

                share(
                    target: Agent[TDeps, Any],
                    *,
                    tools: list[str] | None = None,
                    history: bool | int | None = None,
                    token_limit: int | None = None
                ) -> None
                

                Share capabilities and knowledge with another agent.

                Parameters:

target (Agent[TDeps, Any]): Agent to share with. Required.
tools (list[str] | None): List of tool names to share. Default: None.
history (bool | int | None): Share conversation history. True shares the full
    history, an int shares that many most recent messages, and None shares
    nothing. Default: None.
token_limit (int | None): Optional max tokens for history. Default: None.

                Raises:

ValueError: If requested items don't exist.
RuntimeError: If runtime is not available for resources.

Source code in src/llmling_agent/agent/agent.py, lines 1018-1055
                async def share(
                    self,
                    target: Agent[TDeps, Any],
                    *,
                    tools: list[str] | None = None,
                    history: bool | int | None = None,  # bool or number of messages
                    token_limit: int | None = None,
                ) -> None:
                    """Share capabilities and knowledge with another agent.
                
                    Args:
                        target: Agent to share with
                        tools: List of tool names to share
                        history: Share conversation history:
                                - True: Share full history
                                - int: Number of most recent messages to share
                                - None: Don't share history
                        token_limit: Optional max tokens for history
                
                    Raises:
                        ValueError: If requested items don't exist
                        RuntimeError: If runtime not available for resources
                    """
                    # Share tools if requested
                    for name in tools or []:
                        tool = await self.tools.get_tool(name)
                        meta = {"shared_from": self.name}
                        target.tools.register_tool(tool.callable, metadata=meta)
                
                    # Share history if requested
                    if history:
                        history_text = await self.conversation.format_history(
                            max_tokens=token_limit,
                            num_messages=history if isinstance(history, int) else None,
                        )
                        target.conversation.add_context_message(
                            history_text, source=self.name, metadata={"type": "shared_history"}
                        )
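
A usage sketch, assuming two existing agents named `planner` and `executor` (both names illustrative):

    # Share two registered tools plus the 10 most recent messages, capped by tokens.
    await planner.share(
        executor,
        tools=["search", "read_file"],  # must already be registered on planner
        history=10,
        token_limit=2000,
    )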
                

                stop async

                stop() -> None
                

                Stop continuous execution if running.

Source code in src/llmling_agent/agent/agent.py, lines 998-1003
                async def stop(self) -> None:
                    """Stop continuous execution if running."""
                    if self._background_task and not self._background_task.done():
                        self._background_task.cancel()
                        await self._background_task
                        self._background_task = None
                

                temporary_state async

                temporary_state(
                    *,
                    system_prompts: list[AnyPromptType] | None = None,
                    output_type: type[T] | None = None,
                    replace_prompts: bool = False,
                    tools: list[ToolType] | None = None,
                    replace_tools: bool = False,
                    history: list[AnyPromptType] | SessionQuery | None = None,
                    replace_history: bool = False,
                    pause_routing: bool = False,
                    model: ModelType | None = None
                ) -> AsyncIterator[Self | Agent[T]]
                

                Temporarily modify agent state.

                Parameters:

system_prompts (list[AnyPromptType] | None): Temporary system prompts to use. Default: None.
output_type (type[T] | None): Temporary output type to use. Default: None.
replace_prompts (bool): Whether to replace existing prompts. Default: False.
tools (list[ToolType] | None): Temporary tools to make available. Default: None.
replace_tools (bool): Whether to replace existing tools. Default: False.
history (list[AnyPromptType] | SessionQuery | None): Conversation history (prompts or query). Default: None.
replace_history (bool): Whether to replace existing history. Default: False.
pause_routing (bool): Whether to pause message routing. Default: False.
model (ModelType | None): Temporary model override. Default: None.
Source code in src/llmling_agent/agent/agent.py, lines 1102-1161
                @asynccontextmanager
                async def temporary_state[T](
                    self,
                    *,
                    system_prompts: list[AnyPromptType] | None = None,
                    output_type: type[T] | None = None,
                    replace_prompts: bool = False,
                    tools: list[ToolType] | None = None,
                    replace_tools: bool = False,
                    history: list[AnyPromptType] | SessionQuery | None = None,
                    replace_history: bool = False,
                    pause_routing: bool = False,
                    model: ModelType | None = None,
                ) -> AsyncIterator[Self | Agent[T]]:
                    """Temporarily modify agent state.
                
                    Args:
                        system_prompts: Temporary system prompts to use
                        output_type: Temporary output type to use
                        replace_prompts: Whether to replace existing prompts
                        tools: Temporary tools to make available
                        replace_tools: Whether to replace existing tools
                        history: Conversation history (prompts or query)
                        replace_history: Whether to replace existing history
                        pause_routing: Whether to pause message routing
                        model: Temporary model override
                    """
                    old_model = self._model
                    if output_type:
                        old_type = self._output_type
                        self.to_structured(output_type)
                    async with AsyncExitStack() as stack:
                        if system_prompts is not None:  # System prompts
                            await stack.enter_async_context(
                                self.sys_prompts.temporary_prompt(system_prompts, exclusive=replace_prompts)
                            )
                
                        if tools is not None:  # Tools
                            await stack.enter_async_context(
                                self.tools.temporary_tools(tools, exclusive=replace_tools)
                            )
                
                        if history is not None:  # History
                            await stack.enter_async_context(
                                self.conversation.temporary_state(history, replace_history=replace_history)
                            )
                
        if pause_routing:  # Routing
            await stack.enter_async_context(self.connections.paused_routing())

        if model is not None:  # Model (applied even when routing is paused)
            self._model = model
                
                        try:
                            yield self
                        finally:  # Restore model
                            if model is not None and old_model:
                                self._model = old_model
                            if output_type:
                                self.to_structured(old_type)
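
A usage sketch; the prompt and model values are illustrative:

    # State is restored when the context manager exits.
    async with agent.temporary_state(
        system_prompts=["Answer in one sentence."],
        replace_prompts=True,
        model="openai:gpt-4o-mini",
    ) as tmp:
        reply = await tmp.run("What does AsyncExitStack do?")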
                

                to_structured

                to_structured(
                    output_type: type[NewOutputDataT],
                    *,
                    tool_name: str | None = None,
                    tool_description: str | None = None
                ) -> Agent[TDeps, NewOutputDataT]
                

                Convert this agent to a structured agent.

                Parameters:

output_type (type[NewOutputDataT]): Type for structured responses, e.g. a Pydantic model. Required.
tool_name (str | None): Optional override for result tool name. Default: None.
tool_description (str | None): Optional override for result tool description. Default: None.

                Returns:

Agent[TDeps, NewOutputDataT]: Typed agent.

Source code in src/llmling_agent/agent/agent.py, lines 464-484
                def to_structured[NewOutputDataT](
                    self,
                    output_type: type[NewOutputDataT],
                    *,
                    tool_name: str | None = None,
                    tool_description: str | None = None,
                ) -> Agent[TDeps, NewOutputDataT]:
                    """Convert this agent to a structured agent.
                
                    Args:
                        output_type: Type for structured responses. Can be:
                            - A Python type (Pydantic model)
                        tool_name: Optional override for result tool name
                        tool_description: Optional override for result tool description
                
                    Returns:
                        Typed Agent
                    """
                    self.log.debug("Setting result type", output_type=output_type)
                    self._output_type = to_type(output_type)
                    return self  # type: ignore
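
A usage sketch, assuming run results expose the parsed object on `.content` as elsewhere in this module:

    from pydantic import BaseModel

    class Verdict(BaseModel):
        ok: bool
        reason: str

    structured = agent.to_structured(Verdict)
    result = await structured.run("Is this change safe to merge?")
    print(result.content.ok, result.content.reason)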
                

                to_tool

                to_tool(
                    *,
                    name: str | None = None,
                    reset_history_on_run: bool = True,
                    pass_message_history: bool = False,
                    parent: Agent[Any, Any] | None = None
                ) -> Tool[OutputDataT]
                

                Create a tool from this agent.

                Parameters:

name (str | None): Optional tool name override. Default: None.
reset_history_on_run (bool): Clear agent's history before each run. Default: True.
pass_message_history (bool): Pass parent's message history to agent. Default: False.
parent (Agent[Any, Any] | None): Optional parent agent for history/context sharing. Default: None.
Source code in src/llmling_agent/agent/agent.py, lines 495-546
                def to_tool(
                    self,
                    *,
                    name: str | None = None,
                    reset_history_on_run: bool = True,
                    pass_message_history: bool = False,
                    parent: Agent[Any, Any] | None = None,
                ) -> Tool[OutputDataT]:
                    """Create a tool from this agent.
                
                    Args:
                        name: Optional tool name override
                        reset_history_on_run: Clear agent's history before each run
                        pass_message_history: Pass parent's message history to agent
                        parent: Optional parent agent for history/context sharing
                    """
                
                    async def wrapped_tool(prompt: str) -> Any:
                        if pass_message_history and not parent:
                            msg = "Parent agent required for message history sharing"
                            raise ToolError(msg)
                
                        if reset_history_on_run:
                            self.conversation.clear()
                
                        history = None
                        if pass_message_history and parent:
                            history = parent.conversation.get_history()
                            old = self.conversation.get_history()
                            self.conversation.set_history(history)
                        result = await self.run(prompt)
                        if history:
                            self.conversation.set_history(old)
                        return result.data
                
                    # Set the correct return annotation dynamically
                    wrapped_tool.__annotations__ = {"prompt": str, "return": self._output_type or Any}
                
                    normalized_name = self.name.replace("_", " ").title()
                    docstring = f"Get expert answer from specialized agent: {normalized_name}"
                    if self.description:
                        docstring = f"{docstring}\n\n{self.description}"
                    tool_name = name or f"ask_{self.name}"
                    wrapped_tool.__doc__ = docstring
                    wrapped_tool.__name__ = tool_name
                
                    return Tool.from_callable(
                        wrapped_tool,
                        name_override=tool_name,
                        description_override=docstring,
                        source="agent",
                    )
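
A usage sketch, mirroring the registration pattern used by share() above (`reviewer` and `main_agent` are illustrative names):

    review_tool = reviewer.to_tool(name="ask_reviewer")
    main_agent.tools.register_tool(
        review_tool.callable,
        metadata={"shared_from": reviewer.name},
    )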
                

                validate_against async

                validate_against(prompt: str, criteria: type[OutputDataT], **kwargs: Any) -> bool
                

                Check if agent's response satisfies stricter criteria.

Source code in src/llmling_agent/agent/agent.py, lines 1163-1176
                async def validate_against(
                    self,
                    prompt: str,
                    criteria: type[OutputDataT],
                    **kwargs: Any,
                ) -> bool:
                    """Check if agent's response satisfies stricter criteria."""
                    result = await self.run(prompt, **kwargs)
                    try:
                        criteria.model_validate(result.content.model_dump())  # type: ignore
                    except ValidationError:
                        return False
                    else:
                        return True
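
A usage sketch; `StrictAnswer` stands in for any stricter output model:

    ok = await agent.validate_against("Summarize the incident", StrictAnswer)
    if not ok:
        print("response did not satisfy the stricter schema")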
                

                wait async

                wait() -> ChatMessage[OutputDataT]
                

                Wait for background execution to complete.

Source code in src/llmling_agent/agent/agent.py, lines 1005-1016
                async def wait(self) -> ChatMessage[OutputDataT]:
                    """Wait for background execution to complete."""
                    if not self._background_task:
                        msg = "No background task running"
                        raise RuntimeError(msg)
                    if self._infinite:
                        msg = "Cannot wait on infinite execution"
                        raise RuntimeError(msg)
                    try:
                        return await self._background_task
                    finally:
                        self._background_task = None
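
A lifecycle sketch; the method that starts the background task does not appear in this section, so the `run_in_background` name is an assumption:

    await agent.run_in_background("poll the queue")  # assumed starter method
    msg = await agent.wait()  # blocks until the task finishes, returns ChatMessage
    # ...or, to cancel instead of waiting:
    await agent.stop()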
                

                AgentContext dataclass

                Bases: NodeContext[TDeps]

                Runtime context for agent execution.

                Generically typed with AgentContext[Type of Dependencies]

Source code in src/llmling_agent/agent/context.py, lines 25-94
                @dataclass(kw_only=True)
                class AgentContext[TDeps = Any](NodeContext[TDeps]):
                    """Runtime context for agent execution.
                
                    Generically typed with AgentContext[Type of Dependencies]
                    """
                
                    config: AgentConfig
                    """Current agent's specific configuration."""
                
                    tool_name: str | None = None
                    """Name of the currently executing tool."""
                
                    tool_call_id: str | None = None
                    """ID of the current tool call."""
                
                    tool_input: dict[str, Any] = field(default_factory=dict)
                    """Input arguments for the current tool call."""
                
                    # TODO: perhaps add agent directly to context?
                    @property
                    def agent(self) -> Agent[TDeps, Any]:
                        """Get the agent instance from the pool."""
                        assert self.pool, "No agent pool available"
                        assert self.node_name, "No agent name available"
                        return self.pool.agents[self.node_name]
                
                    async def handle_confirmation(self, tool: Tool, args: dict[str, Any]) -> ConfirmationResult:
                        """Handle tool execution confirmation.
                
        Returns "allow" if:
                        - No confirmation handler is set
                        - Handler confirms the execution
                        """
                        provider = self.get_input_provider()
                        mode = self.config.requires_tool_confirmation
                        if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                            return "allow"
                        history = self.agent.conversation.get_history() if self.pool else []
                        return await provider.get_tool_confirmation(self, tool, args, history)
                
                    async def handle_elicitation(
                        self,
                        params: types.ElicitRequestParams,
                    ) -> types.ElicitResult | types.ErrorData:
                        """Handle elicitation request for additional information."""
                        provider = self.get_input_provider()
                        return await provider.get_elicitation(params)
                
                    async def report_progress(self, progress: float, total: float | None, message: str) -> None:
                        """Report progress by emitting event into the agent's stream."""
                        from llmling_agent.agent.events import ToolCallProgressEvent
                
                        logger.info("Reporting tool call progress", progress=progress, total=total, message=message)
                        progress_event = ToolCallProgressEvent(
                            progress=int(progress),
                            total=int(total) if total is not None else 100,
                            message=message,
                            tool_name=self.tool_name or "",
                            tool_call_id=self.tool_call_id or "",
                            tool_input=self.tool_input,
                        )
                        await self.agent._event_queue.put(progress_event)
                
                    @property
                    def events(self) -> StreamEventEmitter:
                        """Get event emitter with context automatically injected."""
                        from llmling_agent.agent.event_emitter import StreamEventEmitter
                
                        return StreamEventEmitter(self)
                

                agent property

                agent: Agent[TDeps, Any]
                

                Get the agent instance from the pool.

                config instance-attribute

                config: AgentConfig
                

                Current agent's specific configuration.

                events property

                Get event emitter with context automatically injected.

                tool_call_id class-attribute instance-attribute

                tool_call_id: str | None = None
                

                ID of the current tool call.

                tool_input class-attribute instance-attribute

                tool_input: dict[str, Any] = field(default_factory=dict)
                

                Input arguments for the current tool call.

                tool_name class-attribute instance-attribute

                tool_name: str | None = None
                

                Name of the currently executing tool.

                handle_confirmation async

                handle_confirmation(tool: Tool, args: dict[str, Any]) -> ConfirmationResult
                

                Handle tool execution confirmation.

Returns "allow" if:

- No confirmation handler is set
- The handler confirms the execution

Source code in src/llmling_agent/agent/context.py, lines 52-64
                async def handle_confirmation(self, tool: Tool, args: dict[str, Any]) -> ConfirmationResult:
                    """Handle tool execution confirmation.
                
    Returns "allow" if:
                    - No confirmation handler is set
                    - Handler confirms the execution
                    """
                    provider = self.get_input_provider()
                    mode = self.config.requires_tool_confirmation
                    if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                        return "allow"
                    history = self.agent.conversation.get_history() if self.pool else []
                    return await provider.get_tool_confirmation(self, tool, args, history)
                

                handle_elicitation async

                handle_elicitation(params: ElicitRequestParams) -> ElicitResult | ErrorData
                

                Handle elicitation request for additional information.

Source code in src/llmling_agent/agent/context.py, lines 66-72
                async def handle_elicitation(
                    self,
                    params: types.ElicitRequestParams,
                ) -> types.ElicitResult | types.ErrorData:
                    """Handle elicitation request for additional information."""
                    provider = self.get_input_provider()
                    return await provider.get_elicitation(params)
                

                report_progress async

                report_progress(progress: float, total: float | None, message: str) -> None
                

                Report progress by emitting event into the agent's stream.

Source code in src/llmling_agent/agent/context.py, lines 74-87
                async def report_progress(self, progress: float, total: float | None, message: str) -> None:
                    """Report progress by emitting event into the agent's stream."""
                    from llmling_agent.agent.events import ToolCallProgressEvent
                
                    logger.info("Reporting tool call progress", progress=progress, total=total, message=message)
                    progress_event = ToolCallProgressEvent(
                        progress=int(progress),
                        total=int(total) if total is not None else 100,
                        message=message,
                        tool_name=self.tool_name or "",
                        tool_call_id=self.tool_call_id or "",
                        tool_input=self.tool_input,
                    )
                    await self.agent._event_queue.put(progress_event)
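
A sketch of a long-running tool reporting progress through its context; how the AgentContext reaches the tool (here as a parameter) is an assumption:

    async def index_files(ctx: AgentContext, paths: list[str]) -> int:
        for i, path in enumerate(paths):
            # Emits a ToolCallProgressEvent into the agent's stream.
            await ctx.report_progress(i + 1, len(paths), f"indexing {path}")
        return len(paths)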
                

                Interactions

                Manages agent communication patterns.

Source code in src/llmling_agent/agent/interactions.py, lines 87-419
                class Interactions:
                    """Manages agent communication patterns."""
                
                    def __init__(self, agent: SupportsStructuredOutput) -> None:
                        self.agent = agent
                
                    # async def conversation(
                    #     self,
                    #     other: MessageNode[Any, Any],
                    #     initial_message: AnyPromptType,
                    #     *,
                    #     max_rounds: int | None = None,
                    #     end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool] | None = None,
                    # ) -> AsyncIterator[ChatMessage[Any]]:
                    #     """Maintain conversation between two agents.
                
                    #     Args:
                    #         other: Agent to converse with
                    #         initial_message: Message to start conversation with
                    #         max_rounds: Optional maximum number of exchanges
                    #         end_condition: Optional predicate to check for conversation end
                
                    #     Yields:
                    #         Messages from both agents in conversation order
                    #     """
                    #     rounds = 0
                    #     messages: list[ChatMessage[Any]] = []
                    #     current_message = initial_message
                    #     current_node: MessageNode[Any, Any] = self.agent
                
                    #     while True:
                    #         if max_rounds and rounds >= max_rounds:
                    #             logger.debug("Conversation ended", max_rounds=max_rounds)
                    #             return
                
                    #         response = await current_node.run(current_message)
                    #         messages.append(response)
                    #         yield response
                
                    #         if end_condition and end_condition(messages, response):
                    #             logger.debug("Conversation ended: end condition met")
                    #             return
                
                    #         # Switch agents for next round
                    #         current_node = other if current_node == self.agent else self.agent
                    #         current_message = response.content
                    #         rounds += 1
                
                    @overload
                    async def pick(
                        self,
                        selections: AgentPool,
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[Agent[Any, Any]]: ...
                
                    @overload
                    async def pick(
                        self,
                        selections: BaseTeam[Any, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[MessageNode[Any, Any]]: ...
                
                    @overload
                    async def pick[T: AnyPromptType](
                        self,
                        selections: Sequence[T] | Mapping[str, T],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]: ...
                
                    async def pick[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]:
                        """Pick from available options with reasoning.
                
                        Args:
                            selections: What to pick from:
                                - Sequence of items (auto-labeled)
                                - Dict mapping labels to items
                                - AgentPool
                                - Team
                            task: Task/decision description
                            prompt: Optional custom selection prompt
                
                        Returns:
                            Decision with selected item and reasoning
                
                        Raises:
                            ValueError: If no choices available or invalid selection
                        """
                        # Get items and create label mapping
                        from toprompt import to_prompt
                
                        from llmling_agent import AgentPool
                        from llmling_agent.delegation.base_team import BaseTeam
                
                        match selections:
                            case dict():
                                label_map = selections
                                items: list[Any] = list(selections.values())
                            case BaseTeam():
                                items = list(selections.nodes)
                                label_map = {get_label(item): item for item in items}
                            case AgentPool():
                                items = list(selections.agents.values())
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        # Get descriptions for all items
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                Select ONE option by its exact label."""
                
                        # Get LLM's string-based decision
                        result = await self.agent.run(prompt or default_prompt, output_type=LLMPick)
                
                        # Convert to type-safe decision
                        if result.content.selection not in label_map:
                            msg = f"Invalid selection: {result.content.selection}"
                            raise ValueError(msg)
                
                        selected = cast(T, label_map[result.content.selection])
                        return Pick(selection=selected, reason=result.content.reason)
                
                    @overload
                    async def pick_multiple(
                        self,
                        selections: BaseTeam[Any, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[MessageNode[Any, Any]]: ...
                
                    @overload
                    async def pick_multiple(
                        self,
                        selections: AgentPool,
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[Agent[Any, Any]]: ...
                
                    @overload
                    async def pick_multiple[T: AnyPromptType](
                        self,
                        selections: Sequence[T] | Mapping[str, T],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]: ...
                
                    async def pick_multiple[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]:
                        """Pick multiple options from available choices.
                
                        Args:
                            selections: What to pick from
                            task: Task/decision description
                            min_picks: Minimum number of selections required
                            max_picks: Maximum number of selections (None for unlimited)
                            prompt: Optional custom selection prompt
                        """
                        from toprompt import to_prompt
                
                        from llmling_agent import AgentPool
                        from llmling_agent.delegation.base_team import BaseTeam
                
                        match selections:
                            case AgentPool():
                                items: list[Any] = list(selections.agents.values())
                                label_map: Mapping[str, Any] = {get_label(item): item for item in items}
                            case Mapping():
                                label_map = selections
                                items = list(selections.values())
                            case BaseTeam():
                                items = list(selections.nodes)
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        if max_picks is not None and max_picks < min_picks:
                            msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                            raise ValueError(msg)
                
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        picks_info = (
                            f"Select between {min_picks} and {max_picks}"
                            if max_picks is not None
                            else f"Select at least {min_picks}"
                        )
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                {picks_info} options by their exact labels.
                List your selections, one per line, followed by your reasoning."""
                
                        result = await self.agent.run(prompt or default_prompt, output_type=LLMMultiPick)
                
                        # Validate selections
                        invalid = [s for s in result.content.selections if s not in label_map]
                        if invalid:
                            msg = f"Invalid selections: {', '.join(invalid)}"
                            raise ValueError(msg)
                        num_picks = len(result.content.selections)
                        if num_picks < min_picks:
                            msg = f"Too few selections: got {num_picks}, need {min_picks}"
                            raise ValueError(msg)
                
                        if max_picks and num_picks > max_picks:
                            msg = f"Too many selections: got {num_picks}, max {max_picks}"
                            raise ValueError(msg)
                
                        selected = [cast(T, label_map[label]) for label in result.content.selections]
                        return MultiPick(selections=selected, reason=result.content.reason)
                
                    async def extract[T](
                        self,
                        text: str,
                        as_type: type[T],
                        *,
                        prompt: AnyPromptType | None = None,
                    ) -> T:
                        """Extract single instance of type from text.
                
                        Args:
                            text: Text to extract from
                            as_type: Type to extract
                            prompt: Optional custom prompt
                        """
                        item_model = Schema.for_class_ctor(as_type)
                        final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                
                        class Extraction(Schema):
                            instance: item_model  # type: ignore
                            # explanation: str | None = None
                
                        result = await self.agent.run(final_prompt, output_type=Extraction)
                        return as_type(**result.content.instance.model_dump())
                
                    async def extract_multiple[T](
                        self,
                        text: str,
                        as_type: type[T],
                        *,
                        min_items: int = 1,
                        max_items: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> list[T]:
                        """Extract multiple instances of type from text.
                
                        Args:
                            text: Text to extract from
                            as_type: Type to extract
                            min_items: Minimum number of instances to extract
                            max_items: Maximum number of instances (None=unlimited)
                            prompt: Optional custom prompt
                        """
                        item_model = Schema.for_class_ctor(as_type)
                        final_prompt = prompt or "\n".join([
                            f"Extract {as_type.__name__} instances from text.",
                            # "Requirements:",
                            # f"- Extract at least {min_items} instances",
                            # f"- Extract at most {max_items} instances" if max_items else "",
                            "\nText to analyze:",
                            text,
                        ])
                        # Create model for individual instance
                
                        class Extraction(Schema):
                            instances: list[item_model]  # type: ignore
                            # explanation: str | None = None
                
                        result = await self.agent.run(final_prompt, output_type=Extraction)
                        num_instances = len(result.content.instances)  # Validate counts
                        if len(result.content.instances) < min_items:
                            msg = f"Found only {num_instances} instances, need {min_items}"
                            raise ValueError(msg)
                
                        if max_items and num_instances > max_items:
                            msg = f"Found {num_instances} instances, max is {max_items}"
                            raise ValueError(msg)
                        return [
                            as_type(**instance.data if hasattr(instance, "data") else instance.model_dump())
                            for instance in result.content.instances
                        ]
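
A usage sketch for pick(); exposing Interactions on the agent as `agent.talk` is an assumption, and `cheap_agent`/`strong_agent` are illustrative:

    choice = await agent.talk.pick(
        {"fast": cheap_agent, "thorough": strong_agent},
        task="Choose the agent best suited to review this diff",
    )
    print(choice.selection, choice.reason)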
                

                extract async

                extract(text: str, as_type: type[T], *, prompt: AnyPromptType | None = None) -> T
                

                Extract single instance of type from text.

                Parameters:

                Name Type Description Default
                text str

                Text to extract from

                required
                as_type type[T]

                Type to extract

                required
                prompt AnyPromptType | None

                Optional custom prompt

                None
Source code in src/llmling_agent/agent/interactions.py, lines 350-372
                async def extract[T](
                    self,
                    text: str,
                    as_type: type[T],
                    *,
                    prompt: AnyPromptType | None = None,
                ) -> T:
                    """Extract single instance of type from text.
                
                    Args:
                        text: Text to extract from
                        as_type: Type to extract
                        prompt: Optional custom prompt
                    """
                    item_model = Schema.for_class_ctor(as_type)
                    final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                
                    class Extraction(Schema):
                        instance: item_model  # type: ignore
                
                    result = await self.agent.run(final_prompt, output_type=Extraction)
                    return as_type(**result.content.instance.model_dump())
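
                A minimal usage sketch: Person is a hypothetical target model, and
                interactions stands for an Interactions instance bound to a configured
                agent (both names are chosen here for illustration only):

                    from pydantic import BaseModel

                    class Person(BaseModel):  # hypothetical target type
                        name: str
                        age: int

                    # inside an async context, with the agent ready:
                    person = await interactions.extract(
                        "Alice is a 30-year-old engineer from Berlin.",
                        as_type=Person,
                    )
                    assert isinstance(person, Person)  # e.g. Person(name="Alice", age=30)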
                

                extract_multiple async

                extract_multiple(
                    text: str,
                    as_type: type[T],
                    *,
                    min_items: int = 1,
                    max_items: int | None = None,
                    prompt: AnyPromptType | None = None
                ) -> list[T]
                

                Extract multiple instances of type from text.

                Parameters:

                    text (str): Text to extract from. Required.
                    as_type (type[T]): Type to extract. Required.
                    min_items (int): Minimum number of instances to extract. Default: 1.
                    max_items (int | None): Maximum number of instances (None=unlimited). Default: None.
                    prompt (AnyPromptType | None): Optional custom prompt. Default: None.
                Source code in src/llmling_agent/agent/interactions.py
                async def extract_multiple[T](
                    self,
                    text: str,
                    as_type: type[T],
                    *,
                    min_items: int = 1,
                    max_items: int | None = None,
                    prompt: AnyPromptType | None = None,
                ) -> list[T]:
                    """Extract multiple instances of type from text.
                
                    Args:
                        text: Text to extract from
                        as_type: Type to extract
                        min_items: Minimum number of instances to extract
                        max_items: Maximum number of instances (None=unlimited)
                        prompt: Optional custom prompt
                    """
                    item_model = Schema.for_class_ctor(as_type)
                    final_prompt = prompt or "\n".join([
                        f"Extract {as_type.__name__} instances from text.",
                        "Requirements:",
                        f"- Extract at least {min_items} instances",
                        *([f"- Extract at most {max_items} instances"] if max_items else []),
                        "\nText to analyze:",
                        text,
                    ])

                    # Wrapper model holding the list of extracted instances
                    class Extraction(Schema):
                        instances: list[item_model]  # type: ignore

                    result = await self.agent.run(final_prompt, output_type=Extraction)
                    # Validate the number of extracted instances against the bounds
                    num_instances = len(result.content.instances)
                    if num_instances < min_items:
                        msg = f"Found only {num_instances} instances, need {min_items}"
                        raise ValueError(msg)
                
                    if max_items and num_instances > max_items:
                        msg = f"Found {num_instances} instances, max is {max_items}"
                        raise ValueError(msg)
                    return [
                        as_type(**instance.data if hasattr(instance, "data") else instance.model_dump())
                        for instance in result.content.instances
                    ]
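
                Building on the hypothetical Person model above, a sketch of
                multi-instance extraction with count bounds:

                    people = await interactions.extract_multiple(
                        "Alice is 30, Bob is 25, and Carol is 41.",
                        as_type=Person,
                        min_items=2,
                        max_items=5,
                    )
                    for person in people:
                        print(person.name, person.age)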
                

                pick async

                pick(
                    selections: AgentPool, task: str, prompt: AnyPromptType | None = None
                ) -> Pick[Agent[Any, Any]]
                
                pick(
                    selections: BaseTeam[Any, Any], task: str, prompt: AnyPromptType | None = None
                ) -> Pick[MessageNode[Any, Any]]
                
                pick(
                    selections: Sequence[T] | Mapping[str, T], task: str, prompt: AnyPromptType | None = None
                ) -> Pick[T]
                
                pick(
                    selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any],
                    task: str,
                    prompt: AnyPromptType | None = None,
                ) -> Pick[T]
                

                Pick from available options with reasoning.

                Parameters:

                    selections (Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any]):
                        What to pick from: a sequence of items (auto-labeled), a dict mapping
                        labels to items, an AgentPool, or a Team. Required.
                    task (str): Task/decision description. Required.
                    prompt (AnyPromptType | None): Optional custom selection prompt. Default: None.

                Returns:

                    Pick[T]: Decision with selected item and reasoning.

                Raises:

                    ValueError: If no choices are available or the selection is invalid.

                Source code in src/llmling_agent/agent/interactions.py
                    async def pick[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]:
                        """Pick from available options with reasoning.
                
                        Args:
                            selections: What to pick from:
                                - Sequence of items (auto-labeled)
                                - Dict mapping labels to items
                                - AgentPool
                                - Team
                            task: Task/decision description
                            prompt: Optional custom selection prompt
                
                        Returns:
                            Decision with selected item and reasoning
                
                        Raises:
                            ValueError: If no choices available or invalid selection
                        """
                        # Get items and create label mapping
                        from toprompt import to_prompt
                
                        from llmling_agent import AgentPool
                        from llmling_agent.delegation.base_team import BaseTeam
                
                        match selections:
                            case dict():
                                label_map = selections
                                items: list[Any] = list(selections.values())
                            case BaseTeam():
                                items = list(selections.nodes)
                                label_map = {get_label(item): item for item in items}
                            case AgentPool():
                                items = list(selections.agents.values())
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        # Get descriptions for all items
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                Select ONE option by its exact label."""
                
                        # Get LLM's string-based decision
                        result = await self.agent.run(prompt or default_prompt, output_type=LLMPick)
                
                        # Convert to type-safe decision
                        if result.content.selection not in label_map:
                            msg = f"Invalid selection: {result.content.selection}"
                            raise ValueError(msg)
                
                        selected = cast(T, label_map[result.content.selection])
                        return Pick(selection=selected, reason=result.content.reason)
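
                A sketch of label-based selection; the mapping keys become the labels the
                LLM must answer with (the options here are illustrative):

                    strategies = {
                        "fast": "Use the cached summary",
                        "thorough": "Re-read all source documents",
                    }
                    decision = await interactions.pick(
                        strategies,
                        task="Summarize the quarterly report",
                    )
                    print(decision.selection)  # one of the mapping values
                    print(decision.reason)     # the model's justification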
                

                pick_multiple async

                pick_multiple(
                    selections: BaseTeam[Any, Any],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None
                ) -> MultiPick[MessageNode[Any, Any]]
                
                pick_multiple(
                    selections: AgentPool,
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None
                ) -> MultiPick[Agent[Any, Any]]
                
                pick_multiple(
                    selections: Sequence[T] | Mapping[str, T],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None
                ) -> MultiPick[T]
                
                pick_multiple(
                    selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None
                ) -> MultiPick[T]
                

                Pick multiple options from available choices.

                Parameters:

                    selections (Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any]):
                        What to pick from. Required.
                    task (str): Task/decision description. Required.
                    min_picks (int): Minimum number of selections required. Default: 1.
                    max_picks (int | None): Maximum number of selections (None for unlimited). Default: None.
                    prompt (AnyPromptType | None): Optional custom selection prompt. Default: None.
                Source code in src/llmling_agent/agent/interactions.py
                    async def pick_multiple[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[Any, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]:
                        """Pick multiple options from available choices.
                
                        Args:
                            selections: What to pick from
                            task: Task/decision description
                            min_picks: Minimum number of selections required
                            max_picks: Maximum number of selections (None for unlimited)
                            prompt: Optional custom selection prompt
                        """
                        from toprompt import to_prompt
                
                        from llmling_agent import AgentPool
                        from llmling_agent.delegation.base_team import BaseTeam
                
                        match selections:
                            case AgentPool():
                                items: list[Any] = list(selections.agents.values())
                                label_map: Mapping[str, Any] = {get_label(item): item for item in items}
                            case Mapping():
                                label_map = selections
                                items = list(selections.values())
                            case BaseTeam():
                                items = list(selections.nodes)
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        if max_picks is not None and max_picks < min_picks:
                            msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                            raise ValueError(msg)
                
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        picks_info = (
                            f"Select between {min_picks} and {max_picks}"
                            if max_picks is not None
                            else f"Select at least {min_picks}"
                        )
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                {picks_info} options by their exact labels.
                List your selections, one per line, followed by your reasoning."""
                
                        result = await self.agent.run(prompt or default_prompt, output_type=LLMMultiPick)
                
                        # Validate selections
                        invalid = [s for s in result.content.selections if s not in label_map]
                        if invalid:
                            msg = f"Invalid selections: {', '.join(invalid)}"
                            raise ValueError(msg)
                        num_picks = len(result.content.selections)
                        if num_picks < min_picks:
                            msg = f"Too few selections: got {num_picks}, need {min_picks}"
                            raise ValueError(msg)
                
                        if max_picks and num_picks > max_picks:
                            msg = f"Too many selections: got {num_picks}, max {max_picks}"
                            raise ValueError(msg)
                
                        selected = [cast(T, label_map[label]) for label in result.content.selections]
                        return MultiPick(selections=selected, reason=result.content.reason)
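
                A sketch of bounded multi-selection over a plain sequence (items are
                auto-labeled via get_label; the options here are illustrative):

                    decision = await interactions.pick_multiple(
                        ["unit tests", "integration tests", "load tests", "fuzzing"],
                        task="Which checks should gate this release?",
                        min_picks=1,
                        max_picks=2,
                    )
                    for choice in decision.selections:
                        print(choice)
                    print("Reason:", decision.reason)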
                

                MessageHistory

                Manages conversation state and system prompts.

                Source code in src/llmling_agent/agent/conversation.py
                class MessageHistory:
                    """Manages conversation state and system prompts."""
                
                    @dataclass(frozen=True)
                    class HistoryCleared:
                        """Emitted when chat history is cleared."""
                
                        session_id: str
                        timestamp: datetime = field(default_factory=get_now)
                
                    history_cleared = Signal(HistoryCleared)
                
                    def __init__(
                        self,
                        storage: StorageManager | None = None,
                        converter: ConversionManager | None = None,
                        *,
                        messages: list[ChatMessage[Any]] | None = None,
                        session_config: MemoryConfig | None = None,
                        resources: Sequence[PromptType | str] = (),
                    ) -> None:
                        """Initialize conversation manager.
                
                        Args:
                            storage: Storage manager for persistence
                            converter: Content converter for file processing
                            messages: Optional list of initial messages
                            session_config: Optional MemoryConfig
                            resources: Optional paths to load as context
                        """
                        from llmling_agent.messaging import ChatMessageList
                        from llmling_agent.prompts.conversion_manager import ConversionManager
                        from llmling_agent_config.storage import MemoryStorageConfig, StorageConfig
                
                        self._storage = storage or StorageManager(
                            config=StorageConfig(providers=[MemoryStorageConfig()])
                        )
                        self._converter = converter or ConversionManager([])
                        self.chat_messages = ChatMessageList()
                        if messages:
                            self.chat_messages.extend(messages)
                        self._last_messages: list[ChatMessage[Any]] = []
                        self._pending_messages: deque[ChatMessage[Any]] = deque()
                        self._config = session_config
                        self._resources = list(resources)  # Store for async loading
                        # Generate new ID if none provided
                        self.id = str(uuid4())
                
                        if session_config and session_config.session:
                            self._current_history = self.storage.filter_messages.sync(session_config.session)
                            if session_config.session.name:
                                self.id = session_config.session.name
                
                        # Note: max_messages and max_tokens will be handled in add_message/get_history
                        # to maintain the rolling window during conversation
                
                    @property
                    def storage(self) -> StorageManager:
                        return self._storage
                
                    def get_initialization_tasks(self) -> list[Coroutine[Any, Any, Any]]:
                        """Get all initialization coroutines."""
                        sources = self._resources
                        self._resources = []  # Clear so we don't load again on async init
                        return [self.load_context_source(source) for source in sources]
                
                    async def __aenter__(self) -> Self:
                        """Initialize when used standalone."""
                        if tasks := self.get_initialization_tasks():
                            await asyncio.gather(*tasks)
                        return self
                
                    async def __aexit__(
                        self,
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    ) -> None:
                        """Clean up any pending messages."""
                        self._pending_messages.clear()
                
                    def __bool__(self) -> bool:
                        return bool(self._pending_messages) or bool(self.chat_messages)
                
                    def __repr__(self) -> str:
                        return f"MessageHistory(id={self.id!r})"
                
                    def __prompt__(self) -> str:
                        if not self.chat_messages:
                            return "No conversation history"
                
                        last_msgs = self.chat_messages[-2:]
                        parts = ["Recent conversation:"]
                        parts.extend(msg.format() for msg in last_msgs)
                        return "\n".join(parts)
                
                    def __contains__(self, item: Any) -> bool:
                        """Check if item is in history."""
                        return item in self.chat_messages
                
                    def __len__(self) -> int:
                        """Get length of history."""
                        return len(self.chat_messages)
                
                    def get_message_tokens(self, message: ChatMessage[Any]) -> int:
                        """Get token count for a single message."""
                        content = "\n".join(message.format())
                        return count_tokens(content, message.model_name)
                
                    async def format_history(
                        self,
                        *,
                        max_tokens: int | None = None,
                        include_system: bool = False,
                        format_template: str | None = None,
                        num_messages: int | None = None,
                    ) -> str:
                        """Format conversation history as a single context message.
                
                        Args:
                            max_tokens: Optional limit to include only last N tokens
                            include_system: Whether to include system messages
                            format_template: Optional custom format (defaults to agent/message pairs)
                            num_messages: Optional limit to include only last N messages
                        """
                        template = format_template or "Agent {agent}: {content}\n"
                        messages: list[str] = []
                        token_count = 0
                
                        # Get messages, optionally limited
                        history: Sequence[ChatMessage[Any]] = self.chat_messages
                        if num_messages:
                            history = history[-num_messages:]
                
                        if max_tokens:
                            history = list(reversed(history))  # Start from newest when token limited
                
                        for msg in history:
                            name = msg.name or msg.role.title()
                            formatted = template.format(agent=name, content=str(msg.content))
                
                            if max_tokens:
                                # Count tokens in this message
                                if msg.cost_info:
                                    msg_tokens = msg.cost_info.token_usage.total_tokens
                                else:
                                    # Fallback to tiktoken if no cost info
                                    msg_tokens = self.get_message_tokens(msg)
                
                                if token_count + msg_tokens > max_tokens:
                                    break
                                token_count += msg_tokens
                                # Add to front since we're going backwards
                                messages.insert(0, formatted)
                            else:
                                messages.append(formatted)
                
                        return "\n".join(messages)
                
                    async def load_context_source(self, source: PromptType | str) -> None:
                        """Load context from a single source."""
                        from llmling_agent.prompts.prompts import BasePrompt
                
                        try:
                            match source:
                                case str():
                                    await self.add_context_from_path(source)
                                case BasePrompt():
                                    await self.add_context_from_prompt(source)
                        except Exception:
                            logger.exception(
                                "Failed to load context",
                                source="file" if isinstance(source, str) else source.type,
                            )
                
                    def load_history_from_database(
                        self,
                        session: SessionIdType | SessionQuery = None,
                        *,
                        since: datetime | None = None,
                        until: datetime | None = None,
                        roles: set[MessageRole] | None = None,
                        limit: int | None = None,
                    ) -> None:
                        """Load conversation history from database.
                
                        Args:
                            session: Session ID or query config
                            since: Only include messages after this time (override)
                            until: Only include messages before this time (override)
                            roles: Only include messages with these roles (override)
                            limit: Maximum number of messages to return (override)
                        """
                        from llmling_agent_config.session import SessionQuery
                
                        match session:
                            case SessionQuery() as query:
                                # Override individual query params only when provided,
                                # so unset overrides don't clobber existing query values
                                update: dict[str, Any] = {}
                                if since is not None:
                                    update["since"] = since.isoformat()
                                if until is not None:
                                    update["until"] = until.isoformat()
                                if roles:
                                    update["roles"] = roles
                                if limit is not None:
                                    update["limit"] = limit
                                if update:
                                    query = query.model_copy(update=update)
                                if query.name:
                                    self.id = query.name
                            case str() | UUID():
                                self.id = str(session)
                                query = SessionQuery(
                                    name=self.id,
                                    since=since.isoformat() if since else None,
                                    until=until.isoformat() if until else None,
                                    roles=roles,
                                    limit=limit,
                                )
                            case None:
                                # Use current session ID
                                query = SessionQuery(
                                    name=self.id,
                                    since=since.isoformat() if since else None,
                                    until=until.isoformat() if until else None,
                                    roles=roles,
                                    limit=limit,
                                )
                            case _ as unreachable:
                                assert_never(unreachable)
                        self.chat_messages.clear()
                        self.chat_messages.extend(self.storage.filter_messages.sync(query))
                
                    def get_history(
                        self,
                        include_pending: bool = True,
                        do_filter: bool = True,
                    ) -> list[ChatMessage[Any]]:
                        """Get conversation history.
                
                        Args:
                            include_pending: Whether to include pending messages
                            do_filter: Whether to apply memory config limits (max_tokens, max_messages)
                
                        Returns:
                            Filtered list of messages in chronological order
                        """
                        if include_pending and self._pending_messages:
                            self.chat_messages.extend(self._pending_messages)
                            self._pending_messages.clear()
                
                        # Start from the full stored history
                        history: Sequence[ChatMessage[Any]] = self.chat_messages
                
                        # Apply memory-config limits only when requested
                        if do_filter and self._config:
                            # First filter by message count (simple slice)
                            if self._config.max_messages:
                                history = history[-self._config.max_messages :]
                
                            # Then filter by tokens if needed
                            if self._config.max_tokens:
                                token_count = 0
                                filtered = []
                                # Collect messages from newest to oldest until we hit the limit
                                for msg in reversed(history):
                                    msg_tokens = self.get_message_tokens(msg)
                                    if token_count + msg_tokens > self._config.max_tokens:
                                        break
                                    token_count += msg_tokens
                                    filtered.append(msg)
                                history = list(reversed(filtered))
                
                        return list(history)
                
                    def get_pending_messages(self) -> list[ChatMessage[Any]]:
                        """Get messages that will be included in next interaction."""
                        return list(self._pending_messages)
                
                    def clear_pending(self) -> None:
                        """Clear pending messages without adding them to history."""
                        self._pending_messages.clear()
                
                    def set_history(self, history: list[ChatMessage[Any]]) -> None:
                        """Update conversation history after run."""
                        self.chat_messages.clear()
                        self.chat_messages.extend(history)
                
                    def clear(self) -> None:
                        """Clear conversation history and prompts."""
                        from llmling_agent.messaging import ChatMessageList
                
                        self.chat_messages = ChatMessageList()
                        self._last_messages = []
                        event = self.HistoryCleared(session_id=str(self.id))
                        self.history_cleared.emit(event)
                
                    @asynccontextmanager
                    async def temporary_state(
                        self,
                        history: list[AnyPromptType] | SessionQuery | None = None,
                        *,
                        replace_history: bool = False,
                    ) -> AsyncIterator[Self]:
                        """Temporarily set conversation history.
                
                        Args:
                            history: Optional list of prompts to use as temporary history.
                                    Can be strings, BasePrompts, or other prompt types.
                            replace_history: If True, only use provided history. If False, append
                                    to existing history.
                        """
                        from toprompt import to_prompt
                
                        from llmling_agent.messaging import ChatMessage, ChatMessageList
                
                        old_history = self.chat_messages.copy()
                        try:
                            messages: Sequence[ChatMessage[Any]] = ChatMessageList()
                            if history is not None:
                                if isinstance(history, SessionQuery):
                                    messages = await self.storage.filter_messages(history)
                                else:
                                    messages = [
                                        ChatMessage.user_prompt(message=prompt)
                                        for p in history
                                        if (prompt := await to_prompt(p))
                                    ]
                
                            if replace_history:
                                self.chat_messages = ChatMessageList(messages)
                            else:
                                self.chat_messages.extend(messages)
                
                            yield self
                
                        finally:
                            self.chat_messages = old_history
                
                    def add_chat_messages(self, messages: Sequence[ChatMessage[Any]]) -> None:
                        """Add new messages to history and update last_messages."""
                        self._last_messages = list(messages)
                        self.chat_messages.extend(messages)
                
                    @property
                    def last_run_messages(self) -> list[ChatMessage[Any]]:
                        """Get messages from the last run converted to our format."""
                        return self._last_messages
                
                    def add_context_message(
                        self,
                        content: str,
                        source: str | None = None,
                        **metadata: Any,
                    ) -> None:
                        """Add a context message.
                
                        Args:
                            content: Text content to add
                            source: Description of content source
                            **metadata: Additional metadata to include with the message
                        """
                        from llmling_agent.messaging import ChatMessage
                
                        meta_str = ""
                        if metadata:
                            meta_str = "\n".join(f"{k}: {v}" for k, v in metadata.items())
                            meta_str = f"\nMetadata:\n{meta_str}\n"
                
                        header = f"Content from {source}:" if source else "Additional context:"
                        formatted = f"{header}{meta_str}\n{content}\n"
                
                        chat_message = ChatMessage(
                            content=formatted,
                            role="user",
                            name="user",
                            metadata=metadata,
                            conversation_id="context",  # TODO: should probably allow DB field to be NULL
                        )
                        self._pending_messages.append(chat_message)
                
                    async def add_context_from_path(
                        self,
                        path: JoinablePathLike,
                        *,
                        convert_to_md: bool = False,
                        **metadata: Any,
                    ) -> None:
                        """Add file or URL content as context message.
                
                        Args:
                            path: Any UPath-supported path
                            convert_to_md: Whether to convert content to markdown
                            **metadata: Additional metadata to include with the message
                
                        Raises:
                            ValueError: If content cannot be loaded or converted
                        """
                        path_obj = to_upath(path)
                        if convert_to_md:
                            content = await self._converter.convert_file(path)
                            source = f"markdown:{path_obj.name}"
                        else:
                            content = await read_path(path)
                            source = f"{path_obj.protocol}:{path_obj.name}"
                        self.add_context_message(content, source=source, **metadata)
                
                    async def add_context_from_prompt(
                        self,
                        prompt: PromptType,
                        metadata: dict[str, Any] | None = None,
                        **kwargs: Any,
                    ) -> None:
                        """Add rendered prompt content as context message.
                
                        Args:
                            prompt: LLMling prompt (static, dynamic, or file-based)
                            metadata: Additional metadata to include with the message
                            kwargs: Optional kwargs for prompt formatting
                        """
                        try:
                            # Format the prompt using LLMling's prompt system
                            messages = await prompt.format(kwargs)
                            # Extract text content from all messages
                            content = "\n\n".join(msg.get_text_content() for msg in messages)
                
                            self.add_context_message(
                                content,
                                source=f"prompt:{prompt.name or prompt.type}",
                                prompt_args=kwargs,
                                **(metadata or {}),
                            )
                        except Exception as e:
                            msg = f"Failed to format prompt: {e}"
                            raise ValueError(msg) from e
                
                    def get_history_tokens(self) -> int:
                        """Get token count for current history."""
                        # Use cost_info if available
                        return self.chat_messages.get_history_tokens()
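
                A sketch of standalone use with the in-memory defaults; the context text
                and the temporary prompt are illustrative:

                    async with MessageHistory() as history:
                        history.add_context_message("Project codename: Aurora", source="notes")
                        async with history.temporary_state(
                            ["Assume the current year is 2030."],
                            replace_history=False,
                        ):
                            ...  # run an agent turn against the augmented history
                        # temporary messages are discarded on exit
                        print(len(history), "messages,", history.get_history_tokens(), "tokens")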
                

                last_run_messages property

                last_run_messages: list[ChatMessage[Any]]
                

                Get messages from the last run converted to our format.

                HistoryCleared dataclass

                Emitted when chat history is cleared.

                Source code in src/llmling_agent/agent/conversation.py
                @dataclass(frozen=True)
                class HistoryCleared:
                    """Emitted when chat history is cleared."""
                
                    session_id: str
                    timestamp: datetime = field(default_factory=get_now)
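
                A sketch of reacting to the event, assuming the Signal exposes a
                psygnal-style connect():

                    def on_cleared(event: MessageHistory.HistoryCleared) -> None:
                        print(f"history {event.session_id} cleared at {event.timestamp}")

                    history.history_cleared.connect(on_cleared)  # connect() assumed
                    history.clear()  # emits HistoryCleared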
                

                __aenter__ async

                __aenter__() -> Self
                

                Initialize when used standalone.

                Source code in src/llmling_agent/agent/conversation.py
                async def __aenter__(self) -> Self:
                    """Initialize when used standalone."""
                    if tasks := self.get_initialization_tasks():
                        await asyncio.gather(*tasks)
                    return self
                

                __aexit__ async

                __aexit__(
                    exc_type: type[BaseException] | None,
                    exc_val: BaseException | None,
                    exc_tb: TracebackType | None,
                ) -> None
                

                Clean up any pending messages.

                Source code in src/llmling_agent/agent/conversation.py
                async def __aexit__(
                    self,
                    exc_type: type[BaseException] | None,
                    exc_val: BaseException | None,
                    exc_tb: TracebackType | None,
                ) -> None:
                    """Clean up any pending messages."""
                    self._pending_messages.clear()
                

                __contains__

                __contains__(item: Any) -> bool
                

                Check if item is in history.

                Source code in src/llmling_agent/agent/conversation.py
                def __contains__(self, item: Any) -> bool:
                    """Check if item is in history."""
                    return item in self.chat_messages
                

                __init__

                __init__(
                    storage: StorageManager | None = None,
                    converter: ConversionManager | None = None,
                    *,
                    messages: list[ChatMessage[Any]] | None = None,
                    session_config: MemoryConfig | None = None,
                    resources: Sequence[PromptType | str] = ()
                ) -> None
                

                Initialize conversation manager.

                Parameters:

                    storage (StorageManager | None): Storage manager for persistence. Default: None.
                    converter (ConversionManager | None): Content converter for file processing. Default: None.
                    messages (list[ChatMessage[Any]] | None): Optional list of initial messages. Default: None.
                    session_config (MemoryConfig | None): Optional MemoryConfig. Default: None.
                    resources (Sequence[PromptType | str]): Optional paths to load as context. Default: ().
                Source code in src/llmling_agent/agent/conversation.py
                def __init__(
                    self,
                    storage: StorageManager | None = None,
                    converter: ConversionManager | None = None,
                    *,
                    messages: list[ChatMessage[Any]] | None = None,
                    session_config: MemoryConfig | None = None,
                    resources: Sequence[PromptType | str] = (),
                ) -> None:
                    """Initialize conversation manager.
                
                    Args:
                        storage: Storage manager for persistence
                        converter: Content converter for file processing
                        messages: Optional list of initial messages
                        session_config: Optional MemoryConfig
                        resources: Optional paths to load as context
                    """
                    from llmling_agent.messaging import ChatMessageList
                    from llmling_agent.prompts.conversion_manager import ConversionManager
                    from llmling_agent_config.storage import MemoryStorageConfig, StorageConfig
                
                    self._storage = storage or StorageManager(
                        config=StorageConfig(providers=[MemoryStorageConfig()])
                    )
                    self._converter = converter or ConversionManager([])
                    self.chat_messages = ChatMessageList()
                    if messages:
                        self.chat_messages.extend(messages)
                    self._last_messages: list[ChatMessage[Any]] = []
                    self._pending_messages: deque[ChatMessage[Any]] = deque()
                    self._config = session_config
                    self._resources = list(resources)  # Store for async loading
                    # Generate a new ID; a session name below may override it
                    self.id = str(uuid4())
                
                    if session_config and session_config.session:
                        self._current_history = self.storage.filter_messages.sync(session_config.session)
                        if session_config.session.name:
                            self.id = session_config.session.name
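
                A minimal construction sketch; import paths are taken from the class
                listing and code above, and ChatMessage defaults are assumed to suffice:

                from llmling_agent.agent.conversation import MessageHistory
                from llmling_agent.messaging import ChatMessage

                # Seed the manager with one message; resources are stored and only
                # loaded later via get_initialization_tasks().
                history = MessageHistory(
                    messages=[ChatMessage(content="Hello", role="user")],
                    resources=["docs/intro.md"],
                )
                assert len(history) == 1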
                

                __len__

                __len__() -> int
                

                Get length of history.

                Source code in src/llmling_agent/agent/conversation.py
                def __len__(self) -> int:
                    """Get length of history."""
                    return len(self.chat_messages)
                

                add_chat_messages

                add_chat_messages(messages: Sequence[ChatMessage[Any]]) -> None
                

                Add new messages to history and update last_messages.

                Source code in src/llmling_agent/agent/conversation.py
                def add_chat_messages(self, messages: Sequence[ChatMessage[Any]]) -> None:
                    """Add new messages to history and update last_messages."""
                    self._last_messages = list(messages)
                    self.chat_messages.extend(messages)
                

                add_context_from_path async

                add_context_from_path(
                    path: JoinablePathLike, *, convert_to_md: bool = False, **metadata: Any
                ) -> None
                

                Add file or URL content as context message.

                Parameters:

                Name Type Description Default
                path JoinablePathLike

                Any UPath-supported path

                required
                convert_to_md bool

                Whether to convert content to markdown

                False
                **metadata Any

                Additional metadata to include with the message

                {}

                Raises:

                Type Description
                ValueError

                If content cannot be loaded or converted

                Source code in src/llmling_agent/agent/conversation.py
                async def add_context_from_path(
                    self,
                    path: JoinablePathLike,
                    *,
                    convert_to_md: bool = False,
                    **metadata: Any,
                ) -> None:
                    """Add file or URL content as context message.
                
                    Args:
                        path: Any UPath-supported path
                        convert_to_md: Whether to convert content to markdown
                        **metadata: Additional metadata to include with the message
                
                    Raises:
                        ValueError: If content cannot be loaded or converted
                    """
                    path_obj = to_upath(path)
                    if convert_to_md:
                        content = await self._converter.convert_file(path)
                        source = f"markdown:{path_obj.name}"
                    else:
                        content = await read_path(path)
                        source = f"{path_obj.protocol}:{path_obj.name}"
                    self.add_context_message(content, source=source, **metadata)
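
                A usage sketch; "conversation" stands for any MessageHistory instance:

                # Queue a local file (converted to markdown) and a URL as pending
                # context messages.
                await conversation.add_context_from_path("notes/spec.md", convert_to_md=True)
                await conversation.add_context_from_path(
                    "https://example.com/changelog.txt",
                    origin="release-notes",  # arbitrary **metadata, embedded in the message
                )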
                

                add_context_from_prompt async

                add_context_from_prompt(
                    prompt: PromptType, metadata: dict[str, Any] | None = None, **kwargs: Any
                ) -> None
                

                Add rendered prompt content as context message.

                Parameters:

                Name Type Description Default
                prompt PromptType

                LLMling prompt (static, dynamic, or file-based)

                required
                metadata dict[str, Any] | None

                Additional metadata to include with the message

                None
                kwargs Any

                Optional kwargs for prompt formatting

                {}
                Source code in src/llmling_agent/agent/conversation.py
                async def add_context_from_prompt(
                    self,
                    prompt: PromptType,
                    metadata: dict[str, Any] | None = None,
                    **kwargs: Any,
                ) -> None:
                    """Add rendered prompt content as context message.
                
                    Args:
                        prompt: LLMling prompt (static, dynamic, or file-based)
                        metadata: Additional metadata to include with the message
                        kwargs: Optional kwargs for prompt formatting
                    """
                    try:
                        # Format the prompt using LLMling's prompt system
                        messages = await prompt.format(kwargs)
                        # Extract text content from all messages
                        content = "\n\n".join(msg.get_text_content() for msg in messages)
                
                        self.add_context_message(
                            content,
                            source=f"prompt:{prompt.name or prompt.type}",
                            prompt_args=kwargs,
                            **(metadata or {}),
                        )
                    except Exception as e:
                        msg = f"Failed to format prompt: {e}"
                        raise ValueError(msg) from e
                

                add_context_message

                add_context_message(content: str, source: str | None = None, **metadata: Any) -> None
                

                Add a context message.

                Parameters:

                Name Type Description Default
                content str

                Text content to add

                required
                source str | None

                Description of content source

                None
                **metadata Any

                Additional metadata to include with the message

                {}
                Source code in src/llmling_agent/agent/conversation.py
                def add_context_message(
                    self,
                    content: str,
                    source: str | None = None,
                    **metadata: Any,
                ) -> None:
                    """Add a context message.
                
                    Args:
                        content: Text content to add
                        source: Description of content source
                        **metadata: Additional metadata to include with the message
                    """
                    from llmling_agent.messaging import ChatMessage
                
                    meta_str = ""
                    if metadata:
                        meta_str = "\n".join(f"{k}: {v}" for k, v in metadata.items())
                        meta_str = f"\nMetadata:\n{meta_str}\n"
                
                    header = f"Content from {source}:" if source else "Additional context:"
                    formatted = f"{header}{meta_str}\n{content}\n"
                
                    chat_message = ChatMessage(
                        content=formatted,
                        role="user",
                        name="user",
                        metadata=metadata,
                        conversation_id="context",  # TODO: should probably allow DB field to be NULL
                    )
                    self._pending_messages.append(chat_message)
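
                For reference, the formatting above yields output like this sketch
                (the source and metadata values are illustrative):

                conversation.add_context_message("See section 3.", source="review", priority="high")
                # Queued message content:
                #   Content from review:
                #   Metadata:
                #   priority: high
                #
                #   See section 3.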
                

                clear

                clear() -> None
                

                Clear conversation history and prompts.

                Source code in src/llmling_agent/agent/conversation.py
                def clear(self) -> None:
                    """Clear conversation history and prompts."""
                    from llmling_agent.messaging import ChatMessageList
                
                    self.chat_messages = ChatMessageList()
                    self._last_messages = []
                    event = self.HistoryCleared(session_id=str(self.id))
                    self.history_cleared.emit(event)
                

                clear_pending

                clear_pending() -> None
                

                Clear pending messages without adding them to history.

                Source code in src/llmling_agent/agent/conversation.py
                def clear_pending(self) -> None:
                    """Clear pending messages without adding them to history."""
                    self._pending_messages.clear()
                

                format_history async

                format_history(
                    *,
                    max_tokens: int | None = None,
                    include_system: bool = False,
                    format_template: str | None = None,
                    num_messages: int | None = None
                ) -> str
                

                Format conversation history as a single context message.

                Parameters:

                Name Type Description Default
                max_tokens int | None

                Optional limit to include only last N tokens

                None
                include_system bool

                Whether to include system messages

                False
                format_template str | None

                Optional custom format (defaults to agent/message pairs)

                None
                num_messages int | None

                Optional limit to include only last N messages

                None
                Source code in src/llmling_agent/agent/conversation.py
                async def format_history(
                    self,
                    *,
                    max_tokens: int | None = None,
                    include_system: bool = False,
                    format_template: str | None = None,
                    num_messages: int | None = None,
                ) -> str:
                    """Format conversation history as a single context message.
                
                    Args:
                        max_tokens: Optional limit to include only last N tokens
                        include_system: Whether to include system messages
                        format_template: Optional custom format (defaults to agent/message pairs)
                        num_messages: Optional limit to include only last N messages
                    """
                    template = format_template or "Agent {agent}: {content}\n"
                    messages: list[str] = []
                    token_count = 0
                
                    # Get messages, optionally limited
                    history: Sequence[ChatMessage[Any]] = self.chat_messages
                    if num_messages:
                        history = history[-num_messages:]
                
                    if max_tokens:
                        history = list(reversed(history))  # Start from newest when token limited
                
                    for msg in history:
                        name = msg.name or msg.role.title()
                        formatted = template.format(agent=name, content=str(msg.content))
                
                        if max_tokens:
                            # Count tokens in this message
                            if msg.cost_info:
                                msg_tokens = msg.cost_info.token_usage.total_tokens
                            else:
                                # Fallback to tiktoken if no cost info
                                msg_tokens = self.get_message_tokens(msg)
                
                            if token_count + msg_tokens > max_tokens:
                                break
                            token_count += msg_tokens
                            # Add to front since we're going backwards
                            messages.insert(0, formatted)
                        else:
                            messages.append(formatted)
                
                    return "\n".join(messages)
                

                get_history

                get_history(include_pending: bool = True, do_filter: bool = True) -> list[ChatMessage[Any]]
                

                Get conversation history.

                Parameters:

                Name Type Description Default
                include_pending bool

                Whether to include pending messages

                True
                do_filter bool

                Whether to apply memory config limits (max_tokens, max_messages)

                True

                Returns:

                Type Description
                list[ChatMessage[Any]]

                Filtered list of messages in chronological order

                Source code in src/llmling_agent/agent/conversation.py
                def get_history(
                    self,
                    include_pending: bool = True,
                    do_filter: bool = True,
                ) -> list[ChatMessage[Any]]:
                    """Get conversation history.
                
                    Args:
                        include_pending: Whether to include pending messages
                        do_filter: Whether to apply memory config limits (max_tokens, max_messages)
                
                    Returns:
                        Filtered list of messages in chronological order
                    """
                    if include_pending and self._pending_messages:
                        self.chat_messages.extend(self._pending_messages)
                        self._pending_messages.clear()
                
                    # 2. Start with original history
                    history: Sequence[ChatMessage[Any]] = self.chat_messages
                
                    # 3. Only filter if needed
                    if do_filter and self._config:
                        # First filter by message count (simple slice)
                        if self._config.max_messages:
                            history = history[-self._config.max_messages :]
                
                        # Then filter by tokens if needed
                        if self._config.max_tokens:
                            token_count = 0
                            filtered = []
                            # Collect messages from newest to oldest until we hit the limit
                            for msg in reversed(history):
                                msg_tokens = self.get_message_tokens(msg)
                                if token_count + msg_tokens > self._config.max_tokens:
                                    break
                                token_count += msg_tokens
                                filtered.append(msg)
                            history = list(reversed(filtered))
                
                    return list(history)
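
                A usage sketch; the MemoryConfig import location is an assumption:

                from llmling_agent_config import MemoryConfig  # assumed import path

                manager = MessageHistory(session_config=MemoryConfig(max_messages=50))
                recent = manager.get_history()  # merges pending, applies limits
                everything = manager.get_history(do_filter=False)  # raw, unfiltered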
                

                get_history_tokens

                get_history_tokens() -> int
                

                Get token count for current history.

                Source code in src/llmling_agent/agent/conversation.py
                def get_history_tokens(self) -> int:
                    """Get token count for current history."""
                    # Use cost_info if available
                    return self.chat_messages.get_history_tokens()
                

                get_initialization_tasks

                get_initialization_tasks() -> list[Coroutine[Any, Any, Any]]
                

                Get all initialization coroutines.

                Source code in src/llmling_agent/agent/conversation.py
                def get_initialization_tasks(self) -> list[Coroutine[Any, Any, Any]]:
                    """Get all initialization coroutines."""
                    sources = self._resources
                    self._resources = []  # Clear so we don't load again on async init
                    return [self.load_context_source(source) for source in sources]
                

                get_message_tokens

                get_message_tokens(message: ChatMessage[Any]) -> int
                

                Get token count for a single message.

                Source code in src/llmling_agent/agent/conversation.py
                def get_message_tokens(self, message: ChatMessage[Any]) -> int:
                    """Get token count for a single message."""
                    content = "\n".join(message.format())
                    return count_tokens(content, message.model_name)
                

                get_pending_messages

                get_pending_messages() -> list[ChatMessage[Any]]
                

                Get messages that will be included in next interaction.

                Source code in src/llmling_agent/agent/conversation.py
                def get_pending_messages(self) -> list[ChatMessage[Any]]:
                    """Get messages that will be included in next interaction."""
                    return list(self._pending_messages)
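
                The pending queue drains into history on the next read, as this
                sketch shows:

                conversation.add_context_message("Background info", source="wiki")
                assert conversation.get_pending_messages()  # queued, not yet in history
                conversation.get_history()  # include_pending=True merges them in
                assert not conversation.get_pending_messages()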
                

                load_context_source async

                load_context_source(source: PromptType | str) -> None
                

                Load context from a single source.

                Source code in src/llmling_agent/agent/conversation.py
                async def load_context_source(self, source: PromptType | str) -> None:
                    """Load context from a single source."""
                    from llmling_agent.prompts.prompts import BasePrompt
                
                    try:
                        match source:
                            case str():
                                await self.add_context_from_path(source)
                            case BasePrompt():
                                await self.add_context_from_prompt(source)
                    except Exception:
                        logger.exception(
                            "Failed to load context",
                            source="file" if isinstance(source, str) else source.type,
                        )
                

                load_history_from_database

                load_history_from_database(
                    session: SessionIdType | SessionQuery = None,
                    *,
                    since: datetime | None = None,
                    until: datetime | None = None,
                    roles: set[MessageRole] | None = None,
                    limit: int | None = None
                ) -> None
                

                Load conversation history from database.

                Parameters:

                Name Type Description Default
                session SessionIdType | SessionQuery

                Session ID or query config

                None
                since datetime | None

                Only include messages after this time (override)

                None
                until datetime | None

                Only include messages before this time (override)

                None
                roles set[MessageRole] | None

                Only include messages with these roles (override)

                None
                limit int | None

                Maximum number of messages to return (override)

                None
                Source code in src/llmling_agent/agent/conversation.py
                def load_history_from_database(
                    self,
                    session: SessionIdType | SessionQuery = None,
                    *,
                    since: datetime | None = None,
                    until: datetime | None = None,
                    roles: set[MessageRole] | None = None,
                    limit: int | None = None,
                ) -> None:
                    """Load conversation history from database.
                
                    Args:
                        session: Session ID or query config
                        since: Only include messages after this time (override)
                        until: Only include messages before this time (override)
                        roles: Only include messages with these roles (override)
                        limit: Maximum number of messages to return (override)
                    """
                    from llmling_agent_config.session import SessionQuery
                
                    match session:
                        case SessionQuery() as query:
                            # Override query params if provided
                            if since is not None or until is not None or roles or limit:
                                update = {
                                    "since": since.isoformat() if since else None,
                                    "until": until.isoformat() if until else None,
                                    "roles": roles,
                                    "limit": limit,
                                }
                                query = query.model_copy(update=update)
                            if query.name:
                                self.id = query.name
                        case str() | UUID():
                            self.id = str(session)
                            query = SessionQuery(
                                name=self.id,
                                since=since.isoformat() if since else None,
                                until=until.isoformat() if until else None,
                                roles=roles,
                                limit=limit,
                            )
                        case None:
                            # Use current session ID
                            query = SessionQuery(
                                name=self.id,
                                since=since.isoformat() if since else None,
                                until=until.isoformat() if until else None,
                                roles=roles,
                                limit=limit,
                            )
                        case _ as unreachable:
                            assert_never(unreachable)
                    self.chat_messages.clear()
                    self.chat_messages.extend(self.storage.filter_messages.sync(query))
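
                A call sketch restoring a named session, narrowed by time and count
                (the session name is illustrative):

                from datetime import datetime, timedelta

                conversation.load_history_from_database(
                    "support-session-42",
                    since=datetime.now() - timedelta(days=1),
                    limit=100,
                )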
                

                set_history

                set_history(history: list[ChatMessage[Any]]) -> None
                

                Update conversation history after run.

                Source code in src/llmling_agent/agent/conversation.py
                def set_history(self, history: list[ChatMessage[Any]]) -> None:
                    """Update conversation history after run."""
                    self.chat_messages.clear()
                    self.chat_messages.extend(history)
                

                temporary_state async

                temporary_state(
                    history: list[AnyPromptType] | SessionQuery | None = None, *, replace_history: bool = False
                ) -> AsyncIterator[Self]
                

                Temporarily set conversation history.

                Parameters:

                Name Type Description Default
                history list[AnyPromptType] | SessionQuery | None

                Optional list of prompts to use as temporary history. Can be strings, BasePrompts, or other prompt types.

                None
                replace_history bool

                If True, only use provided history. If False, append to existing history.

                False
                Source code in src/llmling_agent/agent/conversation.py
                @asynccontextmanager
                async def temporary_state(
                    self,
                    history: list[AnyPromptType] | SessionQuery | None = None,
                    *,
                    replace_history: bool = False,
                ) -> AsyncIterator[Self]:
                    """Temporarily set conversation history.
                
                    Args:
                        history: Optional list of prompts to use as temporary history.
                                Can be strings, BasePrompts, or other prompt types.
                        replace_history: If True, only use provided history. If False, append
                                to existing history.
                    """
                    from toprompt import to_prompt
                
                    from llmling_agent.messaging import ChatMessage, ChatMessageList
                
                    old_history = self.chat_messages.copy()
                    try:
                        messages: Sequence[ChatMessage[Any]] = ChatMessageList()
                        if history is not None:
                            if isinstance(history, SessionQuery):
                                messages = await self.storage.filter_messages(history)
                            else:
                                messages = [
                                    ChatMessage.user_prompt(message=prompt)
                                    for p in history
                                    if (prompt := await to_prompt(p))
                                ]
                
                        if replace_history:
                            self.chat_messages = ChatMessageList(messages)
                        else:
                            self.chat_messages.extend(messages)
                
                        yield self
                
                    finally:
                        self.chat_messages = old_history
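
                A usage sketch: run against a scratch history, then restore the
                original on exit:

                async with conversation.temporary_state(
                    ["You are in test mode.", "Previous answer: 42"],
                    replace_history=True,
                ) as temp:
                    assert len(temp) == 2  # only the two temporary messages
                # the original chat_messages are restored here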
                

                SlashedAgent

                Wrapper around Agent that handles slash commands in streams.

                Uses the "commands first" strategy from the ACP adapter: 1. Execute all slash commands first 2. Then process remaining content through wrapped agent 3. If only commands, end without LLM processing

                Source code in src/llmling_agent/agent/slashed_agent.py
                class SlashedAgent[TDeps, OutputDataT]:
                    """Wrapper around Agent that handles slash commands in streams.
                
                    Uses the "commands first" strategy from the ACP adapter:
                    1. Execute all slash commands first
                    2. Then process remaining content through wrapped agent
                    3. If only commands, end without LLM processing
                    """
                
                    def __init__(
                        self,
                        agent: Agent[TDeps, OutputDataT] | ACPAgent | AGUIAgent,
                        command_store: CommandStore | None = None,
                        *,
                        context_data_factory: Callable[[], Any] | None = None,
                    ) -> None:
                        """Initialize with wrapped agent and command store.
                
                        Args:
                            agent: The agent to wrap
                            command_store: Command store for slash commands (creates default if None)
                            context_data_factory: Optional factory for creating command context data
                        """
                        self.agent = agent
                        self._context_data_factory = context_data_factory
                        self._event_queue: asyncio.Queue[CommandStoreEvent] | None = None
                
                        # Create store with our streaming event handler
                        if command_store is None:
                            from slashed import CommandStore
                
                            from llmling_agent_commands import get_commands
                
                            cmds = get_commands()
                            self.command_store = CommandStore(event_handler=self._emit_event, commands=cmds)
                        else:
                            self.command_store = command_store
                
                    async def _emit_event(self, event: CommandStoreEvent) -> None:
                        """Bridge store events to async queue during command execution."""
                        if self._event_queue:
                            await self._event_queue.put(event)
                
                    def _is_slash_command(self, text: str) -> bool:
                        """Check if text starts with a slash command.
                
                        Args:
                            text: Text to check
                
                        Returns:
                            True if text is a slash command
                        """
                        return bool(SLASH_PATTERN.match(text.strip()))
                
                    async def _execute_slash_command_streaming(
                        self, command_text: str
                    ) -> AsyncGenerator[CommandOutputEvent | CommandCompleteEvent]:
                        """Execute a single slash command and yield events as they happen.
                
                        Args:
                            command_text: Full command text including slash
                
                        Yields:
                            Command output and completion events
                        """
                        parsed = _parse_slash_command(command_text)
                        if not parsed:
                            logger.warning("Invalid slash command", command=command_text)
                            yield CommandCompleteEvent(command="unknown", success=False)
                            return
                
                        cmd_name, args = parsed
                
                        # Set up event queue for this command execution
                        self._event_queue = asyncio.Queue()
                        context_data = (  # Create command context
                            self._context_data_factory() if self._context_data_factory else self.agent.context
                        )
                
                        cmd_ctx = self.command_store.create_context(data=context_data)
                        command_str = f"{cmd_name} {args}".strip()
                        execute_task = asyncio.create_task(self.command_store.execute_command(command_str, cmd_ctx))
                
                        success = True
                        try:
                            # Yield events from queue as command runs
                            while not execute_task.done():
                                try:
                                    # Wait for events with short timeout to check task completion
                                    event = await asyncio.wait_for(self._event_queue.get(), timeout=0.1)
                                    # Convert store events to our stream events
                                    match event:
                                        case SlashedCommandOutputEvent(output=output):
                                            yield CommandOutputEvent(command=cmd_name, output=output)
                                        case CommandExecutedEvent(success=False, error=error) if error:
                                            output = f"Command error: {error}"
                                            yield CommandOutputEvent(command=cmd_name, output=output)
                                            success = False
                                except TimeoutError:
                                    continue
                
                            # Ensure command task completes and handle any remaining events
                            try:
                                await execute_task
                            except Exception as e:
                                logger.exception("Command execution failed", command=cmd_name)
                                success = False
                                yield CommandOutputEvent(command=cmd_name, output=f"Command error: {e}")
                
                            # Drain any remaining events from queue
                            while not self._event_queue.empty():
                                try:
                                    match self._event_queue.get_nowait():
                                        case SlashedCommandOutputEvent(output=output):
                                            yield CommandOutputEvent(command=cmd_name, output=output)
                                except asyncio.QueueEmpty:
                                    break
                
                            # Always yield completion event
                            yield CommandCompleteEvent(command=cmd_name, success=success)
                
                        finally:
                            # Clean up event queue
                            self._event_queue = None
                
                    async def run_stream(
                        self,
                        *prompts: PromptCompatible,
                        **kwargs: Any,
                    ) -> AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]:
                        """Run agent with slash command support.
                
                        Separates slash commands from regular prompts, executes commands first,
                        then processes remaining content through the wrapped agent.
                
                        Args:
                            *prompts: Input prompts (may include slash commands)
                            **kwargs: Additional arguments passed to agent.run_stream
                
                        Yields:
                            Stream events from command execution and agent processing
                        """
                        # Separate slash commands from regular content
                        commands: list[str] = []
                        regular_prompts: list[Any] = []
                
                        for prompt in prompts:
                            if isinstance(prompt, str) and self._is_slash_command(prompt):
                                logger.debug("Found slash command", command=prompt)
                                commands.append(prompt.strip())
                            else:
                                regular_prompts.append(prompt)
                
                        # Execute all commands first with streaming
                        if commands:
                            for command in commands:
                                logger.info("Processing slash command", command=command)
                                async for cmd_event in self._execute_slash_command_streaming(command):
                                    yield cmd_event
                
                        # If we have regular content, process it through the agent
                        if regular_prompts:
                            logger.debug("Processing prompts through agent", num_prompts=len(regular_prompts))
                            async for event in self.agent.run_stream(*regular_prompts, **kwargs):
                                # Remote agents (ACP/AG-UI) stream str; cast to match OutputDataT
                                yield cast("SlashedAgentStreamEvent[OutputDataT]", event)
                
                        # If we only had commands and no regular content, we're done
                        # (no additional events needed)
                
                    def __getattr__(self, name: str) -> Any:
                        """Delegate attribute access to wrapped agent.
                
                        Args:
                            name: Attribute name
                
                        Returns:
                            Attribute value from wrapped agent
                        """
                        return getattr(self.agent, name)
                

                __getattr__

                __getattr__(name: str) -> Any
                

                Delegate attribute access to wrapped agent.

                Parameters:

                Name Type Description Default
                name str

                Attribute name

                required

                Returns:

                Type Description
                Any

                Attribute value from wrapped agent

                Source code in src/llmling_agent/agent/slashed_agent.py
                def __getattr__(self, name: str) -> Any:
                    """Delegate attribute access to wrapped agent.
                
                    Args:
                        name: Attribute name
                
                    Returns:
                        Attribute value from wrapped agent
                    """
                    return getattr(self.agent, name)
                

                __init__

                __init__(
                    agent: Agent[TDeps, OutputDataT] | ACPAgent | AGUIAgent,
                    command_store: CommandStore | None = None,
                    *,
                    context_data_factory: Callable[[], Any] | None = None
                ) -> None
                

                Initialize with wrapped agent and command store.

                Parameters:

                Name Type Description Default
                agent Agent[TDeps, OutputDataT] | ACPAgent | AGUIAgent

                The agent to wrap

                required
                command_store CommandStore | None

                Command store for slash commands (creates default if None)

                None
                context_data_factory Callable[[], Any] | None

                Optional factory for creating command context data

                None
                Source code in src/llmling_agent/agent/slashed_agent.py
                def __init__(
                    self,
                    agent: Agent[TDeps, OutputDataT] | ACPAgent | AGUIAgent,
                    command_store: CommandStore | None = None,
                    *,
                    context_data_factory: Callable[[], Any] | None = None,
                ) -> None:
                    """Initialize with wrapped agent and command store.
                
                    Args:
                        agent: The agent to wrap
                        command_store: Command store for slash commands (creates default if None)
                        context_data_factory: Optional factory for creating command context data
                    """
                    self.agent = agent
                    self._context_data_factory = context_data_factory
                    self._event_queue: asyncio.Queue[CommandStoreEvent] | None = None
                
                    # Create store with our streaming event handler
                    if command_store is None:
                        from slashed import CommandStore
                
                        from llmling_agent_commands import get_commands
                
                        cmds = get_commands()
                        self.command_store = CommandStore(event_handler=self._emit_event, commands=cmds)
                    else:
                        self.command_store = command_store
                

                run_stream async

                run_stream(
                    *prompts: PromptCompatible, **kwargs: Any
                ) -> AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]
                

                Run agent with slash command support.

                Separates slash commands from regular prompts, executes commands first, then processes remaining content through the wrapped agent.

                Parameters:

                Name Type Description Default
                *prompts PromptCompatible

                Input prompts (may include slash commands)

                ()
                **kwargs Any

                Additional arguments passed to agent.run_stream

                {}

                Yields:

                Type Description
                AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]

                Stream events from command execution and agent processing

                Source code in src/llmling_agent/agent/slashed_agent.py
                async def run_stream(
                    self,
                    *prompts: PromptCompatible,
                    **kwargs: Any,
                ) -> AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]:
                    """Run agent with slash command support.
                
                    Separates slash commands from regular prompts, executes commands first,
                    then processes remaining content through the wrapped agent.
                
                    Args:
                        *prompts: Input prompts (may include slash commands)
                        **kwargs: Additional arguments passed to agent.run_stream
                
                    Yields:
                        Stream events from command execution and agent processing
                    """
                    # Separate slash commands from regular content
                    commands: list[str] = []
                    regular_prompts: list[Any] = []
                
                    for prompt in prompts:
                        if isinstance(prompt, str) and self._is_slash_command(prompt):
                            logger.debug("Found slash command", command=prompt)
                            commands.append(prompt.strip())
                        else:
                            regular_prompts.append(prompt)
                
                    # Execute all commands first with streaming
                    if commands:
                        for command in commands:
                            logger.info("Processing slash command", command=command)
                            async for cmd_event in self._execute_slash_command_streaming(command):
                                yield cmd_event
                
                    # If we have regular content, process it through the agent
                    if regular_prompts:
                        logger.debug("Processing prompts through agent", num_prompts=len(regular_prompts))
                        async for event in self.agent.run_stream(*regular_prompts, **kwargs):
                            # Remote agents (ACP/AG-UI) stream str; cast to match OutputDataT
                            yield cast("SlashedAgentStreamEvent[OutputDataT]", event)
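
                A consumption sketch; the event classes mirror the names yielded
                above, and their import location is an assumption:

                slashed = SlashedAgent(agent)
                async for event in slashed.run_stream("/help", "Summarize the report"):
                    match event:
                        case CommandOutputEvent(command=cmd, output=text):
                            print(f"[{cmd}] {text}")  # slash commands run first
                        case CommandCompleteEvent(command=cmd, success=ok):
                            print(f"[{cmd}] done, success={ok}")
                        case _:
                            print(event)  # regular agent stream events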
                

                SystemPrompts

                Manages system prompts for an agent.

                Source code in src/llmling_agent/agent/sys_prompts.py
                class SystemPrompts:
                    """Manages system prompts for an agent."""
                
                    def __init__(
                        self,
                        prompts: AnyPromptType | list[AnyPromptType] | None = None,
                        template: str | None = None,
                        dynamic: bool = True,
                        prompt_manager: PromptManager | None = None,
                        inject_agent_info: bool = True,
                        inject_tools: ToolInjectionMode = "off",
                        tool_usage_style: ToolUsageStyle = "suggestive",
                    ) -> None:
                        """Initialize prompt manager."""
                        from jinjarope import Environment
                        from toprompt import to_prompt
                
                        match prompts:
                            case list():
                                self.prompts = prompts
                            case None:
                                self.prompts = []
                            case _:
                                self.prompts = [prompts]
                        self.prompt_manager = prompt_manager
                        self.template = template
                        self.dynamic = dynamic
                        self.inject_agent_info = inject_agent_info
                        self.inject_tools = inject_tools
                        self.tool_usage_style = tool_usage_style
                        self._cached = False
                        self._env = Environment(enable_async=True)
                        self._env.filters["to_prompt"] = to_prompt
                
                    def __repr__(self) -> str:
                        return (
                            f"SystemPrompts(prompts={len(self.prompts)}, "
                            f"dynamic={self.dynamic}, inject_agent_info={self.inject_agent_info}, "
                            f"inject_tools={self.inject_tools!r})"
                        )
                
                    def __len__(self) -> int:
                        return len(self.prompts)
                
                    def __getitem__(self, idx: int | slice) -> AnyPromptType | list[AnyPromptType]:
                        return self.prompts[idx]
                
                    async def add_by_reference(self, reference: str) -> None:
                        """Add a system prompt using reference syntax.
                
                        Args:
                            reference: [provider:]identifier[@version][?var1=val1,...]
                
                        Examples:
                            await sys_prompts.add_by_reference("code_review?language=python")
                            await sys_prompts.add_by_reference("langfuse:expert@v2")
                        """
                        if not self.prompt_manager:
                            msg = "No prompt_manager available to resolve prompts"
                            raise RuntimeError(msg)
                
                        try:
                            content = await self.prompt_manager.get(reference)
                            self.prompts.append(content)
                        except Exception as e:
                            msg = f"Failed to add prompt {reference!r}"
                            raise RuntimeError(msg) from e
                
                    async def add(
                        self,
                        identifier: str,
                        *,
                        provider: str | None = None,
                        version: str | None = None,
                        variables: dict[str, Any] | None = None,
                    ) -> None:
                        """Add a system prompt.
                
                        Args:
                            identifier: Prompt identifier/name
                            provider: Provider name (None = builtin)
                            version: Optional version string
                            variables: Optional template variables
                
                        Examples:
                            await sys_prompts.add("code_review", variables={"language": "python"})
                            await sys_prompts.add("expert", provider="langfuse", version="v2")
                        """
                        if not self.prompt_manager:
                            msg = "No prompt_manager available to resolve prompts"
                            raise RuntimeError(msg)
                
                        try:
                            content = await self.prompt_manager.get_from(
                                identifier,
                                provider=provider,
                                version=version,
                                variables=variables,
                            )
                            self.prompts.append(content)
                        except Exception as e:
                            ref = f"{provider + ':' if provider else ''}{identifier}"
                            msg = f"Failed to add prompt {ref!r}"
                            raise RuntimeError(msg) from e
                
                    def clear(self) -> None:
                        """Clear all system prompts."""
                        self.prompts = []
                
                    async def refresh_cache(self) -> None:
                        """Force re-evaluation of prompts."""
                        from toprompt import to_prompt
                
                        evaluated = []
                        for prompt in self.prompts:
                            result = await to_prompt(prompt)
                            evaluated.append(result)
                        self.prompts = evaluated
                        self._cached = True
                
                    @asynccontextmanager
                    async def temporary_prompt(
                        self, prompt: AnyPromptType, exclusive: bool = False
                    ) -> AsyncIterator[None]:
                        """Temporarily override system prompts.
                
                        Args:
                            prompt: Single prompt or sequence of prompts to use temporarily
                        exclusive: Whether to use only the given prompt. If False, the prompt
                                   is appended to the agent's prompts temporarily.
                        """
                        from toprompt import to_prompt
                
                        original_prompts = self.prompts.copy()
                        new_prompt = await to_prompt(prompt)
                        self.prompts = [new_prompt] if exclusive else [*self.prompts, new_prompt]
                        try:
                            yield
                        finally:
                            self.prompts = original_prompts
                
                    async def format_system_prompt(self, agent: Agent[Any, Any]) -> str:
                        """Format complete system prompt."""
                        if not self.dynamic and not self._cached:
                            await self.refresh_cache()
                
                        template = self._env.from_string(self.template or DEFAULT_TEMPLATE)
                        result = await template.render_async(
                            agent=agent,
                            prompts=self.prompts,
                            dynamic=self.dynamic,
                            inject_agent_info=self.inject_agent_info,
                            inject_tools=self.inject_tools,
                            tool_usage_style=self.tool_usage_style,
                        )
                        return result.strip()
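
                A minimal usage sketch (the prompt strings are illustrative, and the import
                path follows the source location shown above):

                    from llmling_agent.agent.sys_prompts import SystemPrompts

                    sys_prompts = SystemPrompts(
                        prompts=["You are a precise assistant.", "Answer in one paragraph."],
                        dynamic=False,           # evaluate prompts once, then reuse the cached strings
                        inject_agent_info=True,  # let the template include agent metadata
                    )
                    # rendering requires a live Agent instance:
                    # text = await sys_prompts.format_system_prompt(agent)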
                

                __init__

                __init__(
                    prompts: AnyPromptType | list[AnyPromptType] | None = None,
                    template: str | None = None,
                    dynamic: bool = True,
                    prompt_manager: PromptManager | None = None,
                    inject_agent_info: bool = True,
                    inject_tools: ToolInjectionMode = "off",
                    tool_usage_style: ToolUsageStyle = "suggestive",
                ) -> None
                

                Initialize prompt manager.

                Source code in src/llmling_agent/agent/sys_prompts.py (lines 58-87)
                def __init__(
                    self,
                    prompts: AnyPromptType | list[AnyPromptType] | None = None,
                    template: str | None = None,
                    dynamic: bool = True,
                    prompt_manager: PromptManager | None = None,
                    inject_agent_info: bool = True,
                    inject_tools: ToolInjectionMode = "off",
                    tool_usage_style: ToolUsageStyle = "suggestive",
                ) -> None:
                    """Initialize prompt manager."""
                    from jinjarope import Environment
                    from toprompt import to_prompt
                
                    match prompts:
                        case list():
                            self.prompts = prompts
                        case None:
                            self.prompts = []
                        case _:
                            self.prompts = [prompts]
                    self.prompt_manager = prompt_manager
                    self.template = template
                    self.dynamic = dynamic
                    self.inject_agent_info = inject_agent_info
                    self.inject_tools = inject_tools
                    self.tool_usage_style = tool_usage_style
                    self._cached = False
                    self._env = Environment(enable_async=True)
                    self._env.filters["to_prompt"] = to_prompt
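
                The match statement normalizes every accepted shape of prompts to a list; a
                quick sketch of the resulting .prompts attribute (values illustrative):

                    SystemPrompts().prompts              # []
                    SystemPrompts("Be terse.").prompts   # ["Be terse."]
                    SystemPrompts(["A", "B"]).prompts    # ["A", "B"]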
                

                add async

                add(
                    identifier: str,
                    *,
                    provider: str | None = None,
                    version: str | None = None,
                    variables: dict[str, Any] | None = None
                ) -> None
                

                Add a system prompt.

                Parameters:

                Name        Type                   Description                     Default
                identifier  str                    Prompt identifier/name          required
                provider    str | None             Provider name (None = builtin)  None
                version     str | None             Optional version string         None
                variables   dict[str, Any] | None  Optional template variables     None

                Examples:

                await sys_prompts.add("code_review", variables={"language": "python"}) await sys_prompts.add("expert", provider="langfuse", version="v2")

                Source code in src/llmling_agent/agent/sys_prompts.py (lines 123-158)
                async def add(
                    self,
                    identifier: str,
                    *,
                    provider: str | None = None,
                    version: str | None = None,
                    variables: dict[str, Any] | None = None,
                ) -> None:
                    """Add a system prompt.
                
                    Args:
                        identifier: Prompt identifier/name
                        provider: Provider name (None = builtin)
                        version: Optional version string
                        variables: Optional template variables
                
                    Examples:
                        await sys_prompts.add("code_review", variables={"language": "python"})
                        await sys_prompts.add("expert", provider="langfuse", version="v2")
                    """
                    if not self.prompt_manager:
                        msg = "No prompt_manager available to resolve prompts"
                        raise RuntimeError(msg)
                
                    try:
                        content = await self.prompt_manager.get_from(
                            identifier,
                            provider=provider,
                            version=version,
                            variables=variables,
                        )
                        self.prompts.append(content)
                    except Exception as e:
                        ref = f"{provider + ':' if provider else ''}{identifier}"
                        msg = f"Failed to add prompt {ref!r}"
                        raise RuntimeError(msg) from e
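
                Both add() and add_by_reference() require a prompt_manager; a small sketch of
                the failure mode when none was supplied (the identifier is illustrative):

                    sp = SystemPrompts()  # constructed without a prompt_manager
                    try:
                        await sp.add("code_review")
                    except RuntimeError as e:
                        print(e)  # No prompt_manager available to resolve prompts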
                

                add_by_reference async

                add_by_reference(reference: str) -> None
                

                Add a system prompt using reference syntax.

                Parameters:

                Name       Type  Description                                      Default
                reference  str   [provider:]identifier[@version][?var1=val1,...]  required

                Examples:

                await sys_prompts.add_by_reference("code_review?language=python")
                await sys_prompts.add_by_reference("langfuse:expert@v2")

                Source code in src/llmling_agent/agent/sys_prompts.py (lines 102-121)
                async def add_by_reference(self, reference: str) -> None:
                    """Add a system prompt using reference syntax.
                
                    Args:
                        reference: [provider:]identifier[@version][?var1=val1,...]
                
                    Examples:
                        await sys_prompts.add_by_reference("code_review?language=python")
                        await sys_prompts.add_by_reference("langfuse:expert@v2")
                    """
                    if not self.prompt_manager:
                        msg = "No prompt_manager available to resolve prompts"
                        raise RuntimeError(msg)
                
                    try:
                        content = await self.prompt_manager.get(reference)
                        self.prompts.append(content)
                    except Exception as e:
                        msg = f"Failed to add prompt {reference!r}"
                        raise RuntimeError(msg) from e
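
                The compact reference syntax maps directly onto the keyword form of add(); a
                sketch decomposing a full reference (provider, version, and variable values
                are illustrative):

                    # provider "langfuse", identifier "expert", version "v2",
                    # template variable language=python
                    await sys_prompts.add_by_reference("langfuse:expert@v2?language=python")

                    # equivalent long form:
                    await sys_prompts.add(
                        "expert",
                        provider="langfuse",
                        version="v2",
                        variables={"language": "python"},
                    )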
                

                clear

                clear() -> None
                

                Clear all system prompts.

                Source code in src/llmling_agent/agent/sys_prompts.py (lines 160-162)
                def clear(self) -> None:
                    """Clear all system prompts."""
                    self.prompts = []
                

                format_system_prompt async

                format_system_prompt(agent: Agent[Any, Any]) -> str
                

                Format complete system prompt.

                Source code in src/llmling_agent/agent/sys_prompts.py (lines 196-210)
                async def format_system_prompt(self, agent: Agent[Any, Any]) -> str:
                    """Format complete system prompt."""
                    if not self.dynamic and not self._cached:
                        await self.refresh_cache()
                
                    template = self._env.from_string(self.template or DEFAULT_TEMPLATE)
                    result = await template.render_async(
                        agent=agent,
                        prompts=self.prompts,
                        dynamic=self.dynamic,
                        inject_agent_info=self.inject_agent_info,
                        inject_tools=self.inject_tools,
                        tool_usage_style=self.tool_usage_style,
                    )
                    return result.strip()
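
                With dynamic=False, the first call triggers refresh_cache(), so every prompt
                is rendered to a plain string exactly once and later calls only re-run the
                Jinja template. A sketch (assuming callables are valid AnyPromptType values):

                    sp = SystemPrompts(prompts=[lambda: "generated at startup"], dynamic=False)
                    # the first format_system_prompt() call evaluates the callable via
                    # to_prompt() and caches the string; later calls reuse it unchanged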
                

                refresh_cache async

                refresh_cache() -> None
                

                Force re-evaluation of prompts.

                Source code in src/llmling_agent/agent/sys_prompts.py (lines 164-173)
                async def refresh_cache(self) -> None:
                    """Force re-evaluation of prompts."""
                    from toprompt import to_prompt
                
                    evaluated = []
                    for prompt in self.prompts:
                        result = await to_prompt(prompt)
                        evaluated.append(result)
                    self.prompts = evaluated
                    self._cached = True
                

                temporary_prompt async

                temporary_prompt(prompt: AnyPromptType, exclusive: bool = False) -> AsyncIterator[None]
                

                Temporarily override system prompts.

                Parameters:

                Name       Type           Description                                               Default
                prompt     AnyPromptType  Single prompt or sequence of prompts to use temporarily   required
                exclusive  bool           Whether to use only the given prompt. If False, the       False
                                          prompt is appended to the agent's prompts temporarily.
                Source code in src/llmling_agent/agent/sys_prompts.py (lines 175-194)
                @asynccontextmanager
                async def temporary_prompt(
                    self, prompt: AnyPromptType, exclusive: bool = False
                ) -> AsyncIterator[None]:
                    """Temporarily override system prompts.
                
                    Args:
                        prompt: Single prompt or sequence of prompts to use temporarily
                        exclusive: Whether to use only the given prompt. If False, the prompt
                                   is appended to the agent's prompts temporarily.
                    """
                    from toprompt import to_prompt
                
                    original_prompts = self.prompts.copy()
                    new_prompt = await to_prompt(prompt)
                    self.prompts = [new_prompt] if exclusive else [*self.prompts, new_prompt]
                    try:
                        yield
                    finally:
                        self.prompts = original_prompts
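
                A usage sketch (the prompt text is illustrative):

                    async with sys_prompts.temporary_prompt("Focus on security.", exclusive=True):
                        ...  # only the temporary prompt is active inside this block
                    # on exit the original prompts are restored, even if an exception was raised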