
agent

Class info

Classes

Name                  Module                              Description
Agent                 llmling_agent.agent.agent           The main agent class.
AgentContext          llmling_agent.agent.context         Runtime context for agent execution.
ConversationManager   llmling_agent.agent.conversation    Manages conversation state and system prompts.
Interactions          llmling_agent.agent.interactions    Manages agent communication patterns.
SlashedAgent          llmling_agent.agent.slashed_agent   Wrapper around Agent that handles slash commands in streams.
SystemPrompts         llmling_agent.agent.sys_prompts     Manages system prompts for an agent.


Agent

Bases: MessageNode[TDeps, OutputDataT]

The main agent class.

Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]

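For example, a minimal usage sketch (the import path, model identifier, and prompts are illustrative assumptions, not taken from this page):

    import asyncio

    from llmling_agent import Agent  # import path assumed

    async def main() -> None:
        # Dependency type None, output type str (the defaults)
        agent = Agent[None, str](
            name="assistant",
            model="openai:gpt-5",  # model identifier is illustrative
            system_prompt="You are a helpful assistant.",
        )
        async with agent:  # sets up MCP servers, toolsets, and resources
            result = await agent.run("Hello!")
            print(result.content)

    asyncio.run(main())
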
Source code in src/llmling_agent/agent/agent.py
              class Agent[TDeps = None, OutputDataT = str](MessageNode[TDeps, OutputDataT]):
                  """The main agent class.
              
                  Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]
                  """
              
                  @dataclass(frozen=True)
                  class AgentReset:
                      """Emitted when agent is reset."""
              
                      agent_name: AgentName
                      previous_tools: dict[str, bool]
                      new_tools: dict[str, bool]
                      timestamp: datetime = field(default_factory=get_now)
              
                  run_failed = Signal(str, Exception)
                  agent_reset = Signal(AgentReset)
              
                  def __init__(
                       # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                      self,
                      name: str = "llmling-agent",
                      *,
                      deps_type: type[TDeps] | None = None,
                      model: ModelType = None,
                      output_type: OutputSpec[OutputDataT] = str,  # type: ignore[assignment]
                      context: AgentContext[TDeps] | None = None,
                      session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                      system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                      description: str | None = None,
                      tools: Sequence[ToolType | Tool] | None = None,
                      toolsets: Sequence[ResourceProvider] | None = None,
                      mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                      resources: Sequence[PromptType | str] = (),
                      retries: int = 1,
                      output_retries: int | None = None,
                      end_strategy: EndStrategy = "early",
                      input_provider: InputProvider | None = None,
                      parallel_init: bool = True,
                      debug: bool = False,
                      event_handlers: Sequence[IndividualEventHandler] | None = None,
                  ):
                      """Initialize agent.
              
                      Args:
                          name: Name of the agent for logging and identification
                          deps_type: Type of dependencies to use
                          model: The default model to use (defaults to GPT-5)
                          output_type: The default output type to use (defaults to str)
                          context: Agent context with configuration
                          session: Memory configuration.
                              - None: Default memory config
                              - False: Disable message history (max_messages=0)
                              - int: Max tokens for memory
                              - str/UUID: Session identifier
                              - MemoryConfig: Full memory configuration
                              - MemoryProvider: Custom memory provider
                              - SessionQuery: Session query
              
                          system_prompt: System prompts for the agent
                          description: Description of the Agent ("what it can do")
                          tools: List of tools to register with the agent
                          toolsets: List of toolset resource providers for the agent
                          mcp_servers: MCP servers to connect to
                          resources: Additional resources to load
                          retries: Default number of retries for failed operations
                          output_retries: Max retries for result validation (defaults to retries)
                          end_strategy: Strategy for handling tool calls that are requested alongside
                                        a final result
                          input_provider: Provider for human input (tool confirmation / HumanProviders)
                          parallel_init: Whether to initialize resources in parallel
                          debug: Whether to enable debug mode
                          event_handlers: Sequence of event handlers to register with the agent
                      """
                      from llmling_agent.agent import AgentContext
                      from llmling_agent.agent.conversation import ConversationManager
                      from llmling_agent.agent.interactions import Interactions
                      from llmling_agent.agent.sys_prompts import SystemPrompts
                      from llmling_agent.observability import registry
              
                      self.task_manager = TaskManager()
                      self._infinite = False
                       # save some stuff for async init
                      self.deps_type = deps_type
                      # prepare context
                      ctx = context or AgentContext[TDeps].create_default(
                          name,
                          input_provider=input_provider,
                      )
                      self._context = ctx
                      # TODO: use to_structured with tool_name / description?
                      self._output_type = to_type(output_type, ctx.definition.responses)
                      memory_cfg = (
                          session
                          if isinstance(session, MemoryConfig)
                          else MemoryConfig.from_value(session)
                      )
                      # Initialize progress queue before super().__init__()
                      self._progress_queue = asyncio.Queue[ToolCallProgressEvent]()
                      super().__init__(
                          name=name,
                          context=ctx,
                          description=description,
                          enable_logging=memory_cfg.enable,
                          mcp_servers=mcp_servers,
                          progress_handler=create_queuing_progress_handler(self._progress_queue),
                      )
              
                      # Initialize tool manager
                      self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                      all_tools = list(tools or [])
                      self.tools = ToolManager(all_tools)
              
                      # MCP manager will be initialized in __aenter__ and providers added there
                      if builtin_tools := ctx.config.get_tool_provider():
                          self.tools.add_provider(builtin_tools)
              
                      for toolset_provider in toolsets or []:
                          self.tools.add_provider(toolset_provider)
              
                      # Initialize conversation manager
                      resources = list(resources)
                      if ctx.config.knowledge:
                          resources.extend(ctx.config.knowledge.get_resources())
                      self.conversation = ConversationManager(self, memory_cfg, resources=resources)
              
                      # Store pydantic-ai configuration
                      if model and not isinstance(model, str):
                          assert isinstance(model, models.Model)
                      self._model = model
                      self._retries = retries
                      self._end_strategy: EndStrategy = end_strategy
                      self._output_retries = output_retries
              
                      self.skills_registry = SkillsRegistry()
              
                      if ctx and ctx.definition:
                          registry.configure_observability(ctx.definition.observability)
              
                      # init variables
                      self._debug = debug
                      self.parallel_init = parallel_init
                      self._name = name
                      self._background_task: asyncio.Task[Any] | None = None
              
                      self.talk = Interactions(self)
              
                      # Set up system prompts
                      config_prompts = ctx.config.system_prompts if ctx else []
                      all_prompts: list[AnyPromptType] = list(config_prompts)
                      if isinstance(system_prompt, list):
                          all_prompts.extend(system_prompt)
                      else:
                          all_prompts.append(system_prompt)
                      self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
              
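                   # Illustrative sketch (not part of the source): the ``session``
                   # parameter accepts several shorthand forms, per the docstring above:
                   #
                   #     Agent(session=False)           # disable message history
                   #     Agent(session=4096)            # cap memory at 4096 tokens
                   #     Agent(session="support-chat")  # resume a session by identifier
                   #     Agent(session=MemoryConfig(enable=True))  # full config (kwarg assumed)
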
                  def __repr__(self) -> str:
                      desc = f", {self.description!r}" if self.description else ""
                      return f"Agent({self.name!r}, model={self._model!r}{desc})"
              
                  async def __prompt__(self) -> str:
                      typ = self.__class__.__name__
                      model = self.model_name or "default"
                      parts = [f"Agent: {self.name}", f"Type: {typ}", f"Model: {model}"]
                      if self.description:
                          parts.append(f"Description: {self.description}")
                      parts.extend([await self.tools.__prompt__(), self.conversation.__prompt__()])
                      return "\n".join(parts)
              
                  async def __aenter__(self) -> Self:
                      """Enter async context and set up MCP servers."""
                      try:
                          # Collect all coroutines that need to be run
                          coros: list[Coroutine[Any, Any, Any]] = []
              
                          coros.append(super().__aenter__())
                          coros.extend(self.conversation.get_initialization_tasks())
              
                          # Execute coroutines either in parallel or sequentially
                          if self.parallel_init and coros:
                              await asyncio.gather(*coros)
                          else:
                              for coro in coros:
                                  await coro
              
                          # Add MCP aggregating provider after manager is initialized
                          aggregating_provider = self.mcp.get_aggregating_provider()
                          self.tools.add_provider(aggregating_provider)
              
                          for toolset_provider in self.context.config.get_toolsets():
                              self.tools.add_provider(toolset_provider)
                      except Exception as e:
                          msg = "Failed to initialize agent"
                          raise RuntimeError(msg) from e
                      else:
                          return self
              
                  async def __aexit__(
                      self,
                      exc_type: type[BaseException] | None,
                      exc_val: BaseException | None,
                      exc_tb: TracebackType | None,
                  ):
                      """Exit async context."""
                      await super().__aexit__(exc_type, exc_val, exc_tb)
              
                  @overload
                   def __and__(  # if other doesn't define deps, we take the agent's one
                      self, other: ProcessorCallback[Any] | Team[TDeps] | Agent[TDeps, Any]
                  ) -> Team[TDeps]: ...
              
                  @overload
                   def __and__(  # otherwise, we don't know and deps is Any
                      self, other: ProcessorCallback[Any] | Team[Any] | Agent[Any, Any]
                  ) -> Team[Any]: ...
              
                  def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                      """Create sequential team using & operator.
              
                      Example:
                          group = analyzer & planner & executor  # Create group of 3
                          group = analyzer & existing_group  # Add to existing group
                      """
                      from llmling_agent.delegation.team import Team
              
                      match other:
                          case Team():
                              return Team([self, *other.agents])
                          case Callable():
                              agent_2 = Agent.from_callback(other)
                              agent_2.context.pool = self.context.pool
                              return Team([self, agent_2])
                          case MessageNode():
                              return Team([self, other])
                          case _:
                              msg = f"Invalid agent type: {type(other)}"
                              raise ValueError(msg)
              
                  @overload
                  def __or__(self, other: MessageNode[TDeps, Any]) -> TeamRun[TDeps, Any]: ...
              
                  @overload
                  def __or__[TOtherDeps](
                      self,
                      other: MessageNode[TOtherDeps, Any],
                  ) -> TeamRun[Any, Any]: ...
              
                  @overload
                  def __or__(self, other: ProcessorCallback[Any]) -> TeamRun[Any, Any]: ...
              
                  def __or__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> TeamRun:
                      # Create new execution with sequential mode (for piping)
                      from llmling_agent import TeamRun
              
                      if callable(other):
                          other = Agent.from_callback(other)
                          other.context.pool = self.context.pool
              
                      return TeamRun([self, other])
              
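                   # Illustrative sketch (not part of the source): piping nodes with |
                   # builds a TeamRun that executes sequentially:
                   #
                   #     def cleanup(text: str) -> str: ...
                   #
                   #     pipeline = analyzer | summarizer  # agents run in sequence
                   #     pipeline = analyzer | cleanup     # callables are wrapped via from_callback
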
                  @classmethod
                  def from_callback[TResult](
                      cls,
                      callback: ProcessorCallback[TResult],
                      *,
                      name: str | None = None,
                      **kwargs: Any,
                  ) -> Agent[None, TResult]:
                      """Create an agent from a processing callback.
              
                      Args:
                          callback: Function to process messages. Can be:
                              - sync or async
                              - with or without context
                              - must return str for pipeline compatibility
                          name: Optional name for the agent
                          kwargs: Additional arguments for agent
                      """
                      name = name or callback.__name__ or "processor"
                      model = function_to_model(callback)
                      return_type = get_type_hints(callback).get("return")
                      if (  # If async, unwrap from Awaitable
                          return_type
                          and hasattr(return_type, "__origin__")
                          and return_type.__origin__ is Awaitable
                      ):
                          return_type = return_type.__args__[0]
                      return Agent(
                          model=model,
                          name=name,
                          output_type=return_type or str,
                          **kwargs,
                      )
              
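                   # Illustrative sketch (not part of the source): wrapping a plain
                   # function as an agent; the return annotation becomes the output type.
                   #
                   #     def summarize(text: str) -> str:
                   #         return text[:100]
                   #
                   #     summarizer = Agent.from_callback(summarize, name="summarizer")
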
                  @property
                  def name(self) -> str:
                      """Get agent name."""
                      return self._name or "llmling-agent"
              
                  @property
                  def context(self) -> AgentContext[TDeps]:
                      """Get agent context."""
                      return self._context
              
                  @context.setter
                  def context(self, value: AgentContext[TDeps]):
                      """Set agent context and propagate to provider."""
                      self._context = value
                      self.mcp.context = value
              
                  def to_structured[NewOutputDataT](
                      self,
                      output_type: type[NewOutputDataT],
                      *,
                      tool_name: str | None = None,
                      tool_description: str | None = None,
                  ) -> Agent[TDeps, NewOutputDataT]:
                      """Convert this agent to a structured agent.
              
                      Args:
                          output_type: Type for structured responses. Can be:
                              - A Python type (Pydantic model)
                          tool_name: Optional override for result tool name
                          tool_description: Optional override for result tool description
              
                      Returns:
                          Typed Agent
                      """
                      self.log.debug("Setting result type", output_type=output_type)
                      self._output_type = to_type(output_type)
                      return self  # type: ignore
              
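                   # Illustrative sketch (not part of the source): narrowing an agent
                   # to a structured output type.
                   #
                   #     class Answer(BaseModel):  # pydantic model, import assumed
                   #         text: str
                   #
                   #     structured = agent.to_structured(Answer)
                   #     # note: this sets _output_type in place and returns the same agent
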
                  def is_busy(self) -> bool:
                      """Check if agent is currently processing tasks."""
                      return bool(self.task_manager._pending_tasks or self._background_task)
              
                  @property
                  def model_name(self) -> str | None:
                      """Get the model name in a consistent format."""
                      return self._model.model_name if isinstance(self._model, Model) else self._model
              
                  def to_tool(
                      self,
                      *,
                      name: str | None = None,
                      reset_history_on_run: bool = True,
                      pass_message_history: bool = False,
                      parent: Agent[Any, Any] | None = None,
                  ) -> Tool[OutputDataT]:
                      """Create a tool from this agent.
              
                      Args:
                          name: Optional tool name override
                          reset_history_on_run: Clear agent's history before each run
                          pass_message_history: Pass parent's message history to agent
                          parent: Optional parent agent for history/context sharing
                      """
                      tool_name = name or f"ask_{self.name}"
                      # Create wrapper function with correct return type annotation
                      output_type = self._output_type or Any
              
                      async def wrapped_tool(prompt: str):
                          if pass_message_history and not parent:
                              msg = "Parent agent required for message history sharing"
                              raise ToolError(msg)
              
                          if reset_history_on_run:
                              self.conversation.clear()
              
                          history = None
                          if pass_message_history and parent:
                              history = parent.conversation.get_history()
                              old = self.conversation.get_history()
                              self.conversation.set_history(history)
                          result = await self.run(prompt)
                          if history:
                              self.conversation.set_history(old)
                          return result.data
              
                      # Set the correct return annotation dynamically
                      wrapped_tool.__annotations__ = {"prompt": str, "return": output_type}
              
                      normalized_name = self.name.replace("_", " ").title()
                      docstring = f"Get expert answer from specialized agent: {normalized_name}"
                      if self.description:
                          docstring = f"{docstring}\n\n{self.description}"
              
                      wrapped_tool.__doc__ = docstring
                      wrapped_tool.__name__ = tool_name
              
                      return Tool.from_callable(
                          wrapped_tool,
                          name_override=tool_name,
                          description_override=docstring,
                          source="agent",
                      )
              
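                   # Illustrative sketch (not part of the source): exposing one agent
                   # as a tool of another.
                   #
                   #     researcher = Agent(name="researcher")
                   #     writer = Agent(name="writer", tools=[researcher.to_tool()])
                   #     # the writer's model can now call an "ask_researcher" tool
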
                  async def get_agentlet(
                      self,
                      tools: list[Tool],
                      model: ModelType = None,
                      output_type: type[Any] = str,
                      deps_type: type[Any] | None = None,
                      input_provider: InputProvider | None = None,
                  ):
                      """Create pydantic-ai agent from current state."""
                      # Monkey patch pydantic-ai to recognize AgentContext
              
                      from llmling_agent.agent.tool_wrapping import wrap_tool
              
                      actual_model = model or self._model
                      model_ = (
                          infer_model(actual_model) if isinstance(actual_model, str) else actual_model
                      )
              
                      agent = PydanticAgent(
                          name=self.name,
                          model=model_,
                          instructions=await self.sys_prompts.format_system_prompt(self),
                          retries=self._retries,
                          end_strategy=self._end_strategy,
                          output_retries=self._output_retries,
                          deps_type=deps_type or NoneType,
                          output_type=output_type,
                      )
              
                      # If input_provider override is provided, create modified context
                      context_for_tools = (
                          self.context
                          if input_provider is None
                          else replace(self.context, input_provider=input_provider)
                      )
              
                      for tool in tools:
                          wrapped = wrap_tool(tool, context_for_tools)
                          if get_argument_key(wrapped, RunContext):
                              logger.info("Registering tool: with context", tool_name=tool.name)
                              agent.tool(wrapped)
                          else:
                              logger.info("Registering tool: no context", tool_name=tool.name)
                              agent.tool_plain(wrapped)
              
                      return agent
              
                  async def _run(
                      self,
                      *prompts: PromptCompatible | ChatMessage[Any],
                      output_type: type[OutputDataT] | None = None,
                      model: ModelType = None,
                      store_history: bool = True,
                      tool_choice: str | list[str] | None = None,
                      usage_limits: UsageLimits | None = None,
                      message_id: str | None = None,
                      conversation_id: str | None = None,
                      messages: list[ChatMessage[Any]] | None = None,
                      deps: TDeps | None = None,
                      input_provider: InputProvider | None = None,
                      wait_for_connections: bool | None = None,
                  ) -> ChatMessage[OutputDataT]:
                      """Run agent with prompt and get response.
              
                      Args:
                          prompts: User query or instruction
                          output_type: Optional type for structured responses
                          model: Optional model override
                          store_history: Whether the message exchange should be added to the
                                          context window
                          tool_choice: Filter tool choice by name
                          usage_limits: Optional usage limits for the model
                          message_id: Optional message id for the returned message.
                                      Automatically generated if not provided.
                          conversation_id: Optional conversation id for the returned message.
                          messages: Optional list of messages to replace the conversation history
                          deps: Optional dependencies for the agent
                          input_provider: Optional input provider for the agent
                          wait_for_connections: Whether to wait for connected agents to complete
              
                      Returns:
                          Result containing response and run information
              
                      Raises:
                          UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                      """
                      """Run agent with prompt and get response."""
                      message_id = message_id or str(uuid4())
                      tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                      final_type = to_type(output_type) if output_type else self._output_type
                      start_time = time.perf_counter()
              
                      message_history = (
                          messages if messages is not None else self.conversation.get_history()
                      )
                      try:
                          # Create pydantic-ai agent for this run
                          agentlet = await self.get_agentlet(
                              tools, model, final_type, self.deps_type, input_provider
                          )
                          converted_prompts = await convert_prompts(prompts)
              
                          # Merge internal and external event handlers like the old provider did
                          async def event_distributor(
                              ctx: RunContext, events: AsyncIterable[AgentStreamEvent]
                          ):
                              async for event in events:
                                  for handler in self.event_handler._wrapped_handlers:
                                      await handler(ctx, event)
              
                          result = await agentlet.run(
                              [
                                  i if isinstance(i, str) else i.to_pydantic_ai()
                                  for i in converted_prompts
                              ],
                              deps=deps,
                              message_history=[
                                  m for run in message_history for m in run.to_pydantic_ai()
                              ],
                              output_type=final_type or str,
                              usage_limits=usage_limits,
                              event_stream_handler=event_distributor,
                          )
              
                          response_time = time.perf_counter() - start_time
                          response_msg = await ChatMessage.from_run_result(
                              result,
                              agent_name=self.name,
                              message_id=message_id,
                              conversation_id=conversation_id,
                              response_time=response_time,
                          )
              
                      except Exception as e:
                          self.log.exception("Agent run failed")
                          self.run_failed.emit("Agent run failed", e)
                          raise
                      else:
                          return response_msg
              
                  @method_spawner
                  async def run_stream(
                      self,
                      *prompt: PromptCompatible,
                      output_type: type[OutputDataT] | None = None,
                      model: ModelType = None,
                      tool_choice: str | list[str] | None = None,
                      store_history: bool = True,
                      usage_limits: UsageLimits | None = None,
                      message_id: str | None = None,
                      conversation_id: str | None = None,
                      messages: list[ChatMessage[Any]] | None = None,
                      input_provider: InputProvider | None = None,
                      wait_for_connections: bool | None = None,
                      deps: TDeps | None = None,
                  ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
                      """Run agent with prompt and get a streaming response.
              
                      Args:
                          prompt: User query or instruction
                          output_type: Optional type for structured responses
                          model: Optional model override
                          tool_choice: Filter tool choice by name
                          store_history: Whether the message exchange should be added to the
                                         context window
                          usage_limits: Optional usage limits for the model
                          message_id: Optional message id for the returned message.
                                      Automatically generated if not provided.
                          conversation_id: Optional conversation id for the returned message.
                          messages: Optional list of messages to replace the conversation history
                          input_provider: Optional input provider for the agent
                          wait_for_connections: Whether to wait for connected agents to complete
                           deps: Optional dependencies for the agent

                      Returns:
                          An async iterator yielding streaming events with final message embedded.
              
                      Raises:
                          UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                      """
                      message_id = message_id or str(uuid4())
                      user_msg, prompts = await self.pre_run(*prompt)
                      final_type = to_type(output_type) if output_type else self._output_type
                      start_time = time.perf_counter()
                      tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                      message_history = (
                          messages if messages is not None else self.conversation.get_history()
                      )
                      try:
                          agentlet = await self.get_agentlet(
                              tools, model, final_type, self.deps_type, input_provider
                          )
                          content = await convert_prompts(prompts)
                          # Initialize variables for final response
                          response_msg = None
                          # Create tool dict for signal emission
                          converted = [i if isinstance(i, str) else i.to_pydantic_ai() for i in content]
                          stream_events = agentlet.run_stream_events(
                              converted,
                              deps=deps,
                              message_history=[
                                  m for run in message_history for m in run.to_pydantic_ai()
                              ],
                              output_type=final_type or str,
                              usage_limits=usage_limits,
                          )
              
                          # Stream events through merge_queue for progress events
                          async with merge_queue_into_iterator(
                              stream_events, self._progress_queue
                          ) as events:
                              # Track tool call starts to combine with results later
                              pending_tcs: dict[str, ToolCallPart] = {}
                              async for event in events:
                                  # Process events and emit signals
                                  yield event
                                  match event:
                                      case (
                                          PartStartEvent(part=ToolCallPart() as tool_part)
                                          | FunctionToolCallEvent(part=tool_part)
                                      ):
                                          # Store tool call start info for later combination with result
                                          pending_tcs[tool_part.tool_call_id] = tool_part
                                      case (
                                          FunctionToolResultEvent(tool_call_id=call_id) as result_event
                                      ):
                                          # Check if we have a pending tool call to combine with
                                          if call_info := pending_tcs.pop(call_id, None):
                                              # Create and yield combined event
                                              combined_event = ToolCallCompleteEvent(
                                                  tool_name=call_info.tool_name,
                                                  tool_call_id=call_id,
                                                  tool_input=call_info.args_as_dict(),
                                                  tool_result=result_event.result.content
                                                  if isinstance(result_event.result, ToolReturnPart)
                                                  else result_event.result,
                                                  agent_name=self.name,
                                                  message_id=message_id,
                                              )
                                              yield combined_event
                                      case AgentRunResultEvent():
                                          # Capture final result data
                                          # Build final response message
                                          response_time = time.perf_counter() - start_time
                                          response_msg = await ChatMessage.from_run_result(
                                              event.result,
                                              agent_name=self.name,
                                              message_id=message_id,
                                              conversation_id=conversation_id,
                                              response_time=response_time,
                                          )
              
                          # Send additional enriched completion event
                          assert response_msg
                          yield StreamCompleteEvent(message=response_msg)
                          self.message_sent.emit(response_msg)
                          await self.log_message(response_msg)
                          if store_history:
                              self.conversation.add_chat_messages([user_msg, response_msg])
                          await self.connections.route_message(
                              response_msg,
                              wait=wait_for_connections,
                          )
              
                      except Exception as e:
                          self.log.exception("Agent stream failed")
                          self.run_failed.emit("Agent stream failed", e)
                          raise
              
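                   # Illustrative sketch (not part of the source): consuming the event
                   # stream and picking out tool results and the final message.
                   #
                   #     async for event in agent.run_stream("Summarize this file"):
                   #         match event:
                   #             case ToolCallCompleteEvent():
                   #                 print(event.tool_name, event.tool_result)
                   #             case StreamCompleteEvent():
                   #                 print(event.message.content)
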
                  async def run_iter(
                      self,
                      *prompt_groups: Sequence[PromptCompatible],
                      output_type: type[OutputDataT] | None = None,
                      model: ModelType = None,
                      store_history: bool = True,
                      wait_for_connections: bool | None = None,
                  ) -> AsyncIterator[ChatMessage[OutputDataT]]:
                      """Run agent sequentially on multiple prompt groups.
              
                      Args:
                          prompt_groups: Groups of prompts to process sequentially
                          output_type: Optional type for structured responses
                          model: Optional model override
                          store_history: Whether to store in conversation history
                          wait_for_connections: Whether to wait for connected agents
              
                      Yields:
                          Response messages in sequence
              
                      Example:
                          questions = [
                              ["What is your name?"],
                              ["How old are you?", image1],
                              ["Describe this image", image2],
                          ]
                          async for response in agent.run_iter(*questions):
                              print(response.content)
                      """
                      for prompts in prompt_groups:
                          response = await self.run(
                              *prompts,
                              output_type=output_type,
                              model=model,
                              store_history=store_history,
                              wait_for_connections=wait_for_connections,
                          )
                          yield response  # pyright: ignore
              
                  @method_spawner
                  async def run_job(
                      self,
                      job: Job[TDeps, str | None],
                      *,
                      store_history: bool = True,
                      include_agent_tools: bool = True,
                  ) -> ChatMessage[OutputDataT]:
                      """Execute a pre-defined task.
              
                      Args:
                          job: Job configuration to execute
                          store_history: Whether the message exchange should be added to the
                                         context window
            include_agent_tools: Whether to include agent tools

        Returns:
                          Job execution result
              
                      Raises:
                          JobError: If task execution fails
                          ValueError: If task configuration is invalid
                      """
                      if job.required_dependency is not None:  # noqa: SIM102
                          if not isinstance(self.context.data, job.required_dependency):
                              msg = (
                                  f"Agent dependencies ({type(self.context.data)}) "
                                  f"don't match job requirement ({job.required_dependency})"
                              )
                              raise JobError(msg)
              
                      # Load task knowledge
                      if job.knowledge:
                          # Add knowledge sources to context
                          for source in list(job.knowledge.paths):
                              await self.conversation.load_context_source(source)
                          for prompt in job.knowledge.prompts:
                              await self.conversation.load_context_source(prompt)
                      try:
                          # Register task tools temporarily
                          tools = job.get_tools()
                          async with self.tools.temporary_tools(
                              tools, exclusive=not include_agent_tools
                          ):
                              # Execute job with job-specific tools
                              return await self.run(await job.get_prompt(), store_history=store_history)
              
                      except Exception as e:
                          self.log.exception("Task execution failed", error=str(e))
                          msg = f"Task execution failed: {e}"
                          raise JobError(msg) from e
              
                  async def run_in_background(
                      self,
                      *prompt: PromptCompatible,
                      max_count: int | None = None,
                      interval: float = 1.0,
                      block: bool = False,
                      **kwargs: Any,
                  ) -> ChatMessage[OutputDataT] | None:
                      """Run agent continuously in background with prompt or dynamic prompt function.
              
                      Args:
                          prompt: Static prompt or function that generates prompts
                          max_count: Maximum number of runs (None = infinite)
                          interval: Seconds between runs
                          block: Whether to block until completion
                          **kwargs: Arguments passed to run()
                      """
                      self._infinite = max_count is None
              
                      async def _continuous():
                          count = 0
                          self.log.debug(
                              "Starting continuous run", max_count=max_count, interval=interval
                          )
                          latest = None
                          while max_count is None or count < max_count:
                              try:
                                  current_prompts = [
                                      call_with_context(p, self.context, **kwargs) if callable(p) else p
                                      for p in prompt
                                  ]
                                  self.log.debug("Generated prompt", iteration=count)
                    latest = await self.run(*current_prompts, **kwargs)
                                  self.log.debug("Run continuous result", iteration=count)
              
                                  count += 1
                                  await asyncio.sleep(interval)
                              except asyncio.CancelledError:
                                  self.log.debug("Continuous run cancelled")
                                  break
                              except Exception:
                                  self.log.exception("Background run failed")
                                  await asyncio.sleep(interval)
                          self.log.debug("Continuous run completed", iterations=count)
                          return latest
              
                      # Cancel any existing background task
                      await self.stop()
                      task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                      if block:
                          try:
                              return await task  # type: ignore
                          finally:
                              if not task.done():
                                  task.cancel()
                      else:
                          self.log.debug("Started background task", task_name=task.get_name())
                          self._background_task = task
                          return None
              
                  async def stop(self):
                      """Stop continuous execution if running."""
                      if self._background_task and not self._background_task.done():
                          self._background_task.cancel()
                          await self._background_task
                          self._background_task = None
              
                  async def wait(self) -> ChatMessage[OutputDataT]:
                      """Wait for background execution to complete."""
                      if not self._background_task:
                          msg = "No background task running"
                          raise RuntimeError(msg)
                      if self._infinite:
                          msg = "Cannot wait on infinite execution"
                          raise RuntimeError(msg)
                      try:
                          return await self._background_task
                      finally:
                          self._background_task = None
              
                  async def share(
                      self,
                      target: Agent[TDeps, Any],
                      *,
                      tools: list[str] | None = None,
                      history: bool | int | None = None,  # bool or number of messages
                      token_limit: int | None = None,
                  ):
                      """Share capabilities and knowledge with another agent.
              
                      Args:
                          target: Agent to share with
                          tools: List of tool names to share
                          history: Share conversation history:
                                  - True: Share full history
                                  - int: Number of most recent messages to share
                                  - None: Don't share history
                          token_limit: Optional max tokens for history
              
                      Raises:
                          ValueError: If requested items don't exist
                          RuntimeError: If runtime not available for resources
                      """
                      # Share tools if requested
                      for name in tools or []:
                          if tool := await self.tools.get_tool(name):
                              meta = {"shared_from": self.name}
                              target.tools.register_tool(tool.callable, metadata=meta)
                          else:
                              msg = f"Tool not found: {name}"
                              raise ValueError(msg)
              
                      # Share history if requested
                      if history:
                          history_text = await self.conversation.format_history(
                              max_tokens=token_limit,
                              num_messages=history if isinstance(history, int) else None,
                          )
                          target.conversation.add_context_message(
                              history_text, source=self.name, metadata={"type": "shared_history"}
                          )
              
                  def register_worker(
                      self,
                      worker: MessageNode[Any, Any],
                      *,
                      name: str | None = None,
                      reset_history_on_run: bool = True,
                      pass_message_history: bool = False,
                  ) -> Tool:
                      """Register another agent as a worker tool."""
                      return self.tools.register_worker(
                          worker,
                          name=name,
                          reset_history_on_run=reset_history_on_run,
                          pass_message_history=pass_message_history,
                          parent=self if pass_message_history else None,
                      )
              
                  def set_model(self, model: ModelType):
                      """Set the model for this agent.
              
                      Args:
                          model: New model to use (name or instance)
              
                      """
                      self._model = model
              
                  async def reset(self):
                      """Reset agent state (conversation history and tool states)."""
                      old_tools = await self.tools.list_tools()
                      self.conversation.clear()
                      await self.tools.reset_states()
                      new_tools = await self.tools.list_tools()
              
                      event = self.AgentReset(
                          agent_name=self.name,
                          previous_tools=old_tools,
                          new_tools=new_tools,
                      )
                      self.agent_reset.emit(event)
              
                  async def get_stats(self) -> MessageStats:
                      """Get message statistics (async version)."""
                      messages = await self.get_message_history()
                      return MessageStats(messages=messages)
              
                  @asynccontextmanager
                  async def temporary_state[T](
                      self,
                      *,
                      system_prompts: list[AnyPromptType] | None = None,
                      output_type: type[T] | None = None,
                      replace_prompts: bool = False,
                      tools: list[ToolType] | None = None,
                      replace_tools: bool = False,
                      history: list[AnyPromptType] | SessionQuery | None = None,
                      replace_history: bool = False,
                      pause_routing: bool = False,
                      model: ModelType | None = None,
                  ) -> AsyncIterator[Self | Agent[T]]:
                      """Temporarily modify agent state.
              
                      Args:
                          system_prompts: Temporary system prompts to use
                          output_type: Temporary output type to use
                          replace_prompts: Whether to replace existing prompts
                          tools: Temporary tools to make available
                          replace_tools: Whether to replace existing tools
                          history: Conversation history (prompts or query)
                          replace_history: Whether to replace existing history
                          pause_routing: Whether to pause message routing
                          model: Temporary model override
                      """
                      old_model = self._model
                      if output_type:
                          old_type = self._output_type
                          self.to_structured(output_type)
                      async with AsyncExitStack() as stack:
                          if system_prompts is not None:  # System prompts
                              await stack.enter_async_context(
                                  self.sys_prompts.temporary_prompt(
                                      system_prompts, exclusive=replace_prompts
                                  )
                              )
              
                          if tools is not None:  # Tools
                              await stack.enter_async_context(
                                  self.tools.temporary_tools(tools, exclusive=replace_tools)
                              )
              
                          if history is not None:  # History
                              await stack.enter_async_context(
                                  self.conversation.temporary_state(
                                      history, replace_history=replace_history
                                  )
                              )
              
            if pause_routing:  # Routing
                await stack.enter_async_context(self.connections.paused_routing())

            if model is not None:  # Model (applies independently of routing)
                self._model = model
              
                          try:
                              yield self
                          finally:  # Restore model
                              if model is not None and old_model:
                                  self._model = old_model
                              if output_type:
                                  self.to_structured(old_type)
              
                  async def validate_against(
                      self,
                      prompt: str,
                      criteria: type[OutputDataT],
                      **kwargs: Any,
                  ) -> bool:
                      """Check if agent's response satisfies stricter criteria."""
                      result = await self.run(prompt, **kwargs)
                      try:
                          criteria.model_validate(result.content.model_dump())  # type: ignore
                      except ValidationError:
                          return False
                      else:
                          return True
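
A minimal usage sketch tying temporary_state and share together. The package-root import and the model string are assumptions for illustration; only the method signatures above are taken from the source.

    import asyncio

    from llmling_agent import Agent  # assumed package-root export


    async def main():
        async with Agent("analyst", model="openai:gpt-4o-mini") as analyst:
            async with Agent("writer", model="openai:gpt-4o-mini") as writer:
                # Scoped overrides: prompts are restored when the block exits.
                async with analyst.temporary_state(
                    system_prompts=["Answer in one short sentence."],
                    replace_prompts=True,
                ) as scoped:
                    msg = await scoped.run("What does this library do?")
                    print(msg.content)

                # Hand the five most recent messages to the other agent,
                # capped at roughly 2000 tokens.
                await analyst.share(writer, history=5, token_limit=2000)


    asyncio.run(main())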
              

              context property writable

              context: AgentContext[TDeps]
              

              Get agent context.

              model_name property

              model_name: str | None
              

              Get the model name in a consistent format.

              name property

              name: str
              

              Get agent name.

              AgentReset dataclass

              Emitted when agent is reset.

              Source code in src/llmling_agent/agent/agent.py
              @dataclass(frozen=True)
              class AgentReset:
                  """Emitted when agent is reset."""
              
                  agent_name: AgentName
                  previous_tools: dict[str, bool]
                  new_tools: dict[str, bool]
                  timestamp: datetime = field(default_factory=get_now)
              

              __aenter__ async

              __aenter__() -> Self
              

              Enter async context and set up MCP servers.

              Source code in src/llmling_agent/agent/agent.py
              async def __aenter__(self) -> Self:
                  """Enter async context and set up MCP servers."""
                  try:
                      # Collect all coroutines that need to be run
                      coros: list[Coroutine[Any, Any, Any]] = []
              
                      coros.append(super().__aenter__())
                      coros.extend(self.conversation.get_initialization_tasks())
              
                      # Execute coroutines either in parallel or sequentially
                      if self.parallel_init and coros:
                          await asyncio.gather(*coros)
                      else:
                          for coro in coros:
                              await coro
              
                      # Add MCP aggregating provider after manager is initialized
                      aggregating_provider = self.mcp.get_aggregating_provider()
                      self.tools.add_provider(aggregating_provider)
              
                      for toolset_provider in self.context.config.get_toolsets():
                          self.tools.add_provider(toolset_provider)
                  except Exception as e:
                      msg = "Failed to initialize agent"
                      raise RuntimeError(msg) from e
                  else:
                      return self
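
As a hedged lifecycle sketch (import path and model string assumed, not taken from the source): entering the agent as an async context manager is what triggers the initialization shown above, with parallel_init choosing between asyncio.gather and sequential awaiting.

    import asyncio

    from llmling_agent import Agent  # assumed package-root export


    async def main():
        agent = Agent(
            "setup-demo",
            model="openai:gpt-4o-mini",
            parallel_init=False,  # run the collected init coroutines sequentially
        )
        async with agent:  # __aenter__ sets up conversation, MCP and toolsets
            print(agent.name, "initialized")


    asyncio.run(main())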
              

              __aexit__ async

              __aexit__(
                  exc_type: type[BaseException] | None,
                  exc_val: BaseException | None,
                  exc_tb: TracebackType | None,
              )
              

              Exit async context.

              Source code in src/llmling_agent/agent/agent.py
              async def __aexit__(
                  self,
                  exc_type: type[BaseException] | None,
                  exc_val: BaseException | None,
                  exc_tb: TracebackType | None,
              ):
                  """Exit async context."""
                  await super().__aexit__(exc_type, exc_val, exc_tb)
              

              __and__

              __and__(other: ProcessorCallback[Any] | Team[TDeps] | Agent[TDeps, Any]) -> Team[TDeps]
              
              __and__(other: ProcessorCallback[Any] | Team[Any] | Agent[Any, Any]) -> Team[Any]
              
              __and__(other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
              

              Create sequential team using & operator.

              Example

group = analyzer & planner & executor  # Create group of 3
group = analyzer & existing_group  # Add to existing group

              Source code in src/llmling_agent/agent/agent.py
              def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                  """Create sequential team using & operator.
              
                  Example:
                      group = analyzer & planner & executor  # Create group of 3
                      group = analyzer & existing_group  # Add to existing group
                  """
                  from llmling_agent.delegation.team import Team
              
                  match other:
                      case Team():
                          return Team([self, *other.agents])
                      case Callable():
                          agent_2 = Agent.from_callback(other)
                          agent_2.context.pool = self.context.pool
                          return Team([self, agent_2])
                      case MessageNode():
                          return Team([self, other])
                      case _:
                          msg = f"Invalid agent type: {type(other)}"
                          raise ValueError(msg)
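
A short sketch of the operator in use, assuming the package-root import and model strings (they are not part of the source above). Parenthesizing keeps both combinations on Agent.__and__: the Callable branch wraps the function via from_callback, and the Team branch flattens the result.

    from llmling_agent import Agent  # assumed package-root export


    def shout(text: str) -> str:
        """Plain callback; the Callable() case wraps it via Agent.from_callback."""
        return text.upper()


    analyzer = Agent("analyzer", model="openai:gpt-4o-mini")
    planner = Agent("planner", model="openai:gpt-4o-mini")

    # planner & shout -> Team of two; analyzer & Team -> Team of three.
    team = analyzer & (planner & shout)
    print([agent.name for agent in team.agents])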
              

              __init__

              __init__(
                  name: str = "llmling-agent",
                  *,
                  deps_type: type[TDeps] | None = None,
                  model: ModelType = None,
                  output_type: OutputSpec[OutputDataT] = str,
                  context: AgentContext[TDeps] | None = None,
                  session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                  system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                  description: str | None = None,
                  tools: Sequence[ToolType | Tool] | None = None,
                  toolsets: Sequence[ResourceProvider] | None = None,
                  mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                  resources: Sequence[PromptType | str] = (),
                  retries: int = 1,
                  output_retries: int | None = None,
                  end_strategy: EndStrategy = "early",
                  input_provider: InputProvider | None = None,
                  parallel_init: bool = True,
                  debug: bool = False,
                  event_handlers: Sequence[IndividualEventHandler] | None = None
              )
              

              Initialize agent.

              Parameters:

              Name Type Description Default
              name str

              Name of the agent for logging and identification

              'llmling-agent'
              deps_type type[TDeps] | None

              Type of dependencies to use

              None
              model ModelType

              The default model to use (defaults to GPT-5)

              None
              output_type OutputSpec[OutputDataT]

              The default output type to use (defaults to str)

              str
              context AgentContext[TDeps] | None

              Agent context with configuration

              None
              session SessionIdType | SessionQuery | MemoryConfig | bool | int

Memory configuration.

- None: Default memory config
- False: Disable message history (max_messages=0)
- int: Max tokens for memory
- str/UUID: Session identifier
- MemoryConfig: Full memory configuration
- MemoryProvider: Custom memory provider
- SessionQuery: Session query

              None
              system_prompt AnyPromptType | Sequence[AnyPromptType]

              System prompts for the agent

              ()
              description str | None

              Description of the Agent ("what it can do")

              None
              tools Sequence[ToolType | Tool] | None

              List of tools to register with the agent

              None
              toolsets Sequence[ResourceProvider] | None

              List of toolset resource providers for the agent

              None
              mcp_servers Sequence[str | MCPServerConfig] | None

              MCP servers to connect to

              None
              resources Sequence[PromptType | str]

              Additional resources to load

              ()
              retries int

              Default number of retries for failed operations

              1
              output_retries int | None

              Max retries for result validation (defaults to retries)

              None
              end_strategy EndStrategy

              Strategy for handling tool calls that are requested alongside a final result

              'early'
              input_provider InputProvider | None

              Provider for human input (tool confirmation / HumanProviders)

              None
              parallel_init bool

              Whether to initialize resources in parallel

              True
              debug bool

              Whether to enable debug mode

              False
              event_handlers Sequence[IndividualEventHandler] | None

              Sequence of event handlers to register with the agent

              None
              Source code in src/llmling_agent/agent/agent.py
              def __init__(
    # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                  self,
                  name: str = "llmling-agent",
                  *,
                  deps_type: type[TDeps] | None = None,
                  model: ModelType = None,
                  output_type: OutputSpec[OutputDataT] = str,  # type: ignore[assignment]
                  context: AgentContext[TDeps] | None = None,
                  session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                  system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                  description: str | None = None,
                  tools: Sequence[ToolType | Tool] | None = None,
                  toolsets: Sequence[ResourceProvider] | None = None,
                  mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                  resources: Sequence[PromptType | str] = (),
                  retries: int = 1,
                  output_retries: int | None = None,
                  end_strategy: EndStrategy = "early",
                  input_provider: InputProvider | None = None,
                  parallel_init: bool = True,
                  debug: bool = False,
                  event_handlers: Sequence[IndividualEventHandler] | None = None,
              ):
                  """Initialize agent.
              
                  Args:
                      name: Name of the agent for logging and identification
                      deps_type: Type of dependencies to use
                      model: The default model to use (defaults to GPT-5)
                      output_type: The default output type to use (defaults to str)
                      context: Agent context with configuration
                      session: Memory configuration.
                          - None: Default memory config
                          - False: Disable message history (max_messages=0)
                          - int: Max tokens for memory
                          - str/UUID: Session identifier
                          - MemoryConfig: Full memory configuration
                          - MemoryProvider: Custom memory provider
                          - SessionQuery: Session query
              
                      system_prompt: System prompts for the agent
                      description: Description of the Agent ("what it can do")
                      tools: List of tools to register with the agent
                      toolsets: List of toolset resource providers for the agent
                      mcp_servers: MCP servers to connect to
                      resources: Additional resources to load
                      retries: Default number of retries for failed operations
                      output_retries: Max retries for result validation (defaults to retries)
                      end_strategy: Strategy for handling tool calls that are requested alongside
                                    a final result
                      input_provider: Provider for human input (tool confirmation / HumanProviders)
                      parallel_init: Whether to initialize resources in parallel
                      debug: Whether to enable debug mode
                      event_handlers: Sequence of event handlers to register with the agent
                  """
                  from llmling_agent.agent import AgentContext
                  from llmling_agent.agent.conversation import ConversationManager
                  from llmling_agent.agent.interactions import Interactions
                  from llmling_agent.agent.sys_prompts import SystemPrompts
                  from llmling_agent.observability import registry
              
                  self.task_manager = TaskManager()
                  self._infinite = False
    # save some stuff for async init
                  self.deps_type = deps_type
                  # prepare context
                  ctx = context or AgentContext[TDeps].create_default(
                      name,
                      input_provider=input_provider,
                  )
                  self._context = ctx
                  # TODO: use to_structured with tool_name / description?
                  self._output_type = to_type(output_type, ctx.definition.responses)
                  memory_cfg = (
                      session
                      if isinstance(session, MemoryConfig)
                      else MemoryConfig.from_value(session)
                  )
                  # Initialize progress queue before super().__init__()
                  self._progress_queue = asyncio.Queue[ToolCallProgressEvent]()
                  super().__init__(
                      name=name,
                      context=ctx,
                      description=description,
                      enable_logging=memory_cfg.enable,
                      mcp_servers=mcp_servers,
                      progress_handler=create_queuing_progress_handler(self._progress_queue),
                  )
              
                  # Initialize tool manager
                  self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                  all_tools = list(tools or [])
                  self.tools = ToolManager(all_tools)
              
                  # MCP manager will be initialized in __aenter__ and providers added there
                  if builtin_tools := ctx.config.get_tool_provider():
                      self.tools.add_provider(builtin_tools)
              
                  for toolset_provider in toolsets or []:
                      self.tools.add_provider(toolset_provider)
              
                  # Initialize conversation manager
                  resources = list(resources)
                  if ctx.config.knowledge:
                      resources.extend(ctx.config.knowledge.get_resources())
                  self.conversation = ConversationManager(self, memory_cfg, resources=resources)
              
                  # Store pydantic-ai configuration
                  if model and not isinstance(model, str):
                      assert isinstance(model, models.Model)
                  self._model = model
                  self._retries = retries
                  self._end_strategy: EndStrategy = end_strategy
                  self._output_retries = output_retries
              
                  self.skills_registry = SkillsRegistry()
              
                  if ctx and ctx.definition:
                      registry.configure_observability(ctx.definition.observability)
              
                  # init variables
                  self._debug = debug
                  self.parallel_init = parallel_init
                  self._name = name
                  self._background_task: asyncio.Task[Any] | None = None
              
                  self.talk = Interactions(self)
              
                  # Set up system prompts
                  config_prompts = ctx.config.system_prompts if ctx else []
                  all_prompts: list[AnyPromptType] = list(config_prompts)
                  if isinstance(system_prompt, list):
                      all_prompts.extend(system_prompt)
                  else:
                      all_prompts.append(system_prompt)
                  self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
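
A hedged construction sketch covering the common keyword arguments. The import path and model string are assumptions, and passing a plain callable in tools presumes ToolType accepts callables (the ToolManager registration above suggests it, but it is not shown explicitly).

    import random

    from llmling_agent import Agent  # assumed package-root export


    def roll_dice(sides: int = 6) -> int:
        """Plain callable offered as a tool."""
        return random.randint(1, sides)


    agent = Agent(
        "constructor-demo",
        model="openai:gpt-4o-mini",  # any name or instance pydantic-ai accepts
        system_prompt="You are terse.",
        tools=[roll_dice],
        session=False,  # disable message history (max_messages=0)
        retries=2,
    )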
              

              from_callback classmethod

              from_callback(
                  callback: ProcessorCallback[TResult], *, name: str | None = None, **kwargs: Any
              ) -> Agent[None, TResult]
              

              Create an agent from a processing callback.

              Parameters:

              Name Type Description Default
              callback ProcessorCallback[TResult]

Function to process messages. Can be:

- sync or async
- with or without context
- must return str for pipeline compatibility

              required
              name str | None

              Optional name for the agent

              None
              kwargs Any

              Additional arguments for agent

              {}
              Source code in src/llmling_agent/agent/agent.py
              @classmethod
              def from_callback[TResult](
                  cls,
                  callback: ProcessorCallback[TResult],
                  *,
                  name: str | None = None,
                  **kwargs: Any,
              ) -> Agent[None, TResult]:
                  """Create an agent from a processing callback.
              
                  Args:
                      callback: Function to process messages. Can be:
                          - sync or async
                          - with or without context
                          - must return str for pipeline compatibility
                      name: Optional name for the agent
                      kwargs: Additional arguments for agent
                  """
                  name = name or callback.__name__ or "processor"
                  model = function_to_model(callback)
                  return_type = get_type_hints(callback).get("return")
                  if (  # If async, unwrap from Awaitable
                      return_type
                      and hasattr(return_type, "__origin__")
                      and return_type.__origin__ is Awaitable
                  ):
                      return_type = return_type.__args__[0]
                  return Agent(
                      model=model,
                      name=name,
                      output_type=return_type or str,
                      **kwargs,
                  )
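
A usage sketch under the usual assumptions (package-root import; no model string is needed because the callback itself becomes the model via function_to_model). The async return type is unwrapped from Awaitable exactly as the source above shows.

    import asyncio

    from llmling_agent import Agent  # assumed package-root export


    async def summarize(text: str) -> str:
        """Async callback; from_callback unwraps the Awaitable return hint."""
        return text[:40] + "..."


    agent = Agent.from_callback(summarize, name="summarizer")


    async def main():
        async with agent:
            msg = await agent.run("A long piece of text that needs shortening.")
            print(msg.content)


    asyncio.run(main())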
              

              get_agentlet async

              get_agentlet(
                  tools: list[Tool],
                  model: ModelType = None,
                  output_type: type[Any] = str,
                  deps_type: type[Any] | None = None,
                  input_provider: InputProvider | None = None,
              )
              

              Create pydantic-ai agent from current state.

              Source code in src/llmling_agent/agent/agent.py
              async def get_agentlet(
                  self,
                  tools: list[Tool],
                  model: ModelType = None,
                  output_type: type[Any] = str,
                  deps_type: type[Any] | None = None,
                  input_provider: InputProvider | None = None,
              ):
                  """Create pydantic-ai agent from current state."""
                  # Monkey patch pydantic-ai to recognize AgentContext
              
                  from llmling_agent.agent.tool_wrapping import wrap_tool
              
                  actual_model = model or self._model
                  model_ = (
                      infer_model(actual_model) if isinstance(actual_model, str) else actual_model
                  )
              
                  agent = PydanticAgent(
                      name=self.name,
                      model=model_,
                      instructions=await self.sys_prompts.format_system_prompt(self),
                      retries=self._retries,
                      end_strategy=self._end_strategy,
                      output_retries=self._output_retries,
                      deps_type=deps_type or NoneType,
                      output_type=output_type,
                  )
              
                  # If input_provider override is provided, create modified context
                  context_for_tools = (
                      self.context
                      if input_provider is None
                      else replace(self.context, input_provider=input_provider)
                  )
              
                  for tool in tools:
                      wrapped = wrap_tool(tool, context_for_tools)
                      if get_argument_key(wrapped, RunContext):
                          logger.info("Registering tool: with context", tool_name=tool.name)
                          agent.tool(wrapped)
                      else:
                          logger.info("Registering tool: no context", tool_name=tool.name)
                          agent.tool_plain(wrapped)
              
                  return agent
              

              get_stats async

              get_stats() -> MessageStats
              

              Get message statistics (async version).

              Source code in src/llmling_agent/agent/agent.py
              async def get_stats(self) -> MessageStats:
                  """Get message statistics (async version)."""
                  messages = await self.get_message_history()
                  return MessageStats(messages=messages)
              

              is_busy

              is_busy() -> bool
              

              Check if agent is currently processing tasks.

              Source code in src/llmling_agent/agent/agent.py
              def is_busy(self) -> bool:
                  """Check if agent is currently processing tasks."""
                  return bool(self.task_manager._pending_tasks or self._background_task)
              

              register_worker

              register_worker(
                  worker: MessageNode[Any, Any],
                  *,
                  name: str | None = None,
                  reset_history_on_run: bool = True,
                  pass_message_history: bool = False
              ) -> Tool
              

              Register another agent as a worker tool.

              Source code in src/llmling_agent/agent/agent.py
              def register_worker(
                  self,
                  worker: MessageNode[Any, Any],
                  *,
                  name: str | None = None,
                  reset_history_on_run: bool = True,
                  pass_message_history: bool = False,
              ) -> Tool:
                  """Register another agent as a worker tool."""
                  return self.tools.register_worker(
                      worker,
                      name=name,
                      reset_history_on_run=reset_history_on_run,
                      pass_message_history=pass_message_history,
                      parent=self if pass_message_history else None,
                  )
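
A brief sketch of delegation via register_worker (import path and model strings assumed). With pass_message_history=True the worker is registered with this agent as parent, so it sees the parent's conversation.

    from llmling_agent import Agent  # assumed package-root export

    coordinator = Agent("coordinator", model="openai:gpt-4o-mini")
    researcher = Agent("researcher", model="openai:gpt-4o-mini")

    tool = coordinator.register_worker(
        researcher,
        name="ask_researcher",
        pass_message_history=True,
    )
    print(tool.name)  # the worker is now callable as a tool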
              

              reset async

              reset()
              

              Reset agent state (conversation history and tool states).

              Source code in src/llmling_agent/agent/agent.py
              async def reset(self):
                  """Reset agent state (conversation history and tool states)."""
                  old_tools = await self.tools.list_tools()
                  self.conversation.clear()
                  await self.tools.reset_states()
                  new_tools = await self.tools.list_tools()
              
                  event = self.AgentReset(
                      agent_name=self.name,
                      previous_tools=old_tools,
                      new_tools=new_tools,
                  )
                  self.agent_reset.emit(event)
              

              run_in_background async

              run_in_background(
                  *prompt: PromptCompatible,
                  max_count: int | None = None,
                  interval: float = 1.0,
                  block: bool = False,
                  **kwargs: Any
              ) -> ChatMessage[OutputDataT] | None
              

              Run agent continuously in background with prompt or dynamic prompt function.

              Parameters:

              Name Type Description Default
              prompt PromptCompatible

              Static prompt or function that generates prompts

              ()
              max_count int | None

              Maximum number of runs (None = infinite)

              None
              interval float

              Seconds between runs

              1.0
              block bool

              Whether to block until completion

              False
              **kwargs Any

              Arguments passed to run()

              {}
              Source code in src/llmling_agent/agent/agent.py
              async def run_in_background(
                  self,
                  *prompt: PromptCompatible,
                  max_count: int | None = None,
                  interval: float = 1.0,
                  block: bool = False,
                  **kwargs: Any,
              ) -> ChatMessage[OutputDataT] | None:
                  """Run agent continuously in background with prompt or dynamic prompt function.
              
                  Args:
                      prompt: Static prompt or function that generates prompts
                      max_count: Maximum number of runs (None = infinite)
                      interval: Seconds between runs
                      block: Whether to block until completion
                      **kwargs: Arguments passed to run()
                  """
                  self._infinite = max_count is None
              
                  async def _continuous():
                      count = 0
                      self.log.debug(
                          "Starting continuous run", max_count=max_count, interval=interval
                      )
                      latest = None
                      while max_count is None or count < max_count:
                          try:
                              current_prompts = [
                                  call_with_context(p, self.context, **kwargs) if callable(p) else p
                                  for p in prompt
                              ]
                              self.log.debug("Generated prompt", iteration=count)
                latest = await self.run(*current_prompts, **kwargs)
                              self.log.debug("Run continuous result", iteration=count)
              
                              count += 1
                              await asyncio.sleep(interval)
                          except asyncio.CancelledError:
                              self.log.debug("Continuous run cancelled")
                              break
                          except Exception:
                              self.log.exception("Background run failed")
                              await asyncio.sleep(interval)
                      self.log.debug("Continuous run completed", iterations=count)
                      return latest
              
                  # Cancel any existing background task
                  await self.stop()
                  task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                  if block:
                      try:
                          return await task  # type: ignore
                      finally:
                          if not task.done():
                              task.cancel()
                  else:
                      self.log.debug("Started background task", task_name=task.get_name())
                      self._background_task = task
                      return None
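
A sketch of both background modes (import path and model string assumed). A bounded loop can be awaited with wait(); an unbounded loop must be cancelled with stop(), since wait() refuses infinite runs.

    import asyncio

    from llmling_agent import Agent  # assumed package-root export


    async def main():
        async with Agent("monitor", model="openai:gpt-4o-mini") as agent:
            # Bounded loop: three runs, two seconds apart.
            await agent.run_in_background(
                "Report the current status.",
                max_count=3,
                interval=2.0,
            )
            last = await agent.wait()  # allowed: max_count is finite
            print(last.content)

            # Unbounded loop: cancel explicitly instead of waiting.
            await agent.run_in_background("Poll forever.", interval=5.0)
            await agent.stop()


    asyncio.run(main())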
              

              run_iter async

              run_iter(
                  *prompt_groups: Sequence[PromptCompatible],
                  output_type: type[OutputDataT] | None = None,
                  model: ModelType = None,
                  store_history: bool = True,
                  wait_for_connections: bool | None = None
              ) -> AsyncIterator[ChatMessage[OutputDataT]]
              

              Run agent sequentially on multiple prompt groups.

              Parameters:

              Name Type Description Default
              prompt_groups Sequence[PromptCompatible]

              Groups of prompts to process sequentially

              ()
              output_type type[OutputDataT] | None

              Optional type for structured responses

              None
              model ModelType

              Optional model override

              None
              store_history bool

              Whether to store in conversation history

              True
              wait_for_connections bool | None

              Whether to wait for connected agents

              None

              Yields:

              Type Description
              AsyncIterator[ChatMessage[OutputDataT]]

              Response messages in sequence

              Example

questions = [
    ["What is your name?"],
    ["How old are you?", image1],
    ["Describe this image", image2],
]
async for response in agent.run_iter(*questions):
    print(response.content)

              Source code in src/llmling_agent/agent/agent.py
              async def run_iter(
                  self,
                  *prompt_groups: Sequence[PromptCompatible],
                  output_type: type[OutputDataT] | None = None,
                  model: ModelType = None,
                  store_history: bool = True,
                  wait_for_connections: bool | None = None,
              ) -> AsyncIterator[ChatMessage[OutputDataT]]:
                  """Run agent sequentially on multiple prompt groups.
              
                  Args:
                      prompt_groups: Groups of prompts to process sequentially
                      output_type: Optional type for structured responses
                      model: Optional model override
                      store_history: Whether to store in conversation history
                      wait_for_connections: Whether to wait for connected agents
              
                  Yields:
                      Response messages in sequence
              
                  Example:
                      questions = [
                          ["What is your name?"],
                          ["How old are you?", image1],
                          ["Describe this image", image2],
                      ]
                      async for response in agent.run_iter(*questions):
                          print(response.content)
                  """
                  for prompts in prompt_groups:
                      response = await self.run(
                          *prompts,
                          output_type=output_type,
                          model=model,
                          store_history=store_history,
                          wait_for_connections=wait_for_connections,
                      )
                      yield response  # pyright: ignore
              

              run_job async

              run_job(
                  job: Job[TDeps, str | None],
                  *,
                  store_history: bool = True,
                  include_agent_tools: bool = True
              ) -> ChatMessage[OutputDataT]
              

              Execute a pre-defined task.

              Parameters:

              Name Type Description Default
              job Job[TDeps, str | None]

              Job configuration to execute

              required
              store_history bool

              Whether the message exchange should be added to the context window

              True
              include_agent_tools bool

              Whether to include agent tools

              True

Returns:

Type Description
ChatMessage[OutputDataT]

Job execution result

              Raises:

              Type Description
              JobError

              If task execution fails

              ValueError

              If task configuration is invalid

              Source code in src/llmling_agent/agent/agent.py
              @method_spawner
              async def run_job(
                  self,
                  job: Job[TDeps, str | None],
                  *,
                  store_history: bool = True,
                  include_agent_tools: bool = True,
              ) -> ChatMessage[OutputDataT]:
                  """Execute a pre-defined task.
              
                  Args:
                      job: Job configuration to execute
                      store_history: Whether the message exchange should be added to the
                                     context window
                      include_agent_tools: Whether to include agent tools

                  Returns:
                      Job execution result
              
                  Raises:
                      JobError: If task execution fails
                      ValueError: If task configuration is invalid
                  """
                  if job.required_dependency is not None:  # noqa: SIM102
                      if not isinstance(self.context.data, job.required_dependency):
                          msg = (
                              f"Agent dependencies ({type(self.context.data)}) "
                              f"don't match job requirement ({job.required_dependency})"
                          )
                          raise JobError(msg)
              
                  # Load task knowledge
                  if job.knowledge:
                      # Add knowledge sources to context
                      for source in list(job.knowledge.paths):
                          await self.conversation.load_context_source(source)
                      for prompt in job.knowledge.prompts:
                          await self.conversation.load_context_source(prompt)
                  try:
                      # Register task tools temporarily
                      tools = job.get_tools()
                      async with self.tools.temporary_tools(
                          tools, exclusive=not include_agent_tools
                      ):
                          # Execute job with job-specific tools
                          return await self.run(await job.get_prompt(), store_history=store_history)
              
                  except Exception as e:
                      self.log.exception("Task execution failed", error=str(e))
                      msg = f"Task execution failed: {e}"
                      raise JobError(msg) from e
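
              A usage sketch (hypothetical: `my_job` stands for a Job[TDeps, str | None] built elsewhere; its construction is not shown in this section):

                  # Run a pre-defined job with only the job's own tools active.
                  result = await agent.run_job(
                      my_job,                     # hypothetical pre-configured Job
                      store_history=False,        # keep the exchange out of the context window
                      include_agent_tools=False,  # exclusive mode: job tools only
                  )
                  print(result.content)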
              

              run_stream async

              run_stream(
                  *prompt: PromptCompatible,
                  output_type: type[OutputDataT] | None = None,
                  model: ModelType = None,
                  tool_choice: str | list[str] | None = None,
                  store_history: bool = True,
                  usage_limits: UsageLimits | None = None,
                  message_id: str | None = None,
                  conversation_id: str | None = None,
                  messages: list[ChatMessage[Any]] | None = None,
                  input_provider: InputProvider | None = None,
                  wait_for_connections: bool | None = None,
                  deps: TDeps | None = None
              ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]
              

              Run agent with prompt and get a streaming response.

              Parameters:

              prompt (PromptCompatible, default ()): User query or instruction
              output_type (type[OutputDataT] | None, default None): Optional type for structured responses
              model (ModelType, default None): Optional model override
              tool_choice (str | list[str] | None, default None): Filter tool choice by name
              store_history (bool, default True): Whether the message exchange should be added to the context window
              usage_limits (UsageLimits | None, default None): Optional usage limits for the model
              message_id (str | None, default None): Optional message id for the returned message. Automatically generated if not provided.
              conversation_id (str | None, default None): Optional conversation id for the returned message.
              messages (list[ChatMessage[Any]] | None, default None): Optional list of messages to replace the conversation history
              input_provider (InputProvider | None, default None): Optional input provider for the agent
              wait_for_connections (bool | None, default None): Whether to wait for connected agents to complete
              deps (TDeps | None, default None): Optional dependencies for the agent

              Returns: An async iterator yielding streaming events with the final message embedded.

              Raises:

              UnexpectedModelBehavior: If the model fails or behaves unexpectedly

              Source code in src/llmling_agent/agent/agent.py
              @method_spawner
              async def run_stream(
                  self,
                  *prompt: PromptCompatible,
                  output_type: type[OutputDataT] | None = None,
                  model: ModelType = None,
                  tool_choice: str | list[str] | None = None,
                  store_history: bool = True,
                  usage_limits: UsageLimits | None = None,
                  message_id: str | None = None,
                  conversation_id: str | None = None,
                  messages: list[ChatMessage[Any]] | None = None,
                  input_provider: InputProvider | None = None,
                  wait_for_connections: bool | None = None,
                  deps: TDeps | None = None,
              ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
                  """Run agent with prompt and get a streaming response.
              
                  Args:
                      prompt: User query or instruction
                      output_type: Optional type for structured responses
                      model: Optional model override
                      tool_choice: Filter tool choice by name
                      store_history: Whether the message exchange should be added to the
                                     context window
                      usage_limits: Optional usage limits for the model
                      message_id: Optional message id for the returned message.
                                  Automatically generated if not provided.
                      conversation_id: Optional conversation id for the returned message.
                      messages: Optional list of messages to replace the conversation history
                      input_provider: Optional input provider for the agent
                      wait_for_connections: Whether to wait for connected agents to complete
                      deps: Optional dependencies for the agent

                  Returns:
                      An async iterator yielding streaming events with final message embedded.
              
                  Raises:
                      UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                  """
                  message_id = message_id or str(uuid4())
                  user_msg, prompts = await self.pre_run(*prompt)
                  final_type = to_type(output_type) if output_type else self._output_type
                  start_time = time.perf_counter()
                  tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                  message_history = (
                      messages if messages is not None else self.conversation.get_history()
                  )
                  try:
                      agentlet = await self.get_agentlet(
                          tools, model, final_type, self.deps_type, input_provider
                      )
                      content = await convert_prompts(prompts)
                      # Initialize variables for final response
                      response_msg = None
                      # Convert prompts to their pydantic-ai representation
                      converted = [i if isinstance(i, str) else i.to_pydantic_ai() for i in content]
                      stream_events = agentlet.run_stream_events(
                          converted,
                          deps=deps,
                          message_history=[
                              m for run in message_history for m in run.to_pydantic_ai()
                          ],
                          output_type=final_type or str,
                          usage_limits=usage_limits,
                      )
              
                      # Stream events through merge_queue for progress events
                      async with merge_queue_into_iterator(
                          stream_events, self._progress_queue
                      ) as events:
                          # Track tool call starts to combine with results later
                          pending_tcs: dict[str, ToolCallPart] = {}
                          async for event in events:
                              # Process events and emit signals
                              yield event
                              match event:
                                  case (
                                      PartStartEvent(part=ToolCallPart() as tool_part)
                                      | FunctionToolCallEvent(part=tool_part)
                                  ):
                                      # Store tool call start info for later combination with result
                                      pending_tcs[tool_part.tool_call_id] = tool_part
                                  case (
                                      FunctionToolResultEvent(tool_call_id=call_id) as result_event
                                  ):
                                      # Check if we have a pending tool call to combine with
                                      if call_info := pending_tcs.pop(call_id, None):
                                          # Create and yield combined event
                                          combined_event = ToolCallCompleteEvent(
                                              tool_name=call_info.tool_name,
                                              tool_call_id=call_id,
                                              tool_input=call_info.args_as_dict(),
                                              tool_result=result_event.result.content
                                              if isinstance(result_event.result, ToolReturnPart)
                                              else result_event.result,
                                              agent_name=self.name,
                                              message_id=message_id,
                                          )
                                          yield combined_event
                                  case AgentRunResultEvent():
                                      # Capture final result data
                                      # Build final response message
                                      response_time = time.perf_counter() - start_time
                                      response_msg = await ChatMessage.from_run_result(
                                          event.result,
                                          agent_name=self.name,
                                          message_id=message_id,
                                          conversation_id=conversation_id,
                                          response_time=response_time,
                                      )
              
                      # Send additional enriched completion event
                      assert response_msg
                      yield StreamCompleteEvent(message=response_msg)
                      self.message_sent.emit(response_msg)
                      await self.log_message(response_msg)
                      if store_history:
                          self.conversation.add_chat_messages([user_msg, response_msg])
                      await self.connections.route_message(
                          response_msg,
                          wait=wait_for_connections,
                      )
              
                  except Exception as e:
                      self.log.exception("Agent stream failed")
                      self.run_failed.emit("Agent stream failed", e)
                      raise
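
              A minimal consumption sketch, matching the event types emitted above (`ToolCallCompleteEvent`, `StreamCompleteEvent`); the prompt is illustrative:

                  async for event in agent.run_stream("Summarize the attached report"):
                      match event:
                          case ToolCallCompleteEvent(tool_name=name, tool_result=result):
                              print(f"tool {name} -> {result}")  # combined call/result event
                          case StreamCompleteEvent(message=final):
                              print(final.content)               # enriched final ChatMessage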
              

              set_model

              set_model(model: ModelType)
              

              Set the model for this agent.

              Parameters:

              model (ModelType, required): New model to use (name or instance)
              Source code in src/llmling_agent/agent/agent.py
              def set_model(self, model: ModelType):
                  """Set the model for this agent.
              
                  Args:
                      model: New model to use (name or instance)
              
                  """
                  self._model = model
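
              For example (the model identifier is illustrative):

                  agent.set_model("openai:gpt-4o-mini")  # a name string; a model instance also works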
              

              share async

              share(
                  target: Agent[TDeps, Any],
                  *,
                  tools: list[str] | None = None,
                  history: bool | int | None = None,
                  token_limit: int | None = None
              )
              

              Share capabilities and knowledge with another agent.

              Parameters:

              target (Agent[TDeps, Any], required): Agent to share with
              tools (list[str] | None, default None): List of tool names to share
              history (bool | int | None, default None): Share conversation history:
                  - True: Share full history
                  - int: Number of most recent messages to share
                  - None: Don't share history
              token_limit (int | None, default None): Optional max tokens for history

              Raises:

              ValueError: If requested items don't exist
              RuntimeError: If runtime not available for resources

              Source code in src/llmling_agent/agent/agent.py
              async def share(
                  self,
                  target: Agent[TDeps, Any],
                  *,
                  tools: list[str] | None = None,
                  history: bool | int | None = None,  # bool or number of messages
                  token_limit: int | None = None,
              ):
                  """Share capabilities and knowledge with another agent.
              
                  Args:
                      target: Agent to share with
                      tools: List of tool names to share
                      history: Share conversation history:
                              - True: Share full history
                              - int: Number of most recent messages to share
                              - None: Don't share history
                      token_limit: Optional max tokens for history
              
                  Raises:
                      ValueError: If requested items don't exist
                      RuntimeError: If runtime not available for resources
                  """
                  # Share tools if requested
                  for name in tools or []:
                      if tool := await self.tools.get_tool(name):
                          meta = {"shared_from": self.name}
                          target.tools.register_tool(tool.callable, metadata=meta)
                      else:
                          msg = f"Tool not found: {name}"
                          raise ValueError(msg)
              
                  # Share history if requested
                  if history:
                      history_text = await self.conversation.format_history(
                          max_tokens=token_limit,
                          num_messages=history if isinstance(history, int) else None,
                      )
                      target.conversation.add_context_message(
                          history_text, source=self.name, metadata={"type": "shared_history"}
                      )
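
              A usage sketch; `analyst` and `writer` are illustrative Agent instances, and "search_docs" an illustrative tool name:

                  # Give `writer` one of `analyst`'s tools plus a capped slice of history.
                  await analyst.share(
                      writer,
                      tools=["search_docs"],  # must exist on `analyst`, else ValueError
                      history=5,              # share the five most recent messages
                      token_limit=1000,
                  )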
              

              stop async

              stop()
              

              Stop continuous execution if running.

              Source code in src/llmling_agent/agent/agent.py
              async def stop(self):
                  """Stop continuous execution if running."""
                  if self._background_task and not self._background_task.done():
                      self._background_task.cancel()
                      await self._background_task
                      self._background_task = None
              

              temporary_state async

              temporary_state(
                  *,
                  system_prompts: list[AnyPromptType] | None = None,
                  output_type: type[T] | None = None,
                  replace_prompts: bool = False,
                  tools: list[ToolType] | None = None,
                  replace_tools: bool = False,
                  history: list[AnyPromptType] | SessionQuery | None = None,
                  replace_history: bool = False,
                  pause_routing: bool = False,
                  model: ModelType | None = None
              ) -> AsyncIterator[Self | Agent[T]]
              

              Temporarily modify agent state.

              Parameters:

              system_prompts (list[AnyPromptType] | None, default None): Temporary system prompts to use
              output_type (type[T] | None, default None): Temporary output type to use
              replace_prompts (bool, default False): Whether to replace existing prompts
              tools (list[ToolType] | None, default None): Temporary tools to make available
              replace_tools (bool, default False): Whether to replace existing tools
              history (list[AnyPromptType] | SessionQuery | None, default None): Conversation history (prompts or query)
              replace_history (bool, default False): Whether to replace existing history
              pause_routing (bool, default False): Whether to pause message routing
              model (ModelType | None, default None): Temporary model override
              Source code in src/llmling_agent/agent/agent.py
              @asynccontextmanager
              async def temporary_state[T](
                  self,
                  *,
                  system_prompts: list[AnyPromptType] | None = None,
                  output_type: type[T] | None = None,
                  replace_prompts: bool = False,
                  tools: list[ToolType] | None = None,
                  replace_tools: bool = False,
                  history: list[AnyPromptType] | SessionQuery | None = None,
                  replace_history: bool = False,
                  pause_routing: bool = False,
                  model: ModelType | None = None,
              ) -> AsyncIterator[Self | Agent[T]]:
                  """Temporarily modify agent state.
              
                  Args:
                      system_prompts: Temporary system prompts to use
                      output_type: Temporary output type to use
                      replace_prompts: Whether to replace existing prompts
                      tools: Temporary tools to make available
                      replace_tools: Whether to replace existing tools
                      history: Conversation history (prompts or query)
                      replace_history: Whether to replace existing history
                      pause_routing: Whether to pause message routing
                      model: Temporary model override
                  """
                  old_model = self._model
                  if output_type:
                      old_type = self._output_type
                      self.to_structured(output_type)
                  async with AsyncExitStack() as stack:
                      if system_prompts is not None:  # System prompts
                          await stack.enter_async_context(
                              self.sys_prompts.temporary_prompt(
                                  system_prompts, exclusive=replace_prompts
                              )
                          )
              
                      if tools is not None:  # Tools
                          await stack.enter_async_context(
                              self.tools.temporary_tools(tools, exclusive=replace_tools)
                          )
              
                      if history is not None:  # History
                          await stack.enter_async_context(
                              self.conversation.temporary_state(
                                  history, replace_history=replace_history
                              )
                          )
              
                      if pause_routing:  # Routing
                          await stack.enter_async_context(self.connections.paused_routing())

                      if model is not None:  # Model
                          self._model = model
              
                      try:
                          yield self
                      finally:  # Restore model
                          if model is not None and old_model:
                              self._model = old_model
                          if output_type:
                              self.to_structured(old_type)
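
              A usage sketch (prompt text and model name illustrative); all changes are rolled back when the block exits:

                  async with agent.temporary_state(
                      system_prompts=["Answer in one sentence."],
                      replace_prompts=True,
                      model="openai:gpt-4o-mini",  # hypothetical model name
                  ) as tmp:
                      reply = await tmp.run("What does this service do?")
                  # prompts and model are restored here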
              

              to_structured

              to_structured(
                  output_type: type[NewOutputDataT],
                  *,
                  tool_name: str | None = None,
                  tool_description: str | None = None
              ) -> Agent[TDeps, NewOutputDataT]
              

              Convert this agent to a structured agent.

              Parameters:

              output_type (type[NewOutputDataT], required): Type for structured responses. Can be:
                  - A Python type (Pydantic model)
              tool_name (str | None, default None): Optional override for result tool name
              tool_description (str | None, default None): Optional override for result tool description

              Returns:

              Agent[TDeps, NewOutputDataT]: Typed Agent

              Source code in src/llmling_agent/agent/agent.py
              def to_structured[NewOutputDataT](
                  self,
                  output_type: type[NewOutputDataT],
                  *,
                  tool_name: str | None = None,
                  tool_description: str | None = None,
              ) -> Agent[TDeps, NewOutputDataT]:
                  """Convert this agent to a structured agent.
              
                  Args:
                      output_type: Type for structured responses. Can be:
                          - A Python type (Pydantic model)
                      tool_name: Optional override for result tool name
                      tool_description: Optional override for result tool description
              
                  Returns:
                      Typed Agent
                  """
                  self.log.debug("Setting result type", output_type=output_type)
                  self._output_type = to_type(output_type)
                  return self  # type: ignore
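
              A sketch using a Pydantic model as the output type (the model is illustrative):

                  from pydantic import BaseModel

                  class Sentiment(BaseModel):
                      label: str
                      score: float

                  structured = agent.to_structured(Sentiment)
                  msg = await structured.run("Rate the sentiment of: 'Great release!'")
                  print(msg.content.label, msg.content.score)  # msg.content is a Sentiment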
              

              to_tool

              to_tool(
                  *,
                  name: str | None = None,
                  reset_history_on_run: bool = True,
                  pass_message_history: bool = False,
                  parent: Agent[Any, Any] | None = None
              ) -> Tool[OutputDataT]
              

              Create a tool from this agent.

              Parameters:

              name (str | None, default None): Optional tool name override
              reset_history_on_run (bool, default True): Clear agent's history before each run
              pass_message_history (bool, default False): Pass parent's message history to agent
              parent (Agent[Any, Any] | None, default None): Optional parent agent for history/context sharing
              Source code in src/llmling_agent/agent/agent.py
              def to_tool(
                  self,
                  *,
                  name: str | None = None,
                  reset_history_on_run: bool = True,
                  pass_message_history: bool = False,
                  parent: Agent[Any, Any] | None = None,
              ) -> Tool[OutputDataT]:
                  """Create a tool from this agent.
              
                  Args:
                      name: Optional tool name override
                      reset_history_on_run: Clear agent's history before each run
                      pass_message_history: Pass parent's message history to agent
                      parent: Optional parent agent for history/context sharing
                  """
                  tool_name = name or f"ask_{self.name}"
                  # Create wrapper function with correct return type annotation
                  output_type = self._output_type or Any
              
                  async def wrapped_tool(prompt: str):
                      if pass_message_history and not parent:
                          msg = "Parent agent required for message history sharing"
                          raise ToolError(msg)
              
                      if reset_history_on_run:
                          self.conversation.clear()
              
                      history = None
                      if pass_message_history and parent:
                          history = parent.conversation.get_history()
                          old = self.conversation.get_history()
                          self.conversation.set_history(history)
                      result = await self.run(prompt)
                      if history:
                          self.conversation.set_history(old)
                      return result.data
              
                  # Set the correct return annotation dynamically
                  wrapped_tool.__annotations__ = {"prompt": str, "return": output_type}
              
                  normalized_name = self.name.replace("_", " ").title()
                  docstring = f"Get expert answer from specialized agent: {normalized_name}"
                  if self.description:
                      docstring = f"{docstring}\n\n{self.description}"
              
                  wrapped_tool.__doc__ = docstring
                  wrapped_tool.__name__ = tool_name
              
                  return Tool.from_callable(
                      wrapped_tool,
                      name_override=tool_name,
                      description_override=docstring,
                      source="agent",
                  )
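
              A wiring sketch (agent names illustrative); registration mirrors what `share` does above:

                  # Expose `researcher` as a tool (named ask_researcher by default) on `planner`.
                  tool = researcher.to_tool(reset_history_on_run=True)
                  planner.tools.register_tool(tool.callable, metadata={"shared_from": researcher.name})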
              

              validate_against async

              validate_against(prompt: str, criteria: type[OutputDataT], **kwargs: Any) -> bool
              

              Check if agent's response satisfies stricter criteria.

              Source code in src/llmling_agent/agent/agent.py
              async def validate_against(
                  self,
                  prompt: str,
                  criteria: type[OutputDataT],
                  **kwargs: Any,
              ) -> bool:
                  """Check if agent's response satisfies stricter criteria."""
                  result = await self.run(prompt, **kwargs)
                  try:
                      criteria.model_validate(result.content.model_dump())  # type: ignore
                  except ValidationError:
                      return False
                  else:
                      return True
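
              A sketch, assuming `DetailedAnswer` is a stricter Pydantic model than the agent's configured output type (both illustrative):

                  ok = await agent.validate_against("Explain the outage", DetailedAnswer)
                  print("passes stricter schema" if ok else "fails validation")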
              

              wait async

              wait() -> ChatMessage[OutputDataT]
              

              Wait for background execution to complete.

              Source code in src/llmling_agent/agent/agent.py
              async def wait(self) -> ChatMessage[OutputDataT]:
                  """Wait for background execution to complete."""
                  if not self._background_task:
                      msg = "No background task running"
                      raise RuntimeError(msg)
                  if self._infinite:
                      msg = "Cannot wait on infinite execution"
                      raise RuntimeError(msg)
                  try:
                      return await self._background_task
                  finally:
                      self._background_task = None
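
              `wait` and `stop` manage a background run started elsewhere (the starter method is outside this section); a lifecycle sketch:

                  # Assumes a background task is already running on the agent.
                  try:
                      final = await agent.wait()  # joins the task, returns the final message
                      print(final.content)
                  except RuntimeError:
                      await agent.stop()          # infinite runs can only be cancelled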
              

              AgentContext dataclass

              Bases: NodeContext[TDeps]

              Runtime context for agent execution.

              Generically typed with AgentContext[Type of Dependencies]

              Source code in src/llmling_agent/agent/context.py
              @dataclass(kw_only=True)
              class AgentContext[TDeps = Any](NodeContext[TDeps]):
                  """Runtime context for agent execution.
              
                  Generically typed with AgentContext[Type of Dependencies]
                  """
              
                  config: AgentConfig
                  """Current agent's specific configuration."""
              
                  model_settings: dict[str, Any] = field(default_factory=dict)
                  """Model-specific settings."""
              
                  data: TDeps | None = None
                  """Custom context data."""
              
                  tool_name: str | None = None
                  """Name of the currently executing tool."""
              
                  tool_call_id: str | None = None
                  """ID of the current tool call."""
              
                  tool_input: dict[str, Any] = field(default_factory=dict)
                  """Input arguments for the current tool call."""
              
                  @classmethod
                  def create_default(
                      cls,
                      name: str,
                      deps: TDeps | None = None,
                      pool: AgentPool | None = None,
                      input_provider: InputProvider | None = None,
                  ) -> AgentContext[TDeps]:
                      """Create a default agent context with minimal privileges.
              
                      Args:
                          name: Name of the agent
                          deps: Optional dependencies for the agent
                          pool: Optional pool the agent is part of
                          input_provider: Optional input provider for the agent
                      """
                      from llmling_agent.models import AgentConfig, AgentsManifest
              
                      defn = AgentsManifest()
                      cfg = AgentConfig(name=name)
                      return cls(
                          input_provider=input_provider,
                          node_name=name,
                          definition=defn,
                          config=cfg,
                          data=deps,
                          pool=pool,
                      )
              
                  @cached_property
                  def converter(self) -> ConversionManager:
                      """Get conversion manager from global config."""
                      return ConversionManager(self.definition.conversion)
              
                  # TODO: perhaps add agent directly to context?
                  @property
                  def agent(self) -> Agent[TDeps, Any]:
                      """Get the agent instance from the pool."""
                      assert self.pool, "No agent pool available"
                      assert self.node_name, "No agent name available"
                      return self.pool.agents[self.node_name]
              
                  @property
                  def process_manager(self):
                      """Get process manager from pool."""
                      assert self.pool, "No agent pool available"
                      return self.pool.process_manager
              
                  async def handle_confirmation(
                      self,
                      tool: Tool,
                      args: dict[str, Any],
                  ) -> ConfirmationResult:
                      """Handle tool execution confirmation.
              
                      Returns "allow" if:
                      - No confirmation handler is set
                      - The handler confirms the execution
                      """
                      provider = self.get_input_provider()
                      mode = self.config.requires_tool_confirmation
                      if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                          return "allow"
                      history = self.agent.conversation.get_history() if self.pool else []
                      return await provider.get_tool_confirmation(self, tool, args, history)
              
                  async def handle_elicitation(
                      self,
                      params: types.ElicitRequestParams,
                  ) -> types.ElicitResult | types.ErrorData:
                      """Handle elicitation request for additional information."""
                      provider = self.get_input_provider()
                      history = self.agent.conversation.get_history() if self.pool else []
                      return await provider.get_elicitation(self, params, history)
              
                  async def report_progress(self, progress: float, total: float | None, message: str):
                      """Access progress reporting from pool server if available."""
                      logger.info(
                          "Reporting tool call progress",
                          progress=progress,
                          total=total,
                          message=message,
                      )
                      if self.pool:
                          await self.pool.progress_handlers(progress, total, message)
              

              agent property

              agent: Agent[TDeps, Any]
              

              Get the agent instance from the pool.

              config instance-attribute

              config: AgentConfig
              

              Current agent's specific configuration.

              converter cached property

              converter: ConversionManager
              

              Get conversion manager from global config.

              data class-attribute instance-attribute

              data: TDeps | None = None
              

              Custom context data.

              model_settings class-attribute instance-attribute

              model_settings: dict[str, Any] = field(default_factory=dict)
              

              Model-specific settings.

              process_manager property

              process_manager
              

              Get process manager from pool.

              tool_call_id class-attribute instance-attribute

              tool_call_id: str | None = None
              

              ID of the current tool call.

              tool_input class-attribute instance-attribute

              tool_input: dict[str, Any] = field(default_factory=dict)
              

              Input arguments for the current tool call.

              tool_name class-attribute instance-attribute

              tool_name: str | None = None
              

              Name of the currently executing tool.

              create_default classmethod

              create_default(
                  name: str,
                  deps: TDeps | None = None,
                  pool: AgentPool | None = None,
                  input_provider: InputProvider | None = None,
              ) -> AgentContext[TDeps]
              

              Create a default agent context with minimal privileges.

              Parameters:

              name (str, required): Name of the agent
              deps (TDeps | None, default None): Optional dependencies for the agent
              pool (AgentPool | None, default None): Optional pool the agent is part of
              input_provider (InputProvider | None, default None): Optional input provider for the agent
              Source code in src/llmling_agent/agent/context.py
              @classmethod
              def create_default(
                  cls,
                  name: str,
                  deps: TDeps | None = None,
                  pool: AgentPool | None = None,
                  input_provider: InputProvider | None = None,
              ) -> AgentContext[TDeps]:
                  """Create a default agent context with minimal privileges.
              
                  Args:
                      name: Name of the agent
                      deps: Optional dependencies for the agent
                      pool: Optional pool the agent is part of
                      input_provider: Optional input provider for the agent
                  """
                  from llmling_agent.models import AgentConfig, AgentsManifest
              
                  defn = AgentsManifest()
                  cfg = AgentConfig(name=name)
                  return cls(
                      input_provider=input_provider,
                      node_name=name,
                      definition=defn,
                      config=cfg,
                      data=deps,
                      pool=pool,
                  )
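
              A sketch (the dependency object is illustrative):

                  ctx = AgentContext.create_default("helper", deps=MyDeps())  # MyDeps is a hypothetical deps type
                  assert ctx.config.name == "helper"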
              

              handle_confirmation async

              handle_confirmation(tool: Tool, args: dict[str, Any]) -> ConfirmationResult
              

              Handle tool execution confirmation.

              Returns "allow" if:

              - No confirmation handler is set
              - The handler confirms the execution

              Source code in src/llmling_agent/agent/context.py
              async def handle_confirmation(
                  self,
                  tool: Tool,
                  args: dict[str, Any],
              ) -> ConfirmationResult:
                  """Handle tool execution confirmation.
              
                  Returns "allow" if:
                  - No confirmation handler is set
                  - The handler confirms the execution
                  """
                  provider = self.get_input_provider()
                  mode = self.config.requires_tool_confirmation
                  if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                      return "allow"
                  history = self.agent.conversation.get_history() if self.pool else []
                  return await provider.get_tool_confirmation(self, tool, args, history)
              

              handle_elicitation async

              handle_elicitation(params: ElicitRequestParams) -> ElicitResult | ErrorData
              

              Handle elicitation request for additional information.

              Source code in src/llmling_agent/agent/context.py
              async def handle_elicitation(
                  self,
                  params: types.ElicitRequestParams,
              ) -> types.ElicitResult | types.ErrorData:
                  """Handle elicitation request for additional information."""
                  provider = self.get_input_provider()
                  history = self.agent.conversation.get_history() if self.pool else []
                  return await provider.get_elicitation(self, params, history)
              

              report_progress async

              report_progress(progress: float, total: float | None, message: str)
              

              Access progress reporting from pool server if available.

              Source code in src/llmling_agent/agent/context.py
              async def report_progress(self, progress: float, total: float | None, message: str):
                  """Access progress reporting from pool server if available."""
                  logger.info(
                      "Reporting tool call progress",
                      progress=progress,
                      total=total,
                      message=message,
                  )
                  if self.pool:
                      await self.pool.progress_handlers(progress, total, message)
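
              A sketch of a long-running tool forwarding progress to the pool's handlers (the work loop is illustrative):

                  # `ctx` is the AgentContext available inside a tool; `chunks`/`process` are illustrative.
                  for i, chunk in enumerate(chunks):
                      await process(chunk)
                      await ctx.report_progress(i + 1, float(len(chunks)), f"processed chunk {i + 1}")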
              

              ConversationManager

              Manages conversation state and system prompts.

              Source code in src/llmling_agent/agent/conversation.py
              class ConversationManager:
                  """Manages conversation state and system prompts."""
              
                  @dataclass(frozen=True)
                  class HistoryCleared:
                      """Emitted when chat history is cleared."""
              
                      session_id: str
                      timestamp: datetime = field(default_factory=get_now)
              
                  history_cleared = Signal(HistoryCleared)
              
                  def __init__(
                      self,
                      agent: Agent[Any, Any],
                      session_config: MemoryConfig | None = None,
                      *,
                      resources: Sequence[PromptType | str] = (),
                  ):
                      """Initialize conversation manager.
              
                      Args:
                               agent: Agent instance to manage
                          session_config: Optional MemoryConfig
                          resources: Optional paths to load as context
                      """
                      self._agent = agent
                      self.chat_messages = ChatMessageContainer()
                      self._last_messages: list[ChatMessage] = []
                      self._pending_messages: deque[ChatMessage] = deque()
                      self._config = session_config
                      self._resources = list(resources)  # Store for async loading
                      # Generate new ID if none provided
                      self.id = str(uuid4())
              
                      if session_config is not None and session_config.session is not None:
                          storage = self._agent.context.storage
                          self._current_history = storage.filter_messages.sync(session_config.session)
                          if session_config.session.name:
                              self.id = session_config.session.name
              
                      # Note: max_messages and max_tokens will be handled in add_message/get_history
                      # to maintain the rolling window during conversation
              
                   def get_initialization_tasks(self) -> list[Coroutine[Any, Any, Any]]:
                       """Get all initialization coroutines."""
                       resources = self._resources
                       self._resources = []  # Clear so we don't load again on async init
                       return [self.load_context_source(source) for source in resources]
              
                  async def __aenter__(self) -> Self:
                      """Initialize when used standalone."""
                      if tasks := self.get_initialization_tasks():
                          await asyncio.gather(*tasks)
                      return self
              
                  async def __aexit__(
                      self,
                      exc_type: type[BaseException] | None,
                      exc_val: BaseException | None,
                      exc_tb: TracebackType | None,
                  ):
                      """Clean up any pending messages."""
                      self._pending_messages.clear()
              
                  def __bool__(self) -> bool:
                      return bool(self._pending_messages) or bool(self.chat_messages)
              
                  def __repr__(self) -> str:
                      return f"ConversationManager(id={self.id!r})"
              
                  def __prompt__(self) -> str:
                      if not self.chat_messages:
                          return "No conversation history"
              
                      last_msgs = self.chat_messages[-2:]
                      parts = ["Recent conversation:"]
                      parts.extend(msg.format() for msg in last_msgs)
                      return "\n".join(parts)
              
                  def __contains__(self, item: Any) -> bool:
                      """Check if item is in history."""
                      return item in self.chat_messages
              
                  def __len__(self) -> int:
                      """Get length of history."""
                      return len(self.chat_messages)
              
                  def get_message_tokens(self, message: ChatMessage) -> int:
                      """Get token count for a single message."""
                      content = "\n".join(message.format())
                      return count_tokens(content, message.model_name)
              
                  async def format_history(
                      self,
                      *,
                      max_tokens: int | None = None,
                      include_system: bool = False,
                      format_template: str | None = None,
                       num_messages: int | None = None,
                  ) -> str:
                      """Format conversation history as a single context message.
              
                      Args:
                          max_tokens: Optional limit to include only last N tokens
                          include_system: Whether to include system messages
                          format_template: Optional custom format (defaults to agent/message pairs)
                          num_messages: Optional limit to include only last N messages
                      """
                      template = format_template or "Agent {agent}: {content}\n"
                      messages: list[str] = []
                      token_count = 0
              
                      # Get messages, optionally limited
                      history: Sequence[ChatMessage[Any]] = self.chat_messages
                      if num_messages:
                          history = history[-num_messages:]
              
                      if max_tokens:
                          history = list(reversed(history))  # Start from newest when token limited
              
                      for msg in history:
                          # Check role directly from ChatMessage
                          if not include_system and msg.role == "system":
                              continue
                          name = msg.name or msg.role.title()
                          formatted = template.format(agent=name, content=str(msg.content))
              
                          if max_tokens:
                              # Count tokens in this message
                              if msg.cost_info:
                                  msg_tokens = msg.cost_info.token_usage.total_tokens
                              else:
                                  # Fallback to tiktoken if no cost info
                                  msg_tokens = self.get_message_tokens(msg)
              
                              if token_count + msg_tokens > max_tokens:
                                  break
                              token_count += msg_tokens
                              # Add to front since we're going backwards
                              messages.insert(0, formatted)
                          else:
                              messages.append(formatted)
              
                      return "\n".join(messages)
              
                  async def load_context_source(self, source: PromptType | str):
                      """Load context from a single source."""
                      try:
                          match source:
                              case str():
                                  await self.add_context_from_path(source)
                              case BasePrompt():
                                  await self.add_context_from_prompt(source)
                      except Exception:
                          logger.exception(
                              "Failed to load context",
                              source="file" if isinstance(source, str) else source.type,
                          )
              
                  def load_history_from_database(
                      self,
                      session: SessionIdType | SessionQuery = None,
                      *,
                      since: datetime | None = None,
                      until: datetime | None = None,
                      roles: set[MessageRole] | None = None,
                      limit: int | None = None,
                  ):
                      """Load conversation history from database.
              
                      Args:
                          session: Session ID or query config
                          since: Only include messages after this time (override)
                          until: Only include messages before this time (override)
                          roles: Only include messages with these roles (override)
                          limit: Maximum number of messages to return (override)
                      """
                      storage = self._agent.context.storage
                      match session:
                          case SessionQuery() as query:
                               # Override query params only where explicitly provided,
                               # so unset arguments don't wipe values already on the query
                               update: dict[str, Any] = {}
                               if since is not None:
                                   update["since"] = since.isoformat()
                               if until is not None:
                                   update["until"] = until.isoformat()
                               if roles:
                                   update["roles"] = roles
                               if limit:
                                   update["limit"] = limit
                               if update:
                                   query = query.model_copy(update=update)
                              if query.name:
                                  self.id = query.name
                          case str() | UUID():
                              self.id = str(session)
                              query = SessionQuery(
                                  name=self.id,
                                  since=since.isoformat() if since else None,
                                  until=until.isoformat() if until else None,
                                  roles=roles,
                                  limit=limit,
                              )
                          case None:
                              # Use current session ID
                              query = SessionQuery(
                                  name=self.id,
                                  since=since.isoformat() if since else None,
                                  until=until.isoformat() if until else None,
                                  roles=roles,
                                  limit=limit,
                              )
                          case _ as unreachable:
                              assert_never(unreachable)
                      self.chat_messages.clear()
                      self.chat_messages.extend(storage.filter_messages.sync(query))
              
                  def get_history(
                      self,
                      include_pending: bool = True,
                      do_filter: bool = True,
                  ) -> list[ChatMessage]:
                      """Get conversation history.
              
                      Args:
                          include_pending: Whether to include pending messages
                          do_filter: Whether to apply memory config limits (max_tokens, max_messages)
              
                      Returns:
                          Filtered list of messages in chronological order
                      """
                      if include_pending and self._pending_messages:
                          self.chat_messages.extend(self._pending_messages)
                          self._pending_messages.clear()
              
                      # 2. Start with original history
                      history: Sequence[ChatMessage[Any]] = self.chat_messages
              
                      # 3. Only filter if needed
                      if do_filter and self._config:
                          # First filter by message count (simple slice)
                          if self._config.max_messages:
                              history = history[-self._config.max_messages :]
              
                          # Then filter by tokens if needed
                          if self._config.max_tokens:
                              token_count = 0
                              filtered = []
                              # Collect messages from newest to oldest until we hit the limit
                              for msg in reversed(history):
                                  msg_tokens = self.get_message_tokens(msg)
                                  if token_count + msg_tokens > self._config.max_tokens:
                                      break
                                  token_count += msg_tokens
                                  filtered.append(msg)
                              history = list(reversed(filtered))
              
                      return list(history)
              
                  def get_pending_messages(self) -> list[ChatMessage]:
                      """Get messages that will be included in next interaction."""
                      return list(self._pending_messages)
              
                  def clear_pending(self):
                      """Clear pending messages without adding them to history."""
                      self._pending_messages.clear()
              
                  def set_history(self, history: list[ChatMessage]):
                      """Update conversation history after run."""
                      self.chat_messages.clear()
                      self.chat_messages.extend(history)
              
                  def clear(self):
                      """Clear conversation history and prompts."""
                      self.chat_messages = ChatMessageContainer()
                      self._last_messages = []
                      event = self.HistoryCleared(session_id=str(self.id))
                      self.history_cleared.emit(event)
              
                  @asynccontextmanager
                  async def temporary_state(
                      self,
                      history: list[AnyPromptType] | SessionQuery | None = None,
                      *,
                      replace_history: bool = False,
                  ) -> AsyncIterator[Self]:
                      """Temporarily set conversation history.
              
                      Args:
                          history: Optional list of prompts to use as temporary history.
                                  Can be strings, BasePrompts, or other prompt types.
                          replace_history: If True, only use provided history. If False, append
                                  to existing history.
                      """
                      from toprompt import to_prompt
              
                      old_history = self.chat_messages.copy()
              
                      try:
                          messages: Sequence[ChatMessage[Any]] = ChatMessageContainer()
                          if history is not None:
                              if isinstance(history, SessionQuery):
                                  messages = await self._agent.context.storage.filter_messages(history)
                              else:
                                  messages = [
                                      ChatMessage.user_prompt(message=prompt)
                                      for p in history
                                      if (prompt := await to_prompt(p))
                                  ]
              
                          if replace_history:
                              self.chat_messages = ChatMessageContainer(messages)
                          else:
                              self.chat_messages.extend(messages)
              
                          yield self
              
                      finally:
                          self.chat_messages = old_history
              
                  def add_chat_messages(self, messages: Sequence[ChatMessage]):
                      """Add new messages to history and update last_messages."""
                      self._last_messages = list(messages)
                      self.chat_messages.extend(messages)
              
                  @property
                  def last_run_messages(self) -> list[ChatMessage]:
                      """Get messages from the last run converted to our format."""
                      return self._last_messages
              
                  def add_context_message(
                      self,
                      content: str,
                      source: str | None = None,
                      **metadata: Any,
                  ):
                      """Add a context message.
              
                      Args:
                          content: Text content to add
                          source: Description of content source
                          **metadata: Additional metadata to include with the message
                      """
                      meta_str = ""
                      if metadata:
                          meta_str = "\n".join(f"{k}: {v}" for k, v in metadata.items())
                          meta_str = f"\nMetadata:\n{meta_str}\n"
              
                      header = f"Content from {source}:" if source else "Additional context:"
                      formatted = f"{header}{meta_str}\n{content}\n"
              
                      chat_message = ChatMessage(
                          content=formatted,
                          role="user",
                          name="user",
                          metadata=metadata,
                          conversation_id="context",  # TODO: should probably allow DB field to be NULL
                      )
                      self._pending_messages.append(chat_message)
              
                  async def add_context_from_path(
                      self,
                      path: JoinablePathLike,
                      *,
                      convert_to_md: bool = False,
                      **metadata: Any,
                  ):
                      """Add file or URL content as context message.
              
                      Args:
                          path: Any UPath-supported path
                          convert_to_md: Whether to convert content to markdown
                          **metadata: Additional metadata to include with the message
              
                      Raises:
                          ValueError: If content cannot be loaded or converted
                      """
                      path_obj = to_upath(path)
                      if convert_to_md:
                          content = await self._agent.context.converter.convert_file(path)
                          source = f"markdown:{path_obj.name}"
                      else:
                          content = await read_path(path)
                          source = f"{path_obj.protocol}:{path_obj.name}"
                      self.add_context_message(content, source=source, **metadata)
              
                  async def add_context_from_prompt(
                      self,
                      prompt: PromptType,
                      metadata: dict[str, Any] | None = None,
                      **kwargs: Any,
                  ):
                      """Add rendered prompt content as context message.
              
                      Args:
                          prompt: LLMling prompt (static, dynamic, or file-based)
                          metadata: Additional metadata to include with the message
                          kwargs: Optional kwargs for prompt formatting
                      """
                      try:
                          # Format the prompt using LLMling's prompt system
                          messages = await prompt.format(kwargs)
                          # Extract text content from all messages
                          content = "\n\n".join(msg.get_text_content() for msg in messages)
              
                          self.add_context_message(
                              content,
                              source=f"prompt:{prompt.name or prompt.type}",
                              prompt_args=kwargs,
                              **(metadata or {}),
                          )
                      except Exception as e:
                          msg = f"Failed to format prompt: {e}"
                          raise ValueError(msg) from e
              
                  def get_history_tokens(self) -> int:
                      """Get token count for current history."""
                      # Use cost_info if available
                      return self.chat_messages.get_history_tokens()
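
               A minimal usage sketch, assuming `agent` is an already-initialized Agent
               instance (the resource path and message content are illustrative):

               from llmling_agent.agent.conversation import ConversationManager

               # inside an async function
               conv = ConversationManager(agent, resources=["docs/notes.md"])
               async with conv:  # loads the listed resources as context
                   conv.add_context_message("Deployment target is Kubernetes", source="ops")
                   recent = conv.get_history()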
              

              last_run_messages property

              last_run_messages: list[ChatMessage]
              

              Get messages from the last run converted to our format.

              HistoryCleared dataclass

              Emitted when chat history is cleared.

              Source code in src/llmling_agent/agent/conversation.py
              @dataclass(frozen=True)
              class HistoryCleared:
                  """Emitted when chat history is cleared."""
              
                  session_id: str
                  timestamp: datetime = field(default_factory=get_now)
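
               A sketch of reacting to this event, assuming `conv` is a ConversationManager
               and that the Signal class exposes a psygnal-style connect():

               def on_cleared(event: ConversationManager.HistoryCleared):
                   print(f"history {event.session_id} cleared at {event.timestamp}")

               conv.history_cleared.connect(on_cleared)
               conv.clear()  # emits HistoryCleared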
              

              __aenter__ async

              __aenter__() -> Self
              

              Initialize when used standalone.

              Source code in src/llmling_agent/agent/conversation.py
              async def __aenter__(self) -> Self:
                  """Initialize when used standalone."""
                  if tasks := self.get_initialization_tasks():
                      await asyncio.gather(*tasks)
                  return self
              

              __aexit__ async

              __aexit__(
                  exc_type: type[BaseException] | None,
                  exc_val: BaseException | None,
                  exc_tb: TracebackType | None,
              )
              

              Clean up any pending messages.

              Source code in src/llmling_agent/agent/conversation.py
              async def __aexit__(
                  self,
                  exc_type: type[BaseException] | None,
                  exc_val: BaseException | None,
                  exc_tb: TracebackType | None,
              ):
                  """Clean up any pending messages."""
                  self._pending_messages.clear()
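
               Entering the manager runs the initialization tasks once; exiting discards any
               still-pending context messages. A sketch, assuming `agent` is an initialized
               Agent:

               conv = ConversationManager(agent, resources=["README.md"])
               async with conv:
                   ...  # constructor resources are now loaded as context
               # pending (not yet consumed) context messages were cleared on exit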
              

              __contains__

              __contains__(item: Any) -> bool
              

              Check if item is in history.

              Source code in src/llmling_agent/agent/conversation.py
              def __contains__(self, item: Any) -> bool:
                  """Check if item is in history."""
                  return item in self.chat_messages
              

              __init__

              __init__(
                  agent: Agent[Any, Any],
                  session_config: MemoryConfig | None = None,
                  *,
                  resources: Sequence[PromptType | str] = ()
              )
              

              Initialize conversation manager.

              Parameters:

               Name            Type                        Description                        Default
               agent           Agent[Any, Any]             Agent instance to manage           required
               session_config  MemoryConfig | None         Optional MemoryConfig              None
               resources       Sequence[PromptType | str]  Optional paths to load as context  ()
              Source code in src/llmling_agent/agent/conversation.py
              def __init__(
                  self,
                  agent: Agent[Any, Any],
                  session_config: MemoryConfig | None = None,
                  *,
                  resources: Sequence[PromptType | str] = (),
              ):
                  """Initialize conversation manager.
              
                  Args:
                       agent: Agent instance to manage
                      session_config: Optional MemoryConfig
                      resources: Optional paths to load as context
                  """
                  self._agent = agent
                  self.chat_messages = ChatMessageContainer()
                  self._last_messages: list[ChatMessage] = []
                  self._pending_messages: deque[ChatMessage] = deque()
                  self._config = session_config
                  self._resources = list(resources)  # Store for async loading
                  # Generate new ID if none provided
                  self.id = str(uuid4())
              
                  if session_config is not None and session_config.session is not None:
                      storage = self._agent.context.storage
                      self._current_history = storage.filter_messages.sync(session_config.session)
                      if session_config.session.name:
                          self.id = session_config.session.name
              

              __len__

              __len__() -> int
              

              Get length of history.

              Source code in src/llmling_agent/agent/conversation.py
              def __len__(self) -> int:
                  """Get length of history."""
                  return len(self.chat_messages)
              

              add_chat_messages

              add_chat_messages(messages: Sequence[ChatMessage])
              

              Add new messages to history and update last_messages.

              Source code in src/llmling_agent/agent/conversation.py
              def add_chat_messages(self, messages: Sequence[ChatMessage]):
                  """Add new messages to history and update last_messages."""
                  self._last_messages = list(messages)
                  self.chat_messages.extend(messages)
              

              add_context_from_path async

              add_context_from_path(
                  path: JoinablePathLike, *, convert_to_md: bool = False, **metadata: Any
              )
              

              Add file or URL content as context message.

              Parameters:

               Name           Type              Description                                      Default
               path           JoinablePathLike  Any UPath-supported path                         required
               convert_to_md  bool              Whether to convert content to markdown           False
               **metadata     Any               Additional metadata to include with the message  {}

              Raises:

               Type        Description
               ValueError  If content cannot be loaded or converted

              Source code in src/llmling_agent/agent/conversation.py
              async def add_context_from_path(
                  self,
                  path: JoinablePathLike,
                  *,
                  convert_to_md: bool = False,
                  **metadata: Any,
              ):
                  """Add file or URL content as context message.
              
                  Args:
                      path: Any UPath-supported path
                      convert_to_md: Whether to convert content to markdown
                      **metadata: Additional metadata to include with the message
              
                  Raises:
                      ValueError: If content cannot be loaded or converted
                  """
                  path_obj = to_upath(path)
                  if convert_to_md:
                      content = await self._agent.context.converter.convert_file(path)
                      source = f"markdown:{path_obj.name}"
                  else:
                      content = await read_path(path)
                      source = f"{path_obj.protocol}:{path_obj.name}"
                  self.add_context_message(content, source=source, **metadata)
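
               A usage sketch, assuming `conv` is a ConversationManager (paths and the
               metadata key are illustrative; any UPath-supported path or URL works):

               await conv.add_context_from_path("specs/api.yaml")
               await conv.add_context_from_path(
                   "https://example.com/guide.html",
                   convert_to_md=True,
                   topic="onboarding",  # stored as message metadata
               )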
              

              add_context_from_prompt async

              add_context_from_prompt(
                  prompt: PromptType, metadata: dict[str, Any] | None = None, **kwargs: Any
              )
              

              Add rendered prompt content as context message.

              Parameters:

               Name      Type                   Description                                      Default
               prompt    PromptType             LLMling prompt (static, dynamic, or file-based)  required
               metadata  dict[str, Any] | None  Additional metadata to include with the message  None
               kwargs    Any                    Optional kwargs for prompt formatting            {}
              Source code in src/llmling_agent/agent/conversation.py
              async def add_context_from_prompt(
                  self,
                  prompt: PromptType,
                  metadata: dict[str, Any] | None = None,
                  **kwargs: Any,
              ):
                  """Add rendered prompt content as context message.
              
                  Args:
                      prompt: LLMling prompt (static, dynamic, or file-based)
                      metadata: Additional metadata to include with the message
                      kwargs: Optional kwargs for prompt formatting
                  """
                  try:
                      # Format the prompt using LLMling's prompt system
                      messages = await prompt.format(kwargs)
                      # Extract text content from all messages
                      content = "\n\n".join(msg.get_text_content() for msg in messages)
              
                      self.add_context_message(
                          content,
                          source=f"prompt:{prompt.name or prompt.type}",
                          prompt_args=kwargs,
                          **(metadata or {}),
                      )
                  except Exception as e:
                      msg = f"Failed to format prompt: {e}"
                      raise ValueError(msg) from e
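
               A sketch, assuming `conv` is a ConversationManager and `prompt` is a prompt
               object from LLMling's prompt system (the kwargs are illustrative):

               await conv.add_context_from_prompt(
                   prompt,
                   metadata={"priority": "high"},
                   audience="developers",  # forwarded to prompt.format()
               )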
              

              add_context_message

              add_context_message(content: str, source: str | None = None, **metadata: Any)
              

              Add a context message.

              Parameters:

               Name        Type        Description                                      Default
               content     str         Text content to add                              required
               source      str | None  Description of content source                    None
               **metadata  Any         Additional metadata to include with the message  {}
              Source code in src/llmling_agent/agent/conversation.py
              def add_context_message(
                  self,
                  content: str,
                  source: str | None = None,
                  **metadata: Any,
              ):
                  """Add a context message.
              
                  Args:
                      content: Text content to add
                      source: Description of content source
                      **metadata: Additional metadata to include with the message
                  """
                  meta_str = ""
                  if metadata:
                      meta_str = "\n".join(f"{k}: {v}" for k, v in metadata.items())
                      meta_str = f"\nMetadata:\n{meta_str}\n"
              
                  header = f"Content from {source}:" if source else "Additional context:"
                  formatted = f"{header}{meta_str}\n{content}\n"
              
                  chat_message = ChatMessage(
                      content=formatted,
                      role="user",
                      name="user",
                      metadata=metadata,
                      conversation_id="context",  # TODO: should probably allow DB field to be NULL
                  )
                  self._pending_messages.append(chat_message)
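
               The message is queued as pending and folded into history by the next
               get_history(include_pending=True) call. An illustrative sketch:

               conv.add_context_message(
                   "Rate limit: 60 requests/minute",
                   source="ops runbook",
                   severity="info",  # rendered into the Metadata block
               )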
              

              clear

              clear()
              

              Clear conversation history and prompts.

              Source code in src/llmling_agent/agent/conversation.py
              def clear(self):
                  """Clear conversation history and prompts."""
                  self.chat_messages = ChatMessageContainer()
                  self._last_messages = []
                  event = self.HistoryCleared(session_id=str(self.id))
                  self.history_cleared.emit(event)
              

              clear_pending

              clear_pending()
              

              Clear pending messages without adding them to history.

              Source code in src/llmling_agent/agent/conversation.py
              def clear_pending(self):
                  """Clear pending messages without adding them to history."""
                  self._pending_messages.clear()
              

              format_history async

              format_history(
                  *,
                  max_tokens: int | None = None,
                  include_system: bool = False,
                  format_template: str | None = None,
                  num_messages: int | None = None
              ) -> str
              

              Format conversation history as a single context message.

              Parameters:

               Name             Type        Description                                                Default
               max_tokens       int | None  Optional limit to include only last N tokens               None
               include_system   bool        Whether to include system messages                         False
               format_template  str | None  Optional custom format (defaults to agent/message pairs)   None
               num_messages     int | None  Optional limit to include only last N messages             None
              Source code in src/llmling_agent/agent/conversation.py
              async def format_history(
                  self,
                  *,
                  max_tokens: int | None = None,
                  include_system: bool = False,
                  format_template: str | None = None,
                   num_messages: int | None = None,
              ) -> str:
                  """Format conversation history as a single context message.
              
                  Args:
                      max_tokens: Optional limit to include only last N tokens
                      include_system: Whether to include system messages
                      format_template: Optional custom format (defaults to agent/message pairs)
                      num_messages: Optional limit to include only last N messages
                  """
                  template = format_template or "Agent {agent}: {content}\n"
                  messages: list[str] = []
                  token_count = 0
              
                  # Get messages, optionally limited
                  history: Sequence[ChatMessage[Any]] = self.chat_messages
                  if num_messages:
                      history = history[-num_messages:]
              
                  if max_tokens:
                      history = list(reversed(history))  # Start from newest when token limited
              
                  for msg in history:
                      # Check role directly from ChatMessage
                      if not include_system and msg.role == "system":
                          continue
                      name = msg.name or msg.role.title()
                      formatted = template.format(agent=name, content=str(msg.content))
              
                      if max_tokens:
                          # Count tokens in this message
                          if msg.cost_info:
                              msg_tokens = msg.cost_info.token_usage.total_tokens
                          else:
                              # Fallback to tiktoken if no cost info
                              msg_tokens = self.get_message_tokens(msg)
              
                          if token_count + msg_tokens > max_tokens:
                              break
                          token_count += msg_tokens
                          # Add to front since we're going backwards
                          messages.insert(0, formatted)
                      else:
                          messages.append(formatted)
              
                  return "\n".join(messages)
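
               A sketch that renders at most the last 20 messages within roughly 1000 tokens,
               using a custom template (the values are illustrative):

               context = await conv.format_history(
                   num_messages=20,
                   max_tokens=1000,
                   format_template="{agent}> {content}\n",
               )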
              

              get_history

              get_history(include_pending: bool = True, do_filter: bool = True) -> list[ChatMessage]
              

              Get conversation history.

              Parameters:

               Name             Type  Description                                                        Default
               include_pending  bool  Whether to include pending messages                                True
               do_filter        bool  Whether to apply memory config limits (max_tokens, max_messages)  True

              Returns:

               Type               Description
               list[ChatMessage]  Filtered list of messages in chronological order

              Source code in src/llmling_agent/agent/conversation.py
              def get_history(
                  self,
                  include_pending: bool = True,
                  do_filter: bool = True,
              ) -> list[ChatMessage]:
                  """Get conversation history.
              
                  Args:
                      include_pending: Whether to include pending messages
                      do_filter: Whether to apply memory config limits (max_tokens, max_messages)
              
                  Returns:
                      Filtered list of messages in chronological order
                  """
                  if include_pending and self._pending_messages:
                      self.chat_messages.extend(self._pending_messages)
                      self._pending_messages.clear()
              
                  # 2. Start with original history
                  history: Sequence[ChatMessage[Any]] = self.chat_messages
              
                  # 3. Only filter if needed
                  if do_filter and self._config:
                      # First filter by message count (simple slice)
                      if self._config.max_messages:
                          history = history[-self._config.max_messages :]
              
                      # Then filter by tokens if needed
                      if self._config.max_tokens:
                          token_count = 0
                          filtered = []
                          # Collect messages from newest to oldest until we hit the limit
                          for msg in reversed(history):
                              msg_tokens = self.get_message_tokens(msg)
                              if token_count + msg_tokens > self._config.max_tokens:
                                  break
                              token_count += msg_tokens
                              filtered.append(msg)
                          history = list(reversed(filtered))
              
                  return list(history)
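
               When the MemoryConfig sets max_messages or max_tokens, filtering keeps the
               newest messages; pass do_filter=False to bypass the limits:

               window = conv.get_history()  # rolling window per MemoryConfig
               everything = conv.get_history(do_filter=False)  # full history, no limits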
              

              get_history_tokens

              get_history_tokens() -> int
              

              Get token count for current history.

              Source code in src/llmling_agent/agent/conversation.py
              def get_history_tokens(self) -> int:
                  """Get token count for current history."""
                  # Use cost_info if available
                  return self.chat_messages.get_history_tokens()
              

              get_initialization_tasks

              get_initialization_tasks() -> list[Coroutine[Any, Any, Any]]
              

              Get all initialization coroutines.

              Source code in src/llmling_agent/agent/conversation.py
               def get_initialization_tasks(self) -> list[Coroutine[Any, Any, Any]]:
                   """Get all initialization coroutines."""
                   resources = self._resources
                   self._resources = []  # Clear so we don't load again on async init
                   return [self.load_context_source(source) for source in resources]
              

              get_message_tokens

              get_message_tokens(message: ChatMessage) -> int
              

              Get token count for a single message.

              Source code in src/llmling_agent/agent/conversation.py
              def get_message_tokens(self, message: ChatMessage) -> int:
                  """Get token count for a single message."""
                  content = "\n".join(message.format())
                  return count_tokens(content, message.model_name)
              

              get_pending_messages

              get_pending_messages() -> list[ChatMessage]
              

              Get messages that will be included in next interaction.

              Source code in src/llmling_agent/agent/conversation.py
              def get_pending_messages(self) -> list[ChatMessage]:
                  """Get messages that will be included in next interaction."""
                  return list(self._pending_messages)
              

              load_context_source async

              load_context_source(source: PromptType | str)
              

              Load context from a single source.

              Source code in src/llmling_agent/agent/conversation.py
              async def load_context_source(self, source: PromptType | str):
                  """Load context from a single source."""
                  try:
                      match source:
                          case str():
                              await self.add_context_from_path(source)
                          case BasePrompt():
                              await self.add_context_from_prompt(source)
                  except Exception:
                      logger.exception(
                          "Failed to load context",
                          source="file" if isinstance(source, str) else source.type,
                      )
              

              load_history_from_database

              load_history_from_database(
                  session: SessionIdType | SessionQuery = None,
                  *,
                  since: datetime | None = None,
                  until: datetime | None = None,
                  roles: set[MessageRole] | None = None,
                  limit: int | None = None
              )
              

              Load conversation history from database.

              Parameters:

               Name     Type                          Description                                        Default
               session  SessionIdType | SessionQuery  Session ID or query config                         None
               since    datetime | None               Only include messages after this time (override)   None
               until    datetime | None               Only include messages before this time (override)  None
               roles    set[MessageRole] | None       Only include messages with these roles (override)  None
               limit    int | None                    Maximum number of messages to return (override)    None
              Source code in src/llmling_agent/agent/conversation.py
              def load_history_from_database(
                  self,
                  session: SessionIdType | SessionQuery = None,
                  *,
                  since: datetime | None = None,
                  until: datetime | None = None,
                  roles: set[MessageRole] | None = None,
                  limit: int | None = None,
              ):
                  """Load conversation history from database.
              
                  Args:
                      session: Session ID or query config
                      since: Only include messages after this time (override)
                      until: Only include messages before this time (override)
                      roles: Only include messages with these roles (override)
                      limit: Maximum number of messages to return (override)
                  """
                  storage = self._agent.context.storage
                  match session:
                      case SessionQuery() as query:
                           # Override query params only where explicitly provided,
                           # so unset arguments don't wipe values already on the query
                           update: dict[str, Any] = {}
                           if since is not None:
                               update["since"] = since.isoformat()
                           if until is not None:
                               update["until"] = until.isoformat()
                           if roles:
                               update["roles"] = roles
                           if limit:
                               update["limit"] = limit
                           if update:
                               query = query.model_copy(update=update)
                          if query.name:
                              self.id = query.name
                      case str() | UUID():
                          self.id = str(session)
                          query = SessionQuery(
                              name=self.id,
                              since=since.isoformat() if since else None,
                              until=until.isoformat() if until else None,
                              roles=roles,
                              limit=limit,
                          )
                      case None:
                          # Use current session ID
                          query = SessionQuery(
                              name=self.id,
                              since=since.isoformat() if since else None,
                              until=until.isoformat() if until else None,
                              roles=roles,
                              limit=limit,
                          )
                      case _ as unreachable:
                          assert_never(unreachable)
                  self.chat_messages.clear()
                  self.chat_messages.extend(storage.filter_messages.sync(query))
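
               A sketch that loads a named session with overrides (the session name is
               illustrative; MessageRole values are assumed to be plain role strings):

               from datetime import datetime

               conv.load_history_from_database(
                   "support-2024",
                   since=datetime(2024, 1, 1),
                   roles={"user", "assistant"},
                   limit=50,
               )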
              

              set_history

              set_history(history: list[ChatMessage])
              

              Update conversation history after run.

              Source code in src/llmling_agent/agent/conversation.py
              def set_history(self, history: list[ChatMessage]):
                  """Update conversation history after run."""
                  self.chat_messages.clear()
                  self.chat_messages.extend(history)
              

              temporary_state async

              temporary_state(
                  history: list[AnyPromptType] | SessionQuery | None = None,
                  *,
                  replace_history: bool = False
              ) -> AsyncIterator[Self]
              

              Temporarily set conversation history.

              Parameters:

               Name             Type                                       Description                                                                                                Default
               history          list[AnyPromptType] | SessionQuery | None  Optional list of prompts to use as temporary history; can be strings, BasePrompts, or other prompt types.  None
               replace_history  bool                                       If True, only use provided history; if False, append to existing history.                                  False
              Source code in src/llmling_agent/agent/conversation.py
              @asynccontextmanager
              async def temporary_state(
                  self,
                  history: list[AnyPromptType] | SessionQuery | None = None,
                  *,
                  replace_history: bool = False,
              ) -> AsyncIterator[Self]:
                  """Temporarily set conversation history.
              
                  Args:
                      history: Optional list of prompts to use as temporary history.
                              Can be strings, BasePrompts, or other prompt types.
                      replace_history: If True, only use provided history. If False, append
                              to existing history.
                  """
                  from toprompt import to_prompt
              
                  old_history = self.chat_messages.copy()
              
                  try:
                      messages: Sequence[ChatMessage[Any]] = ChatMessageContainer()
                      if history is not None:
                          if isinstance(history, SessionQuery):
                              messages = await self._agent.context.storage.filter_messages(history)
                          else:
                              messages = [
                                  ChatMessage.user_prompt(message=prompt)
                                  for p in history
                                  if (prompt := await to_prompt(p))
                              ]
              
                      if replace_history:
                          self.chat_messages = ChatMessageContainer(messages)
                      else:
                          self.chat_messages.extend(messages)
              
                      yield self
              
                  finally:
                      self.chat_messages = old_history
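
              Usage sketch: because temporary_state is an async context manager, any history swapped in is scoped to the with block and the previous state is restored on exit (again assuming the manager is exposed as agent.conversation):

              from llmling_agent import Agent

              async def answer_with_clean_slate(agent: Agent, question: str) -> str:
                  # Inside the block the history is empty; it is rolled back afterwards.
                  async with agent.conversation.temporary_state(replace_history=True):
                      result = await agent.run(question)
                  return str(result.content)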
              

              Interactions

              Manages agent communication patterns.

              Source code in src/llmling_agent/agent/interactions.py, lines 87-525
              class Interactions[TDeps, TResult]:
                  """Manages agent communication patterns."""
              
                  def __init__(self, agent: Agent[TDeps, TResult]):
                      self.agent = agent
              
                  async def conversation(
                      self,
                      other: MessageNode[Any, Any],
                      initial_message: AnyPromptType,
                      *,
                      max_rounds: int | None = None,
                      end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                      | None = None,
                      store_history: bool = True,
                  ) -> AsyncIterator[ChatMessage[Any]]:
                      """Maintain conversation between two agents.
              
                      Args:
                          other: Agent to converse with
                          initial_message: Message to start conversation with
                          max_rounds: Optional maximum number of exchanges
                          end_condition: Optional predicate to check for conversation end
                          store_history: Whether to store in conversation history
              
                      Yields:
                          Messages from both agents in conversation order
                      """
                      rounds = 0
                      messages: list[ChatMessage[Any]] = []
                      current_message = initial_message
                      current_node: MessageNode[Any, Any] = self.agent
              
                      while True:
                          if max_rounds and rounds >= max_rounds:
                              logger.debug("Conversation ended", max_rounds=max_rounds)
                              return
              
                          response = await current_node.run(
                              current_message, store_history=store_history
                          )
                          messages.append(response)
                          yield response
              
                          if end_condition and end_condition(messages, response):
                              logger.debug("Conversation ended: end condition met")
                              return
              
                          # Switch agents for next round
                          current_node = other if current_node == self.agent else self.agent
                          current_message = response.content
                          rounds += 1
              
                  @overload
                  async def pick[T: AnyPromptType](
                      self,
                      selections: Sequence[T],
                      task: str,
                      prompt: AnyPromptType | None = None,
                  ) -> Pick[T]: ...
              
                  @overload
                  async def pick[T: AnyPromptType](
                      self,
                      selections: Mapping[str, T],
                      task: str,
                      prompt: AnyPromptType | None = None,
                  ) -> Pick[T]: ...
              
                  @overload
                  async def pick(
                      self,
                      selections: AgentPool,
                      task: str,
                      prompt: AnyPromptType | None = None,
                  ) -> Pick[Agent[Any, Any]]: ...
              
                  @overload
                  async def pick(
                      self,
                      selections: BaseTeam[TDeps, Any],
                      task: str,
                      prompt: AnyPromptType | None = None,
                  ) -> Pick[MessageNode[TDeps, Any]]: ...
              
                  async def pick[T](
                      self,
                      selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                      task: str,
                      prompt: AnyPromptType | None = None,
                  ) -> Pick[T]:
                      """Pick from available options with reasoning.
              
                      Args:
                          selections: What to pick from:
                              - Sequence of items (auto-labeled)
                              - Dict mapping labels to items
                              - AgentPool
                              - Team
                          task: Task/decision description
                          prompt: Optional custom selection prompt
              
                      Returns:
                          Decision with selected item and reasoning
              
                      Raises:
                          ValueError: If no choices available or invalid selection
                      """
                      # Get items and create label mapping
                      from toprompt import to_prompt
              
                      from llmling_agent import AgentPool
                      from llmling_agent.delegation.base_team import BaseTeam
              
                      match selections:
                          case dict():
                              label_map = selections
                              items: list[Any] = list(selections.values())
                          case BaseTeam():
                              items = list(selections.agents)
                              label_map = {get_label(item): item for item in items}
                          case AgentPool():
                              items = list(selections.agents.values())
                              label_map = {get_label(item): item for item in items}
                          case _:
                              items = list(selections)
                              label_map = {get_label(item): item for item in items}
              
                      if not items:
                          msg = "No choices available"
                          raise ValueError(msg)
              
                      # Get descriptions for all items
                      descriptions = []
                      for label, item in label_map.items():
                          item_desc = await to_prompt(item)
                          descriptions.append(f"{label}:\n{item_desc}")
              
                      default_prompt = f"""Task/Decision: {task}
              
              Available options:
              {"-" * 40}
              {"\n\n".join(descriptions)}
              {"-" * 40}
              
              Select ONE option by its exact label."""
              
                      # Get LLM's string-based decision
                      result = await self.agent.run(prompt or default_prompt, output_type=LLMPick)
              
                      # Convert to type-safe decision
                      if result.content.selection not in label_map:
                          msg = f"Invalid selection: {result.content.selection}"
                          raise ValueError(msg)
              
                      selected = cast(T, label_map[result.content.selection])
                      return Pick(selection=selected, reason=result.content.reason)
              
                  @overload
                  async def pick_multiple[T: AnyPromptType](
                      self,
                      selections: Sequence[T],
                      task: str,
                      *,
                      min_picks: int = 1,
                      max_picks: int | None = None,
                      prompt: AnyPromptType | None = None,
                  ) -> MultiPick[T]: ...
              
                  @overload
                  async def pick_multiple[T: AnyPromptType](
                      self,
                      selections: Mapping[str, T],
                      task: str,
                      *,
                      min_picks: int = 1,
                      max_picks: int | None = None,
                      prompt: AnyPromptType | None = None,
                  ) -> MultiPick[T]: ...
              
                  @overload
                  async def pick_multiple(
                      self,
                      selections: BaseTeam[TDeps, Any],
                      task: str,
                      *,
                      min_picks: int = 1,
                      max_picks: int | None = None,
                      prompt: AnyPromptType | None = None,
                  ) -> MultiPick[MessageNode[TDeps, Any]]: ...
              
                  @overload
                  async def pick_multiple(
                      self,
                      selections: AgentPool,
                      task: str,
                      *,
                      min_picks: int = 1,
                      max_picks: int | None = None,
                      prompt: AnyPromptType | None = None,
                  ) -> MultiPick[Agent[Any, Any]]: ...
              
                  async def pick_multiple[T](
                      self,
                      selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                      task: str,
                      *,
                      min_picks: int = 1,
                      max_picks: int | None = None,
                      prompt: AnyPromptType | None = None,
                  ) -> MultiPick[T]:
                      """Pick multiple options from available choices.
              
                      Args:
                          selections: What to pick from
                          task: Task/decision description
                          min_picks: Minimum number of selections required
                          max_picks: Maximum number of selections (None for unlimited)
                          prompt: Optional custom selection prompt
                      """
                      from toprompt import to_prompt
              
                      from llmling_agent import AgentPool
                      from llmling_agent.delegation.base_team import BaseTeam
              
                      match selections:
                          case Mapping():
                              label_map = selections
                              items: list[Any] = list(selections.values())
                          case BaseTeam():
                              items = list(selections.agents)
                              label_map = {get_label(item): item for item in items}
                          case AgentPool():
                              items = list(selections.agents.values())
                              label_map = {get_label(item): item for item in items}
                          case _:
                              items = list(selections)
                              label_map = {get_label(item): item for item in items}
              
                      if not items:
                          msg = "No choices available"
                          raise ValueError(msg)
              
                      if max_picks is not None and max_picks < min_picks:
                          msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                          raise ValueError(msg)
              
                      descriptions = []
                      for label, item in label_map.items():
                          item_desc = await to_prompt(item)
                          descriptions.append(f"{label}:\n{item_desc}")
              
                      picks_info = (
                          f"Select between {min_picks} and {max_picks}"
                          if max_picks is not None
                          else f"Select at least {min_picks}"
                      )
              
                      default_prompt = f"""Task/Decision: {task}
              
              Available options:
              {"-" * 40}
              {"\n\n".join(descriptions)}
              {"-" * 40}
              
              {picks_info} options by their exact labels.
              List your selections, one per line, followed by your reasoning."""
              
                      result = await self.agent.run(prompt or default_prompt, output_type=LLMMultiPick)
              
                      # Validate selections
                      invalid = [s for s in result.content.selections if s not in label_map]
                      if invalid:
                          msg = f"Invalid selections: {', '.join(invalid)}"
                          raise ValueError(msg)
                      num_picks = len(result.content.selections)
                      if num_picks < min_picks:
                          msg = f"Too few selections: got {num_picks}, need {min_picks}"
                          raise ValueError(msg)
              
                      if max_picks and num_picks > max_picks:
                          msg = f"Too many selections: got {num_picks}, max {max_picks}"
                          raise ValueError(msg)
              
                      selected = [cast(T, label_map[label]) for label in result.content.selections]
                      return MultiPick(selections=selected, reason=result.content.reason)
              
                  async def extract[T](
                      self,
                      text: str,
                      as_type: type[T],
                      *,
                      mode: ExtractionMode = "structured",
                      prompt: AnyPromptType | None = None,
                      include_tools: bool = False,
                  ) -> T:
                      """Extract single instance of type from text.
              
                      Args:
                          text: Text to extract from
                          as_type: Type to extract
                          mode: Extraction approach:
                              - "structured": Use Pydantic models (more robust)
                              - "tool_calls": Use tool calls (more flexible)
                          prompt: Optional custom prompt
                          include_tools: Whether to include other tools (tool_calls mode only)
                      """
                      # Create model for single instance
                      item_model = Schema.for_class_ctor(as_type)
              
                      # Create extraction prompt
                      final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                      schema_obj = create_constructor_schema(as_type)
                      schema = schema_obj.model_dump_openai()["function"]
              
                      if mode == "structured":
              
                          class Extraction(Schema):
                              instance: item_model  # type: ignore
                              # explanation: str | None = None
              
                          result = await self.agent.run(final_prompt, output_type=Extraction)
                          # Convert model instance to actual type
                          return as_type(**result.content.instance.model_dump())  # type: ignore
              
                      # Legacy tool-calls approach
              
                      async def construct(**kwargs: Any) -> T:
                          """Construct instance from extracted data."""
                          return as_type(**kwargs)
              
                      tool = Tool.from_callable(
                          construct,
                          name_override=schema["name"],
                          description_override=schema["description"],
                          # schema_override=schema,
                      )
                      async with self.agent.tools.temporary_tools(tool, exclusive=not include_tools):
                          result = await self.agent.run(final_prompt, output_type=item_model)  # type: ignore
                      return result.content  # type: ignore
              
                  async def extract_multiple[T](
                      self,
                      text: str,
                      as_type: type[T],
                      *,
                      mode: ExtractionMode = "structured",
                      min_items: int = 1,
                      max_items: int | None = None,
                      prompt: AnyPromptType | None = None,
                      include_tools: bool = False,
                  ) -> list[T]:
                      """Extract multiple instances of type from text.
              
                      Args:
                          text: Text to extract from
                          as_type: Type to extract
                          mode: Extraction approach:
                              - "structured": Use Pydantic models (more robust)
                              - "tool_calls": Use tool calls (more flexible)
                          min_items: Minimum number of instances to extract
                          max_items: Maximum number of instances (None=unlimited)
                          prompt: Optional custom prompt
                          include_tools: Whether to include other tools (tool_calls mode only)
                      """
                      item_model = Schema.for_class_ctor(as_type)
              
                      instances: list[T] = []
                      schema_obj = create_constructor_schema(as_type)
                      final_prompt = prompt or "\n".join([
                          f"Extract {as_type.__name__} instances from text.",
                          # "Requirements:",
                          # f"- Extract at least {min_items} instances",
                          # f"- Extract at most {max_items} instances" if max_items else "",
                          "\nText to analyze:",
                          text,
                      ])
                      if mode == "structured":
                          # Create model for individual instance
              
                          class Extraction(Schema):
                              instances: list[item_model]  # type: ignore
                              # explanation: str | None = None
              
                          result = await self.agent.run(final_prompt, output_type=Extraction)
              
                          # Validate counts
                          num_instances = len(result.content.instances)
                          if num_instances < min_items:
                              msg = f"Found only {num_instances} instances, need {min_items}"
                              raise ValueError(msg)
              
                          if max_items and num_instances > max_items:
                              msg = f"Found {num_instances} instances, max is {max_items}"
                              raise ValueError(msg)
              
                          # Convert model instances to actual type
                          return [
                              as_type(
                                  **instance.data  # type: ignore
                                  if hasattr(instance, "data")
                                  else instance.model_dump()  # type: ignore
                              )
                              for instance in result.content.instances
                          ]
              
                      # Legacy tool-calls approach
              
                      async def add_instance(**kwargs: Any) -> str:
                          """Add an extracted instance."""
                          if max_items and len(instances) >= max_items:
                              msg = f"Maximum number of items ({max_items}) reached"
                              raise ValueError(msg)
                          instance = as_type(**kwargs)
                          instances.append(instance)
                          return f"Added {instance}"
              
                      add_instance.__annotations__ = schema_obj.get_annotations()
                      add_instance.__signature__ = schema_obj.to_python_signature()  # type: ignore
                      async with self.agent.temporary_state(
                          tools=[add_instance],
                          replace_tools=not include_tools,
                          output_type=item_model,
                      ):
                          await self.agent.run(final_prompt)  # Run extraction; instances are collected via the add_instance tool
              
                      if len(instances) < min_items:
                          msg = f"Found only {len(instances)} instances, need at least {min_items}"
                          raise ValueError(msg)
              
                      return instances
              

              conversation async

              conversation(
                  other: MessageNode[Any, Any],
                  initial_message: AnyPromptType,
                  *,
                  max_rounds: int | None = None,
                  end_condition: (
                      Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool] | None
                  ) = None,
                  store_history: bool = True
              ) -> AsyncIterator[ChatMessage[Any]]
              

              Maintain conversation between two agents.

              Parameters:

              - other (MessageNode[Any, Any], required): Agent to converse with.
              - initial_message (AnyPromptType, required): Message to start conversation with.
              - max_rounds (int | None, default None): Optional maximum number of exchanges.
              - end_condition (Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool] | None, default None): Optional predicate to check for conversation end.
              - store_history (bool, default True): Whether to store in conversation history.

              Yields:

              - AsyncIterator[ChatMessage[Any]]: Messages from both agents in conversation order.

              Source code in src/llmling_agent/agent/interactions.py, lines 93-138
              async def conversation(
                  self,
                  other: MessageNode[Any, Any],
                  initial_message: AnyPromptType,
                  *,
                  max_rounds: int | None = None,
                  end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                  | None = None,
                  store_history: bool = True,
              ) -> AsyncIterator[ChatMessage[Any]]:
                  """Maintain conversation between two agents.
              
                  Args:
                      other: Agent to converse with
                      initial_message: Message to start conversation with
                      max_rounds: Optional maximum number of exchanges
                      end_condition: Optional predicate to check for conversation end
                      store_history: Whether to store in conversation history
              
                  Yields:
                      Messages from both agents in conversation order
                  """
                  rounds = 0
                  messages: list[ChatMessage[Any]] = []
                  current_message = initial_message
                  current_node: MessageNode[Any, Any] = self.agent
              
                  while True:
                      if max_rounds and rounds >= max_rounds:
                          logger.debug("Conversation ended", max_rounds=max_rounds)
                          return
              
                      response = await current_node.run(
                          current_message, store_history=store_history
                      )
                      messages.append(response)
                      yield response
              
                      if end_condition and end_condition(messages, response):
                          logger.debug("Conversation ended: end condition met")
                          return
              
                      # Switch agents for next round
                      current_node = other if current_node == self.agent else self.agent
                      current_message = response.content
                      rounds += 1
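
              Usage sketch for the iterator above. Interactions(agent) matches the constructor shown earlier; the two Agent instances are assumed to exist:

              from llmling_agent import Agent
              from llmling_agent.agent.interactions import Interactions

              async def debate(optimist: Agent, skeptic: Agent) -> None:
                  talk = Interactions(optimist)
                  # The agents alternate turns; the loop ends after six responses.
                  async for message in talk.conversation(
                      skeptic,
                      "Make the case for rewriting the service in Rust.",
                      max_rounds=6,
                  ):
                      print(message.content)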
              

              extract async

              extract(
                  text: str,
                  as_type: type[T],
                  *,
                  mode: ExtractionMode = "structured",
                  prompt: AnyPromptType | None = None,
                  include_tools: bool = False
              ) -> T
              

              Extract single instance of type from text.

              Parameters:

              - text (str, required): Text to extract from.
              - as_type (type[T], required): Type to extract.
              - mode (ExtractionMode, default 'structured'): Extraction approach: "structured" uses Pydantic models (more robust); "tool_calls" uses tool calls (more flexible).
              - prompt (AnyPromptType | None, default None): Optional custom prompt.
              - include_tools (bool, default False): Whether to include other tools (tool_calls mode only).
              Source code in src/llmling_agent/agent/interactions.py, lines 382-434
              async def extract[T](
                  self,
                  text: str,
                  as_type: type[T],
                  *,
                  mode: ExtractionMode = "structured",
                  prompt: AnyPromptType | None = None,
                  include_tools: bool = False,
              ) -> T:
                  """Extract single instance of type from text.
              
                  Args:
                      text: Text to extract from
                      as_type: Type to extract
                      mode: Extraction approach:
                          - "structured": Use Pydantic models (more robust)
                          - "tool_calls": Use tool calls (more flexible)
                      prompt: Optional custom prompt
                      include_tools: Whether to include other tools (tool_calls mode only)
                  """
                  # Create model for single instance
                  item_model = Schema.for_class_ctor(as_type)
              
                  # Create extraction prompt
                  final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                  schema_obj = create_constructor_schema(as_type)
                  schema = schema_obj.model_dump_openai()["function"]
              
                  if mode == "structured":
              
                      class Extraction(Schema):
                          instance: item_model  # type: ignore
                          # explanation: str | None = None
              
                      result = await self.agent.run(final_prompt, output_type=Extraction)
                      # Convert model instance to actual type
                      return as_type(**result.content.instance.model_dump())  # type: ignore
              
                  # Legacy tool-calls approach
              
                  async def construct(**kwargs: Any) -> T:
                      """Construct instance from extracted data."""
                      return as_type(**kwargs)
              
                  tool = Tool.from_callable(
                      construct,
                      name_override=schema["name"],
                      description_override=schema["description"],
                      # schema_override=schema,
                  )
                  async with self.agent.tools.temporary_tools(tool, exclusive=not include_tools):
                      result = await self.agent.run(final_prompt, output_type=item_model)  # type: ignore
                  return result.content  # type: ignore
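
              In structured mode the extracted fields are validated against the type's constructor, so a plain dataclass works. A minimal sketch (agent construction omitted):

              from dataclasses import dataclass

              from llmling_agent import Agent
              from llmling_agent.agent.interactions import Interactions

              @dataclass
              class Person:
                  name: str
                  age: int

              async def extract_person(agent: Agent) -> Person:
                  talk = Interactions(agent)
                  # Returns an actual Person built from the model's structured output.
                  return await talk.extract("Alice is a 32-year-old engineer.", Person)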
              

              extract_multiple async

              extract_multiple(
                  text: str,
                  as_type: type[T],
                  *,
                  mode: ExtractionMode = "structured",
                  min_items: int = 1,
                  max_items: int | None = None,
                  prompt: AnyPromptType | None = None,
                  include_tools: bool = False
              ) -> list[T]
              

              Extract multiple instances of type from text.

              Parameters:

              - text (str, required): Text to extract from.
              - as_type (type[T], required): Type to extract.
              - mode (ExtractionMode, default 'structured'): Extraction approach: "structured" uses Pydantic models (more robust); "tool_calls" uses tool calls (more flexible).
              - min_items (int, default 1): Minimum number of instances to extract.
              - max_items (int | None, default None): Maximum number of instances (None for unlimited).
              - prompt (AnyPromptType | None, default None): Optional custom prompt.
              - include_tools (bool, default False): Whether to include other tools (tool_calls mode only).
              Source code in src/llmling_agent/agent/interactions.py, lines 436-525
              async def extract_multiple[T](
                  self,
                  text: str,
                  as_type: type[T],
                  *,
                  mode: ExtractionMode = "structured",
                  min_items: int = 1,
                  max_items: int | None = None,
                  prompt: AnyPromptType | None = None,
                  include_tools: bool = False,
              ) -> list[T]:
                  """Extract multiple instances of type from text.
              
                  Args:
                      text: Text to extract from
                      as_type: Type to extract
                      mode: Extraction approach:
                          - "structured": Use Pydantic models (more robust)
                          - "tool_calls": Use tool calls (more flexible)
                      min_items: Minimum number of instances to extract
                      max_items: Maximum number of instances (None=unlimited)
                      prompt: Optional custom prompt
                      include_tools: Whether to include other tools (tool_calls mode only)
                  """
                  item_model = Schema.for_class_ctor(as_type)
              
                  instances: list[T] = []
                  schema_obj = create_constructor_schema(as_type)
                  final_prompt = prompt or "\n".join([
                      f"Extract {as_type.__name__} instances from text.",
                      # "Requirements:",
                      # f"- Extract at least {min_items} instances",
                      # f"- Extract at most {max_items} instances" if max_items else "",
                      "\nText to analyze:",
                      text,
                  ])
                  if mode == "structured":
                      # Create model for individual instance
              
                      class Extraction(Schema):
                          instances: list[item_model]  # type: ignore
                          # explanation: str | None = None
              
                      result = await self.agent.run(final_prompt, output_type=Extraction)
              
                      # Validate counts
                      num_instances = len(result.content.instances)
                      if num_instances < min_items:
                          msg = f"Found only {num_instances} instances, need {min_items}"
                          raise ValueError(msg)
              
                      if max_items and num_instances > max_items:
                          msg = f"Found {num_instances} instances, max is {max_items}"
                          raise ValueError(msg)
              
                      # Convert model instances to actual type
                      return [
                          as_type(
                              **instance.data  # type: ignore
                              if hasattr(instance, "data")
                              else instance.model_dump()  # type: ignore
                          )
                          for instance in result.content.instances
                      ]
              
                  # Legacy tool-calls approach
              
                  async def add_instance(**kwargs: Any) -> str:
                      """Add an extracted instance."""
                      if max_items and len(instances) >= max_items:
                          msg = f"Maximum number of items ({max_items}) reached"
                          raise ValueError(msg)
                      instance = as_type(**kwargs)
                      instances.append(instance)
                      return f"Added {instance}"
              
                  add_instance.__annotations__ = schema_obj.get_annotations()
                  add_instance.__signature__ = schema_obj.to_python_signature()  # type: ignore
                  async with self.agent.temporary_state(
                      tools=[add_instance],
                      replace_tools=not include_tools,
                      output_type=item_model,
                  ):
                      await self.agent.run(final_prompt)  # Run extraction; instances are collected via the add_instance tool
              
                  if len(instances) < min_items:
                      msg = f"Found only {len(instances)} instances, need at least {min_items}"
                      raise ValueError(msg)
              
                  return instances
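
              A companion sketch for the multi-instance variant; as the validation above shows, counts outside the min_items/max_items bounds raise ValueError:

              from dataclasses import dataclass

              from llmling_agent import Agent
              from llmling_agent.agent.interactions import Interactions

              @dataclass
              class Person:
                  name: str
                  age: int

              async def extract_people(agent: Agent) -> list[Person]:
                  talk = Interactions(agent)
                  return await talk.extract_multiple(
                      "Alice is 32, Bob is 45, and Carol is 29.",
                      Person,
                      min_items=2,
                      max_items=5,
                  )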
              

              pick async

              pick(
                  selections: Sequence[T], task: str, prompt: AnyPromptType | None = None
              ) -> Pick[T]
              
              pick(
                  selections: Mapping[str, T], task: str, prompt: AnyPromptType | None = None
              ) -> Pick[T]
              
              pick(
                  selections: AgentPool, task: str, prompt: AnyPromptType | None = None
              ) -> Pick[Agent[Any, Any]]
              
              pick(
                  selections: BaseTeam[TDeps, Any], task: str, prompt: AnyPromptType | None = None
              ) -> Pick[MessageNode[TDeps, Any]]
              
              pick(
                  selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                  task: str,
                  prompt: AnyPromptType | None = None,
              ) -> Pick[T]
              

              Pick from available options with reasoning.

              Parameters:

              - selections (Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any], required): What to pick from: a sequence of items (auto-labeled), a dict mapping labels to items, an AgentPool, or a Team.
              - task (str, required): Task/decision description.
              - prompt (AnyPromptType | None, default None): Optional custom selection prompt.

              Returns:

              - Pick[T]: Decision with selected item and reasoning.

              Raises:

              - ValueError: If no choices are available or the selection is invalid.

              Source code in src/llmling_agent/agent/interactions.py, lines 180-251
                  async def pick[T](
                      self,
                      selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                      task: str,
                      prompt: AnyPromptType | None = None,
                  ) -> Pick[T]:
                      """Pick from available options with reasoning.
              
                      Args:
                          selections: What to pick from:
                              - Sequence of items (auto-labeled)
                              - Dict mapping labels to items
                              - AgentPool
                              - Team
                          task: Task/decision description
                          prompt: Optional custom selection prompt
              
                      Returns:
                          Decision with selected item and reasoning
              
                      Raises:
                          ValueError: If no choices available or invalid selection
                      """
                      # Get items and create label mapping
                      from toprompt import to_prompt
              
                      from llmling_agent import AgentPool
                      from llmling_agent.delegation.base_team import BaseTeam
              
                      match selections:
                          case dict():
                              label_map = selections
                              items: list[Any] = list(selections.values())
                          case BaseTeam():
                              items = list(selections.agents)
                              label_map = {get_label(item): item for item in items}
                          case AgentPool():
                              items = list(selections.agents.values())
                              label_map = {get_label(item): item for item in items}
                          case _:
                              items = list(selections)
                              label_map = {get_label(item): item for item in items}
              
                      if not items:
                          msg = "No choices available"
                          raise ValueError(msg)
              
                      # Get descriptions for all items
                      descriptions = []
                      for label, item in label_map.items():
                          item_desc = await to_prompt(item)
                          descriptions.append(f"{label}:\n{item_desc}")
              
                      default_prompt = f"""Task/Decision: {task}
              
              Available options:
              {"-" * 40}
              {"\n\n".join(descriptions)}
              {"-" * 40}
              
              Select ONE option by its exact label."""
              
                      # Get LLM's string-based decision
                      result = await self.agent.run(prompt or default_prompt, output_type=LLMPick)
              
                      # Convert to type-safe decision
                      if result.content.selection not in label_map:
                          msg = f"Invalid selection: {result.content.selection}"
                          raise ValueError(msg)
              
                      selected = cast(T, label_map[result.content.selection])
                      return Pick(selection=selected, reason=result.content.reason)
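
              Usage sketch: with a mapping, the dict keys are the labels the model must answer with, selection is the mapped value, and reason carries the rationale:

              from llmling_agent import Agent
              from llmling_agent.agent.interactions import Interactions

              async def choose_model(agent: Agent) -> str:
                  talk = Interactions(agent)
                  decision = await talk.pick(
                      {"fast": "gpt-4o-mini", "thorough": "gpt-4o"},
                      task="Pick a model for a quick smoke test of the pipeline.",
                  )
                  print(decision.reason)
                  return decision.selection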
              

              pick_multiple async

              pick_multiple(
                  selections: Sequence[T],
                  task: str,
                  *,
                  min_picks: int = 1,
                  max_picks: int | None = None,
                  prompt: AnyPromptType | None = None
              ) -> MultiPick[T]
              
              pick_multiple(
                  selections: Mapping[str, T],
                  task: str,
                  *,
                  min_picks: int = 1,
                  max_picks: int | None = None,
                  prompt: AnyPromptType | None = None
              ) -> MultiPick[T]
              
              pick_multiple(
                  selections: BaseTeam[TDeps, Any],
                  task: str,
                  *,
                  min_picks: int = 1,
                  max_picks: int | None = None,
                  prompt: AnyPromptType | None = None
              ) -> MultiPick[MessageNode[TDeps, Any]]
              
              pick_multiple(
                  selections: AgentPool,
                  task: str,
                  *,
                  min_picks: int = 1,
                  max_picks: int | None = None,
                  prompt: AnyPromptType | None = None
              ) -> MultiPick[Agent[Any, Any]]
              
              pick_multiple(
                  selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                  task: str,
                  *,
                  min_picks: int = 1,
                  max_picks: int | None = None,
                  prompt: AnyPromptType | None = None
              ) -> MultiPick[T]
              

              Pick multiple options from available choices.

              Parameters:

              - selections (Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any], required): What to pick from.
              - task (str, required): Task/decision description.
              - min_picks (int, default 1): Minimum number of selections required.
              - max_picks (int | None, default None): Maximum number of selections (None for unlimited).
              - prompt (AnyPromptType | None, default None): Optional custom selection prompt.
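
              Usage sketch (the method's source follows below): with a plain sequence the items are auto-labeled, and selections returns the chosen items themselves:

              from llmling_agent import Agent
              from llmling_agent.agent.interactions import Interactions

              async def choose_signals(agent: Agent) -> list[str]:
                  talk = Interactions(agent)
                  result = await talk.pick_multiple(
                      ["logging", "metrics", "tracing", "profiling"],
                      task="Which two signals matter most for debugging tail latency?",
                      min_picks=2,
                      max_picks=2,
                  )
                  return result.selections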
              Source code in src/llmling_agent/agent/interactions.py, lines 297-380
                  async def pick_multiple[T](
                      self,
                      selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                      task: str,
                      *,
                      min_picks: int = 1,
                      max_picks: int | None = None,
                      prompt: AnyPromptType | None = None,
                  ) -> MultiPick[T]:
                      """Pick multiple options from available choices.
              
                      Args:
                          selections: What to pick from
                          task: Task/decision description
                          min_picks: Minimum number of selections required
                          max_picks: Maximum number of selections (None for unlimited)
                          prompt: Optional custom selection prompt
                      """
                      from toprompt import to_prompt
              
                      from llmling_agent import AgentPool
                      from llmling_agent.delegation.base_team import BaseTeam
              
                      match selections:
                          case Mapping():
                              label_map = selections
                              items: list[Any] = list(selections.values())
                          case BaseTeam():
                              items = list(selections.agents)
                              label_map = {get_label(item): item for item in items}
                          case AgentPool():
                              items = list(selections.agents.values())
                              label_map = {get_label(item): item for item in items}
                          case _:
                              items = list(selections)
                              label_map = {get_label(item): item for item in items}
              
                      if not items:
                          msg = "No choices available"
                          raise ValueError(msg)
              
                      if max_picks is not None and max_picks < min_picks:
                          msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                          raise ValueError(msg)
              
                      descriptions = []
                      for label, item in label_map.items():
                          item_desc = await to_prompt(item)
                          descriptions.append(f"{label}:\n{item_desc}")
              
                      picks_info = (
                          f"Select between {min_picks} and {max_picks}"
                          if max_picks is not None
                          else f"Select at least {min_picks}"
                      )
              
                      default_prompt = f"""Task/Decision: {task}
              
              Available options:
              {"-" * 40}
              {"\n\n".join(descriptions)}
              {"-" * 40}
              
              {picks_info} options by their exact labels.
              List your selections, one per line, followed by your reasoning."""
              
                      result = await self.agent.run(prompt or default_prompt, output_type=LLMMultiPick)
              
                      # Validate selections
                      invalid = [s for s in result.content.selections if s not in label_map]
                      if invalid:
                          msg = f"Invalid selections: {', '.join(invalid)}"
                          raise ValueError(msg)
                      num_picks = len(result.content.selections)
                      if num_picks < min_picks:
                          msg = f"Too few selections: got {num_picks}, need {min_picks}"
                          raise ValueError(msg)
              
                      if max_picks and num_picks > max_picks:
                          msg = f"Too many selections: got {num_picks}, max {max_picks}"
                          raise ValueError(msg)
              
                      selected = [cast(T, label_map[label]) for label in result.content.selections]
                      return MultiPick(selections=selected, reason=result.content.reason)
              

              SlashedAgent

              Wrapper around Agent that handles slash commands in streams.

               Uses the "commands first" strategy from the ACP adapter:

               1. Execute all slash commands first.
               2. Then process remaining content through the wrapped agent.
               3. If only commands, end without LLM processing.
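
               A minimal sketch of that flow, assuming an existing Agent instance named `agent` (the `/help` command is illustrative):

               slashed = SlashedAgent(agent)

               # "/help" runs as a command first; only the remaining prompt
               # is forwarded to the wrapped agent afterwards.
               async for event in slashed.run_stream("/help", "Summarize the report"):
                   print(event)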

              Source code in src/llmling_agent/agent/slashed_agent.py
              class SlashedAgent[TDeps, OutputDataT]:
                  """Wrapper around Agent that handles slash commands in streams.
              
                  Uses the "commands first" strategy from the ACP adapter:
                  1. Execute all slash commands first
                  2. Then process remaining content through wrapped agent
                  3. If only commands, end without LLM processing
                  """
              
                  def __init__(
                      self,
                      agent: Agent[TDeps, OutputDataT],
                      command_store: CommandStore | None = None,
                      *,
                      context_data_factory: Callable[[], Any] | None = None,
                  ) -> None:
                      """Initialize with wrapped agent and command store.
              
                      Args:
                          agent: The agent to wrap
                          command_store: Command store for slash commands (creates default if None)
                          context_data_factory: Optional factory for creating command context data
                      """
                      self.agent = agent
                      self._context_data_factory = context_data_factory
                      self._event_queue: asyncio.Queue[CommandStoreEvent] | None = None
              
                      # Create store with our streaming event handler
                      if command_store is None:
                          from slashed import CommandStore
              
                          from llmling_agent_commands import get_commands
              
                          self.command_store = CommandStore(
                              event_handler=self._bridge_events_to_queue,
                              commands=get_commands(),
                          )
              
                      else:
                          self.command_store = command_store
              
                  async def _bridge_events_to_queue(self, event: CommandStoreEvent) -> None:
                      """Bridge store events to async queue during command execution."""
                      if self._event_queue:
                          await self._event_queue.put(event)
              
                  def _is_slash_command(self, text: str) -> bool:
                      """Check if text starts with a slash command.
              
                      Args:
                          text: Text to check
              
                      Returns:
                          True if text is a slash command
                      """
                      return bool(SLASH_PATTERN.match(text.strip()))
              
                  async def _execute_slash_command_streaming(
                      self, command_text: str
                  ) -> AsyncGenerator[CommandOutputEvent | CommandCompleteEvent]:
                      """Execute a single slash command and yield events as they happen.
              
                      Args:
                          command_text: Full command text including slash
              
                      Yields:
                          Command output and completion events
                      """
                      parsed = _parse_slash_command(command_text)
                      if not parsed:
                          logger.warning("Invalid slash command", command=command_text)
                          yield CommandCompleteEvent(command="unknown", success=False)
                          return
              
                      cmd_name, args = parsed
              
                      # Set up event queue for this command execution
                      self._event_queue = asyncio.Queue()
                      context_data = (  # Create command context
                          self._context_data_factory()
                          if self._context_data_factory
                          else self.agent.context
                      )
              
                      cmd_ctx = self.command_store.create_context(data=context_data)
                      command_str = f"{cmd_name} {args}".strip()
                      execute_task = asyncio.create_task(
                          self.command_store.execute_command(command_str, cmd_ctx)
                      )
              
                      success = True
                      try:
                          # Yield events from queue as command runs
                          while not execute_task.done():
                              try:
                                  # Wait for events with short timeout to check task completion
                                  event = await asyncio.wait_for(self._event_queue.get(), timeout=0.1)
                                  # Convert store events to our stream events
                                  match event:
                                      case SlashedCommandOutputEvent(output=output):
                                          yield CommandOutputEvent(command=cmd_name, output=output)
                                      case CommandExecutedEvent(success=False, error=error) if error:
                                          yield CommandOutputEvent(
                                              command=cmd_name,
                                              output=f"Command error: {error}",
                                          )
                                          success = False
              
                              except TimeoutError:
                                  # No events in queue, continue checking if command is done
                                  continue
              
                          # Ensure command task completes and handle any remaining events
                          try:
                              await execute_task
                          except Exception as e:
                              logger.exception("Command execution failed", command=cmd_name)
                              success = False
                              yield CommandOutputEvent(command=cmd_name, output=f"Command error: {e}")
              
                          # Drain any remaining events from queue
                          while not self._event_queue.empty():
                              try:
                                  match self._event_queue.get_nowait():
                                      case SlashedCommandOutputEvent(output=output):
                                          yield CommandOutputEvent(command=cmd_name, output=output)
                              except asyncio.QueueEmpty:
                                  break
              
                          # Always yield completion event
                          yield CommandCompleteEvent(command=cmd_name, success=success)
              
                      finally:
                          # Clean up event queue
                          self._event_queue = None
              
                  async def run_stream(
                      self,
                      *prompts: PromptCompatible,
                      **kwargs: Any,
                  ) -> AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]:
                      """Run agent with slash command support.
              
                      Separates slash commands from regular prompts, executes commands first,
                      then processes remaining content through the wrapped agent.
              
                      Args:
                          *prompts: Input prompts (may include slash commands)
                          **kwargs: Additional arguments passed to agent.run_stream
              
                      Yields:
                          Stream events from command execution and agent processing
                      """
                      # Separate slash commands from regular content
                      commands: list[str] = []
                      regular_prompts: list[Any] = []
              
                      for prompt in prompts:
                          if isinstance(prompt, str) and self._is_slash_command(prompt):
                              logger.debug("Found slash command", command=prompt)
                              commands.append(prompt.strip())
                          else:
                              regular_prompts.append(prompt)
              
                      # Execute all commands first with streaming
                      if commands:
                          for command in commands:
                              logger.info("Processing slash command", command=command)
                              async for cmd_event in self._execute_slash_command_streaming(command):
                                  yield cmd_event
              
                      # If we have regular content, process it through the agent
                      if regular_prompts:
                          logger.debug(
                              "Processing prompts through agent", num_prompts=len(regular_prompts)
                          )
                          async for event in self.agent.run_stream(*regular_prompts, **kwargs):
                              yield event
              
                      # If we only had commands and no regular content, we're done
                      # (no additional events needed)
              
                  def __getattr__(self, name: str) -> Any:
                      """Delegate attribute access to wrapped agent.
              
                      Args:
                          name: Attribute name
              
                      Returns:
                          Attribute value from wrapped agent
                      """
                      return getattr(self.agent, name)
              

              __getattr__

              __getattr__(name: str) -> Any
              

              Delegate attribute access to wrapped agent.

              Parameters:

               name (str): Attribute name. Required.

              Returns:

               Any: Attribute value from wrapped agent.

              Source code in src/llmling_agent/agent/slashed_agent.py
              def __getattr__(self, name: str) -> Any:
                  """Delegate attribute access to wrapped agent.
              
                  Args:
                      name: Attribute name
              
                  Returns:
                      Attribute value from wrapped agent
                  """
                  return getattr(self.agent, name)
              

              __init__

              __init__(
                  agent: Agent[TDeps, OutputDataT],
                  command_store: CommandStore | None = None,
                  *,
                  context_data_factory: Callable[[], Any] | None = None
              ) -> None
              

              Initialize with wrapped agent and command store.

              Parameters:

               agent (Agent[TDeps, OutputDataT]): The agent to wrap. Required.
               command_store (CommandStore | None): Command store for slash commands (creates default if None). Default: None.
               context_data_factory (Callable[[], Any] | None): Optional factory for creating command context data. Default: None.
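
               A construction sketch; `my_store` is a placeholder for a preconfigured slashed CommandStore. Note that a store passed in explicitly is used as-is, so attach your own event handler to it if you want streamed command output.

               slashed = SlashedAgent(
                   agent,
                   command_store=my_store,
                   context_data_factory=lambda: {"user": "alice"},
               )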
              Source code in src/llmling_agent/agent/slashed_agent.py
              def __init__(
                  self,
                  agent: Agent[TDeps, OutputDataT],
                  command_store: CommandStore | None = None,
                  *,
                  context_data_factory: Callable[[], Any] | None = None,
              ) -> None:
                  """Initialize with wrapped agent and command store.
              
                  Args:
                      agent: The agent to wrap
                      command_store: Command store for slash commands (creates default if None)
                      context_data_factory: Optional factory for creating command context data
                  """
                  self.agent = agent
                  self._context_data_factory = context_data_factory
                  self._event_queue: asyncio.Queue[CommandStoreEvent] | None = None
              
                  # Create store with our streaming event handler
                  if command_store is None:
                      from slashed import CommandStore
              
                      from llmling_agent_commands import get_commands
              
                      self.command_store = CommandStore(
                          event_handler=self._bridge_events_to_queue,
                          commands=get_commands(),
                      )
              
                  else:
                      self.command_store = command_store
              

              run_stream async

              run_stream(
                  *prompts: PromptCompatible, **kwargs: Any
              ) -> AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]
              

              Run agent with slash command support.

              Separates slash commands from regular prompts, executes commands first, then processes remaining content through the wrapped agent.

              Parameters:

               *prompts (PromptCompatible): Input prompts (may include slash commands). Default: ().
               **kwargs (Any): Additional arguments passed to agent.run_stream. Default: {}.

              Yields:

               AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]: Stream events from command execution and agent processing.
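
               A sketch of consuming the mixed stream; the import path for the command event classes is an assumption based on this module's location, and `slashed` is an existing SlashedAgent.

               from llmling_agent.agent.slashed_agent import (
                   CommandCompleteEvent,
                   CommandOutputEvent,
               )

               async for event in slashed.run_stream("/help", "What changed?"):
                   match event:
                       case CommandOutputEvent(command=cmd, output=text):
                           print(f"[{cmd}] {text}")
                       case CommandCompleteEvent(command=cmd, success=ok):
                           print(f"[{cmd}] finished (success={ok})")
                       case _:
                           pass  # regular agent stream events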

              Source code in src/llmling_agent/agent/slashed_agent.py
              async def run_stream(
                  self,
                  *prompts: PromptCompatible,
                  **kwargs: Any,
              ) -> AsyncGenerator[SlashedAgentStreamEvent[OutputDataT]]:
                  """Run agent with slash command support.
              
                  Separates slash commands from regular prompts, executes commands first,
                  then processes remaining content through the wrapped agent.
              
                  Args:
                      *prompts: Input prompts (may include slash commands)
                      **kwargs: Additional arguments passed to agent.run_stream
              
                  Yields:
                      Stream events from command execution and agent processing
                  """
                  # Separate slash commands from regular content
                  commands: list[str] = []
                  regular_prompts: list[Any] = []
              
                  for prompt in prompts:
                      if isinstance(prompt, str) and self._is_slash_command(prompt):
                          logger.debug("Found slash command", command=prompt)
                          commands.append(prompt.strip())
                      else:
                          regular_prompts.append(prompt)
              
                  # Execute all commands first with streaming
                  if commands:
                      for command in commands:
                          logger.info("Processing slash command", command=command)
                          async for cmd_event in self._execute_slash_command_streaming(command):
                              yield cmd_event
              
                  # If we have regular content, process it through the agent
                  if regular_prompts:
                      logger.debug(
                          "Processing prompts through agent", num_prompts=len(regular_prompts)
                      )
                      async for event in self.agent.run_stream(*regular_prompts, **kwargs):
                          yield event
              

              SystemPrompts

              Manages system prompts for an agent.
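
               A minimal construction sketch; `fetch_user_profile` and `agent` are placeholders. With dynamic=False the prompts are evaluated once (via refresh_cache) on the first format call and reused afterwards.

               sys_prompts = SystemPrompts(
                   prompts=["You are a concise assistant.", fetch_user_profile],
                   dynamic=False,
                   inject_agent_info=True,
               )
               text = await sys_prompts.format_system_prompt(agent)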

              Source code in src/llmling_agent/agent/sys_prompts.py
              class SystemPrompts:
                  """Manages system prompts for an agent."""
              
                  def __init__(
                      self,
                      prompts: AnyPromptType | list[AnyPromptType] | None = None,
                      template: str | None = None,
                      dynamic: bool = True,
                      context: AgentContext | None = None,
                      inject_agent_info: bool = True,
                      inject_tools: ToolInjectionMode = "off",
                      tool_usage_style: ToolUsageStyle = "suggestive",
                  ):
                      """Initialize prompt manager."""
                      from jinjarope import Environment
                      from toprompt import to_prompt
              
                      match prompts:
                          case list():
                              self.prompts = prompts
                          case None:
                              self.prompts = []
                          case _:
                              self.prompts = [prompts]
                      self.context = context
                      self.template = template
                      self.dynamic = dynamic
                      self.inject_agent_info = inject_agent_info
                      self.inject_tools = inject_tools
                      self.tool_usage_style = tool_usage_style
                      self._cached = False
                      self._env = Environment(enable_async=True)
                      self._env.filters["to_prompt"] = to_prompt
              
                  def __repr__(self) -> str:
                      return (
                          f"SystemPrompts(prompts={len(self.prompts)}, "
                          f"dynamic={self.dynamic}, inject_agent_info={self.inject_agent_info}, "
                          f"inject_tools={self.inject_tools!r})"
                      )
              
                  def __len__(self) -> int:
                      return len(self.prompts)
              
                  def __getitem__(self, idx: int | slice) -> AnyPromptType | list[AnyPromptType]:
                      return self.prompts[idx]
              
                  async def add_by_reference(self, reference: str):
                      """Add a system prompt using reference syntax.
              
                      Args:
                          reference: [provider:]identifier[@version][?var1=val1,...]
              
                      Examples:
                          await sys_prompts.add_by_reference("code_review?language=python")
                          await sys_prompts.add_by_reference("langfuse:expert@v2")
                      """
                      if not self.context:
                          msg = "No context available to resolve prompts"
                          raise RuntimeError(msg)
              
                      try:
                          content = await self.context.prompt_manager.get(reference)
                          self.prompts.append(content)
                      except Exception as e:
                          msg = f"Failed to add prompt {reference!r}"
                          raise RuntimeError(msg) from e
              
                  async def add(
                      self,
                      identifier: str,
                      *,
                      provider: str | None = None,
                      version: str | None = None,
                      variables: dict[str, Any] | None = None,
                  ):
                      """Add a system prompt using explicit parameters.
              
                      Args:
                          identifier: Prompt identifier/name
                          provider: Provider name (None = builtin)
                          version: Optional version string
                          variables: Optional template variables
              
                      Examples:
                          await sys_prompts.add("code_review", variables={"language": "python"})
                          await sys_prompts.add("expert", provider="langfuse", version="v2")
                      """
                      if not self.context:
                          msg = "No context available to resolve prompts"
                          raise RuntimeError(msg)
              
                      try:
                          content = await self.context.prompt_manager.get_from(
                              identifier,
                              provider=provider,
                              version=version,
                              variables=variables,
                          )
                          self.prompts.append(content)
                      except Exception as e:
                          ref = f"{provider + ':' if provider else ''}{identifier}"
                          msg = f"Failed to add prompt {ref!r}"
                          raise RuntimeError(msg) from e
              
                  def clear(self):
                      """Clear all system prompts."""
                      self.prompts = []
              
                  async def refresh_cache(self):
                      """Force re-evaluation of prompts."""
                      from toprompt import to_prompt
              
                      evaluated = []
                      for prompt in self.prompts:
                          result = await to_prompt(prompt)
                          evaluated.append(result)
                      self.prompts = evaluated
                      self._cached = True
              
                  @asynccontextmanager
                  async def temporary_prompt(
                      self, prompt: AnyPromptType, exclusive: bool = False
                  ) -> AsyncIterator[None]:
                      """Temporarily override system prompts.
              
                      Args:
                          prompt: Single prompt or sequence of prompts to use temporarily
                           exclusive: Whether to only use the given prompt. If False, the prompt
                                      is appended to the agent's prompts temporarily.
                      """
                      from toprompt import to_prompt
              
                      original_prompts = self.prompts.copy()
                      new_prompt = await to_prompt(prompt)
                       self.prompts = [new_prompt] if exclusive else [*self.prompts, new_prompt]
                      try:
                          yield
                      finally:
                          self.prompts = original_prompts
              
                  async def format_system_prompt(self, agent: Agent[Any, Any]) -> str:
                      """Format complete system prompt."""
                      if not self.dynamic and not self._cached:
                          await self.refresh_cache()
              
                      template = self._env.from_string(self.template or DEFAULT_TEMPLATE)
                      result = await template.render_async(
                          agent=agent,
                          prompts=self.prompts,
                          dynamic=self.dynamic,
                          inject_agent_info=self.inject_agent_info,
                          inject_tools=self.inject_tools,
                          tool_usage_style=self.tool_usage_style,
                      )
                      return result.strip()
              

              __init__

              __init__(
                  prompts: AnyPromptType | list[AnyPromptType] | None = None,
                  template: str | None = None,
                  dynamic: bool = True,
                  context: AgentContext | None = None,
                  inject_agent_info: bool = True,
                  inject_tools: ToolInjectionMode = "off",
                  tool_usage_style: ToolUsageStyle = "suggestive",
              )
              

              Initialize prompt manager.

              Source code in src/llmling_agent/agent/sys_prompts.py
              def __init__(
                  self,
                  prompts: AnyPromptType | list[AnyPromptType] | None = None,
                  template: str | None = None,
                  dynamic: bool = True,
                  context: AgentContext | None = None,
                  inject_agent_info: bool = True,
                  inject_tools: ToolInjectionMode = "off",
                  tool_usage_style: ToolUsageStyle = "suggestive",
              ):
                  """Initialize prompt manager."""
                  from jinjarope import Environment
                  from toprompt import to_prompt
              
                  match prompts:
                      case list():
                          self.prompts = prompts
                      case None:
                          self.prompts = []
                      case _:
                          self.prompts = [prompts]
                  self.context = context
                  self.template = template
                  self.dynamic = dynamic
                  self.inject_agent_info = inject_agent_info
                  self.inject_tools = inject_tools
                  self.tool_usage_style = tool_usage_style
                  self._cached = False
                  self._env = Environment(enable_async=True)
                  self._env.filters["to_prompt"] = to_prompt
              

              add async

              add(
                  identifier: str,
                  *,
                  provider: str | None = None,
                  version: str | None = None,
                  variables: dict[str, Any] | None = None
              )
              

              Add a system prompt using explicit parameters.

              Parameters:

               identifier (str): Prompt identifier/name. Required.
               provider (str | None): Provider name (None = builtin). Default: None.
               version (str | None): Optional version string. Default: None.
               variables (dict[str, Any] | None): Optional template variables. Default: None.

              Examples:

               await sys_prompts.add("code_review", variables={"language": "python"})
               await sys_prompts.add("expert", provider="langfuse", version="v2")

              Source code in src/llmling_agent/agent/sys_prompts.py
              async def add(
                  self,
                  identifier: str,
                  *,
                  provider: str | None = None,
                  version: str | None = None,
                  variables: dict[str, Any] | None = None,
              ):
                  """Add a system prompt using explicit parameters.
              
                  Args:
                      identifier: Prompt identifier/name
                      provider: Provider name (None = builtin)
                      version: Optional version string
                      variables: Optional template variables
              
                  Examples:
                      await sys_prompts.add("code_review", variables={"language": "python"})
                      await sys_prompts.add("expert", provider="langfuse", version="v2")
                  """
                  if not self.context:
                      msg = "No context available to resolve prompts"
                      raise RuntimeError(msg)
              
                  try:
                      content = await self.context.prompt_manager.get_from(
                          identifier,
                          provider=provider,
                          version=version,
                          variables=variables,
                      )
                      self.prompts.append(content)
                  except Exception as e:
                      ref = f"{provider + ':' if provider else ''}{identifier}"
                      msg = f"Failed to add prompt {ref!r}"
                      raise RuntimeError(msg) from e
              

              add_by_reference async

              add_by_reference(reference: str)
              

              Add a system prompt using reference syntax.

              Parameters:

               reference (str): [provider:]identifier[@version][?var1=val1,...]. Required.

              Examples:

               await sys_prompts.add_by_reference("code_review?language=python")
               await sys_prompts.add_by_reference("langfuse:expert@v2")

              Source code in src/llmling_agent/agent/sys_prompts.py
              async def add_by_reference(self, reference: str):
                  """Add a system prompt using reference syntax.
              
                  Args:
                      reference: [provider:]identifier[@version][?var1=val1,...]
              
                  Examples:
                      await sys_prompts.add_by_reference("code_review?language=python")
                      await sys_prompts.add_by_reference("langfuse:expert@v2")
                  """
                  if not self.context:
                      msg = "No context available to resolve prompts"
                      raise RuntimeError(msg)
              
                  try:
                      content = await self.context.prompt_manager.get(reference)
                      self.prompts.append(content)
                  except Exception as e:
                      msg = f"Failed to add prompt {reference!r}"
                      raise RuntimeError(msg) from e
              

              clear

              clear()
              

              Clear all system prompts.

              Source code in src/llmling_agent/agent/sys_prompts.py
              def clear(self):
                  """Clear all system prompts."""
                  self.prompts = []
              

              format_system_prompt async

              format_system_prompt(agent: Agent[Any, Any]) -> str
              

              Format complete system prompt.

              Source code in src/llmling_agent/agent/sys_prompts.py
              async def format_system_prompt(self, agent: Agent[Any, Any]) -> str:
                  """Format complete system prompt."""
                  if not self.dynamic and not self._cached:
                      await self.refresh_cache()
              
                  template = self._env.from_string(self.template or DEFAULT_TEMPLATE)
                  result = await template.render_async(
                      agent=agent,
                      prompts=self.prompts,
                      dynamic=self.dynamic,
                      inject_agent_info=self.inject_agent_info,
                      inject_tools=self.inject_tools,
                      tool_usage_style=self.tool_usage_style,
                  )
                  return result.strip()
              

              refresh_cache async

              refresh_cache()
              

              Force re-evaluation of prompts.
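
               Concretely, every stored prompt is converted to a plain string via to_prompt and the result is marked as cached; with dynamic=False, format_system_prompt triggers this automatically on first use.

               await sys_prompts.refresh_cache()
               # sys_prompts.prompts now holds plain, pre-rendered strings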

              Source code in src/llmling_agent/agent/sys_prompts.py
              async def refresh_cache(self):
                  """Force re-evaluation of prompts."""
                  from toprompt import to_prompt
              
                  evaluated = []
                  for prompt in self.prompts:
                      result = await to_prompt(prompt)
                      evaluated.append(result)
                  self.prompts = evaluated
                  self._cached = True
              

              temporary_prompt async

              temporary_prompt(prompt: AnyPromptType, exclusive: bool = False) -> AsyncIterator[None]
              

              Temporarily override system prompts.

              Parameters:

               prompt (AnyPromptType): Single prompt or sequence of prompts to use temporarily. Required.
               exclusive (bool): Whether to only use the given prompt. If False, the prompt is appended to the agent's prompts temporarily. Default: False.
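
               A usage sketch, assuming an existing SystemPrompts instance and agent:

               async with sys_prompts.temporary_prompt("Answer in French.", exclusive=False):
                   result = await agent.run("Explain the config format")
               # the original prompts are restored on exit, even if the block raises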
              Source code in src/llmling_agent/agent/sys_prompts.py
              @asynccontextmanager
              async def temporary_prompt(
                  self, prompt: AnyPromptType, exclusive: bool = False
              ) -> AsyncIterator[None]:
                  """Temporarily override system prompts.
              
                  Args:
                      prompt: Single prompt or sequence of prompts to use temporarily
                       exclusive: Whether to only use the given prompt. If False, the prompt
                                  is appended to the agent's prompts temporarily.
                  """
                  from toprompt import to_prompt
              
                  original_prompts = self.prompts.copy()
                  new_prompt = await to_prompt(prompt)
                   self.prompts = [new_prompt] if exclusive else [*self.prompts, new_prompt]
                  try:
                      yield
                  finally:
                      self.prompts = original_prompts