
llmling_agent

Class info

Classes

Name                   Module                             Description
Agent                  llmling_agent.agent.agent          Agent for AI-powered interaction with LLMling resources and tools.
AgentConfig            llmling_agent.models.agents        Configuration for a single agent in the system.
AgentPool              llmling_agent.delegation.pool      Pool of initialized agents.
AgentPoolView          llmling_agent.chat_session.base    User's view and control point for interacting with an agent in a pool.
AgentRouter            llmling_agent.delegation.router    Base class for routing messages between agents.
AgentsManifest         llmling_agent.models.agents        Complete agent configuration manifest defining all available agents.
AwaitResponseDecision  llmling_agent.delegation.router    Forward message and wait for response.
CallbackRouter         llmling_agent.delegation.router    Router using callback function for decisions.
ChatMessage            llmling_agent.models.messages      Common message format for all UI types.
Decision               llmling_agent.delegation.router    Base class for all routing decisions.
EndDecision            llmling_agent.delegation.router    End the conversation.
RouteDecision          llmling_agent.delegation.router    Forward message without waiting for response.
RuleRouter             llmling_agent.delegation.router    Router using predefined rules.
SlashedAgent           llmling_agent.agent.slashed_agent  Wraps an agent with slash command support.
StructuredAgent        llmling_agent.agent.structured     Wrapper for Agent that enforces a specific result type.
SystemPrompt           llmling_agent.models.prompts       System prompt configuration for agent behavior control.

🛈 DocStrings

Agent configuration and creation.

Agent

Bases: TaskManagerMixin

Agent for AI-powered interaction with LLMling resources and tools.

Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]

This agent integrates LLMling's resource system with PydanticAI's agent capabilities. It provides:

- Access to resources through RuntimeConfig
- Tool registration for resource operations
- System prompt customization
- Signals
- Message history management
- Database logging

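A minimal construction sketch (assuming the import path from the class table above; the model identifier and prompts are illustrative placeholders, not defaults):

    from llmling_agent.agent.agent import Agent

    # Keyword arguments mirror the __init__ signature shown in the source below.
    agent = Agent(
        name="docs-helper",
        description="Answers questions about project resources",
        model="openai:gpt-4o-mini",  # assumed model string; any supported ModelType works
        system_prompt=["You are a concise documentation assistant."],
        retries=2,
        enable_db_logging=True,
    )
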
Source code in src/llmling_agent/agent/agent.py
                              class Agent[TDeps](TaskManagerMixin):
                                  """Agent for AI-powered interaction with LLMling resources and tools.
                              
                                  Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]
                              
                                  This agent integrates LLMling's resource system with PydanticAI's agent capabilities.
                                  It provides:
                                  - Access to resources through RuntimeConfig
                                  - Tool registration for resource operations
                                  - System prompt customization
                                  - Signals
                                  - Message history management
                                  - Database logging
                                  """
                              
                                  # this fixes weird mypy issue
                                  conversation: ConversationManager
                                  connections: TalkManager
                                  talk: Interactions
                                  description: str | None
                              
                                  message_received = Signal(ChatMessage[str])  # Always string
                                  message_sent = Signal(ChatMessage)
                                  tool_used = Signal(ToolCallInfo)
                                  model_changed = Signal(object)  # Model | None
                                  chunk_streamed = Signal(str, str)  # (chunk, message_id)
                                  outbox = Signal(ChatMessage[Any], str)  # message, prompt
                              
                                  def __init__(
                                      self,
                                      runtime: RuntimeConfig | Config | StrPath | None = None,
                                      *,
                                      context: AgentContext[TDeps] | None = None,
                                      agent_type: AgentType = "ai",
                                      session: SessionIdType | SessionQuery = None,
                                      model: ModelType = None,
                                      system_prompt: str | Sequence[str] = (),
                                      name: str = "llmling-agent",
                                      description: str | None = None,
                                      tools: Sequence[ToolType] | None = None,
                                      mcp_servers: list[str | MCPServerConfig] | None = None,
                                      retries: int = 1,
                                      result_retries: int | None = None,
                                      tool_choice: bool | str | list[str] = True,
                                      end_strategy: EndStrategy = "early",
                                      defer_model_check: bool = False,
                                      enable_db_logging: bool = True,
                                      confirmation_callback: ConfirmationCallback | None = None,
                                      debug: bool = False,
                                      **kwargs,
                                  ):
                                      """Initialize agent with runtime configuration.
                              
                                      Args:
                                          runtime: Runtime configuration providing access to resources/tools
                                          context: Agent context with capabilities and configuration
                                          agent_type: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                                          session: Optional id or Session query to recover a conversation
                                          model: The default model to use (defaults to GPT-4)
                                          system_prompt: Static system prompts to use for this agent
                                          name: Name of the agent for logging
                                          description: Description of the Agent ("what it can do")
                                          tools: List of tools to register with the agent
                                          mcp_servers: MCP servers to connect to
                                          retries: Default number of retries for failed operations
                                          result_retries: Max retries for result validation (defaults to retries)
                                               tool_choice: Restrict the agent to a fixed tool (or list of tools), or temporarily disable tool usage
                                          end_strategy: Strategy for handling tool calls that are requested alongside
                                                        a final result
                                               defer_model_check: Whether to defer model evaluation until first run
                                               enable_db_logging: Whether to enable database logging for the agent
                                               confirmation_callback: Callback for confirmation prompts
                                               debug: Whether to enable debug mode
                                               kwargs: Additional arguments for the PydanticAI agent
                                      """
                                      super().__init__()
                                      self._debug = debug
                                      self._result_type = None
                              
                                       # save some stuff for async init
                                      self._owns_runtime = False
                                      self._mcp_servers = [
                                          StdioMCPServer(command=s.split()[0], args=s.split()[1:])
                                          if isinstance(s, str)
                                          else s
                                          for s in (mcp_servers or [])
                                      ]
                              
                                      # prepare context
                                      ctx = context or AgentContext[TDeps].create_default(name)
                                      ctx.confirmation_callback = confirmation_callback
                                      match runtime:
                                          case None:
                                              ctx.runtime = RuntimeConfig.from_config(Config())
                                          case Config():
                                              ctx.runtime = RuntimeConfig.from_config(runtime)
                                          case str() | PathLike():
                                              ctx.runtime = RuntimeConfig.from_config(Config.from_file(runtime))
                                          case RuntimeConfig():
                                              ctx.runtime = runtime
                                      # connect signals
                                      self.message_sent.connect(self._forward_message)
                              
                                      # Initialize tool manager
                                      all_tools = list(tools or [])
                                      self._tool_manager = ToolManager(all_tools, tool_choice=tool_choice, context=ctx)
                              
                                      # set up conversation manager
                                      config_prompts = ctx.config.system_prompts if ctx else []
                                      all_prompts = list(config_prompts)
                                      if isinstance(system_prompt, str):
                                          all_prompts.append(system_prompt)
                                      else:
                                          all_prompts.extend(system_prompt)
                                      self.conversation = ConversationManager(self, session, all_prompts)
                              
                                      # Initialize provider based on type
                                      match agent_type:
                                          case "ai":
                                              if model and not isinstance(model, str):
                                                  from pydantic_ai import models
                              
                                                  assert isinstance(model, models.Model)
                                              self._provider: AgentProvider = PydanticAIProvider(
                                                  model=model,  # pyright: ignore
                                                  system_prompt=system_prompt,
                                                  retries=retries,
                                                  end_strategy=end_strategy,
                                                  result_retries=result_retries,
                                                  defer_model_check=defer_model_check,
                                                  debug=debug,
                                                  **kwargs,
                                              )
                                          case "human":
                                              self._provider = HumanProvider(name=name, debug=debug)
                                          case "litellm":
                                              from llmling_agent_providers.litellm_provider import LiteLLMProvider
                              
                                              self._provider = LiteLLMProvider(name=name, debug=debug, retries=retries)
                                          case AgentProvider():
                                              self._provider = agent_type
                                          case _:
                                               msg = f"Invalid agent type: {agent_type}"
                                              raise ValueError(msg)
                                      self._provider.tool_manager = self._tool_manager
                                      self._provider.context = ctx
                                      self._provider.conversation = self.conversation
                                      ctx.capabilities.register_capability_tools(self)
                              
                                      # Forward provider signals
                                      self._provider.chunk_streamed.connect(self.chunk_streamed.emit)
                                      self._provider.model_changed.connect(self.model_changed.emit)
                                      self._provider.tool_used.connect(self.tool_used.emit)
                              
                                      self.name = name
                                      self.description = description
                                      msg = "Initialized %s (model=%s)"
                                      logger.debug(msg, self.name, model)
                              
                                      from llmling_agent.agent import AgentLogger
                                      from llmling_agent.agent.talk import Interactions
                                      from llmling_agent.events import EventManager
                              
                                      self.connections = TalkManager(self)
                                      self.talk = Interactions(self)
                              
                                      self._logger = AgentLogger(self, enable_db_logging=enable_db_logging)
                                      self._events = EventManager(self, enable_events=True)
                              
                                      self._background_task: asyncio.Task[Any] | None = None
                              
                                  def __repr__(self) -> str:
                                      desc = f", {self.description!r}" if self.description else ""
                                      tools = f", tools={len(self.tools)}" if self.tools else ""
                                      return f"Agent({self._provider!r}{desc}{tools})"
                              
                                  def __prompt__(self) -> str:
                                      parts = [
                                          f"Agent: {self.name}",
                                          f"Type: {self._provider.__class__.__name__}",
                                          f"Model: {self.model_name or 'default'}",
                                      ]
                                      if self.description:
                                          parts.append(f"Description: {self.description}")
                                      parts.extend([self.tools.__prompt__(), self.conversation.__prompt__()])
                              
                                      return "\n".join(parts)
                              
                                  async def __aenter__(self) -> Self:
                                      """Enter async context and set up MCP servers."""
                                      try:
                                          # First initialize runtime
                                          runtime_ref = self.context.runtime
                                          if runtime_ref and not runtime_ref._initialized:
                                              self._owns_runtime = True
                                              await runtime_ref.__aenter__()
                                              runtime_tools = runtime_ref.tools.values()
                                              logger.debug(
                                                  "Registering runtime tools: %s", [t.name for t in runtime_tools]
                                              )
                                              for tool in runtime_tools:
                                                  self.tools.register_tool(tool, source="runtime")
                              
                                          # Then setup constructor MCP servers
                                          if self._mcp_servers:
                                              await self.tools.setup_mcp_servers(self._mcp_servers)
                              
                                          # Then setup config MCP servers if any
                                          if self.context and self.context.config and self.context.config.mcp_servers:
                                              await self.tools.setup_mcp_servers(self.context.config.get_mcp_servers())
                                      except Exception as e:
                                          # Clean up in reverse order
                                          if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                              await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                          msg = "Failed to initialize agent"
                                          raise RuntimeError(msg) from e
                                      else:
                                          return self
                              
                                  async def __aexit__(
                                      self,
                                      exc_type: type[BaseException] | None,
                                      exc_val: BaseException | None,
                                      exc_tb: TracebackType | None,
                                  ):
                                      """Exit async context."""
                                      try:
                                          await self.tools.cleanup()
                                      finally:
                                          if self._owns_runtime and self.context.runtime:
                                              await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                              
                                  @overload
                                  def __rshift__(self, other: AnyAgent[Any, Any] | str) -> Talk: ...
                              
                                  @overload
                                  def __rshift__(self, other: Team[Any]) -> TeamTalk: ...
                              
                                  def __rshift__(self, other: AnyAgent[Any, Any] | Team[Any] | str) -> Talk | TeamTalk:
                                      """Connect agent to another agent or group.
                              
                                      Example:
                                          agent >> other_agent  # Connect to single agent
                                          agent >> (agent2 | agent3)  # Connect to group
                                          agent >> "other_agent"  # Connect by name (needs pool)
                                      """
                                      return self.pass_results_to(other)
                              
                                  def __or__(self, other: AnyAgent[Any, Any] | Team[Any]) -> Team[TDeps]:
                                      """Create agent group using | operator.
                              
                                      Example:
                                          group = analyzer | planner | executor  # Create group of 3
                                          group = analyzer | existing_group  # Add to existing group
                                      """
                                      from llmling_agent.delegation.agentgroup import Team
                              
                                      if isinstance(other, Team):
                                          return Team([self, *other.agents])
                                      return Team([self, other])
                              
                                  @property
                                  def name(self) -> str:
                                      """Get agent name."""
                                      return self._provider.name or "llmling-agent"
                              
                                  @name.setter
                                  def name(self, value: str):
                                      self._provider.name = value
                              
                                  @property
                                  def context(self) -> AgentContext[TDeps]:
                                      """Get agent context."""
                                      return self._provider.context
                              
                                  @context.setter
                                  def context(self, value: AgentContext[TDeps]):
                                      """Set agent context and propagate to provider."""
                                      self._provider.context = value
                                      self._tool_manager.context = value
                              
                                  def set_result_type(
                                      self,
                                      result_type: type[TResult] | str | ResponseDefinition | None,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ):
                                      """Set or update the result type for this agent.
                              
                                      Args:
                                          result_type: New result type, can be:
                                              - A Python type for validation
                                              - Name of a response definition
                                              - Response definition instance
                                              - None to reset to unstructured mode
                                          tool_name: Optional override for tool name
                                          tool_description: Optional override for tool description
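
                                           Example:
                                               Illustrative sketch (``Summary`` is a placeholder Pydantic model,
                                               not part of llmling_agent):
                                               ```python
                                               from pydantic import BaseModel

                                               class Summary(BaseModel):
                                                   title: str
                                                   bullets: list[str]

                                               agent.set_result_type(Summary)  # validate responses against Summary
                                               agent.set_result_type(None)     # reset to unstructured mode
                                               ```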
                                      """
                                      logger.debug("Setting result type to: %s", result_type)
                                       self._result_type = to_type(result_type)
                              
                                  @overload
                                  def to_structured(
                                      self,
                                      result_type: None,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ) -> Self: ...
                              
                                  @overload
                                  def to_structured[TResult](
                                      self,
                                      result_type: type[TResult] | str | ResponseDefinition,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ) -> StructuredAgent[TDeps, TResult]: ...
                              
                                  def to_structured[TResult](
                                      self,
                                      result_type: type[TResult] | str | ResponseDefinition | None,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ) -> StructuredAgent[TDeps, TResult] | Self:
                                      """Convert this agent to a structured agent.
                              
                                      If result_type is None, returns self unchanged (no wrapping).
                                      Otherwise creates a StructuredAgent wrapper.
                              
                                      Args:
                                          result_type: Type for structured responses. Can be:
                                              - A Python type (Pydantic model)
                                              - Name of response definition from context
                                              - Complete response definition
                                              - None to skip wrapping
                                          tool_name: Optional override for result tool name
                                          tool_description: Optional override for result tool description
                              
                                      Returns:
                                          Either StructuredAgent wrapper or self unchanged
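
                                           Example:
                                               Sketch assuming ``AnalysisResult`` is a user-defined Pydantic model
                                               (a placeholder, not shipped with llmling_agent):
                                               ```python
                                               from pydantic import BaseModel

                                               class AnalysisResult(BaseModel):
                                                   sentiment: str
                                                   confidence: float

                                               structured = agent.to_structured(AnalysisResult)
                                               # returns a StructuredAgent wrapper; to_structured(None) would
                                               # have returned the agent itself unchanged
                                               ```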
                                      """
                                      if result_type is None:
                                          return self
                              
                                      from llmling_agent.agent import StructuredAgent
                              
                                      return StructuredAgent(
                                          self,
                                          result_type=result_type,
                                          tool_name=tool_name,
                                          tool_description=tool_description,
                                      )
                              
                                  @classmethod
                                  @overload
                                  def open(
                                      cls,
                                      config_path: StrPath | Config | None = None,
                                      *,
                                      result_type: None = None,
                                      model: ModelType = None,
                                      session: SessionIdType | SessionQuery = None,
                                      system_prompt: str | Sequence[str] = (),
                                      name: str = "llmling-agent",
                                      retries: int = 1,
                                      result_retries: int | None = None,
                                      end_strategy: EndStrategy = "early",
                                      defer_model_check: bool = False,
                                      **kwargs: Any,
                                  ) -> AbstractAsyncContextManager[Agent[TDeps]]: ...
                              
                                  @classmethod
                                  @overload
                                  def open[TResult](
                                      cls,
                                      config_path: StrPath | Config | None = None,
                                      *,
                                      result_type: type[TResult],
                                      model: ModelType = None,
                                      session: SessionIdType | SessionQuery = None,
                                      system_prompt: str | Sequence[str] = (),
                                      name: str = "llmling-agent",
                                      retries: int = 1,
                                      result_retries: int | None = None,
                                      end_strategy: EndStrategy = "early",
                                      defer_model_check: bool = False,
                                      **kwargs: Any,
                                  ) -> AbstractAsyncContextManager[StructuredAgent[TDeps, TResult]]: ...
                              
                                  @classmethod
                                  @asynccontextmanager
                                  async def open[TResult](
                                      cls,
                                      config_path: StrPath | Config | None = None,
                                      *,
                                      result_type: type[TResult] | None = None,
                                      model: ModelType = None,
                                      session: SessionIdType | SessionQuery = None,
                                      system_prompt: str | Sequence[str] = (),
                                      name: str = "llmling-agent",
                                      retries: int = 1,
                                      result_retries: int | None = None,
                                      end_strategy: EndStrategy = "early",
                                      defer_model_check: bool = False,
                                      **kwargs: Any,
                                  ) -> AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]:
                                      """Open and configure an agent with an auto-managed runtime configuration.
                              
                                      This is a convenience method that combines RuntimeConfig.open with agent creation.
                              
                                      Args:
                                          config_path: Path to the runtime configuration file or a Config instance
                                                      (defaults to Config())
                                          result_type: Optional type for structured responses
                                          model: The default model to use (defaults to GPT-4)
                                          session: Optional id or Session query to recover a conversation
                                          system_prompt: Static system prompts to use for this agent
                                          name: Name of the agent for logging
                                          retries: Default number of retries for failed operations
                                          result_retries: Max retries for result validation (defaults to retries)
                                          end_strategy: Strategy for handling tool calls that are requested alongside
                                                      a final result
                                          defer_model_check: Whether to defer model evaluation until first run
                                          **kwargs: Additional arguments for PydanticAI agent
                              
                                      Yields:
                                          Configured Agent instance
                              
                                      Example:
                                          ```python
                                          async with Agent.open("config.yml") as agent:
                                              result = await agent.run("Hello!")
                                              print(result.data)
                                          ```
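
                                               A structured variant, sketched under the assumption that ``TaskList``
                                               is a user-defined Pydantic model:
                                               ```python
                                               async with Agent.open("config.yml", result_type=TaskList) as agent:
                                                   result = await agent.run("Extract the open tasks")
                                                   print(result.data)  # validated TaskList instance
                                               ```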
                                      """
                                      if config_path is None:
                                          config_path = Config()
                                      async with RuntimeConfig.open(config_path) as runtime:
                                          agent = cls(
                                              runtime=runtime,
                                              model=model,
                                              session=session,
                                              system_prompt=system_prompt,
                                              name=name,
                                              retries=retries,
                                              end_strategy=end_strategy,
                                              result_retries=result_retries,
                                              defer_model_check=defer_model_check,
                                              result_type=result_type,
                                              **kwargs,
                                          )
                                          try:
                                              async with agent:
                                                  yield (
                                                      agent if result_type is None else agent.to_structured(result_type)
                                                  )
                                          finally:
                                              # Any cleanup if needed
                                              pass
                              
                                  @classmethod
                                  @overload
                                  def open_agent(
                                      cls,
                                      config: StrPath | AgentsManifest,
                                      agent_name: str,
                                      *,
                                      deps: TDeps | None = None,
                                      result_type: None = None,
                                      model: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      model_settings: dict[str, Any] | None = None,
                                      tools: list[ToolType] | None = None,
                                      tool_choice: bool | str | list[str] = True,
                                      end_strategy: EndStrategy = "early",
                                  ) -> AbstractAsyncContextManager[Agent[TDeps]]: ...
                              
                                  @classmethod
                                  @overload
                                  def open_agent[TResult](
                                      cls,
                                      config: StrPath | AgentsManifest,
                                      agent_name: str,
                                      *,
                                      deps: TDeps | None = None,
                                      result_type: type[TResult],
                                      model: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      model_settings: dict[str, Any] | None = None,
                                      tools: list[ToolType] | None = None,
                                      tool_choice: bool | str | list[str] = True,
                                      end_strategy: EndStrategy = "early",
                                  ) -> AbstractAsyncContextManager[StructuredAgent[TDeps, TResult]]: ...
                              
                                  @classmethod
                                  @asynccontextmanager
                                  async def open_agent[TResult](
                                      cls,
                                      config: StrPath | AgentsManifest,
                                      agent_name: str,
                                      *,
                                      deps: TDeps | None = None,  # TDeps from class
                                      result_type: type[TResult] | None = None,
                                      model: str | ModelType = None,
                                      session: SessionIdType | SessionQuery = None,
                                      model_settings: dict[str, Any] | None = None,
                                      tools: list[ToolType] | None = None,
                                      tool_choice: bool | str | list[str] = True,
                                      end_strategy: EndStrategy = "early",
                                      retries: int = 1,
                                      result_tool_name: str = "final_result",
                                      result_tool_description: str | None = None,
                                      result_retries: int | None = None,
                                      system_prompt: str | Sequence[str] | None = None,
                                      enable_db_logging: bool = True,
                                  ) -> AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]:
                                      """Open and configure a specific agent from configuration."""
                                      """Implementation with all parameters..."""
                                      """Open and configure a specific agent from configuration.
                              
                                      Args:
                                          config: Path to agent configuration file or AgentsManifest instance
                                          agent_name: Name of the agent to load
                              
                                          # Basic Configuration
                                          model: Optional model override
                                          result_type: Optional type for structured responses
                                          model_settings: Additional model-specific settings
                                          session: Optional id or Session query to recover a conversation
                              
                                          # Tool Configuration
                                          tools: Additional tools to register (import paths or callables)
                                          tool_choice: Control tool usage:
                                              - True: Allow all tools
                                              - False: No tools
                                              - str: Use specific tool
                                              - list[str]: Allow specific tools
                                          end_strategy: Strategy for handling tool calls that are requested alongside
                                                          a final result
                              
                                          # Execution Settings
                                          retries: Default number of retries for failed operations
                                          result_tool_name: Name of the tool used for final result
                                          result_tool_description: Description of the final result tool
                                          result_retries: Max retries for result validation (defaults to retries)
                              
                                          # Other Settings
                                          system_prompt: Additional system prompts
                                          enable_db_logging: Whether to enable logging for the agent
                              
                                      Yields:
                                          Configured Agent instance
                              
                                      Raises:
                                          ValueError: If agent not found or configuration invalid
                                          RuntimeError: If agent initialization fails
                              
                                      Example:
                                          ```python
                                          async with Agent.open_agent(
                                              "agents.yml",
                                              "my_agent",
                                              model="gpt-4",
                                              tools=[my_custom_tool],
                                          ) as agent:
                                              result = await agent.run("Do something")
                                          ```
                                      """
                                      if isinstance(config, AgentsManifest):
                                          agent_def = config
                                      else:
                                          agent_def = AgentsManifest.from_file(config)
                              
                                      if agent_name not in agent_def.agents:
                                          msg = f"Agent {agent_name!r} not found in {config}"
                                          raise ValueError(msg)
                              
                                      agent_config = agent_def.agents[agent_name]
                                      resolved_type = result_type or agent_def.get_result_type(agent_name)
                              
                                      # Use model from override or agent config
                                      actual_model = model or agent_config.model
                                      if not actual_model:
                                          msg = "Model must be specified either in config or as override"
                                          raise ValueError(msg)
                              
                                      # Create context
                                      context = AgentContext[TDeps](  # Use TDeps here
                                          agent_name=agent_name,
                                          capabilities=agent_config.capabilities,
                                          definition=agent_def,
                                          config=agent_config,
                                          model_settings=model_settings or {},
                                      )
                              
                                      # Set up runtime
                                      cfg = agent_config.get_config()
                                      async with RuntimeConfig.open(cfg) as runtime:
                                          # Create base agent with correct typing
                                          base_agent = cls(  # cls is Agent[TDeps]
                                              runtime=runtime,
                                              context=context,
                                              model=actual_model,  # type: ignore[arg-type]
                                              retries=retries,
                                              session=session,
                                              result_retries=result_retries,
                                              end_strategy=end_strategy,
                                              tool_choice=tool_choice,
                                              tools=tools,
                                              system_prompt=system_prompt or [],
                                              enable_db_logging=enable_db_logging,
                                          )
                                          try:
                                              async with base_agent:
                                                  if resolved_type is not None and resolved_type is not str:
                                                      # Yield structured agent with correct typing
                                                      from llmling_agent.agent.structured import StructuredAgent
                              
                                                      yield StructuredAgent[TDeps, TResult](  # Use TDeps and TResult
                                                          base_agent,
                                                          resolved_type,
                                                          tool_description=result_tool_description,
                                                          tool_name=result_tool_name,
                                                      )
                                                  else:
                                                      yield base_agent
                                          finally:
                                              # Any cleanup if needed
                                              pass
                              
                                  def _forward_message(self, message: ChatMessage[Any]):
                                      """Forward sent messages."""
                                      logger.debug(
                                          "forwarding message from %s: %s (type: %s) to %d connected agents",
                                          self.name,
                                          repr(message.content),
                                          type(message.content),
                                          len(self.connections.get_targets()),
                                      )
                                      # update = {"forwarded_from": [*message.forwarded_from, self.name]}
                                      # forwarded_msg = message.model_copy(update=update)
                                      message.forwarded_from.append(self.name)
                                      self.outbox.emit(message, None)
                              
                                  async def disconnect_all(self):
                                      """Disconnect from all agents."""
                                      for target in list(self.connections.get_targets()):
                                          self.stop_passing_results_to(target)
                              
                                  @overload
                                  def pass_results_to(
                                      self,
                                      other: AnyAgent[Any, Any] | str,
                                      prompt: str | None = None,
                                      connection_type: ConnectionType = "run",
                                      priority: int = 0,
                                      delay: timedelta | None = None,
                                  ) -> Talk: ...
                              
                                  @overload
                                  def pass_results_to(
                                      self,
                                      other: Team[Any],
                                      prompt: str | None = None,
                                      connection_type: ConnectionType = "run",
                                      priority: int = 0,
                                      delay: timedelta | None = None,
                                  ) -> TeamTalk: ...
                              
                                  def pass_results_to(
                                      self,
                                      other: AnyAgent[Any, Any] | Team[Any] | str,
                                      prompt: str | None = None,
                                      connection_type: ConnectionType = "run",
                                      priority: int = 0,
                                      delay: timedelta | None = None,
                                  ) -> Talk | TeamTalk:
                                      """Forward results to another agent or all agents in a team."""
                                      return self.connections.connect_agent_to(
                                          other,
                                          connection_type=connection_type,
                                          priority=priority,
                                          delay=delay,
                                      )
                              
                                  def stop_passing_results_to(self, other: AnyAgent[Any, Any]):
                                      """Stop forwarding results to another agent."""
                                      self.connections.disconnect(other)
                              
                                  def is_busy(self) -> bool:
                                      """Check if agent is currently processing tasks."""
                                      return bool(self._pending_tasks or self._background_task)
                              
                                  @property
                                  def model_name(self) -> str | None:
                                      """Get the model name in a consistent format."""
                                      return self._provider.model_name
                              
                                  @logfire.instrument("Calling Agent.run: {prompt}:")
                                  async def run(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[TResult] | None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      store_history: bool = True,
                                  ) -> ChatMessage[TResult]:
                                      """Run agent with prompt and get response.
                              
                                      Args:
                                          prompt: User query or instruction
                                          result_type: Optional type for structured responses
                                          deps: Optional dependencies for the agent
                                          model: Optional model override
                                          store_history: Whether the message exchange should be added to the
                                                         context window
                              
                                      Returns:
                                          Result containing response and run information
                              
                                      Raises:
                                          UnexpectedModelBehavior: If the model fails or behaves unexpectedly
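
                                           Example:
                                               Minimal sketch (assumes use inside ``async with agent:``; ``Summary``
                                               is a placeholder Pydantic model):
                                               ```python
                                               msg = await agent.run("Summarize the report", result_type=Summary)
                                               print(msg.data)       # validated result content
                                               print(msg.cost_info)  # token/cost metrics, if available
                                               ```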
                                      """
                                      """Run agent with prompt and get response."""
                                      prompts = [await to_prompt(p) for p in prompt]
                                      final_prompt = "\n\n".join(prompts)
                                      if deps is not None:
                                          self.context.data = deps
                                      self.context.current_prompt = final_prompt
                                      self.set_result_type(result_type)
                                      wait_for_chain = False  # TODO
                              
                                      try:
                                          # Create and emit user message
                                          user_msg = ChatMessage[str](content=final_prompt, role="user")
                                          self.message_received.emit(user_msg)
                              
                                          # Get response through provider
                                          message_id = str(uuid4())
                                          start_time = time.perf_counter()
                                          result = await self._provider.generate_response(
                                              final_prompt,
                                              message_id,
                                              result_type=result_type,
                                              model=model,
                                              store_history=store_history,
                                          )
                              
                                          # Get cost info for assistant response
                                          usage = result.usage
                                          cost_info = (
                                              await TokenCost.from_usage(
                                                  usage, result.model_name, final_prompt, str(result.content)
                                              )
                                              if self.model_name and usage
                                              else None
                                          )
                              
                                          # Create final message with all metrics
                                          assistant_msg = ChatMessage[TResult](
                                              content=result.content,
                                              role="assistant",
                                              name=self.name,
                                              model=self.model_name,
                                              message_id=message_id,
                                              tool_calls=result.tool_calls,
                                              cost_info=cost_info,
                                              response_time=time.perf_counter() - start_time,
                                          )
                                          if self._debug:
                                              import devtools
                              
                                              devtools.debug(assistant_msg)
                              
                                          self.message_sent.emit(assistant_msg)
                              
                                      except Exception:
                                          logger.exception("Agent run failed")
                                          raise
                              
                                      else:
                                          if wait_for_chain:
                                              await self.wait_for_chain()
                                          return assistant_msg
                              
                                  def to_agent_tool(
                                      self,
                                      *,
                                      name: str | None = None,
                                      reset_history_on_run: bool = True,
                                      pass_message_history: bool = False,
                                      share_context: bool = False,
                                      parent: AnyAgent[Any, Any] | None = None,
                                  ) -> LLMCallableTool:
                                      """Create a tool from this agent.
                              
                                      Args:
                                          name: Optional tool name override
                                          reset_history_on_run: Clear agent's history before each run
                                          pass_message_history: Pass parent's message history to agent
                                          share_context: Whether to pass parent's context/deps
                                          parent: Optional parent agent for history/context sharing
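
                                           Example:
                                               Sketch with two placeholder agents, ``specialist`` and ``coordinator``:
                                               ```python
                                               tool = specialist.to_agent_tool(parent=coordinator, pass_message_history=True)
                                               # ``tool`` is an LLMCallableTool named "ask_<agent name>" that can
                                               # be registered with the coordinator's tool manager
                                               ```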
                                      """
                                      tool_name = f"ask_{self.name}"
                              
                                      async def wrapped_tool(ctx: RunContext[AgentContext[TDeps]], prompt: str) -> str:
                                          if pass_message_history and not parent:
                                              msg = "Parent agent required for message history sharing"
                                              raise ToolError(msg)
                              
                                          if reset_history_on_run:
                                              self.conversation.clear()
                              
                                          history = None
                                          deps = ctx.deps.data if share_context else None
                                          if pass_message_history and parent:
                                              history = parent.conversation.get_history()
                                              old = self.conversation.get_history()
                                              self.conversation.set_history(history)
                                          result = await self.run(prompt, deps=deps, result_type=self._result_type)
                                          if history:
                                              self.conversation.set_history(old)
                                          return result.data
                              
                                      normalized_name = self.name.replace("_", " ").title()
                                      docstring = f"Get expert answer from specialized agent: {normalized_name}"
                                      if self.description:
                                          docstring = f"{docstring}\n\n{self.description}"
                              
                                      wrapped_tool.__doc__ = docstring
                                      wrapped_tool.__name__ = tool_name
                              
                                      return LLMCallableTool.from_callable(
                                          wrapped_tool,
                                          name_override=tool_name,
                                          description_override=docstring,
                                      )
                              
                                  @asynccontextmanager
                                  async def run_stream(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[TResult] | None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      store_history: bool = True,
                                  ) -> AsyncIterator[StreamedRunResult[AgentContext[TDeps], TResult]]:
                                      """Run agent with prompt and get a streaming response.
                              
                                      Args:
                                          prompt: User query or instruction
                                          result_type: Optional type for structured responses
                                          deps: Optional dependencies for the agent
                                          model: Optional model override
                                          store_history: Whether the message exchange should be added to the
                                                         context window
                              
                                      Returns:
                                          A streaming result to iterate over.
                              
                                      Raises:
                                          UnexpectedModelBehavior: If the model fails or behaves unexpectedly
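
                                           Example:
                                               Rough sketch; the streamed object comes from the provider's
                                               ``stream_response`` (typed as pydantic-ai's ``StreamedRunResult``),
                                               so the ``stream()`` iterator below is an assumption:
                                               ```python
                                               async with agent.run_stream("Tell me a story") as stream:
                                                   async for chunk in stream.stream():
                                                       print(chunk, end="", flush=True)
                                               ```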
                                      """
                                      prompts = [await to_prompt(p) for p in prompt]
                                      final_prompt = "\n\n".join(prompts)
                                      self.set_result_type(result_type)
                              
                                      if deps is not None:
                                          self.context.data = deps
                                      self.context.current_prompt = final_prompt
                                      try:
                                          # Create and emit user message
                                          user_msg = ChatMessage[str](content=final_prompt, role="user")
                                          self.message_received.emit(user_msg)
                                          message_id = str(uuid4())
                                          start_time = time.perf_counter()
                              
                                          async with self._provider.stream_response(
                                              final_prompt,
                                              message_id,
                                              result_type=result_type,
                                              model=model,
                                              store_history=store_history,
                                          ) as stream:
                                              yield stream  # type: ignore
                              
                                              # After streaming is done, create and emit final message
                                              usage = stream.usage()
                                              cost_info = (
                                                  await TokenCost.from_usage(
                                                      usage,
                                                      stream.model_name,  # type: ignore
                                                      final_prompt,
                                                      str(stream.formatted_content),  # type: ignore
                                                  )
                                                  if self.model_name
                                                  else None
                                              )
                              
                                              assistant_msg = ChatMessage[TResult](
                                                  content=cast(TResult, stream.formatted_content),  # type: ignore
                                                  role="assistant",
                                                  name=self.name,
                                                  model=self.model_name,
                                                  message_id=message_id,
                                                  cost_info=cost_info,
                                                  response_time=time.perf_counter() - start_time,
                                              )
                                              self.message_sent.emit(assistant_msg)
                              
                                      except Exception:
                                          logger.exception("Agent stream failed")
                                          raise
                              
                                  def run_sync(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[TResult] | None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      store_history: bool = True,
                                  ) -> ChatMessage[TResult]:
                                      """Run agent synchronously (convenience wrapper).
                              
                                      Args:
                                          prompt: User query or instruction
                                          result_type: Optional type for structured responses
                                          deps: Optional dependencies for the agent
                                          model: Optional model override
                                          store_history: Whether the message exchange should be added to the
                                                         context window
                                      Returns:
                                          Result containing response and run information
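
                                           Example:
                                               Sketch for scripts without a running event loop (``asyncio.run`` is
                                               used internally, so do not call this from async code):
                                               ```python
                                               msg = agent.run_sync("Ping?")
                                               print(msg.data)
                                               ```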
                                      """
                                      try:
                                          return asyncio.run(
                                              self.run(
                                                   *prompt,
                                                  deps=deps,
                                                  model=model,
                                                  store_history=store_history,
                                                  result_type=result_type,
                                              )
                                          )
                                      except KeyboardInterrupt:
                                          raise
                                      except Exception:
                                          logger.exception("Sync agent run failed")
                                          raise
                              
                                  async def wait_for_chain(self, _seen: set[str] | None = None):
                                      """Wait for this agent and all connected agents to complete their tasks."""
                                      # Track seen agents to avoid cycles
                                      seen = _seen or {self.name}
                              
                                      # Wait for our own tasks
                                      await self.complete_tasks()
                              
                                      # Wait for connected agents
                                      for agent in self.connections.get_targets():
                                          if agent.name not in seen:
                                              seen.add(agent.name)
                                              await agent.wait_for_chain(seen)
                              
                                  async def run_task[TResult](
                                      self,
                                      task: AgentTask[TDeps, TResult],
                                      *,
                                      store_history: bool = True,
                                      include_agent_tools: bool = True,
                                  ) -> ChatMessage[TResult]:
                                      """Execute a pre-defined task.
                              
                                      Args:
                                          task: Task configuration to execute
                                          store_history: Whether the message exchange should be added to the
                                                         context window
                                          include_agent_tools: Whether to include agent tools
                                      Returns:
                                          Task execution result
                              
                                      Raises:
                                          TaskError: If task execution fails
                                          ValueError: If task configuration is invalid
                                      """
                                      from llmling_agent.tasks import TaskError
                              
                                      original_result_type = self._result_type
                              
                                      self.set_result_type(task.result_type)
                              
                                      # Load task knowledge
                                      if task.knowledge:
                                          # Add knowledge sources to context
                                          resources: list[Resource | str] = list(task.knowledge.paths) + list(
                                              task.knowledge.resources
                                          )
                                          for source in resources:
                                              await self.conversation.load_context_source(source)
                                          for prompt in task.knowledge.prompts:
                                              if isinstance(prompt, StaticPrompt | DynamicPrompt | FilePrompt):
                                                  await self.conversation.add_context_from_prompt(prompt)
                                              else:
                                                  await self.conversation.load_context_source(prompt)
                              
                                      try:
                                          # Register task tools temporarily
                                          tools = [import_callable(cfg.import_path) for cfg in task.tool_configs]
                                          names = [cfg.name for cfg in task.tool_configs]
                                          descriptions = [cfg.description for cfg in task.tool_configs]
                                          tools = [
                                              LLMCallableTool.from_callable(
                                                  tool, name_override=name, description_override=description
                                              )
                                              for tool, name, description in zip(tools, names, descriptions)
                                          ]
                                          with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                              # Execute task with task-specific tools
                                              from llmling_agent.tasks.strategies import DirectStrategy
                              
                                              strategy = DirectStrategy[TDeps, TResult]()
                                              return await strategy.execute(
                                                  task=task,
                                                  agent=self,
                                                  store_history=store_history,
                                              )
                              
                                      except Exception as e:
                                          msg = f"Task execution failed: {e}"
                                          logger.exception(msg)
                                          raise TaskError(msg) from e
                                      finally:
                                          self.set_result_type(original_result_type)
                              
                                  async def run_continuous(
                                      self,
                                      prompt: AnyPromptType,
                                      *,
                                      max_count: int | None = None,
                                      interval: float = 1.0,
                                      block: bool = False,
                                      **kwargs: Any,
                                  ) -> ChatMessage[TResult] | None:
                                      """Run agent continuously with prompt or dynamic prompt function.
                              
                                      Args:
                                          prompt: Static prompt or function that generates prompts
                                          max_count: Maximum number of runs (None = infinite)
                                          interval: Seconds between runs
                                          block: Whether to block until completion
                                          **kwargs: Arguments passed to run()
                                      """
                              
                                      async def _continuous():
                                          count = 0
                                          msg = "%s: Starting continuous run (max_count=%s, interval=%s)"
                                          logger.debug(msg, self.name, max_count, interval)
                                          while max_count is None or count < max_count:
                                              try:
                                                  current_prompt = (
                                                      call_with_context(prompt, self.context, **kwargs)
                                                      if callable(prompt)
                                                      else to_prompt(prompt)
                                                  )
                                                  msg = "%s: Generated prompt #%d: %s"
                                                  logger.debug(msg, self.name, count, current_prompt)
                              
                                                  await self.run(current_prompt, **kwargs)
                                                  msg = "%s: Run continous result #%d"
                                                  logger.debug(msg, self.name, count)
                              
                                                  count += 1
                                                  await asyncio.sleep(interval)
                                              except asyncio.CancelledError:
                                                  logger.debug("%s: Continuous run cancelled", self.name)
                                                  break
                                              except Exception:
                                                  logger.exception("%s: Background run failed", self.name)
                                                  await asyncio.sleep(interval)
                                          msg = "%s: Continuous run completed after %d iterations"
                                          logger.debug(msg, self.name, count)
                              
                                      # Cancel any existing background task
                                      await self.stop()
                                      task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                              
                                      if block:
                                          try:
                                              await task
                                              return None
                                          finally:
                                              if not task.done():
                                                  task.cancel()
                                      else:
                                          logger.debug("%s: Started background task %s", self.name, task.get_name())
                                          self._background_task = task
                                          return None
                              
                                  async def stop(self):
                                      """Stop continuous execution if running."""
                                      if self._background_task and not self._background_task.done():
                                          self._background_task.cancel()
                                          await self._background_task
                                          self._background_task = None
                              
                                  def clear_history(self):
                                      """Clear both internal and pydantic-ai history."""
                                      self._logger.clear_state()
                                      self.conversation.clear()
                                      logger.debug("Cleared history and reset tool state")
                              
                                  async def get_token_limits(self) -> TokenLimits | None:
                                      """Get token limits for the current model."""
                                      if not self.model_name:
                                          return None
                              
                                      try:
                                          return await get_model_limits(self.model_name)
                                      except ValueError:
                                          logger.debug("Could not get token limits for model: %s", self.model_name)
                                          return None
                              
                                  async def share(
                                      self,
                                      target: AnyAgent[TDeps, Any],
                                      *,
                                      tools: list[str] | None = None,
                                      resources: list[str] | None = None,
                                      history: bool | int | None = None,  # bool or number of messages
                                      token_limit: int | None = None,
                                  ) -> None:
                                      """Share capabilities and knowledge with another agent.
                              
                                      Args:
                                          target: Agent to share with
                                          tools: List of tool names to share
                                          resources: List of resource names to share
                                          history: Share conversation history:
                                                  - True: Share full history
                                                  - int: Number of most recent messages to share
                                                  - None: Don't share history
                                          token_limit: Optional max tokens for history
                              
                                      Raises:
                                          ValueError: If requested items don't exist
                                          RuntimeError: If runtime not available for resources
                                      """
                                      # Share tools if requested
                                      for name in tools or []:
                                          if tool := self.tools.get(name):
                                              meta = {"shared_from": self.name}
                                              target.tools.register_tool(tool.callable, metadata=meta)
                                          else:
                                              msg = f"Tool not found: {name}"
                                              raise ValueError(msg)
                              
                                      # Share resources if requested
                                      if resources:
                                          if not self.runtime:
                                              msg = "No runtime available for sharing resources"
                                              raise RuntimeError(msg)
                                          for name in resources:
                                              if resource := self.runtime.get_resource(name):
                                                  await target.conversation.load_context_source(resource)
                                              else:
                                                  msg = f"Resource not found: {name}"
                                                  raise ValueError(msg)
                              
                                      # Share history if requested
                                      if history:
                                          history_text = await self.conversation.format_history(
                                              max_tokens=token_limit,
                                              num_messages=history if isinstance(history, int) else None,
                                          )
                                          await target.conversation.add_context_message(
                                              history_text, source=self.name, metadata={"type": "shared_history"}
                                          )
                              
                                  def register_worker(
                                      self,
                                      worker: Agent[Any],
                                      *,
                                      name: str | None = None,
                                      reset_history_on_run: bool = True,
                                      pass_message_history: bool = False,
                                      share_context: bool = False,
                                  ) -> ToolInfo:
                                      """Register another agent as a worker tool."""
                                      return self.tools.register_worker(
                                          worker,
                                          name=name,
                                          reset_history_on_run=reset_history_on_run,
                                          pass_message_history=pass_message_history,
                                          share_context=share_context,
                                          parent=self if (pass_message_history or share_context) else None,
                                      )
                              
                                  def set_model(self, model: ModelType):
                                      """Set the model for this agent.
                              
                                      Args:
                                          model: New model to use (name or instance)
                              
                                      Emits:
                                          model_changed signal with the new model
                                      """
                                      self._provider.set_model(model)
                              
                                  @property
                                  def runtime(self) -> RuntimeConfig:
                                      """Get runtime configuration from context."""
                                      assert self.context.runtime
                                      return self.context.runtime
                              
                                  @runtime.setter
                                  def runtime(self, value: RuntimeConfig):
                                      """Set runtime configuration and update context."""
                                      self.context.runtime = value
                              
                                  @property
                                  def tools(self) -> ToolManager:
                                      return self._tool_manager
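
                               The methods above compose: an agent can share selected tools and history with a peer, poll in the background via run_continuous, and be cancelled again with stop. A minimal sketch, assuming two already-initialized agents; the names analyzer and reporter and the tool name "search" are illustrative:

                                   # share a registered tool plus the last five messages with a peer agent
                                   await analyzer.share(reporter, tools=["search"], history=5)

                                   # poll every 30 seconds in the background
                                   await analyzer.run_continuous("Summarize any new findings.", interval=30.0)

                                   # later, once polling is no longer needed
                                   await analyzer.stop()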
                              

                              context property writable

                              context: AgentContext[TDeps]
                              

                              Get agent context.

                              model_name property

                              model_name: str | None
                              

                              Get the model name in a consistent format.

                              name property writable

                              name: str
                              

                              Get agent name.

                              runtime property writable

                              runtime: RuntimeConfig
                              

                              Get runtime configuration from context.

                              __aenter__ async

                              __aenter__() -> Self
                              

                              Enter async context and set up MCP servers.

                               Source code in src/llmling_agent/agent/agent.py, lines 262-291
                              async def __aenter__(self) -> Self:
                                  """Enter async context and set up MCP servers."""
                                  try:
                                      # First initialize runtime
                                      runtime_ref = self.context.runtime
                                      if runtime_ref and not runtime_ref._initialized:
                                          self._owns_runtime = True
                                          await runtime_ref.__aenter__()
                                          runtime_tools = runtime_ref.tools.values()
                                          logger.debug(
                                              "Registering runtime tools: %s", [t.name for t in runtime_tools]
                                          )
                                          for tool in runtime_tools:
                                              self.tools.register_tool(tool, source="runtime")
                              
                                      # Then setup constructor MCP servers
                                      if self._mcp_servers:
                                          await self.tools.setup_mcp_servers(self._mcp_servers)
                              
                                      # Then setup config MCP servers if any
                                      if self.context and self.context.config and self.context.config.mcp_servers:
                                          await self.tools.setup_mcp_servers(self.context.config.get_mcp_servers())
                                  except Exception as e:
                                      # Clean up in reverse order
                                      if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                          await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                      msg = "Failed to initialize agent"
                                      raise RuntimeError(msg) from e
                                  else:
                                      return self
                              

                              __aexit__ async

                              __aexit__(
                                  exc_type: type[BaseException] | None,
                                  exc_val: BaseException | None,
                                  exc_tb: TracebackType | None,
                              )
                              

                              Exit async context.

                               Source code in src/llmling_agent/agent/agent.py, lines 293-304
                              async def __aexit__(
                                  self,
                                  exc_type: type[BaseException] | None,
                                  exc_val: BaseException | None,
                                  exc_tb: TracebackType | None,
                              ):
                                  """Exit async context."""
                                  try:
                                      await self.tools.cleanup()
                                  finally:
                                      if self._owns_runtime and self.context.runtime:
                                          await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
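
                               Together, __aenter__ and __aexit__ let the agent be used as an async context manager that sets up the runtime and MCP servers on entry and cleans them up on exit. A minimal sketch (the config path is illustrative):

                                   async with Agent("configs/agent.yml", name="assistant") as agent:
                                       result = await agent.run("Hello!")
                                       print(result.data)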
                              

                              __init__

                              __init__(
                                  runtime: RuntimeConfig | Config | StrPath | None = None,
                                  *,
                                  context: AgentContext[TDeps] | None = None,
                                  agent_type: AgentType = "ai",
                                  session: SessionIdType | SessionQuery = None,
                                  model: ModelType = None,
                                  system_prompt: str | Sequence[str] = (),
                                  name: str = "llmling-agent",
                                  description: str | None = None,
                                  tools: Sequence[ToolType] | None = None,
                                  mcp_servers: list[str | MCPServerConfig] | None = None,
                                  retries: int = 1,
                                  result_retries: int | None = None,
                                  tool_choice: bool | str | list[str] = True,
                                  end_strategy: EndStrategy = "early",
                                  defer_model_check: bool = False,
                                  enable_db_logging: bool = True,
                                  confirmation_callback: ConfirmationCallback | None = None,
                                  debug: bool = False,
                                  **kwargs,
                              )
                              

                              Initialize agent with runtime configuration.

                              Parameters:

                               runtime (RuntimeConfig | Config | StrPath | None, default: None): Runtime configuration providing access to resources/tools
                               context (AgentContext[TDeps] | None, default: None): Agent context with capabilities and configuration
                               agent_type (AgentType, default: 'ai'): Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                               session (SessionIdType | SessionQuery, default: None): Optional id or Session query to recover a conversation
                               model (ModelType, default: None): The default model to use (defaults to GPT-4)
                               system_prompt (str | Sequence[str], default: ()): Static system prompts to use for this agent
                               name (str, default: 'llmling-agent'): Name of the agent for logging
                               description (str | None, default: None): Description of the Agent ("what it can do")
                               tools (Sequence[ToolType] | None, default: None): List of tools to register with the agent
                               mcp_servers (list[str | MCPServerConfig] | None, default: None): MCP servers to connect to
                               retries (int, default: 1): Default number of retries for failed operations
                               result_retries (int | None, default: None): Max retries for result validation (defaults to retries)
                               tool_choice (bool | str | list[str], default: True): Ability to set a fixed tool or temporarily disable tool usage
                               end_strategy (EndStrategy, default: 'early'): Strategy for handling tool calls that are requested alongside a final result
                               defer_model_check (bool, default: False): Whether to defer model evaluation until first run
                               kwargs (default: {}): Additional arguments for PydanticAI agent
                               enable_db_logging (bool, default: True): Whether to enable database logging for the agent
                               confirmation_callback (ConfirmationCallback | None, default: None): Callback for confirmation prompts
                               debug (bool, default: False): Whether to enable debug mode
                               Source code in src/llmling_agent/agent/agent.py, lines 101-243
                              def __init__(
                                  self,
                                  runtime: RuntimeConfig | Config | StrPath | None = None,
                                  *,
                                  context: AgentContext[TDeps] | None = None,
                                  agent_type: AgentType = "ai",
                                  session: SessionIdType | SessionQuery = None,
                                  model: ModelType = None,
                                  system_prompt: str | Sequence[str] = (),
                                  name: str = "llmling-agent",
                                  description: str | None = None,
                                  tools: Sequence[ToolType] | None = None,
                                  mcp_servers: list[str | MCPServerConfig] | None = None,
                                  retries: int = 1,
                                  result_retries: int | None = None,
                                  tool_choice: bool | str | list[str] = True,
                                  end_strategy: EndStrategy = "early",
                                  defer_model_check: bool = False,
                                  enable_db_logging: bool = True,
                                  confirmation_callback: ConfirmationCallback | None = None,
                                  debug: bool = False,
                                  **kwargs,
                              ):
                                  """Initialize agent with runtime configuration.
                              
                                  Args:
                                      runtime: Runtime configuration providing access to resources/tools
                                      context: Agent context with capabilities and configuration
                                      agent_type: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                                      session: Optional id or Session query to recover a conversation
                                      model: The default model to use (defaults to GPT-4)
                                      system_prompt: Static system prompts to use for this agent
                                      name: Name of the agent for logging
                                      description: Description of the Agent ("what it can do")
                                      tools: List of tools to register with the agent
                                      mcp_servers: MCP servers to connect to
                                      retries: Default number of retries for failed operations
                                      result_retries: Max retries for result validation (defaults to retries)
                                       tool_choice: Ability to set a fixed tool or temporarily disable tool usage.
                                      end_strategy: Strategy for handling tool calls that are requested alongside
                                                    a final result
                                      defer_model_check: Whether to defer model evaluation until first run
                                      kwargs: Additional arguments for PydanticAI agent
                                      enable_db_logging: Whether to enable logging for the agent
                                      confirmation_callback: Callback for confirmation prompts
                                      debug: Whether to enable debug mode
                                  """
                                  super().__init__()
                                  self._debug = debug
                                  self._result_type = None
                              
                                   # Save settings for async initialization
                                  self._owns_runtime = False
                                  self._mcp_servers = [
                                      StdioMCPServer(command=s.split()[0], args=s.split()[1:])
                                      if isinstance(s, str)
                                      else s
                                      for s in (mcp_servers or [])
                                  ]
                              
                                  # prepare context
                                  ctx = context or AgentContext[TDeps].create_default(name)
                                  ctx.confirmation_callback = confirmation_callback
                                  match runtime:
                                      case None:
                                          ctx.runtime = RuntimeConfig.from_config(Config())
                                      case Config():
                                          ctx.runtime = RuntimeConfig.from_config(runtime)
                                      case str() | PathLike():
                                          ctx.runtime = RuntimeConfig.from_config(Config.from_file(runtime))
                                      case RuntimeConfig():
                                          ctx.runtime = runtime
                                  # connect signals
                                  self.message_sent.connect(self._forward_message)
                              
                                  # Initialize tool manager
                                  all_tools = list(tools or [])
                                  self._tool_manager = ToolManager(all_tools, tool_choice=tool_choice, context=ctx)
                              
                                  # set up conversation manager
                                  config_prompts = ctx.config.system_prompts if ctx else []
                                  all_prompts = list(config_prompts)
                                  if isinstance(system_prompt, str):
                                      all_prompts.append(system_prompt)
                                  else:
                                      all_prompts.extend(system_prompt)
                                  self.conversation = ConversationManager(self, session, all_prompts)
                              
                                  # Initialize provider based on type
                                  match agent_type:
                                      case "ai":
                                          if model and not isinstance(model, str):
                                              from pydantic_ai import models
                              
                                              assert isinstance(model, models.Model)
                                          self._provider: AgentProvider = PydanticAIProvider(
                                              model=model,  # pyright: ignore
                                              system_prompt=system_prompt,
                                              retries=retries,
                                              end_strategy=end_strategy,
                                              result_retries=result_retries,
                                              defer_model_check=defer_model_check,
                                              debug=debug,
                                              **kwargs,
                                          )
                                      case "human":
                                          self._provider = HumanProvider(name=name, debug=debug)
                                      case "litellm":
                                          from llmling_agent_providers.litellm_provider import LiteLLMProvider
                              
                                          self._provider = LiteLLMProvider(name=name, debug=debug, retries=retries)
                                      case AgentProvider():
                                          self._provider = agent_type
                                      case _:
                                          msg = f"Invalid agent type: {type}"
                                          raise ValueError(msg)
                                  self._provider.tool_manager = self._tool_manager
                                  self._provider.context = ctx
                                  self._provider.conversation = self.conversation
                                  ctx.capabilities.register_capability_tools(self)
                              
                                  # Forward provider signals
                                  self._provider.chunk_streamed.connect(self.chunk_streamed.emit)
                                  self._provider.model_changed.connect(self.model_changed.emit)
                                  self._provider.tool_used.connect(self.tool_used.emit)
                              
                                  self.name = name
                                  self.description = description
                                  msg = "Initialized %s (model=%s)"
                                  logger.debug(msg, self.name, model)
                              
                                  from llmling_agent.agent import AgentLogger
                                  from llmling_agent.agent.talk import Interactions
                                  from llmling_agent.events import EventManager
                              
                                  self.connections = TalkManager(self)
                                  self.talk = Interactions(self)
                              
                                  self._logger = AgentLogger(self, enable_db_logging=enable_db_logging)
                                  self._events = EventManager(self, enable_events=True)
                              
                                  self._background_task: asyncio.Task[Any] | None = None
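
                               A minimal construction sketch using the parameters documented above; the model identifier, system prompt, and question are illustrative, and omitting runtime falls back to an empty default configuration:

                                   agent = Agent(
                                       name="researcher",
                                       model="openai:gpt-4o",              # illustrative model identifier
                                       system_prompt="Answer concisely.",
                                       retries=2,
                                       enable_db_logging=False,
                                   )
                                   async with agent:                       # sets up runtime and MCP servers
                                       reply = await agent.run("What changed in the last release?")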
                              

                              __or__

                              __or__(other: AnyAgent[Any, Any] | Team[Any]) -> Team[TDeps]
                              

                              Create agent group using | operator.

                              Example

                               group = analyzer | planner | executor  # Create group of 3
                               group = analyzer | existing_group      # Add to existing group

                               Source code in src/llmling_agent/agent/agent.py, lines 322-333
                              def __or__(self, other: AnyAgent[Any, Any] | Team[Any]) -> Team[TDeps]:
                                  """Create agent group using | operator.
                              
                                  Example:
                                      group = analyzer | planner | executor  # Create group of 3
                                      group = analyzer | existing_group  # Add to existing group
                                  """
                                  from llmling_agent.delegation.agentgroup import Team
                              
                                  if isinstance(other, Team):
                                      return Team([self, *other.agents])
                                  return Team([self, other])
                              

                              __rshift__

                              __rshift__(other: AnyAgent[Any, Any] | str) -> Talk
                              
                              __rshift__(other: Team[Any]) -> TeamTalk
                              
                              __rshift__(other: AnyAgent[Any, Any] | Team[Any] | str) -> Talk | TeamTalk
                              

                              Connect agent to another agent or group.

                              Example

                               agent >> other_agent        # Connect to single agent
                               agent >> (agent2 | agent3)  # Connect to group
                               agent >> "other_agent"      # Connect by name (needs pool)

                               Source code in src/llmling_agent/agent/agent.py, lines 312-320
                              def __rshift__(self, other: AnyAgent[Any, Any] | Team[Any] | str) -> Talk | TeamTalk:
                                  """Connect agent to another agent or group.
                              
                                  Example:
                                      agent >> other_agent  # Connect to single agent
                                      agent >> (agent2 | agent3)  # Connect to group
                                      agent >> "other_agent"  # Connect by name (needs pool)
                                  """
                                  return self.pass_results_to(other)
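
                               The two operators compose: | builds a Team and >> forwards results onward to an agent, a team, or a named agent from the pool. A short sketch (agent names are illustrative):

                                   pipeline = analyzer | planner | executor   # Team of three agents
                                   talk = collector >> pipeline               # forward collector's results to the team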
                              

                              clear_history

                              clear_history()
                              

                              Clear both internal and pydantic-ai history.

                               Source code in src/llmling_agent/agent/agent.py, lines 1183-1187
                              def clear_history(self):
                                  """Clear both internal and pydantic-ai history."""
                                  self._logger.clear_state()
                                  self.conversation.clear()
                                  logger.debug("Cleared history and reset tool state")
                              

                              disconnect_all async

                              disconnect_all()
                              

                              Disconnect from all agents.

                               Source code in src/llmling_agent/agent/agent.py, lines 722-725
                              async def disconnect_all(self):
                                  """Disconnect from all agents."""
                                  for target in list(self.connections.get_targets()):
                                      self.stop_passing_results_to(target)
                              

                              get_token_limits async

                              get_token_limits() -> TokenLimits | None
                              

                              Get token limits for the current model.

                               Source code in src/llmling_agent/agent/agent.py, lines 1189-1198
                              async def get_token_limits(self) -> TokenLimits | None:
                                  """Get token limits for the current model."""
                                  if not self.model_name:
                                      return None
                              
                                  try:
                                      return await get_model_limits(self.model_name)
                                  except ValueError:
                                      logger.debug("Could not get token limits for model: %s", self.model_name)
                                      return None
                              

                              is_busy

                              is_busy() -> bool
                              

                              Check if agent is currently processing tasks.

                               Source code in src/llmling_agent/agent/agent.py, lines 767-769
                              def is_busy(self) -> bool:
                                  """Check if agent is currently processing tasks."""
                                  return bool(self._pending_tasks or self._background_task)
                              

                              open async classmethod

                              open(
                                  config_path: StrPath | Config | None = None,
                                  *,
                                  result_type: None = None,
                                  model: ModelType = None,
                                  session: SessionIdType | SessionQuery = None,
                                  system_prompt: str | Sequence[str] = (),
                                  name: str = "llmling-agent",
                                  retries: int = 1,
                                  result_retries: int | None = None,
                                  end_strategy: EndStrategy = "early",
                                  defer_model_check: bool = False,
                                  **kwargs: Any,
                              ) -> AbstractAsyncContextManager[Agent[TDeps]]
                              
                              open(
                                  config_path: StrPath | Config | None = None,
                                  *,
                                  result_type: type[TResult],
                                  model: ModelType = None,
                                  session: SessionIdType | SessionQuery = None,
                                  system_prompt: str | Sequence[str] = (),
                                  name: str = "llmling-agent",
                                  retries: int = 1,
                                  result_retries: int | None = None,
                                  end_strategy: EndStrategy = "early",
                                  defer_model_check: bool = False,
                                  **kwargs: Any,
                              ) -> AbstractAsyncContextManager[StructuredAgent[TDeps, TResult]]
                              
                              open(
                                  config_path: StrPath | Config | None = None,
                                  *,
                                  result_type: type[TResult] | None = None,
                                  model: ModelType = None,
                                  session: SessionIdType | SessionQuery = None,
                                  system_prompt: str | Sequence[str] = (),
                                  name: str = "llmling-agent",
                                  retries: int = 1,
                                  result_retries: int | None = None,
                                  end_strategy: EndStrategy = "early",
                                  defer_model_check: bool = False,
                                  **kwargs: Any,
                              ) -> AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]
                              

                              Open and configure an agent with an auto-managed runtime configuration.

                              This is a convenience method that combines RuntimeConfig.open with agent creation.

                              Parameters:

                               config_path (StrPath | Config | None, default: None): Path to the runtime configuration file or a Config instance (defaults to Config())
                               result_type (type[TResult] | None, default: None): Optional type for structured responses
                               model (ModelType, default: None): The default model to use (defaults to GPT-4)
                               session (SessionIdType | SessionQuery, default: None): Optional id or Session query to recover a conversation
                               system_prompt (str | Sequence[str], default: ()): Static system prompts to use for this agent
                               name (str, default: 'llmling-agent'): Name of the agent for logging
                               retries (int, default: 1): Default number of retries for failed operations
                               result_retries (int | None, default: None): Max retries for result validation (defaults to retries)
                               end_strategy (EndStrategy, default: 'early'): Strategy for handling tool calls that are requested alongside a final result
                               defer_model_check (bool, default: False): Whether to defer model evaluation until first run
                               **kwargs (Any, default: {}): Additional arguments for PydanticAI agent

                              Yields:

                               AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]: Configured Agent instance

                              Example
                              async with Agent.open("config.yml") as agent:
                                  result = await agent.run("Hello!")
                                  print(result.data)
                              
                               Source code in src/llmling_agent/agent/agent.py, lines 467-536
                              @classmethod
                              @asynccontextmanager
                              async def open[TResult](
                                  cls,
                                  config_path: StrPath | Config | None = None,
                                  *,
                                  result_type: type[TResult] | None = None,
                                  model: ModelType = None,
                                  session: SessionIdType | SessionQuery = None,
                                  system_prompt: str | Sequence[str] = (),
                                  name: str = "llmling-agent",
                                  retries: int = 1,
                                  result_retries: int | None = None,
                                  end_strategy: EndStrategy = "early",
                                  defer_model_check: bool = False,
                                  **kwargs: Any,
                              ) -> AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]:
                                  """Open and configure an agent with an auto-managed runtime configuration.
                              
                                  This is a convenience method that combines RuntimeConfig.open with agent creation.
                              
                                  Args:
                                      config_path: Path to the runtime configuration file or a Config instance
                                                  (defaults to Config())
                                      result_type: Optional type for structured responses
                                       model: The default model to use
                                      session: Optional id or Session query to recover a conversation
                                      system_prompt: Static system prompts to use for this agent
                                      name: Name of the agent for logging
                                      retries: Default number of retries for failed operations
                                      result_retries: Max retries for result validation (defaults to retries)
                                      end_strategy: Strategy for handling tool calls that are requested alongside
                                                  a final result
                                      defer_model_check: Whether to defer model evaluation until first run
                                      **kwargs: Additional arguments for PydanticAI agent
                              
                                  Yields:
                                      Configured Agent instance
                              
                                  Example:
                                      ```python
                                      async with Agent.open("config.yml") as agent:
                                          result = await agent.run("Hello!")
                                          print(result.data)
                                      ```
                                  """
                                  if config_path is None:
                                      config_path = Config()
                                  async with RuntimeConfig.open(config_path) as runtime:
                                      agent = cls(
                                          runtime=runtime,
                                          model=model,
                                          session=session,
                                          system_prompt=system_prompt,
                                          name=name,
                                          retries=retries,
                                          end_strategy=end_strategy,
                                          result_retries=result_retries,
                                          defer_model_check=defer_model_check,
                                          result_type=result_type,
                                          **kwargs,
                                      )
                                      try:
                                          async with agent:
                                              yield (
                                                  agent if result_type is None else agent.to_structured(result_type)
                                              )
                                      finally:
                                          # Any cleanup if needed
                                          pass
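
                               The docstring example above covers the unstructured case; when result_type is supplied, the context manager yields a StructuredAgent and run() results are parsed into that type. A minimal sketch, assuming a pydantic model and a config.yml on disk (the import path mirrors the source location shown here; adjust it if your install re-exports Agent at the package root):

                               ```python
                               import asyncio

                               from pydantic import BaseModel

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               class Summary(BaseModel):
                                   title: str
                                   bullets: list[str]


                               async def main() -> None:
                                   # result_type switches the yielded object to a StructuredAgent
                                   async with Agent.open("config.yml", result_type=Summary) as agent:
                                       message = await agent.run("Summarize the report in three bullets")
                                       summary = message.content  # parsed Summary instance (payload lives in .content)
                                       print(summary.title, summary.bullets)


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```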
                              

                              open_agent async classmethod

                              open_agent(
                                  config: StrPath | AgentsManifest,
                                  agent_name: str,
                                  *,
                                  deps: TDeps | None = None,
                                  result_type: None = None,
                                  model: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  model_settings: dict[str, Any] | None = None,
                                  tools: list[ToolType] | None = None,
                                  tool_choice: bool | str | list[str] = True,
                                  end_strategy: EndStrategy = "early",
                              ) -> AbstractAsyncContextManager[Agent[TDeps]]
                              
                              open_agent(
                                  config: StrPath | AgentsManifest,
                                  agent_name: str,
                                  *,
                                  deps: TDeps | None = None,
                                  result_type: type[TResult],
                                  model: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  model_settings: dict[str, Any] | None = None,
                                  tools: list[ToolType] | None = None,
                                  tool_choice: bool | str | list[str] = True,
                                  end_strategy: EndStrategy = "early",
                              ) -> AbstractAsyncContextManager[StructuredAgent[TDeps, TResult]]
                              
                              open_agent(
                                  config: StrPath | AgentsManifest,
                                  agent_name: str,
                                  *,
                                  deps: TDeps | None = None,
                                  result_type: type[TResult] | None = None,
                                  model: str | ModelType = None,
                                  session: SessionIdType | SessionQuery = None,
                                  model_settings: dict[str, Any] | None = None,
                                  tools: list[ToolType] | None = None,
                                  tool_choice: bool | str | list[str] = True,
                                  end_strategy: EndStrategy = "early",
                                  retries: int = 1,
                                  result_tool_name: str = "final_result",
                                  result_tool_description: str | None = None,
                                  result_retries: int | None = None,
                                  system_prompt: str | Sequence[str] | None = None,
                                  enable_db_logging: bool = True,
                              ) -> AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]
                              

                              Open and configure a specific agent from configuration.

                               Source code in src/llmling_agent/agent/agent.py (lines 572-706)
                              @classmethod
                              @asynccontextmanager
                              async def open_agent[TResult](
                                  cls,
                                  config: StrPath | AgentsManifest,
                                  agent_name: str,
                                  *,
                                  deps: TDeps | None = None,  # TDeps from class
                                  result_type: type[TResult] | None = None,
                                  model: str | ModelType = None,
                                  session: SessionIdType | SessionQuery = None,
                                  model_settings: dict[str, Any] | None = None,
                                  tools: list[ToolType] | None = None,
                                  tool_choice: bool | str | list[str] = True,
                                  end_strategy: EndStrategy = "early",
                                  retries: int = 1,
                                  result_tool_name: str = "final_result",
                                  result_tool_description: str | None = None,
                                  result_retries: int | None = None,
                                  system_prompt: str | Sequence[str] | None = None,
                                  enable_db_logging: bool = True,
                              ) -> AsyncIterator[Agent[TDeps] | StructuredAgent[TDeps, TResult]]:
                                  """Open and configure a specific agent from configuration."""
                                  """Implementation with all parameters..."""
                                  """Open and configure a specific agent from configuration.
                              
                                  Args:
                                      config: Path to agent configuration file or AgentsManifest instance
                                      agent_name: Name of the agent to load
                              
                                      # Basic Configuration
                                      model: Optional model override
                                      result_type: Optional type for structured responses
                                      model_settings: Additional model-specific settings
                                      session: Optional id or Session query to recover a conversation
                              
                                      # Tool Configuration
                                      tools: Additional tools to register (import paths or callables)
                                      tool_choice: Control tool usage:
                                          - True: Allow all tools
                                          - False: No tools
                                          - str: Use specific tool
                                          - list[str]: Allow specific tools
                                      end_strategy: Strategy for handling tool calls that are requested alongside
                                                      a final result
                              
                                      # Execution Settings
                                      retries: Default number of retries for failed operations
                                      result_tool_name: Name of the tool used for final result
                                      result_tool_description: Description of the final result tool
                                      result_retries: Max retries for result validation (defaults to retries)
                              
                                      # Other Settings
                                      system_prompt: Additional system prompts
                                      enable_db_logging: Whether to enable logging for the agent
                              
                                  Yields:
                                      Configured Agent instance
                              
                                  Raises:
                                      ValueError: If agent not found or configuration invalid
                                      RuntimeError: If agent initialization fails
                              
                                  Example:
                                      ```python
                                      async with Agent.open_agent(
                                          "agents.yml",
                                          "my_agent",
                                          model="gpt-4",
                                          tools=[my_custom_tool],
                                      ) as agent:
                                          result = await agent.run("Do something")
                                      ```
                                  """
                                  if isinstance(config, AgentsManifest):
                                      agent_def = config
                                  else:
                                      agent_def = AgentsManifest.from_file(config)
                              
                                  if agent_name not in agent_def.agents:
                                      msg = f"Agent {agent_name!r} not found in {config}"
                                      raise ValueError(msg)
                              
                                  agent_config = agent_def.agents[agent_name]
                                  resolved_type = result_type or agent_def.get_result_type(agent_name)
                              
                                  # Use model from override or agent config
                                  actual_model = model or agent_config.model
                                  if not actual_model:
                                      msg = "Model must be specified either in config or as override"
                                      raise ValueError(msg)
                              
                                  # Create context
                                  context = AgentContext[TDeps](  # Use TDeps here
                                      agent_name=agent_name,
                                      capabilities=agent_config.capabilities,
                                      definition=agent_def,
                                      config=agent_config,
                                      model_settings=model_settings or {},
                                  )
                              
                                  # Set up runtime
                                  cfg = agent_config.get_config()
                                  async with RuntimeConfig.open(cfg) as runtime:
                                      # Create base agent with correct typing
                                      base_agent = cls(  # cls is Agent[TDeps]
                                          runtime=runtime,
                                          context=context,
                                          model=actual_model,  # type: ignore[arg-type]
                                          retries=retries,
                                          session=session,
                                          result_retries=result_retries,
                                          end_strategy=end_strategy,
                                          tool_choice=tool_choice,
                                          tools=tools,
                                          system_prompt=system_prompt or [],
                                          enable_db_logging=enable_db_logging,
                                      )
                                      try:
                                          async with base_agent:
                                              if resolved_type is not None and resolved_type is not str:
                                                  # Yield structured agent with correct typing
                                                  from llmling_agent.agent.structured import StructuredAgent
                              
                                                  yield StructuredAgent[TDeps, TResult](  # Use TDeps and TResult
                                                      base_agent,
                                                      resolved_type,
                                                      tool_description=result_tool_description,
                                                      tool_name=result_tool_name,
                                                  )
                                              else:
                                                  yield base_agent
                                      finally:
                                          # Any cleanup if needed
                                          pass
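
                               Beyond the docstring example, tools and tool_choice can be combined to register local callables and restrict the model to them. A sketch, assuming an agents.yml manifest containing an agent entry named "my_agent" (the model id and import path are placeholders):

                               ```python
                               import asyncio

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               def word_count(text: str) -> int:
                                   """Hypothetical local tool: count the words in a text."""
                                   return len(text.split())


                               async def main() -> None:
                                   async with Agent.open_agent(
                                       "agents.yml",                 # manifest path (assumed)
                                       "my_agent",                   # agent entry in the manifest (assumed)
                                       model="openai:gpt-4o-mini",   # optional override of the configured model (placeholder id)
                                       tools=[word_count],           # extra callables registered as tools
                                       tool_choice=["word_count"],   # only allow this tool
                                       enable_db_logging=False,
                                   ) as agent:
                                       msg = await agent.run("How many words are in 'to be or not to be'?")
                                       print(msg.content)


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```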
                              

                              pass_results_to

                              pass_results_to(
                                  other: AnyAgent[Any, Any] | str,
                                  prompt: str | None = None,
                                  connection_type: ConnectionType = "run",
                                  priority: int = 0,
                                  delay: timedelta | None = None,
                              ) -> Talk
                              
                              pass_results_to(
                                  other: Team[Any],
                                  prompt: str | None = None,
                                  connection_type: ConnectionType = "run",
                                  priority: int = 0,
                                  delay: timedelta | None = None,
                              ) -> TeamTalk
                              
                              pass_results_to(
                                  other: AnyAgent[Any, Any] | Team[Any] | str,
                                  prompt: str | None = None,
                                  connection_type: ConnectionType = "run",
                                  priority: int = 0,
                                  delay: timedelta | None = None,
                              ) -> Talk | TeamTalk
                              

                              Forward results to another agent or all agents in a team.

                               Source code in src/llmling_agent/agent/agent.py (lines 747-761)
                              def pass_results_to(
                                  self,
                                  other: AnyAgent[Any, Any] | Team[Any] | str,
                                  prompt: str | None = None,
                                  connection_type: ConnectionType = "run",
                                  priority: int = 0,
                                  delay: timedelta | None = None,
                              ) -> Talk | TeamTalk:
                                  """Forward results to another agent or all agents in a team."""
                                  return self.connections.connect_agent_to(
                                      other,
                                      connection_type=connection_type,
                                      priority=priority,
                                      delay=delay,
                                  )
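
                               pass_results_to() registers a forwarding connection, so results the source agent produces afterwards are routed to the target. A sketch wiring two agents together (config paths, agent names, and the exact semantics of the returned Talk object are assumptions beyond what the signature shows):

                               ```python
                               import asyncio

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               async def main() -> None:
                                   async with (
                                       Agent.open("researcher.yml", name="researcher") as researcher,
                                       Agent.open("writer.yml", name="writer") as writer,
                                   ):
                                       # Forward every researcher result to writer and run it ("run" connection).
                                       talk = researcher.pass_results_to(writer, connection_type="run")

                                       await researcher.run("Collect three facts about volcanoes")
                                       # writer has now been triggered with the researcher's output via `talk`.


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```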
                              

                              register_worker

                              register_worker(
                                  worker: Agent[Any],
                                  *,
                                  name: str | None = None,
                                  reset_history_on_run: bool = True,
                                  pass_message_history: bool = False,
                                  share_context: bool = False,
                              ) -> ToolInfo
                              

                              Register another agent as a worker tool.

                               Source code in src/llmling_agent/agent/agent.py (lines 1256-1273)
                              def register_worker(
                                  self,
                                  worker: Agent[Any],
                                  *,
                                  name: str | None = None,
                                  reset_history_on_run: bool = True,
                                  pass_message_history: bool = False,
                                  share_context: bool = False,
                              ) -> ToolInfo:
                                  """Register another agent as a worker tool."""
                                  return self.tools.register_worker(
                                      worker,
                                      name=name,
                                      reset_history_on_run=reset_history_on_run,
                                      pass_message_history=pass_message_history,
                                      share_context=share_context,
                                      parent=self if (pass_message_history or share_context) else None,
                                  )
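
                               Registering a worker exposes the other agent as a tool the parent's model can call; when pass_message_history or share_context is enabled, the parent is passed along (see the parent= argument above) so the worker can see its conversation. A sketch with assumed config paths and names:

                               ```python
                               import asyncio

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               async def main() -> None:
                                   async with (
                                       Agent.open("coordinator.yml", name="coordinator") as coordinator,
                                       Agent.open("calculator.yml", name="calculator") as calculator,
                                   ):
                                       # Expose `calculator` to the coordinator's model as a callable tool.
                                       tool_info = coordinator.register_worker(
                                           calculator,
                                           name="calculator",          # tool name seen by the model
                                           reset_history_on_run=True,  # worker starts fresh on every call
                                       )
                                       # tool_info is the ToolInfo describing the registered worker tool.

                                       msg = await coordinator.run("Ask the calculator tool for 17 + 25")
                                       print(msg.content)


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```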
                              

                              run async

                              run(
                                  *prompt: AnyPromptType,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  store_history: bool = True,
                              ) -> ChatMessage[TResult]
                              

                              Run agent with prompt and get response.

                               Parameters:

                                   prompt (AnyPromptType): User query or instruction. Default: ()
                                   result_type (type[TResult] | None): Optional type for structured responses. Default: None
                                   deps (TDeps | None): Optional dependencies for the agent. Default: None
                                   model (ModelType): Optional model override. Default: None
                                   store_history (bool): Whether the message exchange should be added to the context window. Default: True

                               Returns:

                                   ChatMessage[TResult]: Result containing response and run information

                               Raises:

                                   UnexpectedModelBehavior: If the model fails or behaves unexpectedly

                               Source code in src/llmling_agent/agent/agent.py (lines 776-861)
                              @logfire.instrument("Calling Agent.run: {prompt}:")
                              async def run(
                                  self,
                                  *prompt: AnyPromptType,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  store_history: bool = True,
                              ) -> ChatMessage[TResult]:
                                  """Run agent with prompt and get response.
                              
                                  Args:
                                      prompt: User query or instruction
                                      result_type: Optional type for structured responses
                                      deps: Optional dependencies for the agent
                                      model: Optional model override
                                      store_history: Whether the message exchange should be added to the
                                                     context window
                              
                                  Returns:
                                      Result containing response and run information
                              
                                  Raises:
                                      UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                  """
                                  """Run agent with prompt and get response."""
                                  prompts = [await to_prompt(p) for p in prompt]
                                  final_prompt = "\n\n".join(prompts)
                                  if deps is not None:
                                      self.context.data = deps
                                  self.context.current_prompt = final_prompt
                                  self.set_result_type(result_type)
                                  wait_for_chain = False  # TODO
                              
                                  try:
                                      # Create and emit user message
                                      user_msg = ChatMessage[str](content=final_prompt, role="user")
                                      self.message_received.emit(user_msg)
                              
                                      # Get response through provider
                                      message_id = str(uuid4())
                                      start_time = time.perf_counter()
                                      result = await self._provider.generate_response(
                                          final_prompt,
                                          message_id,
                                          result_type=result_type,
                                          model=model,
                                          store_history=store_history,
                                      )
                              
                                      # Get cost info for assistant response
                                      usage = result.usage
                                      cost_info = (
                                          await TokenCost.from_usage(
                                              usage, result.model_name, final_prompt, str(result.content)
                                          )
                                          if self.model_name and usage
                                          else None
                                      )
                              
                                      # Create final message with all metrics
                                      assistant_msg = ChatMessage[TResult](
                                          content=result.content,
                                          role="assistant",
                                          name=self.name,
                                          model=self.model_name,
                                          message_id=message_id,
                                          tool_calls=result.tool_calls,
                                          cost_info=cost_info,
                                          response_time=time.perf_counter() - start_time,
                                      )
                                      if self._debug:
                                          import devtools
                              
                                          devtools.debug(assistant_msg)
                              
                                      self.message_sent.emit(assistant_msg)
                              
                                  except Exception:
                                      logger.exception("Agent run failed")
                                      raise
                              
                                  else:
                                      if wait_for_chain:
                                          await self.wait_for_chain()
                                      return assistant_msg
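
                               result_type and deps can be supplied per call: deps is stored on the agent context before the provider is invoked, and result_type requests structured output for just this run. A sketch, assuming a config.yml and a pydantic model:

                               ```python
                               import asyncio
                               from dataclasses import dataclass

                               from pydantic import BaseModel

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               @dataclass
                               class AppDeps:
                                   """Hypothetical dependency object exposed to tools via the agent context."""

                                   user_id: str


                               class Verdict(BaseModel):
                                   approved: bool
                                   reason: str


                               async def main() -> None:
                                   async with Agent.open("config.yml") as agent:  # config path assumed
                                       msg = await agent.run(
                                           "Should the refund request be approved?",
                                           result_type=Verdict,           # structured output for this call only
                                           deps=AppDeps(user_id="u-123"),
                                           store_history=False,           # keep this exchange out of the context window
                                       )
                                       verdict = msg.content              # Verdict instance
                                       print(verdict.approved, verdict.reason)


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```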
                              

                              run_continuous async

                              run_continuous(
                                  prompt: AnyPromptType,
                                  *,
                                  max_count: int | None = None,
                                  interval: float = 1.0,
                                  block: bool = False,
                                  **kwargs: Any,
                              ) -> ChatMessage[TResult] | None
                              

                              Run agent continuously with prompt or dynamic prompt function.

                               Parameters:

                                   prompt (AnyPromptType): Static prompt or function that generates prompts. Required.
                                   max_count (int | None): Maximum number of runs (None = infinite). Default: None
                                   interval (float): Seconds between runs. Default: 1.0
                                   block (bool): Whether to block until completion. Default: False
                                   **kwargs (Any): Arguments passed to run(). Default: {}

                               Source code in src/llmling_agent/agent/agent.py (lines 1112-1174)
                              async def run_continuous(
                                  self,
                                  prompt: AnyPromptType,
                                  *,
                                  max_count: int | None = None,
                                  interval: float = 1.0,
                                  block: bool = False,
                                  **kwargs: Any,
                              ) -> ChatMessage[TResult] | None:
                                  """Run agent continuously with prompt or dynamic prompt function.
                              
                                  Args:
                                      prompt: Static prompt or function that generates prompts
                                      max_count: Maximum number of runs (None = infinite)
                                      interval: Seconds between runs
                                      block: Whether to block until completion
                                      **kwargs: Arguments passed to run()
                                  """
                              
                                  async def _continuous():
                                      count = 0
                                      msg = "%s: Starting continuous run (max_count=%s, interval=%s)"
                                      logger.debug(msg, self.name, max_count, interval)
                                      while max_count is None or count < max_count:
                                          try:
                                              current_prompt = (
                                                  call_with_context(prompt, self.context, **kwargs)
                                                  if callable(prompt)
                                                  else to_prompt(prompt)
                                              )
                                              msg = "%s: Generated prompt #%d: %s"
                                              logger.debug(msg, self.name, count, current_prompt)
                              
                                              await self.run(current_prompt, **kwargs)
                                              msg = "%s: Run continous result #%d"
                                              logger.debug(msg, self.name, count)
                              
                                              count += 1
                                              await asyncio.sleep(interval)
                                          except asyncio.CancelledError:
                                              logger.debug("%s: Continuous run cancelled", self.name)
                                              break
                                          except Exception:
                                              logger.exception("%s: Background run failed", self.name)
                                              await asyncio.sleep(interval)
                                      msg = "%s: Continuous run completed after %d iterations"
                                      logger.debug(msg, self.name, count)
                              
                                  # Cancel any existing background task
                                  await self.stop()
                                  task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                              
                                  if block:
                                      try:
                                          await task
                                          return None
                                      finally:
                                          if not task.done():
                                              task.cancel()
                                  else:
                                      logger.debug("%s: Started background task %s", self.name, task.get_name())
                                      self._background_task = task
                                      return None
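
                               With block=False the call schedules a background task and returns immediately; stop() (used above to cancel any existing task) ends it early. A sketch with a callable prompt; the callable's exact signature when invoked through the context is an assumption, as is the config path:

                               ```python
                               import asyncio

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               def next_prompt(ctx) -> str:
                                   """Hypothetical prompt factory; called with the agent context on each iteration."""
                                   return "Give me a one-line status update."


                               async def main() -> None:
                                   async with Agent.open("config.yml") as agent:  # config path assumed
                                       # Non-blocking: runs up to 3 times, 5 seconds apart, in the background.
                                       await agent.run_continuous(next_prompt, max_count=3, interval=5.0)

                                       await asyncio.sleep(20)  # let the iterations happen
                                       await agent.stop()       # cancel the background task if it is still running


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```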
                              

                              run_stream async

                              run_stream(
                                  *prompt: AnyPromptType,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  store_history: bool = True,
                              ) -> AsyncIterator[StreamedRunResult[AgentContext[TDeps], TResult]]
                              

                              Run agent with prompt and get a streaming response.

                               Parameters:

                                   prompt (AnyPromptType): User query or instruction. Default: ()
                                   result_type (type[TResult] | None): Optional type for structured responses. Default: None
                                   deps (TDeps | None): Optional dependencies for the agent. Default: None
                                   model (ModelType): Optional model override. Default: None
                                   store_history (bool): Whether the message exchange should be added to the context window. Default: True

                               Returns:

                                   AsyncIterator[StreamedRunResult[AgentContext[TDeps], TResult]]: A streaming result to iterate over.

                               Raises:

                                   UnexpectedModelBehavior: If the model fails or behaves unexpectedly

                               Source code in src/llmling_agent/agent/agent.py (lines 916-990)
                              @asynccontextmanager
                              async def run_stream(
                                  self,
                                  *prompt: AnyPromptType,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  store_history: bool = True,
                              ) -> AsyncIterator[StreamedRunResult[AgentContext[TDeps], TResult]]:
                                  """Run agent with prompt and get a streaming response.
                              
                                  Args:
                                      prompt: User query or instruction
                                      result_type: Optional type for structured responses
                                      deps: Optional dependencies for the agent
                                      model: Optional model override
                                      store_history: Whether the message exchange should be added to the
                                                     context window
                              
                                  Returns:
                                      A streaming result to iterate over.
                              
                                  Raises:
                                      UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                  """
                                  prompts = [await to_prompt(p) for p in prompt]
                                  final_prompt = "\n\n".join(prompts)
                                  self.set_result_type(result_type)
                              
                                  if deps is not None:
                                      self.context.data = deps
                                  self.context.current_prompt = final_prompt
                                  try:
                                      # Create and emit user message
                                      user_msg = ChatMessage[str](content=final_prompt, role="user")
                                      self.message_received.emit(user_msg)
                                      message_id = str(uuid4())
                                      start_time = time.perf_counter()
                              
                                      async with self._provider.stream_response(
                                          final_prompt,
                                          message_id,
                                          result_type=result_type,
                                          model=model,
                                          store_history=store_history,
                                      ) as stream:
                                          yield stream  # type: ignore
                              
                                          # After streaming is done, create and emit final message
                                          usage = stream.usage()
                                          cost_info = (
                                              await TokenCost.from_usage(
                                                  usage,
                                                  stream.model_name,  # type: ignore
                                                  final_prompt,
                                                  str(stream.formatted_content),  # type: ignore
                                              )
                                              if self.model_name
                                              else None
                                          )
                              
                                          assistant_msg = ChatMessage[TResult](
                                              content=cast(TResult, stream.formatted_content),  # type: ignore
                                              role="assistant",
                                              name=self.name,
                                              model=self.model_name,
                                              message_id=message_id,
                                              cost_info=cost_info,
                                              response_time=time.perf_counter() - start_time,
                                          )
                                          self.message_sent.emit(assistant_msg)
                              
                                  except Exception:
                                      logger.exception("Agent stream failed")
                                      raise
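
                               The yielded object is the provider's streaming result, so its streaming helpers can be consumed inside the context block; the final ChatMessage is emitted once the block exits. A sketch that assumes the pydantic-ai StreamedRunResult interface (stream_text()) and a config.yml:

                               ```python
                               import asyncio

                               from llmling_agent.agent.agent import Agent  # import path assumed


                               async def main() -> None:
                                   async with Agent.open("config.yml") as agent:  # config path assumed
                                       async with agent.run_stream("Tell me a short story") as stream:
                                           # stream_text(delta=True) yields text fragments as they arrive
                                           # (assumes the pydantic-ai StreamedRunResult interface).
                                           async for chunk in stream.stream_text(delta=True):
                                               print(chunk, end="", flush=True)
                                       print()  # final newline after the stream completes


                               if __name__ == "__main__":
                                   asyncio.run(main())
                               ```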
                              

                              run_sync

                              run_sync(
                                  *prompt: AnyPromptType,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  store_history: bool = True,
                              ) -> ChatMessage[TResult]
                              

                              Run agent synchronously (convenience wrapper).

                               Parameters:

                                   prompt (AnyPromptType): User query or instruction. Default: ()
                                   result_type (type[TResult] | None): Optional type for structured responses. Default: None
                                   deps (TDeps | None): Optional dependencies for the agent. Default: None
                                   model (ModelType): Optional model override. Default: None
                                   store_history (bool): Whether the message exchange should be added to the context window. Default: True

                               Returns:

                                   Result containing response and run information

                               Source code in src/llmling_agent/agent/agent.py (lines 992-1026)
                              def run_sync(
                                  self,
                                  *prompt: AnyPromptType,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  store_history: bool = True,
                              ) -> ChatMessage[TResult]:
                                  """Run agent synchronously (convenience wrapper).
                              
                                  Args:
                                      prompt: User query or instruction
                                      result_type: Optional type for structured responses
                                      deps: Optional dependencies for the agent
                                      model: Optional model override
                                      store_history: Whether the message exchange should be added to the
                                                     context window
                                  Returns:
                                      Result containing response and run information
                                  """
                                  try:
                                      return asyncio.run(
                                          self.run(
                                              prompt,
                                              deps=deps,
                                              model=model,
                                              store_history=store_history,
                                              result_type=result_type,
                                          )
                                      )
                                  except KeyboardInterrupt:
                                      raise
                                  except Exception:
                                      logger.exception("Sync agent run failed")
                                      raise
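
                               A minimal usage sketch (assumes an initialized Agent instance named agent; the prompt text is illustrative):

                                   # Run a one-off query without managing an event loop yourself.
                                   message = agent.run_sync(
                                       "Summarize the latest report",
                                       store_history=False,  # keep this exchange out of the context window
                                   )
                                   print(message.data)  # payload of the returned ChatMessage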
                              

                              run_task async

                              run_task(
                                  task: AgentTask[TDeps, TResult],
                                  *,
                                  store_history: bool = True,
                                  include_agent_tools: bool = True,
                              ) -> ChatMessage[TResult]
                              

                              Execute a pre-defined task.

                               Parameters:

                                   task (AgentTask[TDeps, TResult]): Task configuration to execute. Required.
                                   store_history (bool): Whether the message exchange should be added to the context window. Default: True
                                   include_agent_tools (bool): Whether to include agent tools. Default: True

                               Returns: Task execution result

                               Raises:

                                   TaskError: If task execution fails
                                   ValueError: If task configuration is invalid

                              Source code in src/llmling_agent/agent/agent.py
                              async def run_task[TResult](
                                  self,
                                  task: AgentTask[TDeps, TResult],
                                  *,
                                  store_history: bool = True,
                                  include_agent_tools: bool = True,
                              ) -> ChatMessage[TResult]:
                                  """Execute a pre-defined task.
                              
                                  Args:
                                      task: Task configuration to execute
                                      store_history: Whether the message exchange should be added to the
                                                     context window
                                      include_agent_tools: Whether to include agent tools
                                  Returns:
                                      Task execution result
                              
                                  Raises:
                                      TaskError: If task execution fails
                                      ValueError: If task configuration is invalid
                                  """
                                  from llmling_agent.tasks import TaskError
                              
                                  original_result_type = self._result_type
                              
                                  self.set_result_type(task.result_type)
                              
                                  # Load task knowledge
                                  if task.knowledge:
                                      # Add knowledge sources to context
                                      resources: list[Resource | str] = list(task.knowledge.paths) + list(
                                          task.knowledge.resources
                                      )
                                      for source in resources:
                                          await self.conversation.load_context_source(source)
                                      for prompt in task.knowledge.prompts:
                                          if isinstance(prompt, StaticPrompt | DynamicPrompt | FilePrompt):
                                              await self.conversation.add_context_from_prompt(prompt)
                                          else:
                                              await self.conversation.load_context_source(prompt)
                              
                                  try:
                                      # Register task tools temporarily
                                      tools = [import_callable(cfg.import_path) for cfg in task.tool_configs]
                                      names = [cfg.name for cfg in task.tool_configs]
                                      descriptions = [cfg.description for cfg in task.tool_configs]
                                      tools = [
                                          LLMCallableTool.from_callable(
                                              tool, name_override=name, description_override=description
                                          )
                                          for tool, name, description in zip(tools, names, descriptions)
                                      ]
                                      with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                          # Execute task with task-specific tools
                                          from llmling_agent.tasks.strategies import DirectStrategy
                              
                                          strategy = DirectStrategy[TDeps, TResult]()
                                          return await strategy.execute(
                                              task=task,
                                              agent=self,
                                              store_history=store_history,
                                          )
                              
                                  except Exception as e:
                                      msg = f"Task execution failed: {e}"
                                      logger.exception(msg)
                                      raise TaskError(msg) from e
                                  finally:
                                      self.set_result_type(original_result_type)
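
                               A hedged sketch of executing a pre-defined task (assumes agent and an AgentTask instance named task built elsewhere; how the task itself is constructed depends on your configuration):

                                   # Run the task with only its own tools, excluding the agent's registered tools.
                                   result = await agent.run_task(task, include_agent_tools=False)
                                   print(result.data)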
                              

                              set_model

                              set_model(model: ModelType)
                              

                              Set the model for this agent.

                               Parameters:

                                   model (ModelType): New model to use (name or instance). Required.

                               Emits:

                                   model_changed signal with the new model

                              Source code in src/llmling_agent/agent/agent.py
                              def set_model(self, model: ModelType):
                                  """Set the model for this agent.
                              
                                  Args:
                                      model: New model to use (name or instance)
                              
                                  Emits:
                                      model_changed signal with the new model
                                  """
                                  self._provider.set_model(model)
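
                               For example (the model name is illustrative; any name or instance accepted by your provider works):

                                   # Switch the underlying model at runtime; connected listeners receive model_changed.
                                   agent.set_model("openai:gpt-4o-mini")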
                              

                              set_result_type

                              set_result_type(
                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              )
                              

                              Set or update the result type for this agent.

                               Parameters:

                                   result_type (type[TResult] | str | ResponseDefinition | None): New result type. Can be:
                                       - A Python type for validation
                                       - Name of a response definition
                                       - Response definition instance
                                       - None to reset to unstructured mode
                                       Required.
                                   tool_name (str | None): Optional override for tool name. Default: None
                                   tool_description (str | None): Optional override for tool description. Default: None

                              Source code in src/llmling_agent/agent/agent.py
                              def set_result_type(
                                  self,
                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              ):
                                  """Set or update the result type for this agent.
                              
                                  Args:
                                      result_type: New result type, can be:
                                          - A Python type for validation
                                          - Name of a response definition
                                          - Response definition instance
                                          - None to reset to unstructured mode
                                      tool_name: Optional override for tool name
                                      tool_description: Optional override for tool description
                                  """
                                  logger.debug("Setting result type to: %s", result_type)
                                   self._result_type = to_type(result_type)
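
                               A minimal sketch, assuming a Pydantic model defined in your own code:

                                   from pydantic import BaseModel

                                   class Answer(BaseModel):  # illustrative response model
                                       text: str
                                       confidence: float

                                   agent.set_result_type(Answer)  # validate future responses against Answer
                                   agent.set_result_type(None)    # later: reset to unstructured mode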
                              

                              share async

                              share(
                                  target: AnyAgent[TDeps, Any],
                                  *,
                                  tools: list[str] | None = None,
                                  resources: list[str] | None = None,
                                  history: bool | int | None = None,
                                  token_limit: int | None = None,
                              ) -> None
                              

                              Share capabilities and knowledge with another agent.

                               Parameters:

                                   target (AnyAgent[TDeps, Any]): Agent to share with. Required.
                                   tools (list[str] | None): List of tool names to share. Default: None
                                   resources (list[str] | None): List of resource names to share. Default: None
                                   history (bool | int | None): Share conversation history:
                                       - True: Share full history
                                       - int: Number of most recent messages to share
                                       - None: Don't share history
                                       Default: None
                                   token_limit (int | None): Optional max tokens for history. Default: None

                               Raises:

                                   ValueError: If requested items don't exist
                                   RuntimeError: If runtime not available for resources

                              Source code in src/llmling_agent/agent/agent.py
                              async def share(
                                  self,
                                  target: AnyAgent[TDeps, Any],
                                  *,
                                  tools: list[str] | None = None,
                                  resources: list[str] | None = None,
                                  history: bool | int | None = None,  # bool or number of messages
                                  token_limit: int | None = None,
                              ) -> None:
                                  """Share capabilities and knowledge with another agent.
                              
                                  Args:
                                      target: Agent to share with
                                      tools: List of tool names to share
                                      resources: List of resource names to share
                                      history: Share conversation history:
                                              - True: Share full history
                                              - int: Number of most recent messages to share
                                              - None: Don't share history
                                      token_limit: Optional max tokens for history
                              
                                  Raises:
                                      ValueError: If requested items don't exist
                                      RuntimeError: If runtime not available for resources
                                  """
                                  # Share tools if requested
                                  for name in tools or []:
                                      if tool := self.tools.get(name):
                                          meta = {"shared_from": self.name}
                                          target.tools.register_tool(tool.callable, metadata=meta)
                                      else:
                                          msg = f"Tool not found: {name}"
                                          raise ValueError(msg)
                              
                                  # Share resources if requested
                                  if resources:
                                      if not self.runtime:
                                          msg = "No runtime available for sharing resources"
                                          raise RuntimeError(msg)
                                      for name in resources:
                                          if resource := self.runtime.get_resource(name):
                                              await target.conversation.load_context_source(resource)
                                          else:
                                              msg = f"Resource not found: {name}"
                                              raise ValueError(msg)
                              
                                  # Share history if requested
                                  if history:
                                      history_text = await self.conversation.format_history(
                                          max_tokens=token_limit,
                                          num_messages=history if isinstance(history, int) else None,
                                      )
                                      await target.conversation.add_context_message(
                                          history_text, source=self.name, metadata={"type": "shared_history"}
                                      )
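
                               A hedged usage sketch (the tool name and target agent are illustrative and must exist in your setup):

                                   # Give another agent one of our tools plus the 10 most recent messages.
                                   await agent.share(
                                       other_agent,
                                       tools=["search_docs"],  # must be registered on this agent
                                       history=10,             # number of recent messages to share
                                       token_limit=2000,       # cap the size of the shared history
                                   )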
                              

                              stop async

                              stop()
                              

                              Stop continuous execution if running.

                              Source code in src/llmling_agent/agent/agent.py
                              async def stop(self):
                                  """Stop continuous execution if running."""
                                  if self._background_task and not self._background_task.done():
                                      self._background_task.cancel()
                                      await self._background_task
                                      self._background_task = None
                              

                              stop_passing_results_to

                              stop_passing_results_to(other: AnyAgent[Any, Any])
                              

                              Stop forwarding results to another agent.

                              Source code in src/llmling_agent/agent/agent.py
                              def stop_passing_results_to(self, other: AnyAgent[Any, Any]):
                                  """Stop forwarding results to another agent."""
                                  self.connections.disconnect(other)
                              

                              to_agent_tool

                              to_agent_tool(
                                  *,
                                  name: str | None = None,
                                  reset_history_on_run: bool = True,
                                  pass_message_history: bool = False,
                                  share_context: bool = False,
                                  parent: AnyAgent[Any, Any] | None = None,
                              ) -> LLMCallableTool
                              

                              Create a tool from this agent.

                               Parameters:

                                   name (str | None): Optional tool name override. Default: None
                                   reset_history_on_run (bool): Clear agent's history before each run. Default: True
                                   pass_message_history (bool): Pass parent's message history to agent. Default: False
                                   share_context (bool): Whether to pass parent's context/deps. Default: False
                                   parent (AnyAgent[Any, Any] | None): Optional parent agent for history/context sharing. Default: None

                              Source code in src/llmling_agent/agent/agent.py
                              def to_agent_tool(
                                  self,
                                  *,
                                  name: str | None = None,
                                  reset_history_on_run: bool = True,
                                  pass_message_history: bool = False,
                                  share_context: bool = False,
                                  parent: AnyAgent[Any, Any] | None = None,
                              ) -> LLMCallableTool:
                                  """Create a tool from this agent.
                              
                                  Args:
                                      name: Optional tool name override
                                      reset_history_on_run: Clear agent's history before each run
                                      pass_message_history: Pass parent's message history to agent
                                      share_context: Whether to pass parent's context/deps
                                      parent: Optional parent agent for history/context sharing
                                  """
                                   tool_name = name or f"ask_{self.name}"  # honor the optional name override
                              
                                  async def wrapped_tool(ctx: RunContext[AgentContext[TDeps]], prompt: str) -> str:
                                      if pass_message_history and not parent:
                                          msg = "Parent agent required for message history sharing"
                                          raise ToolError(msg)
                              
                                      if reset_history_on_run:
                                          self.conversation.clear()
                              
                                      history = None
                                      deps = ctx.deps.data if share_context else None
                                      if pass_message_history and parent:
                                          history = parent.conversation.get_history()
                                          old = self.conversation.get_history()
                                          self.conversation.set_history(history)
                                      result = await self.run(prompt, deps=deps, result_type=self._result_type)
                                      if history:
                                          self.conversation.set_history(old)
                                      return result.data
                              
                                  normalized_name = self.name.replace("_", " ").title()
                                  docstring = f"Get expert answer from specialized agent: {normalized_name}"
                                  if self.description:
                                      docstring = f"{docstring}\n\n{self.description}"
                              
                                  wrapped_tool.__doc__ = docstring
                                  wrapped_tool.__name__ = tool_name
                              
                                  return LLMCallableTool.from_callable(
                                      wrapped_tool,
                                      name_override=tool_name,
                                      description_override=docstring,
                                  )
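
                               A minimal sketch of exposing one agent as a tool for another (agent names are illustrative; registration assumes LLMCallableTool exposes a callable attribute, mirroring the pattern used by share above):

                                   # Wrap a specialist agent as a tool and register it on a coordinator agent.
                                   tool = specialist.to_agent_tool(reset_history_on_run=True)
                                   coordinator.tools.register_tool(tool.callable)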
                              

                              to_structured

                              to_structured(
                                  result_type: None,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              ) -> Self
                              
                              to_structured(
                                  result_type: type[TResult] | str | ResponseDefinition,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              ) -> StructuredAgent[TDeps, TResult]
                              
                              to_structured(
                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              ) -> StructuredAgent[TDeps, TResult] | Self
                              

                              Convert this agent to a structured agent.

                              If result_type is None, returns self unchanged (no wrapping). Otherwise creates a StructuredAgent wrapper.

                               Parameters:

                                   result_type (type[TResult] | str | ResponseDefinition | None): Type for structured responses. Can be:
                                       - A Python type (Pydantic model)
                                       - Name of response definition from context
                                       - Complete response definition
                                       - None to skip wrapping
                                       Required.
                                   tool_name (str | None): Optional override for result tool name. Default: None
                                   tool_description (str | None): Optional override for result tool description. Default: None

                               Returns:

                                   StructuredAgent[TDeps, TResult] | Self: Either a StructuredAgent wrapper or self unchanged

                              Source code in src/llmling_agent/agent/agent.py
                              def to_structured[TResult](
                                  self,
                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              ) -> StructuredAgent[TDeps, TResult] | Self:
                                  """Convert this agent to a structured agent.
                              
                                  If result_type is None, returns self unchanged (no wrapping).
                                  Otherwise creates a StructuredAgent wrapper.
                              
                                  Args:
                                      result_type: Type for structured responses. Can be:
                                          - A Python type (Pydantic model)
                                          - Name of response definition from context
                                          - Complete response definition
                                          - None to skip wrapping
                                      tool_name: Optional override for result tool name
                                      tool_description: Optional override for result tool description
                              
                                  Returns:
                                      Either StructuredAgent wrapper or self unchanged
                                  """
                                  if result_type is None:
                                      return self
                              
                                  from llmling_agent.agent import StructuredAgent
                              
                                  return StructuredAgent(
                                      self,
                                      result_type=result_type,
                                      tool_name=tool_name,
                                      tool_description=tool_description,
                                  )
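
                               A minimal sketch, assuming a Pydantic model of your own and that the wrapper's run() mirrors Agent.run():

                                   from pydantic import BaseModel

                                   class Verdict(BaseModel):  # illustrative result model
                                       approved: bool
                                       reason: str

                                   structured = agent.to_structured(Verdict)
                                   message = await structured.run("Review this change")
                                   print(message.data.approved)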
                              

                              wait_for_chain async

                              wait_for_chain(_seen: set[str] | None = None)
                              

                              Wait for this agent and all connected agents to complete their tasks.

                              Source code in src/llmling_agent/agent/agent.py
                              async def wait_for_chain(self, _seen: set[str] | None = None):
                                  """Wait for this agent and all connected agents to complete their tasks."""
                                  # Track seen agents to avoid cycles
                                  seen = _seen or {self.name}
                              
                                  # Wait for our own tasks
                                  await self.complete_tasks()
                              
                                  # Wait for connected agents
                                  for agent in self.connections.get_targets():
                                      if agent.name not in seen:
                                          seen.add(agent.name)
                                          await agent.wait_for_chain(seen)
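
                               For example:

                                   # After fanning work out to connected agents, block until the whole chain is idle.
                                   await agent.wait_for_chain()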
                              

                              AgentConfig

                              Bases: BaseModel

                              Configuration for a single agent in the system.

                               Defines an agent's complete configuration including its model, environment, capabilities, and behavior settings. Each agent can have its own:

                                   - Language model configuration
                                   - Environment setup (tools and resources)
                                   - Response type definitions
                                   - System prompts and default user prompts
                                   - Role-based capabilities

                              The configuration can be loaded from YAML or created programmatically.
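
                               A brief sketch of programmatic construction (field values are illustrative; see the field list in the source below for all available options):

                                   from llmling_agent.models.agents import AgentConfig

                                   config = AgentConfig(
                                       name="reviewer",
                                       model="openai:gpt-4o-mini",  # simple model name string
                                       system_prompts=["You review pull requests."],
                                       retries=2,
                                   )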

                              Source code in src/llmling_agent/models/agents.py
                              class AgentConfig(BaseModel):
                                  """Configuration for a single agent in the system.
                              
                                  Defines an agent's complete configuration including its model, environment,
                                  capabilities, and behavior settings. Each agent can have its own:
                                  - Language model configuration
                                  - Environment setup (tools and resources)
                                  - Response type definitions
                                  - System prompts and default user prompts
                                  - Role-based capabilities
                              
                                  The configuration can be loaded from YAML or created programmatically.
                                  """
                              
                                  type: ProviderConfig | Literal["ai", "human", "litellm"] = "ai"
                                  """Provider configuration or shorthand type"""
                              
                                  name: str | None = None
                                  """Name of the agent"""
                              
                                  inherits: str | None = None
                                  """Name of agent config to inherit from"""
                              
                                  description: str | None = None
                                  """Optional description of the agent's purpose"""
                              
                                  model: str | AnyModel | None = None  # pyright: ignore[reportInvalidTypeForm]
                                  """The model to use for this agent. Can be either a simple model name
                                  string (e.g. 'openai:gpt-4') or a structured model definition."""
                              
                                  environment: str | AgentEnvironment | None = None
                                  """Environment configuration (path or object)"""
                              
                                  capabilities: Capabilities = Field(default_factory=Capabilities)
                                  """Current agent's capabilities."""
                              
                                  mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                                  """List of MCP server configurations:
                                  - str entries are converted to StdioMCPServer
                                  - MCPServerConfig for full server configuration
                                  """
                              
                                  session: str | SessionQuery | None = None
                                  """Session configuration for conversation recovery."""
                              
                                  enable_db_logging: bool = True
                                  """Enable session database logging."""
                              
                                  result_type: str | ResponseDefinition | None = None
                                  """Name of the response definition to use"""
                              
                                  retries: int = 1
                                  """Number of retries for failed operations (maps to pydantic-ai's retries)"""
                              
                                  result_tool_name: str = "final_result"
                                  """Name of the tool used for structured responses"""
                              
                                  result_tool_description: str | None = None
                                  """Custom description for the result tool"""
                              
                                  result_retries: int | None = None
                                  """Max retries for result validation"""
                              
                                  end_strategy: EndStrategy = "early"
                                  """The strategy for handling multiple tool calls when a final result is found"""
                              
                                  # defer_model_check: bool = False
                                  # """Whether to defer model evaluation until first run"""
                              
                                  avatar: str | None = None
                                  """URL or path to agent's avatar image"""
                              
                                  system_prompts: list[str] = Field(default_factory=list)
                                  """System prompts for the agent"""
                              
                                  user_prompts: list[str] = Field(default_factory=list)
                                  """Default user prompts for the agent"""
                              
                                  # context_sources: list[ContextSource] = Field(default_factory=list)
                                  # """Initial context sources to load"""
                              
                                  include_role_prompts: bool = True
                                  """Whether to include default prompts based on the agent's role."""
                              
                                  model_settings: dict[str, Any] = Field(default_factory=dict)
                                  """Additional settings to pass to the model"""
                              
                                  config_file_path: str | None = None
                                  """Config file path for resolving environment."""
                              
                                  triggers: list[EventConfig] = Field(default_factory=list)
                                  """Event sources that activate this agent"""
                              
                                  knowledge: Knowledge | None = None
                                  """Knowledge sources for this agent."""
                              
                                  forward_to: list[ForwardingTarget] = Field(default_factory=list)
                                  """Targets to forward results to."""
                              
                                  workers: list[WorkerConfig] = Field(default_factory=list)
                                  """Worker agents which will be available as tools."""
                              
                                  debug: bool = False
                                  """Enable debug output for this agent."""
                              
                                  def is_structured(self) -> bool:
                                      """Check if this config defines a structured agent."""
                                      return self.result_type is not None
                              
                                  model_config = ConfigDict(
                                      frozen=True,
                                      arbitrary_types_allowed=True,
                                      extra="forbid",
                                      use_attribute_docstrings=True,
                                  )
                              
                                  @model_validator(mode="before")
                                  @classmethod
                                  def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                                      """Convert string workers to WorkerConfig."""
                                      if workers := data.get("workers"):
                                          data["workers"] = [
                                              WorkerConfig.from_str(w)
                                              if isinstance(w, str)
                                              else w
                                              if isinstance(w, WorkerConfig)  # Keep existing WorkerConfig
                                              else WorkerConfig(**w)  # Convert dict to WorkerConfig
                                              for w in workers
                                          ]
                                      return data
                              
                                  @model_validator(mode="before")
                                  @classmethod
                                  def validate_result_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                                      """Convert result type and apply its settings."""
                                      result_type = data.get("result_type")
                                      if isinstance(result_type, dict):
                                          # Extract response-specific settings
                                          tool_name = result_type.pop("result_tool_name", None)
                                          tool_description = result_type.pop("result_tool_description", None)
                                          retries = result_type.pop("result_retries", None)
                              
                                          # Convert remaining dict to ResponseDefinition
                                          if "type" not in result_type:
                                              result_type["type"] = "inline"
                                          data["result_type"] = InlineResponseDefinition(**result_type)
                              
                                          # Apply extracted settings to agent config
                                          if tool_name:
                                              data["result_tool_name"] = tool_name
                                          if tool_description:
                                              data["result_tool_description"] = tool_description
                                          if retries is not None:
                                              data["result_retries"] = retries
                              
                                      return data
                              
                                  @model_validator(mode="before")
                                  @classmethod
                                  def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                                      """Convert model inputs to appropriate format."""
                                      model = data.get("model")
                                      match model:
                                          case str():
                                              data["model"] = {"type": "string", "identifier": model}
                                          case TestModel():
                                              # Wrap TestModel in our custom wrapper
                                              data["model"] = {"type": "test", "model": model}
                                      return data
                              
                                  def get_session_query(self) -> SessionQuery | None:
                                      """Get session query from config."""
                                      if self.session is None:
                                          return None
                                      if isinstance(self.session, str):
                                          return SessionQuery(name=self.session)
                                      return self.session
                              
                                  def get_provider(self) -> AgentProvider:
                                      """Get resolved provider instance.
                              
                                      Creates provider instance based on configuration:
                                      - Full provider config: Use as-is
                                      - Shorthand type: Create default provider config
                                      """
                                      # If string shorthand is used, convert to default provider config
                                      from llmling_agent.models.providers import (
                                          AIProviderConfig,
                                          HumanProviderConfig,
                                          LiteLLMProviderConfig,
                                      )
                              
                                      provider_config = self.type
                                      if isinstance(provider_config, str):
                                          match provider_config:
                                              case "ai":
                                                  provider_config = AIProviderConfig()
                                              case "human":
                                                  provider_config = HumanProviderConfig()
                                              case "litellm":
                                                  provider_config = LiteLLMProviderConfig()
                                              case _:
                                                  msg = f"Invalid provider type: {provider_config}"
                                                  raise ValueError(msg)
                              
                                      # Create provider instance from config
                                      return provider_config.get_provider()
                              
                                  def get_mcp_servers(self) -> list[MCPServerConfig]:
                                      """Get processed MCP server configurations.
                              
                                      Converts string entries to StdioMCPServer configs by splitting
                                      into command and arguments.
                              
                                      Returns:
                                          List of MCPServerConfig instances
                              
                                      Raises:
                                          ValueError: If string entry is empty
                                      """
                                      configs: list[MCPServerConfig] = []
                              
                                      for server in self.mcp_servers:
                                          match server:
                                              case str():
                                                  parts = server.split()
                                                  if not parts:
                                                      msg = "Empty MCP server command"
                                                      raise ValueError(msg)
                              
                                                  configs.append(StdioMCPServer(command=parts[0], args=parts[1:]))
                                              case MCPServerBase():
                                                  configs.append(server)
                              
                                      return configs
                              
                                  def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                                      """Render system prompts with context."""
                                      if not context:
                                          # Default context
                                          context = {"name": self.name, "id": 1, "model": self.model}
                                      return [render_prompt(p, {"agent": context}) for p in self.system_prompts]
                              
                                  def get_config(self) -> Config:
                                      """Get configuration for this agent."""
                                      match self.environment:
                                          case None:
                                              # Create minimal config
                                              caps = LLMCapabilitiesConfig()
                                              global_settings = GlobalSettings(llm_capabilities=caps)
                                              return Config(global_settings=global_settings)
                                          case str() as path:
                                              # Backward compatibility: treat as file path
                                              resolved = self._resolve_environment_path(path, self.config_file_path)
                                              return Config.from_file(resolved)
                                          case FileEnvironment(uri=uri) as env:
                                              # Handle FileEnvironment instance
                                              resolved = env.get_file_path()
                                              return Config.from_file(resolved)
                                          case {"type": "file", "uri": uri}:
                                              # Handle raw dict matching file environment structure
                                              return Config.from_file(uri)
                                          case {"type": "inline", "config": config}:
                                              return config
                                          case InlineEnvironment() as config:
                                              return config
                                          case _:
                                              msg = f"Invalid environment configuration: {self.environment}"
                                              raise ValueError(msg)
                              
                                  def get_environment_path(self) -> str | None:
                                      """Get environment file path if available."""
                                      match self.environment:
                                          case str() as path:
                                              return self._resolve_environment_path(path, self.config_file_path)
                                          case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                              return uri
                                          case _:
                                              return None
                              
                                  def get_environment_display(self) -> str:
                                      """Get human-readable environment description."""
                                      match self.environment:
                                          case str() as path:
                                              return f"File: {path}"
                                          case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                              return f"File: {uri}"
                                          case {"type": "inline", "uri": uri} | InlineEnvironment(uri=uri) if uri:
                                              return f"Inline: {uri}"
                                          case {"type": "inline"} | InlineEnvironment():
                                              return "Inline configuration"
                                          case None:
                                              return "No environment configured"
                                          case _:
                                              return "Invalid environment configuration"
                              
                                  @staticmethod
                                  def _resolve_environment_path(env: str, config_file_path: str | None = None) -> str:
                                      """Resolve environment path from config store or relative path."""
                                      try:
                                          config_store = ConfigStore()
                                          return config_store.get_config(env)
                                      except KeyError:
                                          if config_file_path:
                                              base_dir = UPath(config_file_path).parent
                                              return str(base_dir / env)
                                          return env
                              
                                  @model_validator(mode="before")
                                  @classmethod
                                  def resolve_paths(cls, data: dict[str, Any]) -> dict[str, Any]:
                                      """Store config file path for later use."""
                                      if "environment" in data:
                                          # Just store the config path for later use
                                          data["config_file_path"] = data.get("config_file_path")
                                      return data
                              
                                  def get_agent_kwargs(self, **overrides) -> dict[str, Any]:
                                      """Get kwargs for Agent constructor.
                              
                                      Returns:
                                          dict[str, Any]: Kwargs to pass to Agent
                                      """
                                      # Include only the fields that Agent expects
                                      dct = {
                                          "name": self.name,
                                          "description": self.description,
                                          "agent_type": self.type,
                                          "model": self.model,
                                          "system_prompt": self.system_prompts,
                                          "retries": self.retries,
                                          "enable_db_logging": self.enable_db_logging,
                                          # "result_tool_name": self.result_tool_name,
                                          "session": self.session,
                                          # "result_tool_description": self.result_tool_description,
                                          "result_retries": self.result_retries,
                                          "end_strategy": self.end_strategy,
                                          "debug": self.debug,
                                          # "defer_model_check": self.defer_model_check,
                                          **self.model_settings,
                                      }
                                      # Note: result_type is handled separately as it needs to be resolved
                                      # from string to actual type in Agent initialization
                              
                                      dct.update(overrides)
                                      return dct
                              

                              avatar class-attribute instance-attribute

                              avatar: str | None = None
                              

                              URL or path to agent's avatar image

                              capabilities class-attribute instance-attribute

                              capabilities: Capabilities = Field(default_factory=Capabilities)
                              

                              Current agent's capabilities.

                              config_file_path class-attribute instance-attribute

                              config_file_path: str | None = None
                              

                              Config file path for resolving environment.

                              debug class-attribute instance-attribute

                              debug: bool = False
                              

                              Enable debug output for this agent.

                              description class-attribute instance-attribute

                              description: str | None = None
                              

                              Optional description of the agent's purpose

                              enable_db_logging class-attribute instance-attribute

                              enable_db_logging: bool = True
                              

                              Enable session database logging.

                              end_strategy class-attribute instance-attribute

                              end_strategy: EndStrategy = 'early'
                              

                              The strategy for handling multiple tool calls when a final result is found

                              environment class-attribute instance-attribute

                              environment: str | AgentEnvironment | None = None
                              

                              Environment configuration (path or object)

                              forward_to class-attribute instance-attribute

                              forward_to: list[ForwardingTarget] = Field(default_factory=list)
                              

                              Targets to forward results to.

                              include_role_prompts class-attribute instance-attribute

                              include_role_prompts: bool = True
                              

                              Whether to include default prompts based on the agent's role.

                              inherits class-attribute instance-attribute

                              inherits: str | None = None
                              

                              Name of agent config to inherit from

                              knowledge class-attribute instance-attribute

                              knowledge: Knowledge | None = None
                              

                              Knowledge sources for this agent.

                              mcp_servers class-attribute instance-attribute

                              mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                              

List of MCP server configurations:
- str entries are converted to StdioMCPServer
- MCPServerConfig for full server configuration

                              model class-attribute instance-attribute

                              model: str | AnyModel | None = None
                              

                              The model to use for this agent. Can be either a simple model name string (e.g. 'openai:gpt-4') or a structured model definition.
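As a rough sketch (agent name and model identifier are placeholders), a plain string is accepted here and normalized by the handle_model_types validator documented below:

    from llmling_agent.models.agents import AgentConfig

    # The string shorthand is rewritten by the before-validator into a structured
    # model definition of the form {"type": "string", "identifier": "openai:gpt-4"}.
    cfg = AgentConfig(name="assistant", model="openai:gpt-4")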

                              model_settings class-attribute instance-attribute

                              model_settings: dict[str, Any] = Field(default_factory=dict)
                              

                              Additional settings to pass to the model

                              name class-attribute instance-attribute

                              name: str | None = None
                              

                              Name of the agent

                              result_retries class-attribute instance-attribute

                              result_retries: int | None = None
                              

                              Max retries for result validation

                              result_tool_description class-attribute instance-attribute

                              result_tool_description: str | None = None
                              

                              Custom description for the result tool

                              result_tool_name class-attribute instance-attribute

                              result_tool_name: str = 'final_result'
                              

                              Name of the tool used for structured responses

                              result_type class-attribute instance-attribute

                              result_type: str | ResponseDefinition | None = None
                              

                              Name of the response definition to use

                              retries class-attribute instance-attribute

                              retries: int = 1
                              

                              Number of retries for failed operations (maps to pydantic-ai's retries)

                              session class-attribute instance-attribute

                              session: str | SessionQuery | None = None
                              

                              Session configuration for conversation recovery.

                              system_prompts class-attribute instance-attribute

                              system_prompts: list[str] = Field(default_factory=list)
                              

                              System prompts for the agent

                              triggers class-attribute instance-attribute

                              triggers: list[EventConfig] = Field(default_factory=list)
                              

                              Event sources that activate this agent

                              type class-attribute instance-attribute

                              type: ProviderConfig | Literal['ai', 'human', 'litellm'] = 'ai'
                              

                              Provider configuration or shorthand type

                              user_prompts class-attribute instance-attribute

                              user_prompts: list[str] = Field(default_factory=list)
                              

                              Default user prompts for the agent

                              workers class-attribute instance-attribute

                              workers: list[WorkerConfig] = Field(default_factory=list)
                              

                              Worker agents which will be available as tools.

                              get_agent_kwargs

                              get_agent_kwargs(**overrides) -> dict[str, Any]
                              

                              Get kwargs for Agent constructor.

Returns:

dict[str, Any]: Kwargs to pass to Agent
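A small usage sketch (values are placeholders); keyword overrides passed to the call take precedence over the config's own values:

    from llmling_agent.models.agents import AgentConfig

    cfg = AgentConfig(name="summarizer", model="openai:gpt-4", retries=2)
    kwargs = cfg.get_agent_kwargs(debug=True)
    kwargs["retries"]  # 2, taken from the config
    kwargs["debug"]    # True, supplied as an override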

Source code in src/llmling_agent/models/agents.py, lines 405-433
                              def get_agent_kwargs(self, **overrides) -> dict[str, Any]:
                                  """Get kwargs for Agent constructor.
                              
                                  Returns:
                                      dict[str, Any]: Kwargs to pass to Agent
                                  """
                                  # Include only the fields that Agent expects
                                  dct = {
                                      "name": self.name,
                                      "description": self.description,
                                      "agent_type": self.type,
                                      "model": self.model,
                                      "system_prompt": self.system_prompts,
                                      "retries": self.retries,
                                      "enable_db_logging": self.enable_db_logging,
                                      # "result_tool_name": self.result_tool_name,
                                      "session": self.session,
                                      # "result_tool_description": self.result_tool_description,
                                      "result_retries": self.result_retries,
                                      "end_strategy": self.end_strategy,
                                      "debug": self.debug,
                                      # "defer_model_check": self.defer_model_check,
                                      **self.model_settings,
                                  }
                                  # Note: result_type is handled separately as it needs to be resolved
                                  # from string to actual type in Agent initialization
                              
                                  dct.update(overrides)
                                  return dct
                              

                              get_config

                              get_config() -> Config
                              

                              Get configuration for this agent.
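A minimal sketch of the two most common cases (the environment file name is a placeholder):

    from llmling_agent.models.agents import AgentConfig

    bare = AgentConfig(name="bare")
    config = bare.get_config()  # no environment: minimal Config with default capabilities

    file_based = AgentConfig(name="tooling", environment="env.yml")
    # file_based.get_config() would resolve "env.yml" (via the config store, or relative
    # to config_file_path) and load it with Config.from_file().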

Source code in src/llmling_agent/models/agents.py, lines 331-356
                              def get_config(self) -> Config:
                                  """Get configuration for this agent."""
                                  match self.environment:
                                      case None:
                                          # Create minimal config
                                          caps = LLMCapabilitiesConfig()
                                          global_settings = GlobalSettings(llm_capabilities=caps)
                                          return Config(global_settings=global_settings)
                                      case str() as path:
                                          # Backward compatibility: treat as file path
                                          resolved = self._resolve_environment_path(path, self.config_file_path)
                                          return Config.from_file(resolved)
                                      case FileEnvironment(uri=uri) as env:
                                          # Handle FileEnvironment instance
                                          resolved = env.get_file_path()
                                          return Config.from_file(resolved)
                                      case {"type": "file", "uri": uri}:
                                          # Handle raw dict matching file environment structure
                                          return Config.from_file(uri)
                                      case {"type": "inline", "config": config}:
                                          return config
                                      case InlineEnvironment() as config:
                                          return config
                                      case _:
                                          msg = f"Invalid environment configuration: {self.environment}"
                                          raise ValueError(msg)
                              

                              get_environment_display

                              get_environment_display() -> str
                              

                              Get human-readable environment description.
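For example (the file name is a placeholder):

    from llmling_agent.models.agents import AgentConfig

    AgentConfig(name="a").get_environment_display()
    # -> "No environment configured"
    AgentConfig(name="b", environment="env.yml").get_environment_display()
    # -> "File: env.yml"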

Source code in src/llmling_agent/models/agents.py, lines 368-382
                              def get_environment_display(self) -> str:
                                  """Get human-readable environment description."""
                                  match self.environment:
                                      case str() as path:
                                          return f"File: {path}"
                                      case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                          return f"File: {uri}"
                                      case {"type": "inline", "uri": uri} | InlineEnvironment(uri=uri) if uri:
                                          return f"Inline: {uri}"
                                      case {"type": "inline"} | InlineEnvironment():
                                          return "Inline configuration"
                                      case None:
                                          return "No environment configured"
                                      case _:
                                          return "Invalid environment configuration"
                              

                              get_environment_path

                              get_environment_path() -> str | None
                              

                              Get environment file path if available.

Source code in src/llmling_agent/models/agents.py, lines 358-366
                              def get_environment_path(self) -> str | None:
                                  """Get environment file path if available."""
                                  match self.environment:
                                      case str() as path:
                                          return self._resolve_environment_path(path, self.config_file_path)
                                      case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                          return uri
                                      case _:
                                          return None
                              

                              get_mcp_servers

                              get_mcp_servers() -> list[MCPServerConfig]
                              

                              Get processed MCP server configurations.

                              Converts string entries to StdioMCPServer configs by splitting into command and arguments.

Returns:

list[MCPServerConfig]: List of MCPServerConfig instances

Raises:

ValueError: If string entry is empty
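A short sketch (the server command is hypothetical) showing how a string entry is split:

    from llmling_agent.models.agents import AgentConfig

    cfg = AgentConfig(name="tools", mcp_servers=["uvx my-mcp-server --verbose"])
    server = cfg.get_mcp_servers()[0]
    server.command  # "uvx"
    server.args     # ["my-mcp-server", "--verbose"]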

Source code in src/llmling_agent/models/agents.py, lines 296-322
                              def get_mcp_servers(self) -> list[MCPServerConfig]:
                                  """Get processed MCP server configurations.
                              
                                  Converts string entries to StdioMCPServer configs by splitting
                                  into command and arguments.
                              
                                  Returns:
                                      List of MCPServerConfig instances
                              
                                  Raises:
                                      ValueError: If string entry is empty
                                  """
                                  configs: list[MCPServerConfig] = []
                              
                                  for server in self.mcp_servers:
                                      match server:
                                          case str():
                                              parts = server.split()
                                              if not parts:
                                                  msg = "Empty MCP server command"
                                                  raise ValueError(msg)
                              
                                              configs.append(StdioMCPServer(command=parts[0], args=parts[1:]))
                                          case MCPServerBase():
                                              configs.append(server)
                              
                                  return configs
                              

                              get_provider

                              get_provider() -> AgentProvider
                              

                              Get resolved provider instance.

Creates provider instance based on configuration:
- Full provider config: used as-is
- Shorthand type: creates a default provider config
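For instance, with the "human" shorthand (a sketch; the concrete provider class returned depends on HumanProviderConfig.get_provider):

    from llmling_agent.models.agents import AgentConfig

    cfg = AgentConfig(name="operator", type="human")
    provider = cfg.get_provider()  # built via HumanProviderConfig().get_provider()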

Source code in src/llmling_agent/models/agents.py, lines 266-294
                              def get_provider(self) -> AgentProvider:
                                  """Get resolved provider instance.
                              
                                  Creates provider instance based on configuration:
                                  - Full provider config: Use as-is
                                  - Shorthand type: Create default provider config
                                  """
                                  # If string shorthand is used, convert to default provider config
                                  from llmling_agent.models.providers import (
                                      AIProviderConfig,
                                      HumanProviderConfig,
                                      LiteLLMProviderConfig,
                                  )
                              
                                  provider_config = self.type
                                  if isinstance(provider_config, str):
                                      match provider_config:
                                          case "ai":
                                              provider_config = AIProviderConfig()
                                          case "human":
                                              provider_config = HumanProviderConfig()
                                          case "litellm":
                                              provider_config = LiteLLMProviderConfig()
                                          case _:
                                              msg = f"Invalid provider type: {provider_config}"
                                              raise ValueError(msg)
                              
                                  # Create provider instance from config
                                  return provider_config.get_provider()
                              

                              get_session_query

                              get_session_query() -> SessionQuery | None
                              

                              Get session query from config.
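For example (the session name is a placeholder):

    from llmling_agent.models.agents import AgentConfig

    AgentConfig(name="support", session="weekly-review").get_session_query()
    # -> SessionQuery(name="weekly-review")
    AgentConfig(name="fresh").get_session_query()
    # -> None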

Source code in src/llmling_agent/models/agents.py, lines 258-264
                              def get_session_query(self) -> SessionQuery | None:
                                  """Get session query from config."""
                                  if self.session is None:
                                      return None
                                  if isinstance(self.session, str):
                                      return SessionQuery(name=self.session)
                                  return self.session
                              

                              handle_model_types classmethod

                              handle_model_types(data: dict[str, Any]) -> dict[str, Any]
                              

                              Convert model inputs to appropriate format.

Source code in src/llmling_agent/models/agents.py, lines 245-256
                              @model_validator(mode="before")
                              @classmethod
                              def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                                  """Convert model inputs to appropriate format."""
                                  model = data.get("model")
                                  match model:
                                      case str():
                                          data["model"] = {"type": "string", "identifier": model}
                                      case TestModel():
                                          # Wrap TestModel in our custom wrapper
                                          data["model"] = {"type": "test", "model": model}
                                  return data
                              

                              is_structured

                              is_structured() -> bool
                              

                              Check if this config defines a structured agent.
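For example (the response definition name is hypothetical):

    from llmling_agent.models.agents import AgentConfig

    AgentConfig(name="chat").is_structured()
    # -> False
    AgentConfig(name="extractor", result_type="BugReport").is_structured()
    # -> True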

Source code in src/llmling_agent/models/agents.py, lines 193-195
                              def is_structured(self) -> bool:
                                  """Check if this config defines a structured agent."""
                                  return self.result_type is not None
                              

                              normalize_workers classmethod

                              normalize_workers(data: dict[str, Any]) -> dict[str, Any]
                              

                              Convert string workers to WorkerConfig.
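A quick sketch (worker names are placeholders):

    from llmling_agent.models.agents import AgentConfig

    cfg = AgentConfig(name="coordinator", workers=["researcher", "writer"])
    # Both string entries were converted via WorkerConfig.from_str()
    type(cfg.workers[0]).__name__  # "WorkerConfig"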

Source code in src/llmling_agent/models/agents.py, lines 204-217
                              @model_validator(mode="before")
                              @classmethod
                              def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                                  """Convert string workers to WorkerConfig."""
                                  if workers := data.get("workers"):
                                      data["workers"] = [
                                          WorkerConfig.from_str(w)
                                          if isinstance(w, str)
                                          else w
                                          if isinstance(w, WorkerConfig)  # Keep existing WorkerConfig
                                          else WorkerConfig(**w)  # Convert dict to WorkerConfig
                                          for w in workers
                                      ]
                                  return data
                              

                              render_system_prompts

                              render_system_prompts(context: dict[str, Any] | None = None) -> list[str]
                              

                              Render system prompts with context.
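A sketch only, assuming render_prompt applies Jinja-style templating (the template syntax is an assumption; only the context shape {"agent": {"name": ..., "id": ..., "model": ...}} is shown in the source):

    from llmling_agent.models.agents import AgentConfig

    cfg = AgentConfig(name="scribe", system_prompts=["You are {{ agent.name }}."])
    cfg.render_system_prompts()
    # -> ["You are scribe."] if Jinja-style templating is used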

Source code in src/llmling_agent/models/agents.py, lines 324-329
                              def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                                  """Render system prompts with context."""
                                  if not context:
                                      # Default context
                                      context = {"name": self.name, "id": 1, "model": self.model}
                                  return [render_prompt(p, {"agent": context}) for p in self.system_prompts]
                              

                              resolve_paths classmethod

                              resolve_paths(data: dict[str, Any]) -> dict[str, Any]
                              

                              Store config file path for later use.

Source code in src/llmling_agent/models/agents.py, lines 396-403
                              @model_validator(mode="before")
                              @classmethod
                              def resolve_paths(cls, data: dict[str, Any]) -> dict[str, Any]:
                                  """Store config file path for later use."""
                                  if "environment" in data:
                                      # Just store the config path for later use
                                      data["config_file_path"] = data.get("config_file_path")
                                  return data
                              

                              validate_result_type classmethod

                              validate_result_type(data: dict[str, Any]) -> dict[str, Any]
                              

                              Convert result type and apply its settings.
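The restructuring can be read off the validator: response-tool settings nested under result_type are hoisted onto the agent config itself. A data-shape sketch (the inner schema fields are omitted; the real InlineResponseDefinition defines its own):

    # Input as it might appear before validation:
    data = {
        "result_type": {
            "result_tool_name": "report",
            "result_retries": 2,
            # ...inline response definition fields...
        },
    }
    # After the validator runs, data["result_tool_name"] == "report",
    # data["result_retries"] == 2, and data["result_type"] has become an
    # InlineResponseDefinition with type="inline".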

Source code in src/llmling_agent/models/agents.py, lines 219-243
                              @model_validator(mode="before")
                              @classmethod
                              def validate_result_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                                  """Convert result type and apply its settings."""
                                  result_type = data.get("result_type")
                                  if isinstance(result_type, dict):
                                      # Extract response-specific settings
                                      tool_name = result_type.pop("result_tool_name", None)
                                      tool_description = result_type.pop("result_tool_description", None)
                                      retries = result_type.pop("result_retries", None)
                              
                                      # Convert remaining dict to ResponseDefinition
                                      if "type" not in result_type:
                                          result_type["type"] = "inline"
                                      data["result_type"] = InlineResponseDefinition(**result_type)
                              
                                      # Apply extracted settings to agent config
                                      if tool_name:
                                          data["result_tool_name"] = tool_name
                                      if tool_description:
                                          data["result_tool_description"] = tool_description
                                      if retries is not None:
                                          data["result_retries"] = retries
                              
                                  return data
                              

                              AgentPool

                              Bases: BaseRegistry[str, AnyAgent[Any, Any]]

                              Pool of initialized agents.

                              Each agent maintains its own runtime environment based on its configuration.

Source code in src/llmling_agent/delegation/pool.py, lines 74-348
                              349
                              350
                              351
                              352
                              353
                              354
                              355
                              356
                              357
                              358
                              359
                              360
                              361
                              362
                              363
                              364
                              365
                              366
                              367
                              368
                              369
                              370
                              371
                              372
                              373
                              374
                              375
                              376
                              377
                              378
                              379
                              380
                              381
                              382
                              383
                              384
                              385
                              386
                              387
                              388
                              389
                              390
                              391
                              392
                              393
                              394
                              395
                              396
                              397
                              398
                              399
                              400
                              401
                              402
                              403
                              404
                              405
                              406
                              407
                              408
                              409
                              410
                              411
                              412
                              413
                              414
                              415
                              416
                              417
                              418
                              419
                              420
                              421
                              422
                              423
                              424
                              425
                              426
                              427
                              428
                              429
                              430
                              431
                              432
                              433
                              434
                              435
                              436
                              437
                              438
                              439
                              440
                              441
                              442
                              443
                              444
                              445
                              446
                              447
                              448
                              449
                              450
                              451
                              452
                              453
                              454
                              455
                              456
                              457
                              458
                              459
                              460
                              461
                              462
                              463
                              464
                              465
                              466
                              467
                              468
                              469
                              470
                              471
                              472
                              473
                              474
                              475
                              476
                              477
                              478
                              479
                              480
                              481
                              482
                              483
                              484
                              485
                              486
                              487
                              488
                              489
                              490
                              491
                              492
                              493
                              494
                              495
                              496
                              497
                              498
                              499
                              500
                              501
                              502
                              503
                              504
                              505
                              506
                              507
                              508
                              509
                              510
                              511
                              512
                              513
                              514
                              515
                              516
                              517
                              518
                              519
                              520
                              521
                              522
                              523
                              524
                              525
                              526
                              527
                              528
                              529
                              530
                              531
                              532
                              533
                              534
                              535
                              536
                              537
                              538
                              539
                              540
                              541
                              542
                              543
                              544
                              545
                              546
                              547
                              548
                              549
                              550
                              551
                              552
                              553
                              554
                              555
                              556
                              557
                              558
                              559
                              560
                              561
                              562
                              563
                              564
                              565
                              566
                              567
                              568
                              569
                              570
                              571
                              572
                              573
                              574
                              575
                              576
                              577
                              578
                              579
                              580
                              581
                              582
                              583
                              584
                              585
                              586
                              587
                              588
                              589
                              590
                              591
                              592
                              593
                              594
                              595
                              596
                              597
                              598
                              599
                              600
                              601
                              602
                              603
                              604
                              605
                              606
                              607
                              608
                              609
                              610
                              611
                              612
                              613
                              614
                              615
                              616
                              617
                              618
                              619
                              620
                              621
                              622
                              623
                              624
                              625
                              626
                              627
                              628
                              629
                              630
                              631
                              class AgentPool(BaseRegistry[str, AnyAgent[Any, Any]]):
                                  """Pool of initialized agents.
                              
                                  Each agent maintains its own runtime environment based on its configuration.
                                  """
                              
                                  def __init__(
                                      self,
                                      manifest: AgentsManifest,
                                      *,
                                      agents_to_load: list[str] | None = None,
                                      connect_agents: bool = True,
                                      confirmation_callback: ConfirmationCallback | None = None,
                                  ):
                                      """Initialize agent pool with immediate agent creation.
                              
                                      Args:
                                          manifest: Agent configuration manifest
                                          agents_to_load: Optional list of agent names to initialize
                                                        If None, all agents from manifest are loaded
                                          connect_agents: Whether to set up forwarding connections
                                          confirmation_callback: Handler callback for tool / step confirmations.
                                      """
                                      super().__init__()
                                      from llmling_agent.models.context import AgentContext
                                      from llmling_agent.storage.manager import StorageManager
                              
                                      self.manifest = manifest
                                      self._confirmation_callback = confirmation_callback
                                      self.exit_stack = AsyncExitStack()
                                      self.storage = StorageManager(manifest.storage)
                              
                                      # Validate requested agents exist
                                      to_load = set(agents_to_load) if agents_to_load else set(manifest.agents)
                                      if invalid := (to_load - set(manifest.agents)):
                                          msg = f"Unknown agents: {', '.join(invalid)}"
                                          raise ValueError(msg)
                                      # register tasks
                                      self._tasks = TaskRegistry()
                                      # Register tasks from manifest
                                      for name, task in manifest.tasks.items():
                                          self._tasks.register(name, task)
                                      self.pool_talk = TeamTalk.from_agents(list(self.agents.values()))
                                      # Create requested agents immediately using sync initialization
                                      for name in to_load:
                                          config = manifest.agents[name]
                                          # Create runtime without async context
                                          cfg = config.get_config()
                                          runtime = RuntimeConfig.from_config(cfg)
                              
                                          # Create context with config path and capabilities
                                          context = AgentContext[Any](
                                              agent_name=name,
                                              capabilities=config.capabilities,
                                              definition=self.manifest,
                                              config=config,
                                              pool=self,
                                              confirmation_callback=confirmation_callback,
                                          )
                              
                                          # Create agent with runtime and context
                                          agent = Agent[Any](
                                              runtime=runtime,
                                              context=context,
                                              result_type=None,  # type: ignore[arg-type]
                                              model=config.model,  # type: ignore[arg-type]
                                              system_prompt=config.system_prompts,
                                              name=name,
                                              enable_db_logging=config.enable_db_logging,
                                          )
                                          self.register(name, agent)
                              
                                      # Then set up worker relationships
                                      for name, config in manifest.agents.items():
                                          if name in self and config.workers:
                                              self.setup_agent_workers(self[name], config.workers)
                              
                                      # Set up forwarding connections
                                      if connect_agents:
                                          self._connect_signals()
                              
                                  async def __aenter__(self) -> Self:
                                      """Enter async context and initialize all agents."""
                                      try:
                                          # Enter async context for all agents
                                          for agent in self.agents.values():
                                              await self.exit_stack.enter_async_context(agent)
                                      except Exception as e:
                                          await self.cleanup()
                                          msg = "Failed to initialize agent pool"
                                          logger.exception(msg, exc_info=e)
                                          raise RuntimeError(msg) from e
                                      else:
                                          return self
                              
                                  async def __aexit__(
                                      self,
                                      exc_type: type[BaseException] | None,
                                      exc_val: BaseException | None,
                                      exc_tb: TracebackType | None,
                                  ):
                                      """Exit async context."""
                                      await self.cleanup()
                              
                                  async def cleanup(self):
                                      """Clean up all agents."""
                                      for agent in self.values():
                                          if agent.runtime:
                                              await agent.runtime.shutdown()
                                      await self.exit_stack.aclose()
                                      self.clear()
                              
                                  def create_group[TDeps](
                                      self,
                                      agents: Sequence[str | AnyAgent[TDeps, Any]] | None = None,
                                      *,
                                      model_override: str | None = None,
                                      environment_override: StrPath | Config | None = None,
                                      shared_prompt: str | None = None,
                                      shared_deps: TDeps | None = None,
                                  ) -> Team[TDeps]:
                                      """Create a group from agent names or instances.
                              
                                      Args:
                                          agents: List of agent names or instances (all if None)
                                          model_override: Optional model to use for all agents
                                          environment_override: Optional environment for all agents
                                          shared_prompt: Optional prompt for all agents
                                          shared_deps: Optional shared dependencies
                                      """
                                      from llmling_agent.delegation.agentgroup import Team
                              
                                      if agents is None:
                                          agents = list(self.agents.keys())
                              
                                      # First resolve/configure agents
                                      resolved_agents: list[AnyAgent[TDeps, Any]] = []
                                      for agent in agents:
                                          if isinstance(agent, str):
                                              agent = self.get_agent(
                                                  agent,
                                                  model_override=model_override,
                                                  environment_override=environment_override,
                                              )
                                          resolved_agents.append(agent)
                              
                                      return Team(
                                          agents=resolved_agents,
                                          # pool=self,
                                          shared_prompt=shared_prompt,
                                          shared_deps=shared_deps,
                                      )
                              
                                  def start_supervision(self) -> OptionalAwaitable[None]:
                                      """Start supervision interface.
                              
                                      Can be called either synchronously or asynchronously:
                              
                                       # Sync usage:
                                       pool.start_supervision()

                                       # Async usage:
                                       await pool.start_supervision()
                                      """
                                      from llmling_agent.delegation.supervisor_ui import SupervisorApp
                              
                                      app = SupervisorApp(self)
                                      if asyncio.get_event_loop().is_running():
                                          # We're in an async context
                                          return app.run_async()
                                      # We're in a sync context
                                      app.run()
                                      return None
                              
                                  @property
                                  def agents(self) -> EventedDict[str, AnyAgent[Any, Any]]:
                                      """Get agents dict (backward compatibility)."""
                                      return self._items
                              
                                  @property
                                  def _error_class(self) -> type[LLMLingError]:
                                      """Error class for agent operations."""
                                      return LLMLingError
                              
                                  def _validate_item(self, item: Agent[Any] | Any) -> Agent[Any]:
                                      """Validate and convert items before registration.
                              
                                      Args:
                                          item: Item to validate
                              
                                      Returns:
                                          Validated Agent
                              
                                      Raises:
                                           LLMLingError: If item is not a valid agent
                                      """
                                      if not isinstance(item, Agent):
                                          msg = f"Item must be Agent, got {type(item)}"
                                          raise self._error_class(msg)
                                      return item
                              
                                  def _setup_connections(self):
                                      """Set up forwarding connections between agents."""
                                      from llmling_agent.models.forward_targets import AgentTarget
                              
                                      for name, config in self.manifest.agents.items():
                                          if name not in self.agents:
                                              continue
                                          agent = self.agents[name]
                                          for target in config.forward_to:
                                              if isinstance(target, AgentTarget):
                                                  if target.name not in self.agents:
                                                      msg = f"Forward target {target.name} not loaded for {name}"
                                                      raise ValueError(msg)
                                                  target_agent = self.agents[target.name]
                                                  agent.pass_results_to(target_agent)
                              
                                  def _connect_signals(self):
                                      """Set up forwarding connections between agents."""
                                      from llmling_agent.models.forward_targets import AgentTarget
                              
                                      for name, config in self.manifest.agents.items():
                                          if name not in self.agents:
                                              continue
                                          agent = self.agents[name]
                                          for target in config.forward_to:
                                              if isinstance(target, AgentTarget):
                                                  if target.name not in self.agents:
                                                      msg = f"Forward target {target.name} not loaded for {name}"
                                                      raise ValueError(msg)
                                                  target_agent = self.agents[target.name]
                                                  agent.pass_results_to(
                                                      target_agent,
                                                      connection_type=target.connection_type,
                                                  )
                              
                                  async def create_agent(
                                      self,
                                      name: str,
                                      config: AgentConfig,
                                      *,
                                      temporary: bool = True,
                                  ) -> Agent[Any]:
                                      """Create and register a new agent in the pool.
                              
                                      Args:
                                          name: Name of the new agent
                                          config: Agent configuration
                                          temporary: If True, agent won't be added to manifest
                              
                                      Returns:
                                          Created and initialized agent
                              
                                      Raises:
                                          ValueError: If agent name already exists
                                          RuntimeError: If agent initialization fails
                                      """
                                      from llmling_agent.models.context import AgentContext
                              
                                      if name in self.agents:
                                          msg = f"Agent {name} already exists"
                                          raise ValueError(msg)
                              
                                      try:
                                          # Create runtime from agent's config
                                          cfg = config.get_config()
                                          runtime = RuntimeConfig.from_config(cfg)
                              
                                          # Create context with config path and capabilities
                                          context = AgentContext[Any](
                                              agent_name=name,
                                              capabilities=config.capabilities,
                                              definition=self.manifest,
                                              config=config,
                                              pool=self,
                                          )
                              
                                          # Create agent with runtime and context
                                          agent = Agent[Any](
                                              agent_type=config.get_provider(),
                                              runtime=runtime,
                                              context=context,
                                              result_type=None,  # type: ignore[arg-type]
                                              model=config.model,  # type: ignore[arg-type]
                                              system_prompt=config.system_prompts,
                                              name=name,
                                          )
                              
                                          # Enter agent's async context through pool's exit stack
                                          agent = await self.exit_stack.enter_async_context(agent)
                              
                                          # Set up workers if defined
                                          if config.workers:
                                              self.setup_agent_workers(agent, config.workers)
                              
                                          # Register in pool and optionally manifest
                                          self.agents[name] = agent
                                          if not temporary:
                                              self.manifest.agents[name] = config
                                      except Exception as e:
                                          msg = f"Failed to create agent {name}"
                                          raise RuntimeError(msg) from e
                                      else:
                                          return agent
                              
                                  async def clone_agent[TDeps, TResult](
                                      self,
                                      agent: Agent[TDeps] | str,
                                      new_name: str | None = None,
                                      *,
                                      model_override: str | None = None,
                                      system_prompts: list[str] | None = None,
                                      template_context: dict[str, Any] | None = None,
                                  ) -> Agent[TDeps]:
                                      """Create a copy of an agent.
                              
                                      Args:
                                          agent: Agent instance or name to clone
                                          new_name: Optional name for the clone
                                          model_override: Optional different model
                                          system_prompts: Optional different prompts
                                          template_context: Variables for template rendering
                              
                                      Returns:
                                          The new agent instance
                                      """
                                      # Get original config
                                      if isinstance(agent, str):
                                          if agent not in self.manifest.agents:
                                              msg = f"Agent {agent} not found"
                                              raise KeyError(msg)
                                          config = self.manifest.agents[agent]
                                          original_agent: Agent[TDeps] = self.get_agent(agent)
                                      else:
                                          config = agent.context.config  # type: ignore
                                          original_agent = agent
                              
                                      # Create new config
                                      new_config = config.model_copy(deep=True)
                              
                                      # Apply overrides
                                      if model_override:
                                          new_config.model = model_override
                                      if system_prompts:
                                          new_config.system_prompts = system_prompts
                              
                                      # Handle template rendering
                                      if template_context:
                                          new_config.system_prompts = new_config.render_system_prompts(template_context)
                              
                                      # Create new agent with same runtime
                                      new_agent = Agent[TDeps](
                                          runtime=original_agent.runtime,
                                          context=original_agent.context,
                                          # result_type=original_agent.actual_type,
                                          model=new_config.model,  # type: ignore
                                          system_prompt=new_config.system_prompts,
                                          name=new_name or f"{config.name}_copy_{len(self.agents)}",
                                      )
                              
                                      # Register in pool
                                      agent_name = new_agent.name
                                      self.manifest.agents[agent_name] = new_config
                                      self.agents[agent_name] = new_agent
                              
                                      return new_agent
                              
                                  def setup_agent_workers(self, agent: AnyAgent[Any, Any], workers: list[WorkerConfig]):
                                      """Set up workers for an agent from configuration."""
                                      for worker_config in workers:
                                          try:
                                              worker = self.get_agent(worker_config.name)
                                              agent.register_worker(
                                                  worker,
                                                  name=worker_config.name,
                                                  reset_history_on_run=worker_config.reset_history_on_run,
                                                  pass_message_history=worker_config.pass_message_history,
                                                  share_context=worker_config.share_context,
                                              )
                                          except KeyError as e:
                                              msg = f"Worker agent {worker_config.name!r} not found"
                                              raise ValueError(msg) from e
                              
                                  @overload
                                  def get_agent[TDeps, TResult](
                                      self,
                                      agent: str | Agent[Any],
                                      *,
                                      deps: TDeps,
                                      return_type: type[TResult],
                                      model_override: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      environment_override: StrPath | Config | None = None,
                                  ) -> StructuredAgent[TDeps, TResult]: ...
                              
                                  @overload
                                  def get_agent[TDeps](
                                      self,
                                      agent: str | Agent[Any],
                                      *,
                                      deps: TDeps,
                                      model_override: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      environment_override: StrPath | Config | None = None,
                                  ) -> Agent[TDeps]: ...
                              
                                  @overload
                                  def get_agent[TResult](
                                      self,
                                      agent: str | Agent[Any],
                                      *,
                                      return_type: type[TResult],
                                      model_override: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      environment_override: StrPath | Config | None = None,
                                  ) -> StructuredAgent[Any, TResult]: ...
                              
                                  @overload
                                  def get_agent(
                                      self,
                                      agent: str | Agent[Any],
                                      *,
                                      model_override: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      environment_override: StrPath | Config | None = None,
                                  ) -> Agent[Any]: ...
                              
                                  def get_agent[TDeps, TResult](
                                      self,
                                      agent: str | Agent[Any],
                                      *,
                                      deps: TDeps | None = None,
                                      return_type: type[TResult] | None = None,
                                      model_override: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                      environment_override: StrPath | Config | None = None,
                                  ) -> AnyAgent[TDeps, TResult]:
                                      """Get or wrap an agent.
                              
                                      Args:
                                          agent: Either agent name or instance
                                          deps: Dependencies for the agent
                                          return_type: Optional type to make agent structured
                                          model_override: Optional model override
                                          session: Optional session ID or Session query to recover conversation
                                          environment_override: Optional environment configuration:
                                              - Path to environment file
                                              - Complete Config instance
                                              - None to use agent's default environment
                              
                                      Returns:
                                          Either regular Agent or StructuredAgent depending on return_type
                              
                                      Raises:
                                          KeyError: If agent name not found
                                          ValueError: If environment configuration is invalid
                                      """
                                      # Get base agent
                                      base = agent if isinstance(agent, Agent) else self.agents[agent]
                                      if deps is not None:
                                          base.context = base.context or AgentContext[TDeps].create_default(base.name)
                                          base.context.data = deps
                              
                                      # Apply overrides
                                      if model_override:
                                          base.set_model(model_override)  # type: ignore
                              
                                      if session:
                                          base.conversation.load_history_from_database(session=session)
                                      match environment_override:
                                          case Config():
                                              base.context.runtime = RuntimeConfig.from_config(environment_override)
                                          case str() | PathLike():
                                              base.context.runtime = RuntimeConfig.from_file(environment_override)
                              
                                      # Wrap in StructuredAgent if return_type provided
                                      if return_type is not None:
                                          return StructuredAgent[Any, TResult](base, return_type)
                              
                                      return base
                              
                                  @classmethod
                                  @asynccontextmanager
                                  async def open[TDeps, TResult](
                                      cls,
                                      config_path: StrPath | AgentsManifest[TDeps, TResult] | None = None,
                                      *,
                                      agents: list[str] | None = None,
                                      connect_agents: bool = True,
                                      confirmation_callback: ConfirmationCallback | None = None,
                                  ) -> AsyncIterator[AgentPool]:
                                      """Open an agent pool from configuration.
                              
                                      Args:
                                          config_path: Path to agent configuration file or manifest
                                          agents: Optional list of agent names to initialize
                                          connect_agents: Whether to set up forwarding connections
                                          confirmation_callback: Callback to confirm agent tool selection
                              
                                      Yields:
                                          Configured agent pool
                                      """
                                      from llmling_agent.models import AgentsManifest
                              
                                      match config_path:
                                          case None:
                                              manifest = AgentsManifest[Any, Any]()
                                          case str():
                                              manifest = AgentsManifest[Any, Any].from_file(config_path)
                                          case AgentsManifest():
                                              manifest = config_path
                                          case _:
                                              msg = f"Invalid config path: {config_path}"
                                              raise ValueError(msg)
                                      pool = cls(
                                          manifest,
                                          agents_to_load=agents,
                                          connect_agents=connect_agents,
                                          confirmation_callback=confirmation_callback,
                                      )
                                      try:
                                          async with pool:
                                              yield pool
                                      finally:
                                          await pool.cleanup()
                              
                                  def list_agents(self) -> list[str]:
                                      """List available agent names."""
                                      return list(self.manifest.agents)
                              
                                  def get_task(self, name: str) -> AgentTask[Any, Any]:
                                      return self._tasks[name]
                              
                                  def register_task(self, name: str, task: AgentTask[Any, Any]):
                                      self._tasks.register(name, task)
                              
                                  async def controlled_conversation(
                                      self,
                                      initial_agent: str | Agent[Any] = "starter",
                                      initial_prompt: str = "Hello!",
                                      decision_callback: DecisionCallback = interactive_controller,
                                  ):
                                      """Start a controlled conversation between agents.
                              
                                      Args:
                                          initial_agent: Agent instance or name to start with
                                          initial_prompt: First message to start conversation
                                          decision_callback: Callback for routing decisions
                                      """
                                      from llmling_agent.delegation.agentgroup import Team
                              
                                      group = Team(list(self.agents.values()))
                              
                                      await group.run_controlled(
                                          prompt=initial_prompt,
                                          initial_agent=initial_agent,
                                          decision_callback=decision_callback,
                                      )
                              

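                               The class above is usually driven through its open() classmethod. The following is a minimal usage sketch rather than a recipe from the library itself: the manifest path "agents.yml" and the agent name "assistant" are placeholders for your own configuration.

                               import asyncio

                               from llmling_agent.delegation.pool import AgentPool


                               async def main() -> None:
                                   # Load only the named agents from the manifest and enter their contexts.
                                   async with AgentPool.open("agents.yml", agents=["assistant"]) as pool:
                                       print(pool.list_agents())

                                       # Plain lookup by name
                                       assistant = pool.get_agent("assistant")

                                       # Same agent, wrapped to enforce a structured result type
                                       structured = pool.get_agent("assistant", return_type=str)

                                       # Group every loaded agent into a team with a shared prompt
                                       team = pool.create_group(shared_prompt="Answer concisely.")


                               asyncio.run(main())
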
                              agents property

                              agents: EventedDict[str, AnyAgent[Any, Any]]
                              

                              Get agents dict (backward compatibility).
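
                               A small illustrative sketch of the dict-style access this property preserves; the pool and the agent name "assistant" are placeholders from the example above.

                               for name, agent in pool.agents.items():
                                   # EventedDict behaves like a regular mapping of name -> agent
                                   print(name, agent.name)

                               assistant = pool.agents["assistant"]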

                              __aenter__ async

                              __aenter__() -> Self
                              

                              Enter async context and initialize all agents.

                               Source code in src/llmling_agent/delegation/pool.py, lines 155-167
                              async def __aenter__(self) -> Self:
                                  """Enter async context and initialize all agents."""
                                  try:
                                      # Enter async context for all agents
                                      for agent in self.agents.values():
                                          await self.exit_stack.enter_async_context(agent)
                                  except Exception as e:
                                      await self.cleanup()
                                      msg = "Failed to initialize agent pool"
                                      logger.exception(msg, exc_info=e)
                                      raise RuntimeError(msg) from e
                                  else:
                                      return self
                              

                              __aexit__ async

                              __aexit__(
                                  exc_type: type[BaseException] | None,
                                  exc_val: BaseException | None,
                                  exc_tb: TracebackType | None,
                              )
                              

                              Exit async context.

                               Source code in src/llmling_agent/delegation/pool.py, lines 169-176
                              async def __aexit__(
                                  self,
                                  exc_type: type[BaseException] | None,
                                  exc_val: BaseException | None,
                                  exc_tb: TracebackType | None,
                              ):
                                  """Exit async context."""
                                  await self.cleanup()
                              

                              __init__

                              __init__(
                                  manifest: AgentsManifest,
                                  *,
                                  agents_to_load: list[str] | None = None,
                                  connect_agents: bool = True,
                                  confirmation_callback: ConfirmationCallback | None = None,
                              )
                              

                              Initialize agent pool with immediate agent creation.

                              Parameters:

                               Name                    Type                          Description                                        Default
                               manifest                AgentsManifest                Agent configuration manifest                       required
                               agents_to_load          list[str] | None              Agent names to initialize; if None, all agents     None
                                                                                     from the manifest are loaded
                               connect_agents          bool                          Whether to set up forwarding connections           True
                               confirmation_callback   ConfirmationCallback | None   Handler callback for tool / step confirmations     None
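
                               These parameters map directly onto direct construction, for cases where the open() helper is not used. A minimal sketch, assuming a manifest file "agents.yml" that defines an agent named "assistant" (both are placeholders):

                               from typing import Any

                               from llmling_agent.delegation.pool import AgentPool
                               from llmling_agent.models import AgentsManifest


                               async def run() -> None:
                                   manifest = AgentsManifest[Any, Any].from_file("agents.yml")
                                   pool = AgentPool(
                                       manifest,
                                       agents_to_load=["assistant"],  # subset of manifest.agents; None loads all
                                       connect_agents=False,          # skip forwarding connections
                                   )
                                   async with pool:  # enters every agent's async context, cleaned up on exit
                                       agent = pool.get_agent("assistant")
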
                               Source code in src/llmling_agent/delegation/pool.py, lines 80-153
                              def __init__(
                                  self,
                                  manifest: AgentsManifest,
                                  *,
                                  agents_to_load: list[str] | None = None,
                                  connect_agents: bool = True,
                                  confirmation_callback: ConfirmationCallback | None = None,
                              ):
                                  """Initialize agent pool with immediate agent creation.
                              
                                  Args:
                                      manifest: Agent configuration manifest
                                      agents_to_load: Optional list of agent names to initialize
                                                    If None, all agents from manifest are loaded
                                      connect_agents: Whether to set up forwarding connections
                                      confirmation_callback: Handler callback for tool / step confirmations.
                                  """
                                  super().__init__()
                                  from llmling_agent.models.context import AgentContext
                                  from llmling_agent.storage.manager import StorageManager
                              
                                  self.manifest = manifest
                                  self._confirmation_callback = confirmation_callback
                                  self.exit_stack = AsyncExitStack()
                                  self.storage = StorageManager(manifest.storage)
                              
                                  # Validate requested agents exist
                                  to_load = set(agents_to_load) if agents_to_load else set(manifest.agents)
                                  if invalid := (to_load - set(manifest.agents)):
                                      msg = f"Unknown agents: {', '.join(invalid)}"
                                      raise ValueError(msg)
                                  # register tasks
                                  self._tasks = TaskRegistry()
                                  # Register tasks from manifest
                                  for name, task in manifest.tasks.items():
                                      self._tasks.register(name, task)
                                  self.pool_talk = TeamTalk.from_agents(list(self.agents.values()))
                                  # Create requested agents immediately using sync initialization
                                  for name in to_load:
                                      config = manifest.agents[name]
                                      # Create runtime without async context
                                      cfg = config.get_config()
                                      runtime = RuntimeConfig.from_config(cfg)
                              
                                      # Create context with config path and capabilities
                                      context = AgentContext[Any](
                                          agent_name=name,
                                          capabilities=config.capabilities,
                                          definition=self.manifest,
                                          config=config,
                                          pool=self,
                                          confirmation_callback=confirmation_callback,
                                      )
                              
                                      # Create agent with runtime and context
                                      agent = Agent[Any](
                                          runtime=runtime,
                                          context=context,
                                          result_type=None,  # type: ignore[arg-type]
                                          model=config.model,  # type: ignore[arg-type]
                                          system_prompt=config.system_prompts,
                                          name=name,
                                          enable_db_logging=config.enable_db_logging,
                                      )
                                      self.register(name, agent)
                              
                                  # Then set up worker relationships
                                  for name, config in manifest.agents.items():
                                      if name in self and config.workers:
                                          self.setup_agent_workers(self[name], config.workers)
                              
                                  # Set up forwarding connections
                                  if connect_agents:
                                      self._connect_signals()
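
For orientation, here is a minimal usage sketch (not taken from the library docs): it loads a manifest from a YAML file and builds the pool eagerly, loading a single agent. The file path and agent name are placeholders.

    from llmling_agent.delegation.pool import AgentPool
    from llmling_agent.models import AgentsManifest

    # Build the pool directly; agents are created eagerly in __init__.
    manifest = AgentsManifest.from_file("agents.yml")          # placeholder path
    pool = AgentPool(manifest, agents_to_load=["analyzer"])    # placeholder agent name
    print(pool.list_agents())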
                              

                              cleanup async

                              cleanup()
                              

                              Clean up all agents.

                              Source code in src/llmling_agent/delegation/pool.py
                              async def cleanup(self):
                                  """Clean up all agents."""
                                  for agent in self.values():
                                      if agent.runtime:
                                          await agent.runtime.shutdown()
                                  await self.exit_stack.aclose()
                                  self.clear()
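
Cleanup is handled automatically when the pool is used through AgentPool.open(...). When constructing the pool directly, a try/finally sketch like this (using an empty manifest purely for illustration) keeps teardown explicit:

    import asyncio

    from llmling_agent.delegation.pool import AgentPool
    from llmling_agent.models import AgentsManifest

    async def main():
        pool = AgentPool(AgentsManifest())  # empty manifest, illustration only
        try:
            ...  # use the pool here
        finally:
            await pool.cleanup()

    asyncio.run(main())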
                              

                              clone_agent async

                              clone_agent(
                                  agent: Agent[TDeps] | str,
                                  new_name: str | None = None,
                                  *,
                                  model_override: str | None = None,
                                  system_prompts: list[str] | None = None,
                                  template_context: dict[str, Any] | None = None,
                              ) -> Agent[TDeps]
                              

                              Create a copy of an agent.

Parameters:

    agent (Agent[TDeps] | str, required): Agent instance or name to clone.
    new_name (str | None, default None): Optional name for the clone.
    model_override (str | None, default None): Optional different model.
    system_prompts (list[str] | None, default None): Optional different prompts.
    template_context (dict[str, Any] | None, default None): Variables for template rendering.

Returns:

    Agent[TDeps]: The new agent instance.

                              Source code in src/llmling_agent/delegation/pool.py
                              async def clone_agent[TDeps, TResult](
                                  self,
                                  agent: Agent[TDeps] | str,
                                  new_name: str | None = None,
                                  *,
                                  model_override: str | None = None,
                                  system_prompts: list[str] | None = None,
                                  template_context: dict[str, Any] | None = None,
                              ) -> Agent[TDeps]:
                                  """Create a copy of an agent.
                              
                                  Args:
                                      agent: Agent instance or name to clone
                                      new_name: Optional name for the clone
                                      model_override: Optional different model
                                      system_prompts: Optional different prompts
                                      template_context: Variables for template rendering
                              
                                  Returns:
                                      The new agent instance
                                  """
                                  # Get original config
                                  if isinstance(agent, str):
                                      if agent not in self.manifest.agents:
                                          msg = f"Agent {agent} not found"
                                          raise KeyError(msg)
                                      config = self.manifest.agents[agent]
                                      original_agent: Agent[TDeps] = self.get_agent(agent)
                                  else:
                                      config = agent.context.config  # type: ignore
                                      original_agent = agent
                              
                                  # Create new config
                                  new_config = config.model_copy(deep=True)
                              
                                  # Apply overrides
                                  if model_override:
                                      new_config.model = model_override
                                  if system_prompts:
                                      new_config.system_prompts = system_prompts
                              
                                  # Handle template rendering
                                  if template_context:
                                      new_config.system_prompts = new_config.render_system_prompts(template_context)
                              
                                  # Create new agent with same runtime
                                  new_agent = Agent[TDeps](
                                      runtime=original_agent.runtime,
                                      context=original_agent.context,
                                      # result_type=original_agent.actual_type,
                                      model=new_config.model,  # type: ignore
                                      system_prompt=new_config.system_prompts,
                                      name=new_name or f"{config.name}_copy_{len(self.agents)}",
                                  )
                              
                                  # Register in pool
                                  agent_name = new_agent.name
                                  self.manifest.agents[agent_name] = new_config
                                  self.agents[agent_name] = new_agent
                              
                                  return new_agent
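
A usage sketch: clone an agent registered in the pool under a new name with a different model. The agent name and model identifier are placeholders.

    from llmling_agent.delegation.pool import AgentPool

    async def clone_example(pool: AgentPool):
        # Clone by name; the copy is registered in the pool next to the original.
        copy = await pool.clone_agent(
            "analyzer",                            # placeholder agent name
            new_name="analyzer_fast",
            model_override="openai:gpt-4o-mini",   # placeholder model id
        )
        print(copy.name)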
                              

                              controlled_conversation async

                              controlled_conversation(
                                  initial_agent: str | Agent[Any] = "starter",
                                  initial_prompt: str = "Hello!",
                                  decision_callback: DecisionCallback = interactive_controller,
                              )
                              

                              Start a controlled conversation between agents.

Parameters:

    initial_agent (str | Agent[Any], default 'starter'): Agent instance or name to start with.
    initial_prompt (str, default 'Hello!'): First message to start the conversation.
    decision_callback (DecisionCallback, default interactive_controller): Callback for routing decisions.
                              Source code in src/llmling_agent/delegation/pool.py
                              async def controlled_conversation(
                                  self,
                                  initial_agent: str | Agent[Any] = "starter",
                                  initial_prompt: str = "Hello!",
                                  decision_callback: DecisionCallback = interactive_controller,
                              ):
                                  """Start a controlled conversation between agents.
                              
                                  Args:
                                      initial_agent: Agent instance or name to start with
                                      initial_prompt: First message to start conversation
                                      decision_callback: Callback for routing decisions
                                  """
                                  from llmling_agent.delegation.agentgroup import Team
                              
                                  group = Team(list(self.agents.values()))
                              
                                  await group.run_controlled(
                                      prompt=initial_prompt,
                                      initial_agent=initial_agent,
                                      decision_callback=decision_callback,
                                  )
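
A sketch of starting a conversation with the default interactive controller; the agent name and prompt are placeholders, and the named agent must exist in the pool.

    from llmling_agent.delegation.pool import AgentPool

    async def converse(pool: AgentPool):
        # The default interactive_controller decides routing after each turn.
        await pool.controlled_conversation(
            initial_agent="starter",
            initial_prompt="Summarize the open tasks.",
        )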
                              

                              create_agent async

                              create_agent(name: str, config: AgentConfig, *, temporary: bool = True) -> Agent[Any]
                              

                              Create and register a new agent in the pool.

Parameters:

    name (str, required): Name of the new agent.
    config (AgentConfig, required): Agent configuration.
    temporary (bool, default True): If True, agent won't be added to manifest.

Returns:

    Agent[Any]: Created and initialized agent.

Raises:

    ValueError: If agent name already exists.
    RuntimeError: If agent initialization fails.

                              Source code in src/llmling_agent/delegation/pool.py
                              async def create_agent(
                                  self,
                                  name: str,
                                  config: AgentConfig,
                                  *,
                                  temporary: bool = True,
                              ) -> Agent[Any]:
                                  """Create and register a new agent in the pool.
                              
                                  Args:
                                      name: Name of the new agent
                                      config: Agent configuration
                                      temporary: If True, agent won't be added to manifest
                              
                                  Returns:
                                      Created and initialized agent
                              
                                  Raises:
                                      ValueError: If agent name already exists
                                      RuntimeError: If agent initialization fails
                                  """
                                  from llmling_agent.models.context import AgentContext
                              
                                  if name in self.agents:
                                      msg = f"Agent {name} already exists"
                                      raise ValueError(msg)
                              
                                  try:
                                      # Create runtime from agent's config
                                      cfg = config.get_config()
                                      runtime = RuntimeConfig.from_config(cfg)
                              
                                      # Create context with config path and capabilities
                                      context = AgentContext[Any](
                                          agent_name=name,
                                          capabilities=config.capabilities,
                                          definition=self.manifest,
                                          config=config,
                                          pool=self,
                                      )
                              
                                      # Create agent with runtime and context
                                      agent = Agent[Any](
                                          agent_type=config.get_provider(),
                                          runtime=runtime,
                                          context=context,
                                          result_type=None,  # type: ignore[arg-type]
                                          model=config.model,  # type: ignore[arg-type]
                                          system_prompt=config.system_prompts,
                                          name=name,
                                      )
                              
                                      # Enter agent's async context through pool's exit stack
                                      agent = await self.exit_stack.enter_async_context(agent)
                              
                                      # Set up workers if defined
                                      if config.workers:
                                          self.setup_agent_workers(agent, config.workers)
                              
                                      # Register in pool and optionally manifest
                                      self.agents[name] = agent
                                      if not temporary:
                                          self.manifest.agents[name] = config
                                  except Exception as e:
                                      msg = f"Failed to create agent {name}"
                                      raise RuntimeError(msg) from e
                                  else:
                                      return agent
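
A sketch of adding a temporary agent at runtime. It assumes AgentConfig accepts model and system_prompts fields (both values are placeholders); temporary=True keeps the manifest unchanged.

    from llmling_agent.delegation.pool import AgentPool
    from llmling_agent.models.agents import AgentConfig

    async def add_reviewer(pool: AgentPool):
        cfg = AgentConfig(
            model="openai:gpt-4o-mini",                 # placeholder model id
            system_prompts=["You review Python code."],
        )
        return await pool.create_agent("reviewer", cfg, temporary=True)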
                              

                              create_group

                              create_group(
                                  agents: Sequence[str | AnyAgent[TDeps, Any]] | None = None,
                                  *,
                                  model_override: str | None = None,
                                  environment_override: StrPath | Config | None = None,
                                  shared_prompt: str | None = None,
                                  shared_deps: TDeps | None = None,
                              ) -> Team[TDeps]
                              

                              Create a group from agent names or instances.

Parameters:

    agents (Sequence[str | AnyAgent[TDeps, Any]] | None, default None): List of agent names or instances (all if None).
    model_override (str | None, default None): Optional model to use for all agents.
    environment_override (StrPath | Config | None, default None): Optional environment for all agents.
    shared_prompt (str | None, default None): Optional prompt for all agents.
    shared_deps (TDeps | None, default None): Optional shared dependencies.
                              Source code in src/llmling_agent/delegation/pool.py
                              def create_group[TDeps](
                                  self,
                                  agents: Sequence[str | AnyAgent[TDeps, Any]] | None = None,
                                  *,
                                  model_override: str | None = None,
                                  environment_override: StrPath | Config | None = None,
                                  shared_prompt: str | None = None,
                                  shared_deps: TDeps | None = None,
                              ) -> Team[TDeps]:
                                  """Create a group from agent names or instances.
                              
                                  Args:
                                      agents: List of agent names or instances (all if None)
                                      model_override: Optional model to use for all agents
                                      environment_override: Optional environment for all agents
                                      shared_prompt: Optional prompt for all agents
                                      shared_deps: Optional shared dependencies
                                  """
                                  from llmling_agent.delegation.agentgroup import Team
                              
                                  if agents is None:
                                      agents = list(self.agents.keys())
                              
                                  # First resolve/configure agents
                                  resolved_agents: list[AnyAgent[TDeps, Any]] = []
                                  for agent in agents:
                                      if isinstance(agent, str):
                                          agent = self.get_agent(
                                              agent,
                                              model_override=model_override,
                                              environment_override=environment_override,
                                          )
                                      resolved_agents.append(agent)
                              
                                  return Team(
                                      agents=resolved_agents,
                                      # pool=self,
                                      shared_prompt=shared_prompt,
                                      shared_deps=shared_deps,
                                  )
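
A sketch of grouping two agents into a Team with a shared prompt; the agent names are placeholders. Passing agents=None would include every agent in the pool.

    from llmling_agent.delegation.pool import AgentPool

    def make_team(pool: AgentPool):
        return pool.create_group(
            ["analyzer", "writer"],                       # placeholder agent names
            shared_prompt="Work together on the report.",
        )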
                              

                              get_agent

                              get_agent(
                                  agent: str | Agent[Any],
                                  *,
                                  deps: TDeps,
                                  return_type: type[TResult],
                                  model_override: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  environment_override: StrPath | Config | None = None,
                              ) -> StructuredAgent[TDeps, TResult]
                              
                              get_agent(
                                  agent: str | Agent[Any],
                                  *,
                                  deps: TDeps,
                                  model_override: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  environment_override: StrPath | Config | None = None,
                              ) -> Agent[TDeps]
                              
                              get_agent(
                                  agent: str | Agent[Any],
                                  *,
                                  return_type: type[TResult],
                                  model_override: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  environment_override: StrPath | Config | None = None,
                              ) -> StructuredAgent[Any, TResult]
                              
                              get_agent(
                                  agent: str | Agent[Any],
                                  *,
                                  model_override: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  environment_override: StrPath | Config | None = None,
                              ) -> Agent[Any]
                              
                              get_agent(
                                  agent: str | Agent[Any],
                                  *,
                                  deps: TDeps | None = None,
                                  return_type: type[TResult] | None = None,
                                  model_override: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  environment_override: StrPath | Config | None = None,
                              ) -> AnyAgent[TDeps, TResult]
                              

                              Get or wrap an agent.

Parameters:

    agent (str | Agent[Any], required): Either agent name or instance.
    deps (TDeps | None, default None): Dependencies for the agent.
    return_type (type[TResult] | None, default None): Optional type to make agent structured.
    model_override (str | None, default None): Optional model override.
    session (SessionIdType | SessionQuery, default None): Optional session ID or session query to recover conversation.
    environment_override (StrPath | Config | None, default None): Optional environment configuration: a path to an environment file, a complete Config instance, or None to use the agent's default environment.

Returns:

    AnyAgent[TDeps, TResult]: Either regular Agent or StructuredAgent depending on return_type.

Raises:

    KeyError: If agent name not found.
    ValueError: If environment configuration is invalid.

                              Source code in src/llmling_agent/delegation/pool.py
                              def get_agent[TDeps, TResult](
                                  self,
                                  agent: str | Agent[Any],
                                  *,
                                  deps: TDeps | None = None,
                                  return_type: type[TResult] | None = None,
                                  model_override: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                                  environment_override: StrPath | Config | None = None,
                              ) -> AnyAgent[TDeps, TResult]:
                                  """Get or wrap an agent.
                              
                                  Args:
                                      agent: Either agent name or instance
                                      deps: Dependencies for the agent
                                      return_type: Optional type to make agent structured
                                      model_override: Optional model override
                                      session: Optional session ID or Session query to recover conversation
                                      environment_override: Optional environment configuration:
                                          - Path to environment file
                                          - Complete Config instance
                                          - None to use agent's default environment
                              
                                  Returns:
                                      Either regular Agent or StructuredAgent depending on return_type
                              
                                  Raises:
                                      KeyError: If agent name not found
                                      ValueError: If environment configuration is invalid
                                  """
                                  # Get base agent
                                  base = agent if isinstance(agent, Agent) else self.agents[agent]
                                  if deps is not None:
                                      base.context = base.context or AgentContext[TDeps].create_default(base.name)
                                      base.context.data = deps
                              
                                  # Apply overrides
                                  if model_override:
                                      base.set_model(model_override)  # type: ignore
                              
                                  if session:
                                      base.conversation.load_history_from_database(session=session)
                                  match environment_override:
                                      case Config():
                                          base.context.runtime = RuntimeConfig.from_config(environment_override)
                                      case str() | PathLike():
                                          base.context.runtime = RuntimeConfig.from_file(environment_override)
                              
                                  # Wrap in StructuredAgent if return_type provided
                                  if return_type is not None:
                                      return StructuredAgent[Any, TResult](base, return_type)
                              
                                  return base
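
A sketch of the two main call shapes: fetching a plain agent by name, and wrapping the same agent as a StructuredAgent by passing return_type. The Summary model is a placeholder result type.

    from pydantic import BaseModel

    from llmling_agent.delegation.pool import AgentPool

    class Summary(BaseModel):          # placeholder result model
        title: str
        bullet_points: list[str]

    def fetch(pool: AgentPool):
        plain = pool.get_agent("analyzer")                             # Agent[Any]
        structured = pool.get_agent("analyzer", return_type=Summary)   # StructuredAgent
        return plain, structured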
                              

                              list_agents

                              list_agents() -> list[str]
                              

                              List available agent names.

                              Source code in src/llmling_agent/delegation/pool.py
                              def list_agents(self) -> list[str]:
                                  """List available agent names."""
                                  return list(self.manifest.agents)
                              

                              open async classmethod

                              open(
                                  config_path: StrPath | AgentsManifest[TDeps, TResult] | None = None,
                                  *,
                                  agents: list[str] | None = None,
                                  connect_agents: bool = True,
                                  confirmation_callback: ConfirmationCallback | None = None,
                              ) -> AsyncIterator[AgentPool]
                              

                              Open an agent pool from configuration.

Parameters:

    config_path (StrPath | AgentsManifest[TDeps, TResult] | None, default None): Path to agent configuration file, or a manifest instance.
    agents (list[str] | None, default None): Optional list of agent names to initialize.
    connect_agents (bool, default True): Whether to set up forwarding connections.
    confirmation_callback (ConfirmationCallback | None, default None): Callback to confirm agent tool selection.

Yields:

    AsyncIterator[AgentPool]: Configured agent pool.

                              Source code in src/llmling_agent/delegation/pool.py
                              @classmethod
                              @asynccontextmanager
                              async def open[TDeps, TResult](
                                  cls,
                                  config_path: StrPath | AgentsManifest[TDeps, TResult] | None = None,
                                  *,
                                  agents: list[str] | None = None,
                                  connect_agents: bool = True,
                                  confirmation_callback: ConfirmationCallback | None = None,
                              ) -> AsyncIterator[AgentPool]:
                                  """Open an agent pool from configuration.
                              
                                  Args:
                                      config_path: Path to agent configuration file or manifest
                                      agents: Optional list of agent names to initialize
                                      connect_agents: Whether to set up forwarding connections
                                      confirmation_callback: Callback to confirm agent tool selection
                              
                                  Yields:
                                      Configured agent pool
                                  """
                                  from llmling_agent.models import AgentsManifest
                              
                                  match config_path:
                                      case None:
                                          manifest = AgentsManifest[Any, Any]()
                                      case str():
                                          manifest = AgentsManifest[Any, Any].from_file(config_path)
                                      case AgentsManifest():
                                          manifest = config_path
                                      case _:
                                          msg = f"Invalid config path: {config_path}"
                                          raise ValueError(msg)
                                  pool = cls(
                                      manifest,
                                      agents_to_load=agents,
                                      connect_agents=connect_agents,
                                      confirmation_callback=confirmation_callback,
                                  )
                                  try:
                                      async with pool:
                                          yield pool
                                  finally:
                                      await pool.cleanup()
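
The context-manager form is the usual entry point. A sketch, assuming the common agent.run(...) call for a single prompt; the config path and agent name are placeholders.

    import asyncio

    from llmling_agent.delegation.pool import AgentPool

    async def main():
        async with AgentPool.open("agents.yml", agents=["analyzer"]) as pool:
            agent = pool.get_agent("analyzer")
            result = await agent.run("What changed in the last release?")  # assumed Agent API
            print(result)

    asyncio.run(main())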
                              

                              setup_agent_workers

                              setup_agent_workers(agent: AnyAgent[Any, Any], workers: list[WorkerConfig])
                              

                              Set up workers for an agent from configuration.

                              Source code in src/llmling_agent/delegation/pool.py
                              def setup_agent_workers(self, agent: AnyAgent[Any, Any], workers: list[WorkerConfig]):
                                  """Set up workers for an agent from configuration."""
                                  for worker_config in workers:
                                      try:
                                          worker = self.get_agent(worker_config.name)
                                          agent.register_worker(
                                              worker,
                                              name=worker_config.name,
                                              reset_history_on_run=worker_config.reset_history_on_run,
                                              pass_message_history=worker_config.pass_message_history,
                                              share_context=worker_config.share_context,
                                          )
                                      except KeyError as e:
                                          msg = f"Worker agent {worker_config.name!r} not found"
                                          raise ValueError(msg) from e
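
A sketch of wiring workers programmatically; the WorkerConfig import path and field defaults are assumptions, and the agent names are placeholders. In configuration-driven setups this happens automatically from each agent's workers entry in the manifest.

    from llmling_agent.delegation.pool import AgentPool
    from llmling_agent.models.agents import WorkerConfig  # import path is an assumption

    def wire_workers(pool: AgentPool):
        parent = pool.get_agent("analyzer")               # placeholder agent name
        workers = [
            WorkerConfig(name="researcher", pass_message_history=True),
            WorkerConfig(name="writer", reset_history_on_run=True),
        ]
        pool.setup_agent_workers(parent, workers)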
                              

                              start_supervision

                              start_supervision() -> OptionalAwaitable[None]
                              

                              Start supervision interface.

                              Can be called either synchronously or asynchronously:

                              Sync usage:

pool.start_supervision()

                              Async usage:

await pool.start_supervision()

                              Source code in src/llmling_agent/delegation/pool.py
                              def start_supervision(self) -> OptionalAwaitable[None]:
                                  """Start supervision interface.
                              
                                  Can be called either synchronously or asynchronously:
                              
                                  # Sync usage:
                                  start_supervision(pool)
                              
                                  # Async usage:
                                  await start_supervision(pool)
                                  """
                                  from llmling_agent.delegation.supervisor_ui import SupervisorApp
                              
                                  app = SupervisorApp(self)
                                  if asyncio.get_event_loop().is_running():
                                      # We're in an async context
                                      return app.run_async()
                                  # We're in a sync context
                                  app.run()
                                  return None
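
A sketch of the async form; in synchronous code, calling the method directly blocks until the supervision UI exits.

    from llmling_agent.delegation.pool import AgentPool

    async def supervise(pool: AgentPool):
        # Inside a running event loop this returns an awaitable UI session.
        await pool.start_supervision()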
                              

                              AgentPoolView

                              User's view and control point for interacting with an agent in a pool.

This class provides a focused way to interact with one primary agent that is part of a larger agent pool. Through this view, users can:

1. Interact with the primary agent directly
2. Manage connections to other agents in the pool
3. Control tool availability and settings
4. Handle commands and responses

                              Think of it as looking at the agent pool through the lens of one specific agent, while still being able to utilize the pool's collaborative capabilities.

                              Source code in src/llmling_agent/chat_session/base.py
                              300
                              301
                              302
                              303
                              304
                              305
                              306
                              307
                              308
                              309
                              310
                              311
                              312
                              313
                              314
                              315
                              316
                              317
                              318
                              319
                              320
                              321
                              322
                              323
                              324
                              325
                              326
                              327
                              328
                              329
                              330
                              331
                              332
                              333
                              334
                              335
                              336
                              337
                              338
                              339
                              340
                              341
                              342
                              343
                              344
                              345
                              346
                              347
                              348
                              349
                              350
                              351
                              352
                              353
                              354
                              355
                              356
                              357
                              358
                              359
                              360
                              361
                              362
                              363
                              364
                              class AgentPoolView:
                                  """User's view and control point for interacting with an agent in a pool.
                              
                                  This class provides a focused way to interact with one primary agent that is part
                                  of a larger agent pool. Through this view, users can:
                                  1. Interact with the primary agent directly
                                  2. Manage connections to other agents in the pool
                                  3. Control tool availability and settings
                                  4. Handle commands and responses
                              
                                  Think of it as looking at the agent pool through the lens of one specific agent,
                                  while still being able to utilize the pool's collaborative capabilities.
                                  """
                              
                                  @dataclass(frozen=True)
                                  class SessionReset:
                                      """Emitted when session is reset."""
                              
                                      session_id: str
                                      previous_tools: dict[str, bool]
                                      new_tools: dict[str, bool]
                                      timestamp: datetime = field(default_factory=datetime.now)
                              
                                  history_cleared = Signal(ConversationManager.HistoryCleared)
                                  session_reset = Signal(SessionReset)
                                  tool_added = Signal(str, ToolInfo)
                                  tool_removed = Signal(str)  # tool_name
                                  tool_changed = Signal(str, ToolInfo)  # name, new_info
                                  agent_connected = Signal(Agent)
                              
                                  def __init__(
                                      self,
                                      agent: AnyAgent[Any, Any],
                                      *,
                                      pool: AgentPool | None = None,
                                      wait_chain: bool = True,
                                  ):
                                      """Initialize chat session.
                              
                                      Args:
                                          agent: The LLMling agent to use
                                          pool: Optional agent pool for multi-agent interactions
                                          wait_chain: Whether to wait for chain completion
                                      """
                                      # Basic setup that doesn't need async
                                      self._agent = agent
                                      self._pool = pool
                                      self.wait_chain = wait_chain
                                      # forward ToolManager signals to ours
                                      self._agent.tools.events.added.connect(self.tool_added.emit)
                                      self._agent.tools.events.removed.connect(self.tool_removed.emit)
                                      self._agent.tools.events.changed.connect(self.tool_changed.emit)
                                      self._agent.conversation.history_cleared.connect(self.history_cleared.emit)
                                      self._initialized = False  # Track initialization state
                                      file_path = HISTORY_DIR / f"{agent.name}.history"
                                      self.commands = CommandStore(history_file=file_path, enable_system_commands=True)
                                      self.start_time = datetime.now()
                                      self._state = SessionState(current_model=self._agent.model_name)
                              
                                  @classmethod
                                  async def create(
                                      cls,
                                      agent: Agent[Any],
                                      *,
                                      pool: AgentPool | None = None,
                                      wait_chain: bool = True,
                                  ) -> AgentPoolView:
                                      """Create and initialize a new agent pool view.
                              
                                      Args:
                                          agent: The primary agent to interact with
                                          pool: Optional agent pool for multi-agent interactions
                                          wait_chain: Whether to wait for chain completion
                              
                                      Returns:
                                          Initialized AgentPoolView
                                      """
                                      view = cls(agent, pool=pool, wait_chain=wait_chain)
                                      await view.initialize()
                                      return view
                              
                                  @property
                                  def pool(self) -> AgentPool | None:
                                      """Get the agent pool if available."""
                                      return self._pool
                              
                                  async def connect_to(self, target: str, wait: bool | None = None):
                                      """Connect to another agent.
                              
                                      Args:
                                          target: Name of target agent
                                          wait: Override session's wait_chain setting
                              
                                      Raises:
                                          ValueError: If target agent not found or pool not available
                                      """
                                      logger.debug("Connecting to %s (wait=%s)", target, wait)
                                      if not self._pool:
                                          msg = "No agent pool available"
                                          raise ValueError(msg)
                              
                                      try:
                                          target_agent = self._pool.get_agent(target)
                                      except KeyError as e:
                                          msg = f"Target agent not found: {target}"
                                          raise ValueError(msg) from e
                              
                                      self._agent.pass_results_to(target_agent)
                                      self.agent_connected.emit(target_agent)
                              
                                      if wait is not None:
                                          self.wait_chain = wait
                              
                                  def _ensure_initialized(self):
                                      """Check if session is initialized."""
                                      if not self._initialized:
                                          msg = "Session not initialized. Call initialize() first."
                                          raise RuntimeError(msg)
                              
                                  async def initialize(self):
                                      """Initialize async resources and load data."""
                                      if self._initialized:
                                          return
                              
                                      # Load command history
                                      await self.commands.initialize()
                                      for cmd in get_commands():
                                          self.commands.register_command(cmd)
                              
                                      self._initialized = True
                                      logger.debug("Initialized chat session for agent %r", self._agent.name)
                              
                                  async def cleanup(self):
                                      """Clean up session resources."""
                                      if self._pool:
                                          await self._agent.disconnect_all()
                              
                                  def add_command(self, command: str):
                                      """Add command to history."""
                                      if not command.strip():
                                          return
                                      from llmling_agent.storage.models import CommandHistory
                              
                                      id_ = str(self._agent.conversation.id)
                                      CommandHistory.log(agent_name=self._agent.name, session_id=id_, command=command)
                              
                                  def get_commands(
                                      self, limit: int | None = None, current_session_only: bool = False
                                  ) -> list[str]:
                                      """Get command history ordered by newest first."""
                                      from llmling_agent.storage.models import CommandHistory
                              
                                      return CommandHistory.get_commands(
                                          agent_name=self._agent.name,
                                          session_id=str(self._agent.conversation.id),
                                          limit=limit,
                                          current_session_only=current_session_only,
                                      )
                              
                                  async def clear(self):
                                      """Clear chat history."""
                                      self._agent.conversation.clear()
                              
                                  async def reset(self):
                                      """Reset session state."""
                                      old_tools = self.tools.list_tools()
                                      self._agent.conversation.clear()
                                      self.tools.reset_states()
                                      new_tools = self.tools.list_tools()
                              
                                      event = self.SessionReset(
                                          session_id=str(self._agent.conversation.id),
                                          previous_tools=old_tools,
                                          new_tools=new_tools,
                                      )
                                      self.session_reset.emit(event)
                              
                                  async def handle_command(
                                      self,
                                      command_str: str,
                                      output: OutputWriter,
                                      metadata: dict[str, Any] | None = None,
                                  ):
                                      """Handle a slash command.
                              
                                      Args:
                                          command_str: Command string without leading slash
                                          output: Output writer implementation
                                          metadata: Optional interface-specific metadata
                                      """
                                      self._ensure_initialized()
                                      meta = metadata or {}
                                      ctx = self.commands.create_context(self, output_writer=output, metadata=meta)
                                      await self.commands.execute_command(command_str, ctx)
                              
                                  async def send_slash_command(
                                      self,
                                      content: str,
                                      *,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[str]:
                                      writer = output or DefaultOutputWriter()
                                      try:
                                          await self.handle_command(content[1:], output=writer, metadata=metadata)
                                          return ChatMessage(content="", role="system")
                                      except ExitCommandError:
                                          # Re-raise without wrapping in CommandError
                                          raise
                                      except CommandError as e:
                                          return ChatMessage(content=f"Command error: {e}", role="system")
                              
                                  @overload
                                  async def send_message(
                                      self,
                                      content: str,
                                      *,
                                      stream: Literal[False] = False,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[str]: ...
                              
                                  @overload
                                  async def send_message(
                                      self,
                                      content: str,
                                      *,
                                      stream: Literal[True],
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> AsyncIterator[ChatMessage[str]]: ...
                              
                                  async def send_message(
                                      self,
                                      content: str,
                                      *,
                                      stream: bool = False,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[str] | AsyncIterator[ChatMessage[str]]:
                                      """Send a message and get response(s)."""
                                      self._ensure_initialized()
                                      if not content.strip():
                                          msg = "Message cannot be empty"
                                          raise ValueError(msg)
                              
                                      if content.startswith("/"):
                                          return await self.send_slash_command(
                                              content,
                                              output=output,
                                              metadata=metadata,
                                          )
                                      try:
                                          if stream:
                                              return self._stream_message(content)
                                          return await self._send_normal(content)
                              
                                      except Exception as e:
                                          logger.exception("Error processing message")
                                          msg = f"Error processing message: {e}"
                                          raise ChatSessionConfigError(msg) from e
                              
                                  async def _send_normal(self, content: str) -> ChatMessage[str]:
                                      """Send message and get single response."""
                                      result = await self._agent.run(content)
                                      text_message = result.to_text_message()
                              
                                      # Update session state metrics
                                      self._state.message_count += 2  # User and assistant messages
                                      if text_message.cost_info:
                                          self._state.update_tokens(text_message)
                                          self._state.total_cost = float(text_message.cost_info.total_cost)
                                      if text_message.response_time:
                                          self._state.last_response_time = text_message.response_time
                              
                                      # Add chain waiting if enabled
                                      if self.wait_chain and self._pool:
                                          await self._agent.wait_for_chain()
                              
                                      return text_message
                              
                                  async def _stream_message(self, content: str) -> AsyncIterator[ChatMessage[str]]:
                                      """Send message and stream responses."""
                                      async with self._agent.run_stream(content) as stream_result:
                                          # Stream intermediate chunks
                                          async for response in stream_result.stream():
                                              yield ChatMessage[str](content=str(response), role="assistant")
                              
                                          # Final message with complete metrics after stream completes
                                          start_time = time.perf_counter()
                              
                                          # Get usage info if available
                                          usage = stream_result.usage()
                                          cost_info = (
                                              await TokenCost.from_usage(
                                                  usage, self._agent.model_name, content, response
                                              )
                                              if usage and self._agent.model_name
                                              else None
                                          )
                              
                                          # Create final status message with all metrics
                                          final_msg = ChatMessage[str](
                                              content="",  # Empty content for final status message
                                              role="assistant",
                                              name=self._agent.name,
                                              model=self._agent.model_name,
                                              message_id=str(uuid4()),
                                              cost_info=cost_info,
                                              response_time=time.perf_counter() - start_time,
                                          )
                              
                                          # Update session state
                                          self._state.message_count += 2  # User and assistant messages
                                          self._state.update_tokens(final_msg)
                              
                                          # Add chain waiting if enabled
                                          if self.wait_chain and self._pool:
                                              await self._agent.wait_for_chain()
                              
                                          yield final_msg
                              
                                  @property
                                  def tools(self) -> ToolManager:
                                      """Get current tool states."""
                                      return self._agent.tools
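
The pieces above are easiest to see end to end. The following is a minimal usage sketch (not taken from this page's source): it assumes an Agent and an AgentPool instance already exist, since their construction is documented elsewhere, and the prompt text is illustrative.

import asyncio

from llmling_agent.chat_session.base import AgentPoolView


async def main(agent, pool) -> None:
    # create() constructs the view and awaits initialize() for us
    view = await AgentPoolView.create(agent, pool=pool, wait_chain=True)
    try:
        reply = await view.send_message("Summarize the open tasks")
        print(reply.content)
    finally:
        await view.cleanup()

# asyncio.run(main(agent, pool)) once agent and pool are available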
                              

                              pool property

                              pool: AgentPool | None
                              

                              Get the agent pool if available.

                              tools property

                              tools: ToolManager
                              

                              Get current tool states.

                              SessionReset dataclass

                              Emitted when session is reset.

                              Source code in src/llmling_agent/chat_session/base.py
                              @dataclass(frozen=True)
                              class SessionReset:
                                  """Emitted when session is reset."""
                              
                                  session_id: str
                                  previous_tools: dict[str, bool]
                                  new_tools: dict[str, bool]
                                  timestamp: datetime = field(default_factory=datetime.now)
                              

                              __init__

                              __init__(
                                  agent: AnyAgent[Any, Any], *, pool: AgentPool | None = None, wait_chain: bool = True
                              )
                              

                              Initialize chat session.

                              Parameters:

                              Name Type Description Default
                              agent AnyAgent[Any, Any]

                              The LLMling agent to use

                              required
                              pool AgentPool | None

                              Optional agent pool for multi-agent interactions

                              None
                              wait_chain bool

                              Whether to wait for chain completion

                              True
                              Source code in src/llmling_agent/chat_session/base.py
                              def __init__(
                                  self,
                                  agent: AnyAgent[Any, Any],
                                  *,
                                  pool: AgentPool | None = None,
                                  wait_chain: bool = True,
                              ):
                                  """Initialize chat session.
                              
                                  Args:
                                      agent: The LLMling agent to use
                                      pool: Optional agent pool for multi-agent interactions
                                      wait_chain: Whether to wait for chain completion
                                  """
                                  # Basic setup that doesn't need async
                                  self._agent = agent
                                  self._pool = pool
                                  self.wait_chain = wait_chain
                                  # forward ToolManager signals to ours
                                  self._agent.tools.events.added.connect(self.tool_added.emit)
                                  self._agent.tools.events.removed.connect(self.tool_removed.emit)
                                  self._agent.tools.events.changed.connect(self.tool_changed.emit)
                                  self._agent.conversation.history_cleared.connect(self.history_cleared.emit)
                                  self._initialized = False  # Track initialization state
                                  file_path = HISTORY_DIR / f"{agent.name}.history"
                                  self.commands = CommandStore(history_file=file_path, enable_system_commands=True)
                                  self.start_time = datetime.now()
                                  self._state = SessionState(current_model=self._agent.model_name)
                              

                              add_command

                              add_command(command: str)
                              

                              Add command to history.

                              Source code in src/llmling_agent/chat_session/base.py
                              def add_command(self, command: str):
                                  """Add command to history."""
                                  if not command.strip():
                                      return
                                  from llmling_agent.storage.models import CommandHistory
                              
                                  id_ = str(self._agent.conversation.id)
                                  CommandHistory.log(agent_name=self._agent.name, session_id=id_, command=command)
                              

                              cleanup async

                              cleanup()
                              

                              Clean up session resources.

                              Source code in src/llmling_agent/chat_session/base.py
                              async def cleanup(self):
                                  """Clean up session resources."""
                                  if self._pool:
                                      await self._agent.disconnect_all()
                              

                              clear async

                              clear()
                              

                              Clear chat history.

                              Source code in src/llmling_agent/chat_session/base.py
                              async def clear(self):
                                  """Clear chat history."""
                                  self._agent.conversation.clear()
                              

                              connect_to async

                              connect_to(target: str, wait: bool | None = None)
                              

                              Connect to another agent.

                              Parameters:

                              Name Type Description Default
                              target str

                              Name of target agent

                              required
                              wait bool | None

                              Override session's wait_chain setting

                              None

                              Raises:

                              Type Description
                              ValueError

                              If target agent not found or pool not available

                              Source code in src/llmling_agent/chat_session/base.py
                              async def connect_to(self, target: str, wait: bool | None = None):
                                  """Connect to another agent.
                              
                                  Args:
                                      target: Name of target agent
                                      wait: Override session's wait_chain setting
                              
                                  Raises:
                                      ValueError: If target agent not found or pool not available
                                  """
                                  logger.debug("Connecting to %s (wait=%s)", target, wait)
                                  if not self._pool:
                                      msg = "No agent pool available"
                                      raise ValueError(msg)
                              
                                  try:
                                      target_agent = self._pool.get_agent(target)
                                  except KeyError as e:
                                      msg = f"Target agent not found: {target}"
                                      raise ValueError(msg) from e
                              
                                  self._agent.pass_results_to(target_agent)
                                  self.agent_connected.emit(target_agent)
                              
                                  if wait is not None:
                                      self.wait_chain = wait
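
As a rough sketch of the flow above (the agent name and callback are illustrative, and view is assumed to be an initialized AgentPoolView backed by a pool):

def on_connected(agent) -> None:
    # fires after pass_results_to() has wired the agents together
    print(f"now forwarding results to {agent.name}")


view.agent_connected.connect(on_connected)
await view.connect_to("summarizer", wait=True)  # wait=True also updates wait_chain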
                              

                              create async classmethod

                              create(
                                  agent: Agent[Any], *, pool: AgentPool | None = None, wait_chain: bool = True
                              ) -> AgentPoolView
                              

                              Create and initialize a new agent pool view.

                              Parameters:

                              Name Type Description Default
                              agent Agent[Any]

                              The primary agent to interact with

                              required
                              pool AgentPool | None

                              Optional agent pool for multi-agent interactions

                              None
                              wait_chain bool

                              Whether to wait for chain completion

                              True

                              Returns:

                              Type Description
                              AgentPoolView

                              Initialized AgentPoolView

                              Source code in src/llmling_agent/chat_session/base.py
                              @classmethod
                              async def create(
                                  cls,
                                  agent: Agent[Any],
                                  *,
                                  pool: AgentPool | None = None,
                                  wait_chain: bool = True,
                              ) -> AgentPoolView:
                                  """Create and initialize a new agent pool view.
                              
                                  Args:
                                      agent: The primary agent to interact with
                                      pool: Optional agent pool for multi-agent interactions
                                      wait_chain: Whether to wait for chain completion
                              
                                  Returns:
                                      Initialized AgentPoolView
                                  """
                                  view = cls(agent, pool=pool, wait_chain=wait_chain)
                                  await view.initialize()
                                  return view
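
Per the body above, create() is shorthand for constructing the view and awaiting initialize(); the two-step form below is equivalent (sketch, assuming agent and pool already exist):

view = AgentPoolView(agent, pool=pool, wait_chain=True)
await view.initialize()

# ...or in one call:
view = await AgentPoolView.create(agent, pool=pool, wait_chain=True)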
                              

                              get_commands

                              get_commands(limit: int | None = None, current_session_only: bool = False) -> list[str]
                              

                              Get command history ordered by newest first.

                              Source code in src/llmling_agent/chat_session/base.py
                              def get_commands(
                                  self, limit: int | None = None, current_session_only: bool = False
                              ) -> list[str]:
                                  """Get command history ordered by newest first."""
                                  from llmling_agent.storage.models import CommandHistory
                              
                                  return CommandHistory.get_commands(
                                      agent_name=self._agent.name,
                                      session_id=str(self._agent.conversation.id),
                                      limit=limit,
                                      current_session_only=current_session_only,
                                  )
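
A small sketch combining this with add_command() (the command text is illustrative; view is an initialized AgentPoolView):

view.add_command("/model gpt-4o")  # persisted via CommandHistory.log()
recent = view.get_commands(limit=5, current_session_only=True)
for cmd in recent:  # newest first
    print(cmd)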
                              

                              handle_command async

                              handle_command(
                                  command_str: str, output: OutputWriter, metadata: dict[str, Any] | None = None
                              )
                              

                              Handle a slash command.

                              Parameters:

                              Name Type Description Default
                              command_str str

                              Command string without leading slash

                              required
                              output OutputWriter

                              Output writer implementation

                              required
                              metadata dict[str, Any] | None

                              Optional interface-specific metadata

                              None
                              Source code in src/llmling_agent/chat_session/base.py
                              async def handle_command(
                                  self,
                                  command_str: str,
                                  output: OutputWriter,
                                  metadata: dict[str, Any] | None = None,
                              ):
                                  """Handle a slash command.
                              
                                  Args:
                                      command_str: Command string without leading slash
                                      output: Output writer implementation
                                      metadata: Optional interface-specific metadata
                                  """
                                  self._ensure_initialized()
                                  meta = metadata or {}
                                  ctx = self.commands.create_context(self, output_writer=output, metadata=meta)
                                  await self.commands.execute_command(command_str, ctx)
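
A direct-call sketch; note the missing leading slash. DefaultOutputWriter is the writer the class itself falls back to in send_slash_command(), but its import path is not shown on this page, and the "help" command name is only an assumption:

writer = DefaultOutputWriter()  # import path not documented here
await view.handle_command("help", output=writer, metadata={"source": "cli"})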
                              

                              initialize async

                              initialize()
                              

                              Initialize async resources and load data.

                              Source code in src/llmling_agent/chat_session/base.py
                              async def initialize(self):
                                  """Initialize async resources and load data."""
                                  if self._initialized:
                                      return
                              
                                  # Load command history
                                  await self.commands.initialize()
                                  for cmd in get_commands():
                                      self.commands.register_command(cmd)
                              
                                  self._initialized = True
                                  logger.debug("Initialized chat session for agent %r", self._agent.name)
                              

                              reset async

                              reset()
                              

                              Reset session state.

                              Source code in src/llmling_agent/chat_session/base.py
                              async def reset(self):
                                  """Reset session state."""
                                  old_tools = self.tools.list_tools()
                                  self._agent.conversation.clear()
                                  self.tools.reset_states()
                                  new_tools = self.tools.list_tools()
                              
                                  event = self.SessionReset(
                                      session_id=str(self._agent.conversation.id),
                                      previous_tools=old_tools,
                                      new_tools=new_tools,
                                  )
                                  self.session_reset.emit(event)
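
A sketch of observing the reset through the session_reset signal (the print format is illustrative; view is an initialized AgentPoolView):

def on_reset(event) -> None:
    # event is an AgentPoolView.SessionReset instance
    print(f"session {event.session_id} reset at {event.timestamp}")


view.session_reset.connect(on_reset)
await view.reset()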
                              

                              send_message async

                              send_message(
                                  content: str,
                                  *,
                                  stream: Literal[False] = False,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[str]
                              
                              send_message(
                                  content: str,
                                  *,
                                  stream: Literal[True],
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> AsyncIterator[ChatMessage[str]]
                              
                              send_message(
                                  content: str,
                                  *,
                                  stream: bool = False,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[str] | AsyncIterator[ChatMessage[str]]
                              

                              Send a message and get response(s).

                              Source code in src/llmling_agent/chat_session/base.py
                              async def send_message(
                                  self,
                                  content: str,
                                  *,
                                  stream: bool = False,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[str] | AsyncIterator[ChatMessage[str]]:
                                  """Send a message and get response(s)."""
                                  self._ensure_initialized()
                                  if not content.strip():
                                      msg = "Message cannot be empty"
                                      raise ValueError(msg)
                              
                                  if content.startswith("/"):
                                      return await self.send_slash_command(
                                          content,
                                          output=output,
                                          metadata=metadata,
                                      )
                                  try:
                                      if stream:
                                          return self._stream_message(content)
                                      return await self._send_normal(content)
                              
                                  except Exception as e:
                                      logger.exception("Error processing message")
                                      msg = f"Error processing message: {e}"
                                      raise ChatSessionConfigError(msg) from e
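
The overloads above distinguish the two calling styles; a sketch of both follows (prompts are illustrative, view is an initialized AgentPoolView):

# stream=False (default): one ChatMessage with cost/response-time metadata
msg = await view.send_message("What changed since yesterday?")
print(msg.content)

# stream=True: an async iterator of chunks, ending with an empty status message
async for chunk in await view.send_message("Explain the diff", stream=True):
    print(chunk.content, end="")

# a leading "/" routes through the slash-command handler instead
await view.send_message("/help")  # command name is illustrative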
                              

                              AgentRouter

                              Base class for routing messages between agents.

                              Source code in src/llmling_agent/delegation/router.py
                              class AgentRouter:
                                  """Base class for routing messages between agents."""
                              
                                  async def decide(self, message: Any) -> Decision:
                                      """Make routing decision for message."""
                                      raise NotImplementedError
                              
                                  def get_wait_decision(
                                      self, target: str, reason: str, talk_back: bool = False
                                  ) -> Decision:
                                      """Create decision to route and wait for response."""
                                      return AwaitResponseDecision(
                                          target_agent=target, reason=reason, talk_back=talk_back
                                      )
                              
                                  def get_route_decision(self, target: str, reason: str) -> Decision:
                                      """Create decision to route without waiting."""
                                      return RouteDecision(target_agent=target, reason=reason)
                              
                                  def get_end_decision(self, reason: str) -> Decision:
                                      """Create decision to end routing."""
                                      return EndDecision(reason=reason)
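
A minimal subclass sketch: only decide() must be implemented, and the helper methods above wrap the three decision types. The "debugger" agent name and the keyword check are illustrative:

from typing import Any

from llmling_agent.delegation.router import AgentRouter, Decision


class KeywordRouter(AgentRouter):
    """Route anything that looks like a bug report; otherwise stop."""

    async def decide(self, message: Any) -> Decision:
        if "error" in str(message).lower():
            return self.get_wait_decision("debugger", reason="possible bug report")
        return self.get_end_decision(reason="nothing to route")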
                              

                              decide async

                              decide(message: Any) -> Decision
                              

                              Make routing decision for message.

                              Source code in src/llmling_agent/delegation/router.py
                              async def decide(self, message: Any) -> Decision:
                                  """Make routing decision for message."""
                                  raise NotImplementedError
                              

                              get_end_decision

                              get_end_decision(reason: str) -> Decision
                              

                              Create decision to end routing.

                              Source code in src/llmling_agent/delegation/router.py
                              def get_end_decision(self, reason: str) -> Decision:
                                  """Create decision to end routing."""
                                  return EndDecision(reason=reason)
                              

                              get_route_decision

                              get_route_decision(target: str, reason: str) -> Decision
                              

                              Create decision to route without waiting.

                              Source code in src/llmling_agent/delegation/router.py
                              def get_route_decision(self, target: str, reason: str) -> Decision:
                                  """Create decision to route without waiting."""
                                  return RouteDecision(target_agent=target, reason=reason)
                              

                              get_wait_decision

                              get_wait_decision(target: str, reason: str, talk_back: bool = False) -> Decision
                              

                              Create decision to route and wait for response.

Source code in src/llmling_agent/delegation/router.py, lines 111-117
                              def get_wait_decision(
                                  self, target: str, reason: str, talk_back: bool = False
                              ) -> Decision:
                                  """Create decision to route and wait for response."""
                                  return AwaitResponseDecision(
                                      target_agent=target, reason=reason, talk_back=talk_back
                                  )
                              

                              AgentsManifest

                              Bases: ConfigModel

                              Complete agent configuration manifest defining all available agents.

This is the root configuration that:

- Defines available response types (both inline and imported)
- Configures all agent instances and their settings
- Sets up custom role definitions and capabilities
- Manages environment configurations

                              A single manifest can define multiple agents that can work independently or collaborate through the orchestrator.
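
A minimal usage sketch, assuming a manifest file named agents.yml exists and that the import path matches the source location shown below:

    from llmling_agent.models.agents import AgentsManifest

    # Load and validate the manifest from YAML
    manifest = AgentsManifest.from_file("agents.yml")

    # Inspect what it defines
    print(list(manifest.agents))     # configured agent names
    print(list(manifest.responses))  # named response definitions
    print(manifest.storage)          # storage provider configuration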

Source code in src/llmling_agent/models/agents.py, lines 437-702
                              class AgentsManifest[TDeps, TResult](ConfigModel):
                                  """Complete agent configuration manifest defining all available agents.
                              
                                  This is the root configuration that:
                                  - Defines available response types (both inline and imported)
                                  - Configures all agent instances and their settings
                                  - Sets up custom role definitions and capabilities
                                  - Manages environment configurations
                              
                                  A single manifest can define multiple agents that can work independently
                                  or collaborate through the orchestrator.
                                  """
                              
                                  agents: dict[str, AgentConfig] = Field(default_factory=dict)
                                  """Mapping of agent IDs to their configurations"""
                              
                                  storage: StorageConfig = Field(default_factory=StorageConfig)
                                  """Storage provider configuration."""
                              
                                  responses: dict[str, ResponseDefinition] = Field(default_factory=dict)
                                  """Mapping of response names to their definitions"""
                              
                                  tasks: dict[str, AgentTask] = Field(default_factory=dict)
                                  """Pre-defined tasks, ready to be used by agents."""
                              
                                  mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                                  """List of MCP server configurations:
                                  - str entries are converted to StdioMCPServer
                                  - MCPServerConfig for full server configuration
                                  """
                                  model_config = ConfigDict(use_attribute_docstrings=True, extra="forbid")
                              
                                  def clone_agent_config(
                                      self,
                                      name: str,
                                      new_name: str | None = None,
                                      *,
                                      template_context: dict[str, Any] | None = None,
                                      **overrides: Any,
                                  ) -> str:
                                      """Create a copy of an agent configuration.
                              
                                      Args:
                                          name: Name of agent to clone
                                          new_name: Optional new name (auto-generated if None)
                                          template_context: Variables for template rendering
                                          **overrides: Configuration overrides for the clone
                              
                                      Returns:
                                          Name of the new agent
                              
                                      Raises:
                                          KeyError: If original agent not found
                                          ValueError: If new name already exists or if overrides invalid
                                      """
                                      if name not in self.agents:
                                          msg = f"Agent {name} not found"
                                          raise KeyError(msg)
                              
                                      actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                                      if actual_name in self.agents:
                                          msg = f"Agent {actual_name} already exists"
                                          raise ValueError(msg)
                              
                                      # Deep copy the configuration
                                      config = self.agents[name].model_copy(deep=True)
                              
                                      # Apply overrides
                                      for key, value in overrides.items():
                                          if not hasattr(config, key):
                                              msg = f"Invalid override: {key}"
                                              raise ValueError(msg)
                                          setattr(config, key, value)
                              
                                      # Handle template rendering if context provided
                                      if template_context:
                                          # Apply name from context if not explicitly overridden
                                          if "name" in template_context and "name" not in overrides:
                                              config.name = template_context["name"]
                              
                                          # Render system prompts
                                          config.system_prompts = config.render_system_prompts(template_context)
                              
                                      self.agents[actual_name] = config
                                      return actual_name
                              
                                  @model_validator(mode="before")
                                  @classmethod
                                  def resolve_inheritance(cls, data: dict) -> dict:
                                      """Resolve agent inheritance chains."""
                                      agents = data.get("agents", {})
                                      resolved: dict[str, dict] = {}
                                      seen: set[str] = set()
                              
                                      def resolve_agent(name: str) -> dict:
                                          if name in resolved:
                                              return resolved[name]
                              
                                          if name in seen:
                                              msg = f"Circular inheritance detected: {name}"
                                              raise ValueError(msg)
                              
                                          seen.add(name)
                                          config = (
                                              agents[name].model_copy()
                                              if hasattr(agents[name], "model_copy")
                                              else agents[name].copy()
                                          )
                                          inherit = (
                                              config.get("inherits") if isinstance(config, dict) else config.inherits
                                          )
                                          if inherit:
                                              if inherit not in agents:
                                                  msg = f"Parent agent {inherit} not found"
                                                  raise ValueError(msg)
                              
                                              # Get resolved parent config
                                              parent = resolve_agent(inherit)
                                              # Merge parent with child (child overrides parent)
                                              merged = parent.copy()
                                              merged.update(config)
                                              config = merged
                              
                                          seen.remove(name)
                                          resolved[name] = config
                                          return config
                              
                                      # Resolve all agents
                                      for name in agents:
                                          resolved[name] = resolve_agent(name)
                              
                                      # Update agents with resolved configs
                                      data["agents"] = resolved
                                      return data
                              
                                  # @model_validator(mode="after")
                                  # def validate_response_types(self) -> AgentsManifest:
                                  #     """Ensure all agent result_types exist in responses or are inline."""
                                  #     for agent_id, agent in self.agents.items():
                                  #         if (
                                  #             isinstance(agent.result_type, str)
                                  #             and agent.result_type not in self.responses
                                  #         ):
                                  #             msg = f"'{agent.result_type=}' for '{agent_id=}' not found in responses"
                                  #             raise ValueError(msg)
                                  #     return self
                              
                                  @classmethod
                                  def from_file(cls, path: StrPath) -> Self:
                                      """Load agent configuration from YAML file.
                              
                                      Args:
                                          path: Path to the configuration file
                              
                                      Returns:
                                          Loaded agent definition
                              
                                      Raises:
                                          ValueError: If loading fails
                                      """
                                      try:
                                          data = yamling.load_yaml_file(path)
                                          # Set identifier as name if not set
                                          for identifier, config in data["agents"].items():
                                              if not config.get("name"):
                                                  config["name"] = identifier
                                          agent_def = cls.model_validate(data)
                                          # Update all agents with the config file path and ensure names
                                          agents = {
                                              name: config.model_copy(update={"config_file_path": str(path)})
                                              for name, config in agent_def.agents.items()
                                          }
                                          return agent_def.model_copy(update={"agents": agents})
                                      except Exception as exc:
                                          msg = f"Failed to load agent config from {path}"
                                          raise ValueError(msg) from exc
                              
                                  async def create_pool(
                                      self,
                                      *,
                                      agents_to_load: list[str] | None = None,
                                      connect_agents: bool = True,
                                      session_id: SessionIdType = None,
                                  ) -> AgentPool:
                                      """Create an agent pool from this manifest.
                              
                                      Args:
                                          agents_to_load: Optional list of agents to initialize
                                          connect_agents: Whether to set up forwarding connections
                                          session_id: Optional session ID for conversation recovery
                              
                                      Returns:
                                          Configured agent pool
                                      """
                                      from llmling_agent.delegation import AgentPool
                              
                                      pool = AgentPool(
                                          manifest=self,
                                          agents_to_load=agents_to_load,
                                          connect_agents=connect_agents,
                                      )
                              
                                      # Initialize agents with knowledge
                                      for name, agent in pool.agents.items():
                                          if (cfg := self.agents.get(name)) and cfg.knowledge:
                                              for source in (
                                                  cfg.knowledge.paths + cfg.knowledge.resources + cfg.knowledge.prompts
                                              ):
                                                  await agent.conversation.load_context_source(source)  # type: ignore
                              
                                      return pool
                              
                                  @asynccontextmanager
                                  async def open_agent(
                                      self,
                                      agent_name: str,
                                      *,
                                      model: str | None = None,
                                      session: SessionIdType | SessionQuery = None,
                                  ) -> AsyncIterator[AnyAgent[TDeps, Any]]:
                                      """Open and configure a specific agent from configuration.
                              
                                      Creates the agent in the context of a single-agent pool.
                              
                                      Args:
                                          agent_name: Name of the agent to load
                                          model: Optional model override
                                          session: Optional ID or SessionQuery to recover a previous state
                              
                                      Example:
                                          manifest = AgentsManifest[Any, str].from_file("agents.yml")
                                          async with manifest.open_agent("my-agent") as agent:
                                              result = await agent.run("Hello!")
                                      """
                                      from llmling_agent import Agent
                                      from llmling_agent.delegation import AgentPool
                              
                                      # Create empty pool just for context
                                      pool = AgentPool(manifest=self, agents_to_load=[], connect_agents=False)
                                      try:
                                          async with Agent[TDeps].open_agent(  # type: ignore
                                              self,
                                              agent_name,
                                              model=model,
                                              session=session,
                                          ) as agent:
                                              if agent.context:
                                                  agent.context.pool = pool
                                              pool.agents[agent_name] = agent
                                              yield agent
                                      finally:
                                          await pool.cleanup()
                              
                                  def get_result_type(self, agent_name: str) -> type[Any] | None:
                                      """Get the resolved result type for an agent.
                              
                                      Returns None if no result type is configured.
                                      """
                                      agent_config = self.agents[agent_name]
                                      if not agent_config.result_type:
                                          return None
                                      logger.debug("Building response model for %r", agent_config.result_type)
                                      if isinstance(agent_config.result_type, str):
                                          response_def = self.responses[agent_config.result_type]
                                          return response_def.create_model()  # type: ignore
                                      return agent_config.result_type.create_model()  # type: ignore
                              

                              agents class-attribute instance-attribute

                              agents: dict[str, AgentConfig] = Field(default_factory=dict)
                              

                              Mapping of agent IDs to their configurations

                              mcp_servers class-attribute instance-attribute

                              mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                              

List of MCP server configurations:

- str entries are converted to StdioMCPServer
- MCPServerConfig for full server configuration

                              responses class-attribute instance-attribute

                              responses: dict[str, ResponseDefinition] = Field(default_factory=dict)
                              

                              Mapping of response names to their definitions

                              storage class-attribute instance-attribute

                              storage: StorageConfig = Field(default_factory=StorageConfig)
                              

                              Storage provider configuration.

                              tasks class-attribute instance-attribute

                              tasks: dict[str, AgentTask] = Field(default_factory=dict)
                              

                              Pre-defined tasks, ready to be used by agents.

                              clone_agent_config

                              clone_agent_config(
                                  name: str,
                                  new_name: str | None = None,
                                  *,
                                  template_context: dict[str, Any] | None = None,
                                  **overrides: Any,
                              ) -> str
                              

                              Create a copy of an agent configuration.

                              Parameters:

name (str): Name of agent to clone. Required.
new_name (str | None): Optional new name (auto-generated if None). Default: None.
template_context (dict[str, Any] | None): Variables for template rendering. Default: None.
**overrides (Any): Configuration overrides for the clone. Default: {}.

                              Returns:

str: Name of the new agent.

                              Raises:

KeyError: If original agent not found.
ValueError: If new name already exists or if overrides invalid.
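
A hedged usage sketch; the agent name "writer", the clone name, and the template variables are placeholders:

    # Clone an existing agent config and re-render its system prompts
    clone_name = manifest.clone_agent_config(
        "writer",
        "writer_de",
        template_context={"name": "writer_de", "language": "German"},
    )
    print(clone_name)  # "writer_de"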

Source code in src/llmling_agent/models/agents.py, lines 469-521
                              def clone_agent_config(
                                  self,
                                  name: str,
                                  new_name: str | None = None,
                                  *,
                                  template_context: dict[str, Any] | None = None,
                                  **overrides: Any,
                              ) -> str:
                                  """Create a copy of an agent configuration.
                              
                                  Args:
                                      name: Name of agent to clone
                                      new_name: Optional new name (auto-generated if None)
                                      template_context: Variables for template rendering
                                      **overrides: Configuration overrides for the clone
                              
                                  Returns:
                                      Name of the new agent
                              
                                  Raises:
                                      KeyError: If original agent not found
                                      ValueError: If new name already exists or if overrides invalid
                                  """
                                  if name not in self.agents:
                                      msg = f"Agent {name} not found"
                                      raise KeyError(msg)
                              
                                  actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                                  if actual_name in self.agents:
                                      msg = f"Agent {actual_name} already exists"
                                      raise ValueError(msg)
                              
                                  # Deep copy the configuration
                                  config = self.agents[name].model_copy(deep=True)
                              
                                  # Apply overrides
                                  for key, value in overrides.items():
                                      if not hasattr(config, key):
                                          msg = f"Invalid override: {key}"
                                          raise ValueError(msg)
                                      setattr(config, key, value)
                              
                                  # Handle template rendering if context provided
                                  if template_context:
                                      # Apply name from context if not explicitly overridden
                                      if "name" in template_context and "name" not in overrides:
                                          config.name = template_context["name"]
                              
                                      # Render system prompts
                                      config.system_prompts = config.render_system_prompts(template_context)
                              
                                  self.agents[actual_name] = config
                                  return actual_name
                              

                              create_pool async

                              create_pool(
                                  *,
                                  agents_to_load: list[str] | None = None,
                                  connect_agents: bool = True,
                                  session_id: SessionIdType = None,
                              ) -> AgentPool
                              

                              Create an agent pool from this manifest.

                              Parameters:

agents_to_load (list[str] | None): Optional list of agents to initialize. Default: None.
connect_agents (bool): Whether to set up forwarding connections. Default: True.
session_id (SessionIdType): Optional session ID for conversation recovery. Default: None.

                              Returns:

AgentPool: Configured agent pool.
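
A sketch of typical usage, assuming an agents.yml manifest and an agent named "analyzer"; the cleanup call mirrors the teardown used by open_agent below:

    import asyncio

    from llmling_agent.models.agents import AgentsManifest


    async def main() -> None:
        manifest = AgentsManifest.from_file("agents.yml")
        pool = await manifest.create_pool(agents_to_load=["analyzer"])
        try:
            analyzer = pool.agents["analyzer"]
            # ... run tasks with the initialized agent ...
        finally:
            await pool.cleanup()


    asyncio.run(main())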

Source code in src/llmling_agent/models/agents.py, lines 614-647
                              async def create_pool(
                                  self,
                                  *,
                                  agents_to_load: list[str] | None = None,
                                  connect_agents: bool = True,
                                  session_id: SessionIdType = None,
                              ) -> AgentPool:
                                  """Create an agent pool from this manifest.
                              
                                  Args:
                                      agents_to_load: Optional list of agents to initialize
                                      connect_agents: Whether to set up forwarding connections
                                      session_id: Optional session ID for conversation recovery
                              
                                  Returns:
                                      Configured agent pool
                                  """
                                  from llmling_agent.delegation import AgentPool
                              
                                  pool = AgentPool(
                                      manifest=self,
                                      agents_to_load=agents_to_load,
                                      connect_agents=connect_agents,
                                  )
                              
                                  # Initialize agents with knowledge
                                  for name, agent in pool.agents.items():
                                      if (cfg := self.agents.get(name)) and cfg.knowledge:
                                          for source in (
                                              cfg.knowledge.paths + cfg.knowledge.resources + cfg.knowledge.prompts
                                          ):
                                              await agent.conversation.load_context_source(source)  # type: ignore
                              
                                  return pool
                              

                              from_file classmethod

                              from_file(path: StrPath) -> Self
                              

                              Load agent configuration from YAML file.

                              Parameters:

path (StrPath): Path to the configuration file. Required.

                              Returns:

Self: Loaded agent definition.

                              Raises:

ValueError: If loading fails.
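
Because any loading or validation failure is re-raised as ValueError, callers can handle bad configuration files in one place (file name illustrative):

    from llmling_agent.models.agents import AgentsManifest

    try:
        manifest = AgentsManifest.from_file("agents.yml")
    except ValueError as exc:
        print(f"Could not load manifest: {exc}")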

Source code in src/llmling_agent/models/agents.py, lines 584-612
                              @classmethod
                              def from_file(cls, path: StrPath) -> Self:
                                  """Load agent configuration from YAML file.
                              
                                  Args:
                                      path: Path to the configuration file
                              
                                  Returns:
                                      Loaded agent definition
                              
                                  Raises:
                                      ValueError: If loading fails
                                  """
                                  try:
                                      data = yamling.load_yaml_file(path)
                                      # Set identifier as name if not set
                                      for identifier, config in data["agents"].items():
                                          if not config.get("name"):
                                              config["name"] = identifier
                                      agent_def = cls.model_validate(data)
                                      # Update all agents with the config file path and ensure names
                                      agents = {
                                          name: config.model_copy(update={"config_file_path": str(path)})
                                          for name, config in agent_def.agents.items()
                                      }
                                      return agent_def.model_copy(update={"agents": agents})
                                  except Exception as exc:
                                      msg = f"Failed to load agent config from {path}"
                                      raise ValueError(msg) from exc
                              

                              get_result_type

                              get_result_type(agent_name: str) -> type[Any] | None
                              

                              Get the resolved result type for an agent.

                              Returns None if no result type is configured.
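
A small sketch, assuming a manifest with an agent named "my-agent"; None simply means no structured result type was configured:

    result_type = manifest.get_result_type("my-agent")
    if result_type is None:
        print("No structured result type configured")
    else:
        print(f"Structured result model: {result_type}")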

Source code in src/llmling_agent/models/agents.py, lines 690-702
                              def get_result_type(self, agent_name: str) -> type[Any] | None:
                                  """Get the resolved result type for an agent.
                              
                                  Returns None if no result type is configured.
                                  """
                                  agent_config = self.agents[agent_name]
                                  if not agent_config.result_type:
                                      return None
                                  logger.debug("Building response model for %r", agent_config.result_type)
                                  if isinstance(agent_config.result_type, str):
                                      response_def = self.responses[agent_config.result_type]
                                      return response_def.create_model()  # type: ignore
                                  return agent_config.result_type.create_model()  # type: ignore
                              

                              open_agent async

                              open_agent(
                                  agent_name: str,
                                  *,
                                  model: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                              ) -> AsyncIterator[AnyAgent[TDeps, Any]]
                              

                              Open and configure a specific agent from configuration.

                              Creates the agent in the context of a single-agent pool.

                              Parameters:

agent_name (str): Name of the agent to load. Required.
model (str | None): Optional model override. Default: None.
session (SessionIdType | SessionQuery): Optional ID or SessionQuery to recover a previous state. Default: None.
Example:

    manifest = AgentsManifest[Any, str].from_file("agents.yml")
    async with manifest.open_agent("my-agent") as agent:
        result = await agent.run("Hello!")

Source code in src/llmling_agent/models/agents.py, lines 649-688
                              @asynccontextmanager
                              async def open_agent(
                                  self,
                                  agent_name: str,
                                  *,
                                  model: str | None = None,
                                  session: SessionIdType | SessionQuery = None,
                              ) -> AsyncIterator[AnyAgent[TDeps, Any]]:
                                  """Open and configure a specific agent from configuration.
                              
                                  Creates the agent in the context of a single-agent pool.
                              
                                  Args:
                                      agent_name: Name of the agent to load
                                      model: Optional model override
                                      session: Optional ID or SessionQuery to recover a previous state
                              
                                  Example:
                                      manifest = AgentsManifest[Any, str].from_file("agents.yml")
                                      async with manifest.open_agent("my-agent") as agent:
                                          result = await agent.run("Hello!")
                                  """
                                  from llmling_agent import Agent
                                  from llmling_agent.delegation import AgentPool
                              
                                  # Create empty pool just for context
                                  pool = AgentPool(manifest=self, agents_to_load=[], connect_agents=False)
                                  try:
                                      async with Agent[TDeps].open_agent(  # type: ignore
                                          self,
                                          agent_name,
                                          model=model,
                                          session=session,
                                      ) as agent:
                                          if agent.context:
                                              agent.context.pool = pool
                                          pool.agents[agent_name] = agent
                                          yield agent
                                  finally:
                                      await pool.cleanup()
                              

                              resolve_inheritance classmethod

                              resolve_inheritance(data: dict) -> dict
                              

                              Resolve agent inheritance chains.
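
The merge semantics match the listing below: the parent is resolved first, then the child's own keys override it (and the inherits key is kept). A standalone illustration with plain dicts, not library code:

    agents = {
        "base": {"name": "base", "system_prompts": ["Be concise."]},
        "reviewer": {"inherits": "base", "system_prompts": ["Review code critically."]},
    }

    # Child starts from a copy of the resolved parent, then its own keys win.
    merged = agents["base"].copy()
    merged.update(agents["reviewer"])
    print(merged)
    # {'name': 'base', 'system_prompts': ['Review code critically.'], 'inherits': 'base'}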

Source code in src/llmling_agent/models/agents.py, lines 523-570
                              @model_validator(mode="before")
                              @classmethod
                              def resolve_inheritance(cls, data: dict) -> dict:
                                  """Resolve agent inheritance chains."""
                                  agents = data.get("agents", {})
                                  resolved: dict[str, dict] = {}
                                  seen: set[str] = set()
                              
                                  def resolve_agent(name: str) -> dict:
                                      if name in resolved:
                                          return resolved[name]
                              
                                      if name in seen:
                                          msg = f"Circular inheritance detected: {name}"
                                          raise ValueError(msg)
                              
                                      seen.add(name)
                                      config = (
                                          agents[name].model_copy()
                                          if hasattr(agents[name], "model_copy")
                                          else agents[name].copy()
                                      )
                                      inherit = (
                                          config.get("inherits") if isinstance(config, dict) else config.inherits
                                      )
                                      if inherit:
                                          if inherit not in agents:
                                              msg = f"Parent agent {inherit} not found"
                                              raise ValueError(msg)
                              
                                          # Get resolved parent config
                                          parent = resolve_agent(inherit)
                                          # Merge parent with child (child overrides parent)
                                          merged = parent.copy()
                                          merged.update(config)
                                          config = merged
                              
                                      seen.remove(name)
                                      resolved[name] = config
                                      return config
                              
                                  # Resolve all agents
                                  for name in agents:
                                      resolved[name] = resolve_agent(name)
                              
                                  # Update agents with resolved configs
                                  data["agents"] = resolved
                                  return data
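
                               For illustration, a minimal sketch of the merge semantics on plain dicts (agent and field names are made up): a child config that sets inherits starts from a copy of its resolved parent and overrides only the keys it defines itself.

                               parent = {"model": "openai:gpt-4o-mini", "system_prompts": ["Be concise."]}
                               child = {"inherits": "base", "system_prompts": ["Review code critically."]}

                               merged = parent.copy()
                               merged.update(child)  # child keys win over parent keys
                               # merged == {"model": "openai:gpt-4o-mini",
                               #            "inherits": "base",
                               #            "system_prompts": ["Review code critically."]}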
                              

                              AwaitResponseDecision

                              Bases: Decision

                              Forward message and wait for response.

                              Source code in src/llmling_agent/delegation/router.py
                              class AwaitResponseDecision(Decision):
                                  """Forward message and wait for response."""
                              
                                  type: Literal["await_response"] = Field("await_response", init=False)
                                  """Type discriminator for await decisions."""
                              
                                  target_agent: str
                                  """Name of the agent to forward the message to."""
                              
                                  talk_back: bool = False
                                  """Whether to send response back to original agent."""
                              
                                  async def execute(
                                      self,
                                      message: ChatMessage[Any],
                                      source_agent: AnyAgent[Any, Any],
                                      pool: AgentPool,
                                  ):
                                      """Forward message and wait for response."""
                                      target = pool.get_agent(self.target_agent)
                                      response = await target.run(str(message))
                                      if self.talk_back:
                                          source_agent.outbox.emit(response, None)
                              

                              talk_back class-attribute instance-attribute

                              talk_back: bool = False
                              

                              Whether to send response back to original agent.

                              target_agent instance-attribute

                              target_agent: str
                              

                              Name of the agent to forward the message to.

                              type class-attribute instance-attribute

                              type: Literal['await_response'] = Field('await_response', init=False)
                              

                              Type discriminator for await decisions.

                              execute async

                              execute(message: ChatMessage[Any], source_agent: AnyAgent[Any, Any], pool: AgentPool)
                              

                              Forward message and wait for response.

                              Source code in src/llmling_agent/delegation/router.py
                              async def execute(
                                  self,
                                  message: ChatMessage[Any],
                                  source_agent: AnyAgent[Any, Any],
                                  pool: AgentPool,
                              ):
                                  """Forward message and wait for response."""
                                  target = pool.get_agent(self.target_agent)
                                  response = await target.run(str(message))
                                  if self.talk_back:
                                      source_agent.outbox.emit(response, None)
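
                               A hedged usage sketch (the agent name is hypothetical; message, source_agent and pool are assumed to exist in the calling context):

                               from llmling_agent.delegation.router import AwaitResponseDecision

                               decision = AwaitResponseDecision(
                                   target_agent="analyst",  # hypothetical agent registered in the pool
                                   reason="Message needs analysis",
                                   talk_back=True,  # emit the response back through the source agent's outbox
                               )
                               await decision.execute(message, source_agent, pool)  # inside an async context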
                              

                              CallbackRouter

                              Bases: AgentRouter

                              Router using callback function for decisions.

                              Source code in src/llmling_agent/delegation/router.py
                              class CallbackRouter[TMessage](AgentRouter):
                                  """Router using callback function for decisions."""
                              
                                  def __init__(
                                      self,
                                      pool: AgentPool,
                                      decision_callback: DecisionCallback[TMessage],
                                  ):
                                      self.pool = pool
                                      self.decision_callback = decision_callback
                              
                                  async def decide(self, message: TMessage) -> Decision:
                                      """Execute callback and handle sync/async appropriately."""
                                      result = self.decision_callback(message, self.pool, self)
                                      if inspect.isawaitable(result):
                                          return await result
                                      return result
                              

                              decide async

                              decide(message: TMessage) -> Decision
                              

                              Execute callback and handle sync/async appropriately.

                              Source code in src/llmling_agent/delegation/router.py
                              async def decide(self, message: TMessage) -> Decision:
                                  """Execute callback and handle sync/async appropriately."""
                                  result = self.decision_callback(message, self.pool, self)
                                  if inspect.isawaitable(result):
                                      return await result
                                  return result
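
                               A minimal callback sketch (agent names are hypothetical): the callback receives the message, the pool and the router, may be sync or async, and returns any Decision subclass.

                               from llmling_agent.delegation.router import (
                                   AgentRouter,
                                   AwaitResponseDecision,
                                   CallbackRouter,
                                   Decision,
                                   EndDecision,
                               )

                               async def route_by_content(message: str, pool, router: AgentRouter) -> Decision:
                                   # Hypothetical rule: error reports go to a dedicated agent.
                                   if "error" in message.lower():
                                       return AwaitResponseDecision(target_agent="debugger", reason="Error report")
                                   return EndDecision(reason="Nothing to route")

                               router = CallbackRouter(pool, route_by_content)  # pool assumed to be an initialized AgentPool
                               decision = await router.decide("error: build failed")  # inside an async context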
                              

                              ChatMessage dataclass

                              Common message format for all UI types.

                               Generically typed with: ChatMessage[Type of Content]. The type can either be str or a BaseModel subclass.

                              Source code in src/llmling_agent/models/messages.py
                              @dataclass
                              class ChatMessage[TContent]:
                                  """Common message format for all UI types.
                              
                                  Generically typed with: ChatMessage[Type of Content]
                                  The type can either be str or a BaseModel subclass.
                                  """
                              
                                  content: TContent
                                  """Message content, typed as TContent (either str or BaseModel)."""
                              
                                  role: MessageRole
                                  """Role of the message sender (user/assistant/system)."""
                              
                                  model: str | None = None
                                  """Name of the model that generated this message."""
                              
                                  metadata: JsonObject = field(default_factory=dict)
                                  """Additional metadata about the message."""
                              
                                  timestamp: datetime = field(default_factory=datetime.now)
                                  """When this message was created."""
                              
                                  cost_info: TokenCost | None = None
                                  """Token usage and costs for this specific message if available."""
                              
                                  message_id: str = field(default_factory=lambda: str(uuid4()))
                                  """Unique identifier for this message."""
                              
                                  response_time: float | None = None
                                  """Time it took the LLM to respond."""
                              
                                  tool_calls: list[ToolCallInfo] = field(default_factory=list)
                                  """List of tool calls made during message generation."""
                              
                                  name: str | None = None
                                  """Display name for the message sender in UI."""
                              
                                  forwarded_from: list[str] = field(default_factory=list)
                                  """List of agent names (the chain) that forwarded this message to the sender."""
                              
                                  def to_text_message(self) -> ChatMessage[str]:
                                      """Convert this message to a text-only version."""
                                      return dataclasses.replace(self, content=str(self.content))  # type: ignore
                              
                                  def _get_content_str(self) -> str:
                                      """Get string representation of content."""
                                      match self.content:
                                          case str():
                                              return self.content
                                          case BaseModel():
                                              return self.content.model_dump_json(indent=2)
                                          case _:
                                              msg = f"Unexpected content type: {type(self.content)}"
                                              raise ValueError(msg)
                              
                                  def to_gradio_format(self) -> tuple[str | None, str | None]:
                                      """Convert to Gradio chatbot format."""
                                      content_str = self._get_content_str()
                                      match self.role:
                                          case "user":
                                              return (content_str, None)
                                          case "assistant":
                                              return (None, content_str)
                                          case "system":
                                              return (None, f"System: {content_str}")
                              
                                  @property
                                  def data(self) -> TContent:
                                      """Get content as typed data. Provides compat to RunResult."""
                                      return self.content
                              
                                  def format(
                                      self,
                                      style: Literal["simple", "detailed", "markdown"] = "simple",
                                      *,
                                      show_metadata: bool = False,
                                      show_costs: bool = False,
                                  ) -> str:
                                      """Format message with configurable style."""
                                      match style:
                                          case "simple":
                                              return self._format_simple()
                                          case "detailed":
                                              return self._format_detailed(show_metadata, show_costs)
                                          case "markdown":
                                              return self._format_markdown(show_metadata, show_costs)
                                          case _:
                                              msg = f"Invalid style: {style}"
                                              raise ValueError(msg)
                              
                                  def _format_simple(self) -> str:
                                      """Basic format: sender and message."""
                                      sender = self.name or self.role.title()
                                      return f"{sender}: {self.content}"
                              
                                  def _format_detailed(self, show_metadata: bool, show_costs: bool) -> str:
                                      """Detailed format with optional metadata and costs."""
                                      ts = self.timestamp.strftime("%Y-%m-%d %H:%M:%S")
                                      name = self.name or self.role.title()
                                      parts = [f"From: {name}", f"Time: {ts}", "-" * 40, f"{self.content}", "-" * 40]
                              
                                      if show_costs and self.cost_info:
                                          parts.extend([
                                              f"Tokens: {self.cost_info.token_usage['total']:,}",
                                              f"Cost: ${self.cost_info.total_cost:.4f}",
                                          ])
                                          if self.response_time:
                                              parts.append(f"Response time: {self.response_time:.2f}s")
                              
                                      if show_metadata and self.metadata:
                                          parts.append("Metadata:")
                                          parts.extend(f"  {k}: {v}" for k, v in self.metadata.items())
                                      if self.forwarded_from:
                                          forwarded_from = " -> ".join(self.forwarded_from)
                                          parts.append(f"Forwarded via: {forwarded_from}")
                              
                                      return "\n".join(parts)
                              
                                  def _format_markdown(self, show_metadata: bool, show_costs: bool) -> str:
                                      """Markdown format for rich display."""
                                      name = self.name or self.role.title()
                                      timestamp = self.timestamp.strftime("%Y-%m-%d %H:%M:%S")
                                      parts = [f"## {name}", f"*{timestamp}*", "", str(self.content), ""]
                              
                                      if show_costs and self.cost_info:
                                          parts.extend([
                                              "---",
                                              "**Stats:**",
                                              f"- Tokens: {self.cost_info.token_usage['total']:,}",
                                              f"- Cost: ${self.cost_info.total_cost:.4f}",
                                          ])
                                          if self.response_time:
                                              parts.append(f"- Response time: {self.response_time:.2f}s")
                              
                                      if show_metadata and self.metadata:
                                          meta = yamling.dump_yaml(self.metadata)
                                          parts.extend(["", "**Metadata:**", "```", meta, "```"])
                              
                                      if self.forwarded_from:
                                          parts.append(f"\n*Forwarded via: {' → '.join(self.forwarded_from)}*")
                              
                                      return "\n".join(parts)
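
                               A short construction sketch (content and sender name are made up):

                               from llmling_agent.models.messages import ChatMessage

                               msg = ChatMessage(content="Build finished without errors.", role="assistant", name="builder")
                               assert msg.data == msg.content  # .data is the RunResult-compatible alias
                               text_msg = msg.to_text_message()  # same message with content coerced to str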
                              

                              content instance-attribute

                              content: TContent
                              

                              Message content, typed as TContent (either str or BaseModel).

                              cost_info class-attribute instance-attribute

                              cost_info: TokenCost | None = None
                              

                              Token usage and costs for this specific message if available.

                              data property

                              data: TContent
                              

                              Get content as typed data. Provides compat to RunResult.

                              forwarded_from class-attribute instance-attribute

                              forwarded_from: list[str] = field(default_factory=list)
                              

                              List of agent names (the chain) that forwarded this message to the sender.

                              message_id class-attribute instance-attribute

                              message_id: str = field(default_factory=lambda: str(uuid4()))
                              

                              Unique identifier for this message.

                              metadata class-attribute instance-attribute

                              metadata: JsonObject = field(default_factory=dict)
                              

                              Additional metadata about the message.

                              model class-attribute instance-attribute

                              model: str | None = None
                              

                              Name of the model that generated this message.

                              name class-attribute instance-attribute

                              name: str | None = None
                              

                              Display name for the message sender in UI.

                              response_time class-attribute instance-attribute

                              response_time: float | None = None
                              

                              Time it took the LLM to respond.

                              role instance-attribute

                              role: MessageRole
                              

                              Role of the message sender (user/assistant/system).

                              timestamp class-attribute instance-attribute

                               timestamp: datetime = field(default_factory=datetime.now)
                              

                              When this message was created.

                              tool_calls class-attribute instance-attribute

                              tool_calls: list[ToolCallInfo] = field(default_factory=list)
                              

                              List of tool calls made during message generation.

                              format

                              format(
                                  style: Literal["simple", "detailed", "markdown"] = "simple",
                                  *,
                                  show_metadata: bool = False,
                                  show_costs: bool = False,
                              ) -> str
                              

                              Format message with configurable style.

                              Source code in src/llmling_agent/models/messages.py
                              def format(
                                  self,
                                  style: Literal["simple", "detailed", "markdown"] = "simple",
                                  *,
                                  show_metadata: bool = False,
                                  show_costs: bool = False,
                              ) -> str:
                                  """Format message with configurable style."""
                                  match style:
                                      case "simple":
                                          return self._format_simple()
                                      case "detailed":
                                          return self._format_detailed(show_metadata, show_costs)
                                      case "markdown":
                                          return self._format_markdown(show_metadata, show_costs)
                                      case _:
                                          msg = f"Invalid style: {style}"
                                          raise ValueError(msg)
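
                               A brief sketch of the three styles, assuming msg is the ChatMessage constructed in the earlier sketch:

                               print(msg.format())  # "builder: Build finished without errors."
                               print(msg.format("detailed", show_costs=True))  # sender, timestamp, and token/cost lines when cost_info is set
                               print(msg.format("markdown", show_metadata=True))  # heading plus a YAML metadata block when metadata exists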
                              

                              to_gradio_format

                              to_gradio_format() -> tuple[str | None, str | None]
                              

                              Convert to Gradio chatbot format.

                              Source code in src/llmling_agent/models/messages.py
                              def to_gradio_format(self) -> tuple[str | None, str | None]:
                                  """Convert to Gradio chatbot format."""
                                  content_str = self._get_content_str()
                                  match self.role:
                                      case "user":
                                          return (content_str, None)
                                      case "assistant":
                                          return (None, content_str)
                                      case "system":
                                          return (None, f"System: {content_str}")
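
                               The returned tuple maps onto Gradio's (user, assistant) chat pair; a tiny sketch:

                               ChatMessage(content="Hi", role="user").to_gradio_format()  # ("Hi", None)
                               ChatMessage(content="Hello!", role="assistant").to_gradio_format()  # (None, "Hello!")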
                              

                              to_text_message

                              to_text_message() -> ChatMessage[str]
                              

                              Convert this message to a text-only version.

                              Source code in src/llmling_agent/models/messages.py
                              def to_text_message(self) -> ChatMessage[str]:
                                  """Convert this message to a text-only version."""
                                  return dataclasses.replace(self, content=str(self.content))  # type: ignore
                              

                              Decision

                              Bases: BaseModel

                              Base class for all routing decisions.

                              Source code in src/llmling_agent/delegation/router.py
                              class Decision(BaseModel):
                                  """Base class for all routing decisions."""
                              
                                  type: str = Field(init=False)
                                  """Discriminator field for decision types."""
                              
                                  reason: str
                                  """Reason for this routing decision."""
                              
                                  model_config = ConfigDict(use_attribute_docstrings=True)
                              
                                  async def execute(
                                      self,
                                      message: ChatMessage[Any],
                                      source_agent: AnyAgent[Any, Any],
                                      pool: AgentPool,
                                  ):
                                      """Execute this routing decision."""
                                      raise NotImplementedError
                              

                              reason instance-attribute

                              reason: str
                              

                              Reason for this routing decision.

                              type class-attribute instance-attribute

                              type: str = Field(init=False)
                              

                              Discriminator field for decision types.

                              execute async

                              execute(message: ChatMessage[Any], source_agent: AnyAgent[Any, Any], pool: AgentPool)
                              

                              Execute this routing decision.

                              Source code in src/llmling_agent/delegation/router.py
                              async def execute(
                                  self,
                                  message: ChatMessage[Any],
                                  source_agent: AnyAgent[Any, Any],
                                  pool: AgentPool,
                              ):
                                  """Execute this routing decision."""
                                  raise NotImplementedError
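
                               Concrete decisions subclass Decision and implement execute; a minimal, purely hypothetical example that only prints and stops:

                               from typing import Literal

                               from pydantic import Field

                               from llmling_agent.delegation.router import Decision

                               class LogDecision(Decision):
                                   """Hypothetical decision that only records the message."""

                                   type: Literal["log"] = Field("log", init=False)

                                   async def execute(self, message, source_agent, pool):
                                       print(f"Dropping message: {message}")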
                              

                              EndDecision

                              Bases: Decision

                              End the conversation.

                              Source code in src/llmling_agent/delegation/router.py
                              class EndDecision(Decision):
                                  """End the conversation."""
                              
                                  type: Literal["end"] = Field("end", init=False)
                                  """Type discriminator for end decisions."""
                              
                                  async def execute(
                                      self,
                                      message: ChatMessage[Any],
                                      source_agent: AnyAgent[Any, Any],
                                      pool: AgentPool,
                                  ):
                                      """End the conversation."""
                              

                              type class-attribute instance-attribute

                              type: Literal['end'] = Field('end', init=False)
                              

                              Type discriminator for end decisions.

                              execute async

                              execute(message: ChatMessage[Any], source_agent: AnyAgent[Any, Any], pool: AgentPool)
                              

                              End the conversation.

                              Source code in src/llmling_agent/delegation/router.py
                              async def execute(
                                  self,
                                  message: ChatMessage[Any],
                                  source_agent: AnyAgent[Any, Any],
                                  pool: AgentPool,
                              ):
                                  """End the conversation."""
                              

                              RouteDecision

                              Bases: Decision

                              Forward message without waiting for response.

                              Source code in src/llmling_agent/delegation/router.py
                              class RouteDecision(Decision):
                                  """Forward message without waiting for response."""
                              
                                  type: Literal["route"] = Field("route", init=False)
                                  """Type discriminator for routing decisions."""
                              
                                  target_agent: str
                                  """Name of the agent to forward the message to."""
                              
                                  async def execute(
                                      self,
                                      message: ChatMessage[Any],
                                      source_agent: AnyAgent[Any, Any],
                                      pool: AgentPool,
                                  ):
                                      """Forward message and continue."""
                                      target = pool.get_agent(self.target_agent)
                                      target.outbox.emit(message, None)
                              

                              target_agent instance-attribute

                              target_agent: str
                              

                              Name of the agent to forward the message to.

                              type class-attribute instance-attribute

                              type: Literal['route'] = Field('route', init=False)
                              

                              Type discriminator for routing decisions.

                              execute async

                              execute(message: ChatMessage[Any], source_agent: AnyAgent[Any, Any], pool: AgentPool)
                              

                              Forward message and continue.

                              Source code in src/llmling_agent/delegation/router.py
                              async def execute(
                                  self,
                                  message: ChatMessage[Any],
                                  source_agent: AnyAgent[Any, Any],
                                  pool: AgentPool,
                              ):
                                  """Forward message and continue."""
                                  target = pool.get_agent(self.target_agent)
                                  target.outbox.emit(message, None)
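
                               In contrast to AwaitResponseDecision this is fire-and-forget: execute emits the message onward without awaiting a reply. A short sketch (agent name hypothetical; message, source_agent and pool assumed to exist):

                               decision = RouteDecision(target_agent="archiver", reason="Archive a copy of the message")
                               await decision.execute(message, source_agent, pool)  # returns without waiting for the target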
                              

                              RuleRouter

                              Bases: AgentRouter

                              Router using predefined rules.

                              Source code in src/llmling_agent/delegation/router.py
                              class RuleRouter(AgentRouter):
                                  """Router using predefined rules."""
                              
                                  def __init__(self, pool: AgentPool, config: RoutingConfig):
                                      self.pool = pool
                                      self.config = config
                              
                                  async def decide(self, message: str) -> Decision:
                                      """Make decision based on configured rules."""
                                      msg = message if self.config.case_sensitive else message.lower()
                              
                                      # Check each rule in priority order
                                      for rule in sorted(self.config.rules, key=lambda r: r.priority):
                                          keyword = rule.keyword if self.config.case_sensitive else rule.keyword.lower()
                              
                                          if keyword not in msg:
                                              continue
                              
                                          # Skip if target doesn't exist
                                          if rule.target not in self.pool.list_agents():
                                               log_msg = "Target agent %s not available for rule: %s"
                                               logger.debug(log_msg, rule.target, rule.keyword)
                                              continue
                              
                                          # Skip if capability required but not available
                                          if rule.requires_capability:
                                              agent = self.pool.get_agent(rule.target)
                                              if not agent.context.capabilities.has_capability(
                                                  rule.requires_capability
                                              ):
                                                   log_msg = "Agent %s missing required capability: %s"
                                                   logger.debug(log_msg, rule.target, rule.requires_capability)
                                                  continue
                              
                                          # Create appropriate decision using base class methods
                                          if rule.wait_for_response:
                                              return self.get_wait_decision(target=rule.target, reason=rule.reason)
                                          return self.get_route_decision(target=rule.target, reason=rule.reason)
                              
                                      # Use default route if configured
                                      if self.config.default_target:
                                          return self.get_wait_decision(
                                              target=self.config.default_target,
                                              reason=self.config.default_reason,
                                          )
                              
                                      # End if no route found
                                      return self.get_end_decision(reason="No matching rule or default route")
                              

                              decide async

                              decide(message: str) -> Decision
                              

                              Make decision based on configured rules.

                              Source code in src/llmling_agent/delegation/router.py
                              async def decide(self, message: str) -> Decision:
                                  """Make decision based on configured rules."""
                                  msg = message if self.config.case_sensitive else message.lower()
                              
                                  # Check each rule in priority order
                                  for rule in sorted(self.config.rules, key=lambda r: r.priority):
                                      keyword = rule.keyword if self.config.case_sensitive else rule.keyword.lower()
                              
                                      if keyword not in msg:
                                          continue
                              
                                      # Skip if target doesn't exist
                                      if rule.target not in self.pool.list_agents():
                                           log_msg = "Target agent %s not available for rule: %s"
                                           logger.debug(log_msg, rule.target, rule.keyword)
                                          continue
                              
                                      # Skip if capability required but not available
                                      if rule.requires_capability:
                                          agent = self.pool.get_agent(rule.target)
                                          if not agent.context.capabilities.has_capability(
                                              rule.requires_capability
                                          ):
                                               log_msg = "Agent %s missing required capability: %s"
                                               logger.debug(log_msg, rule.target, rule.requires_capability)
                                              continue
                              
                                      # Create appropriate decision using base class methods
                                      if rule.wait_for_response:
                                          return self.get_wait_decision(target=rule.target, reason=rule.reason)
                                      return self.get_route_decision(target=rule.target, reason=rule.reason)
                              
                                  # Use default route if configured
                                  if self.config.default_target:
                                      return self.get_wait_decision(
                                          target=self.config.default_target,
                                          reason=self.config.default_reason,
                                      )
                              
                                  # End if no route found
                                  return self.get_end_decision(reason="No matching rule or default route")
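
                               A hedged configuration sketch: the field names mirror what decide() reads (keyword, target, priority, wait_for_response, reason, default_target, default_reason), but the concrete rule model is shown as a hypothetical RoutingRule and may be named differently in llmling_agent.

                               from llmling_agent.delegation.router import RuleRouter

                               config = RoutingConfig(
                                   rules=[
                                       RoutingRule(  # hypothetical rule model
                                           keyword="error",
                                           target="debugger",
                                           priority=10,
                                           wait_for_response=True,
                                           reason="Error reports go to the debugger agent",
                                       ),
                                   ],
                                   default_target="assistant",
                                   default_reason="Fallback to the general assistant",
                               )

                               router = RuleRouter(pool, config)  # pool assumed to be an initialized AgentPool
                               decision = await router.decide("error: tests failing")  # inside an async context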
                              

                              SlashedAgent

                              Wraps an agent with slash command support.

                              Source code in src/llmling_agent/agent/slashed_agent.py
                              class SlashedAgent[TDeps, TContext]:
                                  """Wraps an agent with slash command support."""
                              
                                  message_output = Signal(AgentOutput)
                                  streamed_output = Signal(AgentOutput)
                                  streaming_started = Signal(str)  # message_id
                                  streaming_stopped = Signal(str)  # message_id
                              
                                  def __init__(
                                      self,
                                      agent: AnyAgent[TDeps, Any],
                                      *,
                                      command_context: TContext | None = None,
                                      command_history_path: str | None = None,
                                      output: DefaultOutputWriter | None = None,
                                  ):
                                      self.agent = agent
                                      assert self.agent.context, "Agent must have a context!"
                                      assert self.agent.context.pool, "Agent must have a pool!"
                              
                                      self.commands = CommandStore(
                                          history_file=command_history_path,
                                          enable_system_commands=True,
                                      )
                                      self.commands._initialize_sync()
                                      self._current_stream_id: str | None = None
                                      self.command_context: TContext = command_context or self  # type: ignore
                                      self.output = output or DefaultOutputWriter()
                                      # Connect to agent's signals
                                      agent.message_received.connect(self._handle_message_received)
                                      agent.message_sent.connect(self._handle_message_sent)
                                      agent.tool_used.connect(self._handle_tool_used)
                                      self.commands.command_executed.connect(self._handle_command_executed)
                                      self.commands.output.connect(self.streamed_output)
                                      agent.chunk_streamed.connect(self.streamed_output)
                              
                                  @overload
                                  async def run[TMethodResult](
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[TMethodResult],
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[TMethodResult]: ...
                              
                                  @overload
                                  async def run(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[str]: ...
                              
                                  async def run(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[Any] | None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[Any]:
                                      """Run with slash command support."""
                                      # First execute all commands sequentially
                                      remaining_prompts = []
                                      for p in prompt:
                                          if isinstance(p, str) and p.startswith("/"):
                                              await self.handle_command(
                                                  p[1:],
                                                  output=output or self.output,
                                                  metadata=metadata,
                                              )
                                          else:
                                              remaining_prompts.append(p)
                              
                                      # Then pass remaining prompts to agent
                                      return await self.agent.run(
                                          *remaining_prompts, result_type=result_type, deps=deps, model=model
                                      )
                              
                                  @overload
                                  def run_stream[TMethodResult](
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[TMethodResult],
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> AbstractAsyncContextManager[
                                      StreamedRunResult[AgentContext[TDeps], TMethodResult]
                                  ]: ...
                              
                                  @overload
                                  def run_stream(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> AbstractAsyncContextManager[StreamedRunResult[AgentContext[TDeps], str]]: ...
                              
                                  @asynccontextmanager
                                  async def run_stream(
                                      self,
                                      *prompt: AnyPromptType,
                                      result_type: type[Any] | None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> AsyncIterator[StreamedRunResult[AgentContext[TDeps], Any]]:
                                      """Stream responses with slash command support."""
                                      # First execute all commands sequentially
                                      remaining_prompts: list[AnyPromptType] = []
                                      for p in prompt:
                                          if isinstance(p, str) and p.startswith("/"):
                                              await self.handle_command(
                                                  p[1:],
                                                  output=output or self.output,
                                                  metadata=metadata,
                                              )
                                          else:
                                              remaining_prompts.append(p)
                              
                                      # Then yield from agent's stream
                                      async with self.agent.run_stream(
                                          *remaining_prompts, result_type=result_type, deps=deps, model=model
                                      ) as stream:
                                          yield stream
                              
                                  async def handle_command(
                                      self,
                                      command: str,
                                      output: OutputWriter | None = None,
                                      metadata: dict[str, Any] | None = None,
                                  ) -> ChatMessage[str]:
                                      """Handle a slash command."""
                                      try:
                                          await self.commands.execute_command_with_context(
                                              command,
                                              context=self.command_context,
                                              output_writer=output or self.output,
                                              metadata=metadata,
                                          )
                                          return ChatMessage(content="", role="system")
                                      except ExitCommandError:
                                          raise
                                      except Exception as e:  # noqa: BLE001
                                          msg = f"Command error: {e}"
                                          return ChatMessage(content=msg, role="system")
                              
                                  @property
                                  def tools(self) -> ToolManager:
                                      """Access to tool management."""
                                      return self.agent.tools
                              
                                  @property
                                  def conversation(self) -> ConversationManager:
                                      """Access to conversation management."""
                                      return self.agent.conversation
                              
                                  @property
                                  def provider(self) -> AgentProvider[TDeps]:
                                      """Access to the underlying provider."""
                                      return self.agent._provider
                              
                                  @property
                                  def pool(self) -> AgentPool:
                                      """Get agent's pool from context."""
                                      assert self.agent.context.pool
                                      return self.agent.context.pool
                              
                                  @property
                                  def model_name(self) -> str | None:
                                      """Get current model name."""
                                      return self.agent.model_name
                              
                                  @property
                                  def context(self) -> AgentContext[TDeps]:
                                      """Access to agent context."""
                                      return self.agent.context
                              
                                  def _handle_message_received(self, message: ChatMessage[str]):
                                      meta = {"role": message.role}
                                      output = AgentOutput(type="message_received", content=message, metadata=meta)
                                      self.message_output.emit(output)
                              
                                  def _handle_message_sent(self, message: ChatMessage[Any]):
                                      cost = message.cost_info.total_cost if message.cost_info else None
                                      metadata = {"role": message.role, "model": message.model, "cost": cost}
                                      output = AgentOutput(type="message_sent", content=message, metadata=metadata)
                                      self.message_output.emit(output)
                              
                                  def _handle_tool_used(self, tool_call: ToolCallInfo):
                                      metadata = {"tool_name": tool_call.tool_name}
                                      output = AgentOutput("tool_called", content=tool_call, metadata=metadata)
                                      self.message_output.emit(output)
                              
                                  # Could also connect to Slashed's command signals
                                  def _handle_command_executed(self, event: CommandExecutedEvent):
                                      """Handle command execution events."""
                                      error = str(event.error) if event.error else None
                                      meta = {"success": event.success, "error": error, "context": event.context}
                                      output = AgentOutput("command_executed", content=event.command, metadata=meta)
                                      self.message_output.emit(output)
                              
                                  def _handle_chunk_streamed(self, chunk: str, message_id: str):
                                      """Handle streaming chunks."""
                                      # If this is a new streaming session
                                      if message_id != self._current_stream_id:
                                          self._current_stream_id = message_id
                                          self.streaming_started.emit(message_id)
                              
                                      if chunk:  # Only emit non-empty chunks
                                          output = AgentOutput(
                                              type="stream", content=chunk, metadata={"message_id": message_id}
                                          )
                                          self.streamed_output.emit(output)
                                      else:  # Empty chunk signals end of stream
                                          self.streaming_stopped.emit(message_id)
                                          self._current_stream_id = None
                              
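A minimal usage sketch (hypothetical setup; command names are illustrative): it assumes an already initialized agent that belongs to a pool, as the constructor asserts, wraps it, subscribes to the aggregated output signal, and runs a prompt that mixes a slash command with regular input.

    # Sketch only -- "agent" is assumed to be an initialized agent with a context
    # and a pool, which SlashedAgent's constructor requires.
    from llmling_agent.agent.slashed_agent import SlashedAgent


    async def chat_with_commands(agent) -> None:
        slashed = SlashedAgent(agent)

        # Messages, tool calls and command results all arrive on one signal.
        slashed.message_output.connect(lambda out: print(out.type, out.content))

        # "/help" is executed as a command; the rest is forwarded to the agent.
        message = await slashed.run("/help", "Summarize what the last command printed.")
        print(message.content)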

                              context property

                              context: AgentContext[TDeps]
                              

                              Access to agent context.

                              conversation property

                              conversation: ConversationManager
                              

                              Access to conversation management.

                              model_name property

                              model_name: str | None
                              

                              Get current model name.

                              pool property

                              pool: AgentPool
                              

                              Get agent's pool from context.

                              provider property

                              provider: AgentProvider[TDeps]
                              

                              Access to the underlying provider.

                              tools property

                              tools: ToolManager
                              

                              Access to tool management.

                              handle_command async

                              handle_command(
                                  command: str,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[str]
                              

                              Handle a slash command.

                              Source code in src/llmling_agent/agent/slashed_agent.py
                              async def handle_command(
                                  self,
                                  command: str,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[str]:
                                  """Handle a slash command."""
                                  try:
                                      await self.commands.execute_command_with_context(
                                          command,
                                          context=self.command_context,
                                          output_writer=output or self.output,
                                          metadata=metadata,
                                      )
                                      return ChatMessage(content="", role="system")
                                  except ExitCommandError:
                                      raise
                                  except Exception as e:  # noqa: BLE001
                                      msg = f"Command error: {e}"
                                      return ChatMessage(content=msg, role="system")
                              
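A small illustrative call, continuing the sketch above (the command string is made up): errors other than ExitCommandError are not raised but come back as a system ChatMessage whose content holds the error text.

    # Sketch only -- "slashed" is the SlashedAgent from the earlier example,
    # and "model gpt-4" is an illustrative command string.
    result = await slashed.handle_command("model gpt-4")
    if result.content:
        # Non-empty content means the command failed; on success content is "".
        print(result.content)  # e.g. "Command error: ..."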

                              run async

                              run(
                                  *prompt: AnyPromptType,
                                  result_type: type[TMethodResult],
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[TMethodResult]
                              
                              run(
                                  *prompt: AnyPromptType,
                                  result_type: None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[str]
                              
                              run(
                                  *prompt: AnyPromptType,
                                  result_type: type[Any] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[Any]
                              

                              Run with slash command support.

                              Source code in src/llmling_agent/agent/slashed_agent.py
                              async def run(
                                  self,
                                  *prompt: AnyPromptType,
                                  result_type: type[Any] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> ChatMessage[Any]:
                                  """Run with slash command support."""
                                  # First execute all commands sequentially
                                  remaining_prompts = []
                                  for p in prompt:
                                      if isinstance(p, str) and p.startswith("/"):
                                          await self.handle_command(
                                              p[1:],
                                              output=output or self.output,
                                              metadata=metadata,
                                          )
                                      else:
                                          remaining_prompts.append(p)
                              
                                  # Then pass remaining prompts to agent
                                  return await self.agent.run(
                                      *remaining_prompts, result_type=result_type, deps=deps, model=model
                                  )
                              
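Because run mirrors the agent's overloads, a result type can be requested per call; a short sketch assuming a hypothetical pydantic model and an illustrative command:

    from pydantic import BaseModel


    class Summary(BaseModel):  # hypothetical result model
        title: str
        bullet_points: list[str]


    # Slash prompts are executed first; only the remaining prompts reach the agent,
    # which is asked to return a Summary.
    message = await slashed.run(
        "/help", "Summarize today's findings.", result_type=Summary
    )
    summary: Summary = message.content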

                              run_stream async

                              run_stream(
                                  *prompt: AnyPromptType,
                                  result_type: type[TMethodResult],
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> AbstractAsyncContextManager[StreamedRunResult[AgentContext[TDeps], TMethodResult]]
                              
                              run_stream(
                                  *prompt: AnyPromptType,
                                  result_type: None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> AbstractAsyncContextManager[StreamedRunResult[AgentContext[TDeps], str]]
                              
                              run_stream(
                                  *prompt: AnyPromptType,
                                  result_type: type[Any] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> AsyncIterator[StreamedRunResult[AgentContext[TDeps], Any]]
                              

                              Stream responses with slash command support.

                              Source code in src/llmling_agent/agent/slashed_agent.py
                              @asynccontextmanager
                              async def run_stream(
                                  self,
                                  *prompt: AnyPromptType,
                                  result_type: type[Any] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                                  output: OutputWriter | None = None,
                                  metadata: dict[str, Any] | None = None,
                              ) -> AsyncIterator[StreamedRunResult[AgentContext[TDeps], Any]]:
                                  """Stream responses with slash command support."""
                                  # First execute all commands sequentially
                                  remaining_prompts: list[AnyPromptType] = []
                                  for p in prompt:
                                      if isinstance(p, str) and p.startswith("/"):
                                          await self.handle_command(
                                              p[1:],
                                              output=output or self.output,
                                              metadata=metadata,
                                          )
                                      else:
                                          remaining_prompts.append(p)
                              
                                  # Then yield from agent's stream
                                  async with self.agent.run_stream(
                                      *remaining_prompts, result_type=result_type, deps=deps, model=model
                                  ) as stream:
                                      yield stream
                              
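A streaming sketch under the same assumptions; the commands run before streaming starts, and the agent's stream is yielded unchanged. The stream_text call assumes the pydantic-ai StreamedRunResult API.

    # Sketch only -- "slashed" as above; stream_text(delta=True) is assumed to be
    # available on the returned StreamedRunResult.
    async with slashed.run_stream("/help", "Explain the pool concept briefly.") as stream:
        async for chunk in stream.stream_text(delta=True):
            print(chunk, end="", flush=True)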

                              StructuredAgent

                              Wrapper for Agent that enforces a specific result type.

This wrapper ensures the agent always returns results of the specified type. The type can be provided as:

- A Python type for validation
- A response definition name from the manifest
- A complete response definition instance

                              Source code in src/llmling_agent/agent/structured.py
                              class StructuredAgent[TDeps, TResult]:
                                  """Wrapper for Agent that enforces a specific result type.
                              
                                  This wrapper ensures the agent always returns results of the specified type.
                                  The type can be provided as:
                                  - A Python type for validation
                                  - A response definition name from the manifest
                                  - A complete response definition instance
                                  """
                              
                                  def __init__(
                                      self,
                                      agent: AnyAgent[TDeps, TResult],
                                      result_type: type[TResult] | str | ResponseDefinition,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ):
                                      """Initialize structured agent wrapper.
                              
                                      Args:
                                          agent: Base agent to wrap
                                          result_type: Expected result type:
                                              - BaseModel / dataclasses
                                              - Name of response definition in manifest
                                              - Complete response definition instance
                                          tool_name: Optional override for tool name
                                          tool_description: Optional override for tool description
                              
                                      Raises:
                                          ValueError: If named response type not found in manifest
                                      """
                                      logger.debug("StructuredAgent.run result_type = %s", result_type)
                                      if isinstance(agent, StructuredAgent):
                                          self._agent: Agent[TDeps] = agent._agent
                                      else:
                                          self._agent = agent
                                      self._result_type = to_type(result_type)
                                      agent.set_result_type(result_type)
                              
                                      match result_type:
                                          case type() | str():
                                              # For types and named definitions, use overrides if provided
                                              self._agent.set_result_type(
                                                  result_type,
                                                  tool_name=tool_name,
                                                  tool_description=tool_description,
                                              )
                                          case BaseResponseDefinition():
                                              # For response definitions, use as-is
                                              # (overrides don't apply to complete definitions)
                                              self._agent.set_result_type(result_type)
                              
                                  async def __aenter__(self) -> Self:
                                      """Enter async context and set up MCP servers.
                              
                                      Called when agent enters its async context. Sets up any configured
                                      MCP servers and their tools.
                                      """
                                      await self._agent.__aenter__()
                                      return self
                              
                                  async def __aexit__(
                                      self,
                                      exc_type: type[BaseException] | None,
                                      exc_val: BaseException | None,
                                      exc_tb: TracebackType | None,
                                  ):
                                      """Exit async context."""
                                      await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                              
                                  async def run(
                                      self,
                                      *prompt: AnyPromptType | TResult,
                                      result_type: type[TResult] | None = None,
                                      deps: TDeps | None = None,
                                      model: ModelType = None,
                                  ) -> ChatMessage[TResult]:
                                      """Run with fixed result type.
                              
                                      Args:
                                          prompt: Any prompt-compatible object or structured objects of type TResult
                                          result_type: Expected result type:
                                              - BaseModel / dataclasses
                                              - Name of response definition in manifest
                                              - Complete response definition instance
                                          deps: Optional dependencies for the agent
                                          message_history: Optional previous messages for context
                                          model: Optional model override
                                          usage: Optional usage tracking
                                      """
                                      typ = result_type or self._result_type
                                      return await self._agent.run(*prompt, result_type=typ, deps=deps, model=model)
                              
                                  def __repr__(self) -> str:
                                      type_name = getattr(self._result_type, "__name__", str(self._result_type))
                                      return f"StructuredAgent({self._agent!r}, result_type={type_name})"
                              
                                  def __prompt__(self) -> str:
                                      type_name = getattr(self._result_type, "__name__", str(self._result_type))
                                      base_info = self._agent.__prompt__()
                                      return f"{base_info}\nStructured output type: {type_name}"
                              
                                  def __getattr__(self, name: str) -> Any:
                                      return getattr(self._agent, name)
                              
                                  @property
                                  def context(self) -> AgentContext[TDeps]:
                                      return self._agent.context
                              
                                  @context.setter
                                  def context(self, value: Any):
                                      self._agent.context = value
                              
                                  @property
                                  def tools(self) -> ToolManager:
                                      return self._agent.tools
                              
                                  @overload
                                  def to_structured(
                                      self,
                                      result_type: None,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ) -> Agent[TDeps]: ...
                              
                                  @overload
                                  def to_structured[TNewResult](
                                      self,
                                      result_type: type[TNewResult] | str | ResponseDefinition,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ) -> StructuredAgent[TDeps, TNewResult]: ...
                              
                                  def to_structured[TNewResult](
                                      self,
                                      result_type: type[TNewResult] | str | ResponseDefinition | None,
                                      *,
                                      tool_name: str | None = None,
                                      tool_description: str | None = None,
                                  ) -> Agent[TDeps] | StructuredAgent[TDeps, TNewResult]:
                                      if result_type is None:
                                          return self._agent
                              
                                      return StructuredAgent(
                                          self._agent,
                                          result_type=result_type,
                                          tool_name=tool_name,
                                          tool_description=tool_description,
                                      )
                              
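A minimal sketch of wrapping an agent for typed output, assuming a hypothetical pydantic model and an existing base agent:

    from pydantic import BaseModel

    from llmling_agent.agent.structured import StructuredAgent


    class Analysis(BaseModel):  # hypothetical result model
        sentiment: str
        confidence: float


    async def analyze(agent) -> Analysis:  # "agent" is an existing Agent (assumed)
        structured = StructuredAgent(agent, result_type=Analysis)
        async with structured:  # delegates context setup to the wrapped agent
            message = await structured.run("Review the latest customer feedback.")
        return message.content  # an Analysis instance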

                              __aenter__ async

                              __aenter__() -> Self
                              

                              Enter async context and set up MCP servers.

                              Called when agent enters its async context. Sets up any configured MCP servers and their tools.

                              Source code in src/llmling_agent/agent/structured.py
                              async def __aenter__(self) -> Self:
                                  """Enter async context and set up MCP servers.
                              
                                  Called when agent enters its async context. Sets up any configured
                                  MCP servers and their tools.
                                  """
                                  await self._agent.__aenter__()
                                  return self
                              

                              __aexit__ async

                              __aexit__(
                                  exc_type: type[BaseException] | None,
                                  exc_val: BaseException | None,
                                  exc_tb: TracebackType | None,
                              )
                              

                              Exit async context.

                              Source code in src/llmling_agent/agent/structured.py
                              async def __aexit__(
                                  self,
                                  exc_type: type[BaseException] | None,
                                  exc_val: BaseException | None,
                                  exc_tb: TracebackType | None,
                              ):
                                  """Exit async context."""
                                  await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                              

                              __init__

                              __init__(
                                  agent: AnyAgent[TDeps, TResult],
                                  result_type: type[TResult] | str | ResponseDefinition,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              )
                              

                              Initialize structured agent wrapper.

Parameters:

agent (AnyAgent[TDeps, TResult], required)
    Base agent to wrap.
result_type (type[TResult] | str | ResponseDefinition, required)
    Expected result type: a BaseModel or dataclass, the name of a response definition in the manifest, or a complete response definition instance.
tool_name (str | None, default: None)
    Optional override for the tool name.
tool_description (str | None, default: None)
    Optional override for the tool description.

Raises:

ValueError
    If the named response type is not found in the manifest.

                              Source code in src/llmling_agent/agent/structured.py
                              def __init__(
                                  self,
                                  agent: AnyAgent[TDeps, TResult],
                                  result_type: type[TResult] | str | ResponseDefinition,
                                  *,
                                  tool_name: str | None = None,
                                  tool_description: str | None = None,
                              ):
                                  """Initialize structured agent wrapper.
                              
                                  Args:
                                      agent: Base agent to wrap
                                      result_type: Expected result type:
                                          - BaseModel / dataclasses
                                          - Name of response definition in manifest
                                          - Complete response definition instance
                                      tool_name: Optional override for tool name
                                      tool_description: Optional override for tool description
                              
                                  Raises:
                                      ValueError: If named response type not found in manifest
                                  """
                                  logger.debug("StructuredAgent.run result_type = %s", result_type)
                                  if isinstance(agent, StructuredAgent):
                                      self._agent: Agent[TDeps] = agent._agent
                                  else:
                                      self._agent = agent
                                  self._result_type = to_type(result_type)
                                  agent.set_result_type(result_type)
                              
                                  match result_type:
                                      case type() | str():
                                          # For types and named definitions, use overrides if provided
                                          self._agent.set_result_type(
                                              result_type,
                                              tool_name=tool_name,
                                              tool_description=tool_description,
                                          )
                                      case BaseResponseDefinition():
                                          # For response definitions, use as-is
                                          # (overrides don't apply to complete definitions)
                                          self._agent.set_result_type(result_type)
                              

                              run async

                              run(
                                  *prompt: AnyPromptType | TResult,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                              ) -> ChatMessage[TResult]
                              

                              Run with fixed result type.

Parameters:

prompt (AnyPromptType | TResult, default: ())
    Any prompt-compatible object or structured objects of type TResult.
result_type (type[TResult] | None, default: None)
    Expected result type: a BaseModel or dataclass, the name of a response definition in the manifest, or a complete response definition instance.
deps (TDeps | None, default: None)
    Optional dependencies for the agent.
message_history
    Optional previous messages for context.
model (ModelType, default: None)
    Optional model override.
usage
    Optional usage tracking.
                              Source code in src/llmling_agent/agent/structured.py
                              async def run(
                                  self,
                                  *prompt: AnyPromptType | TResult,
                                  result_type: type[TResult] | None = None,
                                  deps: TDeps | None = None,
                                  model: ModelType = None,
                              ) -> ChatMessage[TResult]:
                                  """Run with fixed result type.
                              
                                  Args:
                                      prompt: Any prompt-compatible object or structured objects of type TResult
                                      result_type: Expected result type:
                                          - BaseModel / dataclasses
                                          - Name of response definition in manifest
                                          - Complete response definition instance
                                      deps: Optional dependencies for the agent
                                      message_history: Optional previous messages for context
                                      model: Optional model override
                                      usage: Optional usage tracking
                                  """
                                  typ = result_type or self._result_type
                                  return await self._agent.run(*prompt, result_type=typ, deps=deps, model=model)
                              
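The fixed type can still be overridden per call; continuing the sketch above with a second hypothetical model:

    class ShortAnswer(BaseModel):  # hypothetical alternative result model
        text: str


    # Uses the wrapper's fixed Analysis type by default...
    default_msg = await structured.run("Analyze this feedback: great product!")
    # ...while an explicit result_type takes precedence for this call only.
    short_msg = await structured.run("One-line verdict?", result_type=ShortAnswer)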

                              SystemPrompt

                              Bases: BaseModel

                              System prompt configuration for agent behavior control.

Defines prompts that set up the agent's behavior and context. Supports multiple types:

- Static text prompts
- Dynamic function-based prompts
- Template prompts with variable substitution

                              Source code in src/llmling_agent/models/prompts.py
                              class SystemPrompt(BaseModel):
                                  """System prompt configuration for agent behavior control.
                              
                                  Defines prompts that set up the agent's behavior and context.
                                  Supports multiple types:
                                  - Static text prompts
                                  - Dynamic function-based prompts
                                  - Template prompts with variable substitution
                                  """
                              
                                  type: Literal["text", "function", "template"]
                                  """Type of system prompt: static text, function call, or template"""
                              
                                  value: str
                                  """The prompt text, function path, or template string"""
                              
                                  model_config = ConfigDict(frozen=True, use_attribute_docstrings=True)
                              

                              type instance-attribute

                              type: Literal['text', 'function', 'template']
                              

                              Type of system prompt: static text, function call, or template

                              value instance-attribute

                              value: str
                              

                              The prompt text, function path, or template string
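
                               A minimal construction sketch (illustrative only; the dotted function path and
                               the template placeholder syntax are assumptions, not taken from the source):

                               static = SystemPrompt(type="text", value="You are a concise assistant.")
                               # Hypothetical dotted path to a prompt-producing function
                               dynamic = SystemPrompt(type="function", value="myapp.prompts.current_context")
                               # Placeholder syntax is assumed; the real template engine may differ
                               templated = SystemPrompt(type="template", value="Answer questions about {domain}.")

                               Because model_config sets frozen=True, instances are immutable once created.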

                              interactive_controller async

                              interactive_controller(
                                  message: str, pool: AgentPool, agent_router: AgentRouter
                              ) -> Decision
                              

                              Interactive conversation control through console input.

                              Source code in src/llmling_agent/delegation/controllers.py
                              async def interactive_controller(
                                  message: str, pool: "AgentPool", agent_router: AgentRouter
                              ) -> Decision:
                                  """Interactive conversation control through console input."""
                                  print(f"\nMessage: {message}")
                                  print("\nWhat would you like to do?")
                                  print("1. Forward message (no wait)")
                                  print("2. Route and wait for response")
                                  print("3. End conversation")
                              
                                  try:
                                      match choice := int(input("> ")):
                                          case 1 | 2:  # Route or TalkBack
                                              print("\nAvailable agents:")
                                              # Use pool's list_agents instead of passed list
                                              agents = pool.list_agents()
                                              for i, name in enumerate(agents, 1):
                                                  agent = pool.get_agent(name)
                                                  print(f"{i}. {name} ({agent.description or 'No description'})")
                              
                                              agent_idx = int(input("Select agent: ")) - 1
                                              target = agents[agent_idx]
                                              reason = input("Reason: ")
                              
                                              if choice == 1:
                                                  return RouteDecision(target_agent=target, reason=reason)
                                              return AwaitResponseDecision(target_agent=target, reason=reason)
                              
                                          case 3:  # End
                                              reason = input("Reason for ending: ")
                                              return EndDecision(reason=reason)
                              
                                          case _:
                                              return EndDecision(reason="Invalid choice")
                              
                                  except (ValueError, IndexError):
                                      return EndDecision(reason="Invalid input")
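
                               A hedged sketch of consuming the returned Decision. Only pool.get_agent and the
                               Decision subclasses with the attributes shown above come from the source; the
                               surrounding variables (message, pool, router) are illustrative:

                               decision = await interactive_controller(message, pool, router)

                               match decision:
                                   case EndDecision(reason=reason):
                                       print(f"Conversation ended: {reason}")
                                   case AwaitResponseDecision(target_agent=target):
                                       # Route to the chosen agent and wait for its reply
                                       reply = await pool.get_agent(target).run(message)
                                   case RouteDecision(target_agent=target):
                                       # Forward without waiting; actual dispatch is left to the caller
                                       ...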